From patchwork Tue Aug 14 08:12:23 2018
From: Christian König
To: dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org
Subject: [PATCH 1/4] drm/scheduler: trivial error handling fix
Date: Tue, 14 Aug 2018 10:12:23 +0200
Message-Id: <20180814081226.76086-1-christian.koenig@amd.com>
Return -ENOMEM when allocating the rq_list fails.

Signed-off-by: Christian König
Reviewed-by: Huang Rui
Reviewed-by: Andrey Grodzovsky
---
 drivers/gpu/drm/scheduler/gpu_scheduler.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
index f566405f49e3..85c1f95752cc 100644
--- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
+++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
@@ -191,6 +191,9 @@ int drm_sched_entity_init(struct drm_sched_entity *entity,
 	entity->num_rq_list = num_rq_list;
 	entity->rq_list = kcalloc(num_rq_list, sizeof(struct drm_sched_rq *),
 				  GFP_KERNEL);
+	if (!entity->rq_list)
+		return -ENOMEM;
+
 	for (i = 0; i < num_rq_list; ++i)
 		entity->rq_list[i] = rq_list[i];
 	entity->last_scheduled = NULL;
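With this fix drm_sched_entity_init() can fail with -ENOMEM in addition to
-EINVAL, so callers have to propagate any non-zero return instead of assuming
the entity is usable. A minimal caller sketch, assuming a hypothetical driver
wrapper (the my_* names are illustrative, not part of the series):

	/* Hedged sketch: propagate the new -ENOMEM from entity init. */
	static int my_entity_setup(struct drm_sched_entity *entity,
				   struct drm_sched_rq **rq_list,
				   unsigned int num_rq_list)
	{
		int r;

		/* Returns -EINVAL on bad arguments and, with this patch,
		 * -ENOMEM when the rq_list copy cannot be allocated.
		 */
		r = drm_sched_entity_init(entity, rq_list, num_rq_list, NULL);
		if (r)
			return r;

		return 0;
	}
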
From patchwork Tue Aug 14 08:12:24 2018
From: Christian König
To: dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org
Subject: [PATCH 2/4] drm/scheduler: move entity handling into separate file
Date: Tue, 14 Aug 2018 10:12:24 +0200
Message-Id: <20180814081226.76086-2-christian.koenig@amd.com>
In-Reply-To: <20180814081226.76086-1-christian.koenig@amd.com>
References: <20180814081226.76086-1-christian.koenig@amd.com>

This is complex enough on its own. Move it into a separate C file.

Signed-off-by: Christian König
Reviewed-by: Huang Rui
---
 drivers/gpu/drm/scheduler/Makefile        |   2 +-
 drivers/gpu/drm/scheduler/gpu_scheduler.c | 441 +---------------------------
 drivers/gpu/drm/scheduler/sched_entity.c  | 459 ++++++++++++++++++++++++++++++
 include/drm/gpu_scheduler.h               |  28 +-
 4 files changed, 484 insertions(+), 446 deletions(-)
 create mode 100644 drivers/gpu/drm/scheduler/sched_entity.c

diff --git a/drivers/gpu/drm/scheduler/Makefile b/drivers/gpu/drm/scheduler/Makefile
index 7665883f81d4..f23785d4b3c8 100644
--- a/drivers/gpu/drm/scheduler/Makefile
+++ b/drivers/gpu/drm/scheduler/Makefile
@@ -20,6 +20,6 @@
 # OTHER DEALINGS IN THE SOFTWARE.
 #
 #
-gpu-sched-y := gpu_scheduler.o sched_fence.o
+gpu-sched-y := gpu_scheduler.o sched_fence.o sched_entity.o

 obj-$(CONFIG_DRM_SCHED) += gpu-sched.o

diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
index 85c1f95752cc..9ca741f3a0bc 100644
--- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
+++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
@@ -58,8 +58,6 @@
 #define to_drm_sched_job(sched_job)		\
 		container_of((sched_job), struct drm_sched_job, queue_node)
 
-static bool drm_sched_entity_is_ready(struct drm_sched_entity *entity);
-static void drm_sched_wakeup(struct drm_gpu_scheduler *sched);
 static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb);
 
 /**
@@ -86,8 +84,8 @@ static void drm_sched_rq_init(struct drm_gpu_scheduler *sched,
  *
  * Adds a scheduler entity to the run queue.
  */
-static void drm_sched_rq_add_entity(struct drm_sched_rq *rq,
-				    struct drm_sched_entity *entity)
+void drm_sched_rq_add_entity(struct drm_sched_rq *rq,
+			     struct drm_sched_entity *entity)
 {
 	if (!list_empty(&entity->list))
 		return;
@@ -104,8 +102,8 @@ static void drm_sched_rq_add_entity(struct drm_sched_rq *rq,
  *
  * Removes a scheduler entity from the run queue.
  */
-static void drm_sched_rq_remove_entity(struct drm_sched_rq *rq,
-				       struct drm_sched_entity *entity)
+void drm_sched_rq_remove_entity(struct drm_sched_rq *rq,
+				struct drm_sched_entity *entity)
 {
 	if (list_empty(&entity->list))
 		return;
@@ -158,301 +156,6 @@ drm_sched_rq_select_entity(struct drm_sched_rq *rq)
 	return NULL;
 }
 
-/**
- * drm_sched_entity_init - Init a context entity used by scheduler when
- * submit to HW ring.
- *
- * @entity: scheduler entity to init
- * @rq_list: the list of run queue on which jobs from this
- *           entity can be submitted
- * @num_rq_list: number of run queue in rq_list
- * @guilty: atomic_t set to 1 when a job on this queue
- *          is found to be guilty causing a timeout
- *
- * Note: the rq_list should have atleast one element to schedule
- *       the entity
- *
- * Returns 0 on success or a negative error code on failure.
-*/
-int drm_sched_entity_init(struct drm_sched_entity *entity,
-			  struct drm_sched_rq **rq_list,
-			  unsigned int num_rq_list,
-			  atomic_t *guilty)
-{
-	int i;
-
-	if (!(entity && rq_list && num_rq_list > 0 && rq_list[0]))
-		return -EINVAL;
-
-	memset(entity, 0, sizeof(struct drm_sched_entity));
-	INIT_LIST_HEAD(&entity->list);
-	entity->rq = rq_list[0];
-	entity->guilty = guilty;
-	entity->num_rq_list = num_rq_list;
-	entity->rq_list = kcalloc(num_rq_list, sizeof(struct drm_sched_rq *),
-				  GFP_KERNEL);
-	if (!entity->rq_list)
-		return -ENOMEM;
-
-	for (i = 0; i < num_rq_list; ++i)
-		entity->rq_list[i] = rq_list[i];
-	entity->last_scheduled = NULL;
-
-	spin_lock_init(&entity->rq_lock);
-	spsc_queue_init(&entity->job_queue);
-
-	atomic_set(&entity->fence_seq, 0);
-	entity->fence_context = dma_fence_context_alloc(2);
-
-	return 0;
-}
-EXPORT_SYMBOL(drm_sched_entity_init);
-
-/**
- * drm_sched_entity_is_idle - Check if entity is idle
- *
- * @entity: scheduler entity
- *
- * Returns true if the entity does not have any unscheduled jobs.
- */
-static bool drm_sched_entity_is_idle(struct drm_sched_entity *entity)
-{
-	rmb();
-
-	if (list_empty(&entity->list) ||
-	    spsc_queue_peek(&entity->job_queue) == NULL)
-		return true;
-
-	return false;
-}
-
-/**
- * drm_sched_entity_is_ready - Check if entity is ready
- *
- * @entity: scheduler entity
- *
- * Return true if entity could provide a job.
- */
-static bool drm_sched_entity_is_ready(struct drm_sched_entity *entity)
-{
-	if (spsc_queue_peek(&entity->job_queue) == NULL)
-		return false;
-
-	if (READ_ONCE(entity->dependency))
-		return false;
-
-	return true;
-}
-
-/**
- * drm_sched_entity_get_free_sched - Get the rq from rq_list with least load
- *
- * @entity: scheduler entity
- *
- * Return the pointer to the rq with least load.
- */
-static struct drm_sched_rq *
-drm_sched_entity_get_free_sched(struct drm_sched_entity *entity)
-{
-	struct drm_sched_rq *rq = NULL;
-	unsigned int min_jobs = UINT_MAX, num_jobs;
-	int i;
-
-	for (i = 0; i < entity->num_rq_list; ++i) {
-		num_jobs = atomic_read(&entity->rq_list[i]->sched->num_jobs);
-		if (num_jobs < min_jobs) {
-			min_jobs = num_jobs;
-			rq = entity->rq_list[i];
-		}
-	}
-
-	return rq;
-}
-
-static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f,
-					  struct dma_fence_cb *cb)
-{
-	struct drm_sched_job *job = container_of(cb, struct drm_sched_job,
-						 finish_cb);
-	drm_sched_fence_finished(job->s_fence);
-	WARN_ON(job->s_fence->parent);
-	dma_fence_put(&job->s_fence->finished);
-	job->sched->ops->free_job(job);
-}
-
-
-/**
- * drm_sched_entity_flush - Flush a context entity
- *
- * @entity: scheduler entity
- * @timeout: time to wait in for Q to become empty in jiffies.
- *
- * Splitting drm_sched_entity_fini() into two functions, The first one does the waiting,
- * removes the entity from the runqueue and returns an error when the process was killed.
- *
- * Returns the remaining time in jiffies left from the input timeout
- */
-long drm_sched_entity_flush(struct drm_sched_entity *entity, long timeout)
-{
-	struct drm_gpu_scheduler *sched;
-	struct task_struct *last_user;
-	long ret = timeout;
-
-	sched = entity->rq->sched;
-	/**
-	 * The client will not queue more IBs during this fini, consume existing
-	 * queued IBs or discard them on SIGKILL
-	 */
-	if (current->flags & PF_EXITING) {
-		if (timeout)
-			ret = wait_event_timeout(
-					sched->job_scheduled,
-					drm_sched_entity_is_idle(entity),
-					timeout);
-	} else
-		wait_event_killable(sched->job_scheduled, drm_sched_entity_is_idle(entity));
-
-
-	/* For killed process disable any more IBs enqueue right now */
-	last_user = cmpxchg(&entity->last_user, current->group_leader, NULL);
-	if ((!last_user || last_user == current->group_leader) &&
-	    (current->flags & PF_EXITING) && (current->exit_code == SIGKILL))
-		drm_sched_rq_remove_entity(entity->rq, entity);
-
-	return ret;
-}
-EXPORT_SYMBOL(drm_sched_entity_flush);
-
-/**
- * drm_sched_entity_cleanup - Destroy a context entity
- *
- * @entity: scheduler entity
- *
- * This should be called after @drm_sched_entity_do_release. It goes over the
- * entity and signals all jobs with an error code if the process was killed.
- *
- */
-void drm_sched_entity_fini(struct drm_sched_entity *entity)
-{
-	struct drm_gpu_scheduler *sched;
-
-	sched = entity->rq->sched;
-	drm_sched_rq_remove_entity(entity->rq, entity);
-
-	/* Consumption of existing IBs wasn't completed. Forcefully
-	 * remove them here.
-	 */
-	if (spsc_queue_peek(&entity->job_queue)) {
-		struct drm_sched_job *job;
-		int r;
-
-		/* Park the kernel for a moment to make sure it isn't processing
-		 * our enity.
-		 */
-		kthread_park(sched->thread);
-		kthread_unpark(sched->thread);
-		if (entity->dependency) {
-			dma_fence_remove_callback(entity->dependency,
-						  &entity->cb);
-			dma_fence_put(entity->dependency);
-			entity->dependency = NULL;
-		}
-
-		while ((job = to_drm_sched_job(spsc_queue_pop(&entity->job_queue)))) {
-			struct drm_sched_fence *s_fence = job->s_fence;
-			drm_sched_fence_scheduled(s_fence);
-			dma_fence_set_error(&s_fence->finished, -ESRCH);
-
-			/*
-			 * When pipe is hanged by older entity, new entity might
-			 * not even have chance to submit it's first job to HW
-			 * and so entity->last_scheduled will remain NULL
-			 */
-			if (!entity->last_scheduled) {
-				drm_sched_entity_kill_jobs_cb(NULL, &job->finish_cb);
-			} else {
-				r = dma_fence_add_callback(entity->last_scheduled, &job->finish_cb,
-							   drm_sched_entity_kill_jobs_cb);
-				if (r == -ENOENT)
-					drm_sched_entity_kill_jobs_cb(NULL, &job->finish_cb);
-				else if (r)
-					DRM_ERROR("fence add callback failed (%d)\n", r);
-			}
-		}
-	}
-
-	dma_fence_put(entity->last_scheduled);
-	entity->last_scheduled = NULL;
-	kfree(entity->rq_list);
-}
-EXPORT_SYMBOL(drm_sched_entity_fini);
-
-/**
- * drm_sched_entity_fini - Destroy a context entity
- *
- * @entity: scheduler entity
- *
- * Calls drm_sched_entity_do_release() and drm_sched_entity_cleanup()
- */
-void drm_sched_entity_destroy(struct drm_sched_entity *entity)
-{
-	drm_sched_entity_flush(entity, MAX_WAIT_SCHED_ENTITY_Q_EMPTY);
-	drm_sched_entity_fini(entity);
-}
-EXPORT_SYMBOL(drm_sched_entity_destroy);
-
-static void drm_sched_entity_wakeup(struct dma_fence *f, struct dma_fence_cb *cb)
-{
-	struct drm_sched_entity *entity =
-		container_of(cb, struct drm_sched_entity, cb);
-	entity->dependency = NULL;
-	dma_fence_put(f);
-	drm_sched_wakeup(entity->rq->sched);
-}
-
-static void drm_sched_entity_clear_dep(struct dma_fence *f, struct dma_fence_cb *cb)
-{
-	struct drm_sched_entity *entity =
-		container_of(cb, struct drm_sched_entity, cb);
-	entity->dependency = NULL;
-	dma_fence_put(f);
-}
-
-/**
- * drm_sched_entity_set_rq_priority - helper for drm_sched_entity_set_priority
- */
-static void drm_sched_entity_set_rq_priority(struct drm_sched_rq **rq,
-					     enum drm_sched_priority priority)
-{
-	*rq = &(*rq)->sched->sched_rq[priority];
-}
-
-/**
- * drm_sched_entity_set_priority - Sets priority of the entity
- *
- * @entity: scheduler entity
- * @priority: scheduler priority
- *
- * Update the priority of runqueus used for the entity.
- */
-void drm_sched_entity_set_priority(struct drm_sched_entity *entity,
-				   enum drm_sched_priority priority)
-{
-	unsigned int i;
-
-	spin_lock(&entity->rq_lock);
-
-	for (i = 0; i < entity->num_rq_list; ++i)
-		drm_sched_entity_set_rq_priority(&entity->rq_list[i], priority);
-
-	drm_sched_rq_remove_entity(entity->rq, entity);
-	drm_sched_entity_set_rq_priority(&entity->rq, priority);
-	drm_sched_rq_add_entity(entity->rq, entity);
-
-	spin_unlock(&entity->rq_lock);
-}
-EXPORT_SYMBOL(drm_sched_entity_set_priority);
-
 /**
  * drm_sched_dependency_optimized
  *
@@ -479,140 +182,6 @@ bool drm_sched_dependency_optimized(struct dma_fence* fence,
 }
 EXPORT_SYMBOL(drm_sched_dependency_optimized);
 
-static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity)
-{
-	struct drm_gpu_scheduler *sched = entity->rq->sched;
-	struct dma_fence * fence = entity->dependency;
-	struct drm_sched_fence *s_fence;
-
-	if (fence->context == entity->fence_context ||
-	    fence->context == entity->fence_context + 1) {
-		/*
-		 * Fence is a scheduled/finished fence from a job
-		 * which belongs to the same entity, we can ignore
-		 * fences from ourself
-		 */
-		dma_fence_put(entity->dependency);
-		return false;
-	}
-
-	s_fence = to_drm_sched_fence(fence);
-	if (s_fence && s_fence->sched == sched) {
-
-		/*
-		 * Fence is from the same scheduler, only need to wait for
-		 * it to be scheduled
-		 */
-		fence = dma_fence_get(&s_fence->scheduled);
-		dma_fence_put(entity->dependency);
-		entity->dependency = fence;
-		if (!dma_fence_add_callback(fence, &entity->cb,
-					    drm_sched_entity_clear_dep))
-			return true;
-
-		/* Ignore it when it is already scheduled */
-		dma_fence_put(fence);
-		return false;
-	}
-
-	if (!dma_fence_add_callback(entity->dependency, &entity->cb,
-				    drm_sched_entity_wakeup))
-		return true;
-
-	dma_fence_put(entity->dependency);
-	return false;
-}
-
-static struct drm_sched_job *
-drm_sched_entity_pop_job(struct drm_sched_entity *entity)
-{
-	struct drm_gpu_scheduler *sched = entity->rq->sched;
-	struct drm_sched_job *sched_job = to_drm_sched_job(
-						spsc_queue_peek(&entity->job_queue));
-
-	if (!sched_job)
-		return NULL;
-
-	while ((entity->dependency = sched->ops->dependency(sched_job, entity))) {
-		if (drm_sched_entity_add_dependency_cb(entity)) {
-
-			trace_drm_sched_job_wait_dep(sched_job, entity->dependency);
-			return NULL;
-		}
-	}
-
-	/* skip jobs from entity that marked guilty */
-	if (entity->guilty && atomic_read(entity->guilty))
-		dma_fence_set_error(&sched_job->s_fence->finished, -ECANCELED);
-
-	dma_fence_put(entity->last_scheduled);
-	entity->last_scheduled = dma_fence_get(&sched_job->s_fence->finished);
-
-	spsc_queue_pop(&entity->job_queue);
-	return sched_job;
-}
-
-/**
- * drm_sched_entity_select_rq - select a new rq for the entity
- *
- * @entity: scheduler entity
- *
- * Check all prerequisites and select a new rq for the entity for load
- * balancing.
- */
-static void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
-{
-	struct dma_fence *fence;
-	struct drm_sched_rq *rq;
-
-	if (!spsc_queue_count(&entity->job_queue) == 0 ||
-	    entity->num_rq_list <= 1)
-		return;
-
-	fence = READ_ONCE(entity->last_scheduled);
-	if (fence && !dma_fence_is_signaled(fence))
-		return;
-
-	rq = drm_sched_entity_get_free_sched(entity);
-	spin_lock(&entity->rq_lock);
-	drm_sched_rq_remove_entity(entity->rq, entity);
-	entity->rq = rq;
-	spin_unlock(&entity->rq_lock);
-}
-
-/**
- * drm_sched_entity_push_job - Submit a job to the entity's job queue
- *
- * @sched_job: job to submit
- * @entity: scheduler entity
- *
- * Note: To guarantee that the order of insertion to queue matches
- * the job's fence sequence number this function should be
- * called with drm_sched_job_init under common lock.
- *
- * Returns 0 for success, negative error code otherwise.
- */
-void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
-			       struct drm_sched_entity *entity)
-{
-	bool first;
-
-	trace_drm_sched_job(sched_job, entity);
-	atomic_inc(&entity->rq->sched->num_jobs);
-	WRITE_ONCE(entity->last_user, current->group_leader);
-	first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node);
-
-	/* first job wakes up scheduler */
-	if (first) {
-		/* Add the entity to the run queue */
-		spin_lock(&entity->rq_lock);
-		drm_sched_rq_add_entity(entity->rq, entity);
-		spin_unlock(&entity->rq_lock);
-		drm_sched_wakeup(entity->rq->sched);
-	}
-}
-EXPORT_SYMBOL(drm_sched_entity_push_job);
-
 /* job_finish is called after hw fence signaled
  */
 static void drm_sched_job_finish(struct work_struct *work)
@@ -840,7 +409,7 @@ static bool drm_sched_ready(struct drm_gpu_scheduler *sched)
  * @sched: scheduler instance
  *
  */
-static void drm_sched_wakeup(struct drm_gpu_scheduler *sched)
+void drm_sched_wakeup(struct drm_gpu_scheduler *sched)
 {
 	if (drm_sched_ready(sched))
 		wake_up_interruptible(&sched->wake_up_worker);
diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
new file mode 100644
index 000000000000..1053f27af9df
--- /dev/null
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -0,0 +1,459 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/kthread.h>
+#include <drm/gpu_scheduler.h>
+
+#include "gpu_scheduler_trace.h"
+
+#define to_drm_sched_job(sched_job)		\
+		container_of((sched_job), struct drm_sched_job, queue_node)
+
+/**
+ * drm_sched_entity_init - Init a context entity used by scheduler when
+ * submit to HW ring.
+ *
+ * @entity: scheduler entity to init
+ * @rq_list: the list of run queue on which jobs from this
+ *           entity can be submitted
+ * @num_rq_list: number of run queue in rq_list
+ * @guilty: atomic_t set to 1 when a job on this queue
+ *          is found to be guilty causing a timeout
+ *
+ * Note: the rq_list should have atleast one element to schedule
+ *       the entity
+ *
+ * Returns 0 on success or a negative error code on failure.
+*/
+int drm_sched_entity_init(struct drm_sched_entity *entity,
+			  struct drm_sched_rq **rq_list,
+			  unsigned int num_rq_list,
+			  atomic_t *guilty)
+{
+	int i;
+
+	if (!(entity && rq_list && num_rq_list > 0 && rq_list[0]))
+		return -EINVAL;
+
+	memset(entity, 0, sizeof(struct drm_sched_entity));
+	INIT_LIST_HEAD(&entity->list);
+	entity->rq = rq_list[0];
+	entity->guilty = guilty;
+	entity->num_rq_list = num_rq_list;
+	entity->rq_list = kcalloc(num_rq_list, sizeof(struct drm_sched_rq *),
+				  GFP_KERNEL);
+	if (!entity->rq_list)
+		return -ENOMEM;
+
+	for (i = 0; i < num_rq_list; ++i)
+		entity->rq_list[i] = rq_list[i];
+	entity->last_scheduled = NULL;
+
+	spin_lock_init(&entity->rq_lock);
+	spsc_queue_init(&entity->job_queue);
+
+	atomic_set(&entity->fence_seq, 0);
+	entity->fence_context = dma_fence_context_alloc(2);
+
+	return 0;
+}
+EXPORT_SYMBOL(drm_sched_entity_init);
+
+/**
+ * drm_sched_entity_is_idle - Check if entity is idle
+ *
+ * @entity: scheduler entity
+ *
+ * Returns true if the entity does not have any unscheduled jobs.
+ */
+static bool drm_sched_entity_is_idle(struct drm_sched_entity *entity)
+{
+	rmb();
+
+	if (list_empty(&entity->list) ||
+	    spsc_queue_peek(&entity->job_queue) == NULL)
+		return true;
+
+	return false;
+}
+
+/**
+ * drm_sched_entity_is_ready - Check if entity is ready
+ *
+ * @entity: scheduler entity
+ *
+ * Return true if entity could provide a job.
+ */
+bool drm_sched_entity_is_ready(struct drm_sched_entity *entity)
+{
+	if (spsc_queue_peek(&entity->job_queue) == NULL)
+		return false;
+
+	if (READ_ONCE(entity->dependency))
+		return false;
+
+	return true;
+}
+
+/**
+ * drm_sched_entity_get_free_sched - Get the rq from rq_list with least load
+ *
+ * @entity: scheduler entity
+ *
+ * Return the pointer to the rq with least load.
+ */
+static struct drm_sched_rq *
+drm_sched_entity_get_free_sched(struct drm_sched_entity *entity)
+{
+	struct drm_sched_rq *rq = NULL;
+	unsigned int min_jobs = UINT_MAX, num_jobs;
+	int i;
+
+	for (i = 0; i < entity->num_rq_list; ++i) {
+		num_jobs = atomic_read(&entity->rq_list[i]->sched->num_jobs);
+		if (num_jobs < min_jobs) {
+			min_jobs = num_jobs;
+			rq = entity->rq_list[i];
+		}
+	}
+
+	return rq;
+}
+
+static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f,
+					  struct dma_fence_cb *cb)
+{
+	struct drm_sched_job *job = container_of(cb, struct drm_sched_job,
+						 finish_cb);
+	drm_sched_fence_finished(job->s_fence);
+	WARN_ON(job->s_fence->parent);
+	dma_fence_put(&job->s_fence->finished);
+	job->sched->ops->free_job(job);
+}
+
+
+/**
+ * drm_sched_entity_flush - Flush a context entity
+ *
+ * @entity: scheduler entity
+ * @timeout: time to wait in for Q to become empty in jiffies.
+ *
+ * Splitting drm_sched_entity_fini() into two functions, The first one does the waiting,
+ * removes the entity from the runqueue and returns an error when the process was killed.
+ *
+ * Returns the remaining time in jiffies left from the input timeout
+ */
+long drm_sched_entity_flush(struct drm_sched_entity *entity, long timeout)
+{
+	struct drm_gpu_scheduler *sched;
+	struct task_struct *last_user;
+	long ret = timeout;
+
+	sched = entity->rq->sched;
+	/**
+	 * The client will not queue more IBs during this fini, consume existing
+	 * queued IBs or discard them on SIGKILL
+	*/
+	if (current->flags & PF_EXITING) {
+		if (timeout)
+			ret = wait_event_timeout(
+					sched->job_scheduled,
+					drm_sched_entity_is_idle(entity),
+					timeout);
+	} else {
+		wait_event_killable(sched->job_scheduled,
+				    drm_sched_entity_is_idle(entity));
+	}
+
+	/* For killed process disable any more IBs enqueue right now */
+	last_user = cmpxchg(&entity->last_user, current->group_leader, NULL);
+	if ((!last_user || last_user == current->group_leader) &&
+	    (current->flags & PF_EXITING) && (current->exit_code == SIGKILL))
+		drm_sched_rq_remove_entity(entity->rq, entity);
+
+	return ret;
+}
+EXPORT_SYMBOL(drm_sched_entity_flush);
+
+/**
+ * drm_sched_entity_cleanup - Destroy a context entity
+ *
+ * @entity: scheduler entity
+ *
+ * This should be called after @drm_sched_entity_do_release. It goes over the
+ * entity and signals all jobs with an error code if the process was killed.
+ *
+ */
+void drm_sched_entity_fini(struct drm_sched_entity *entity)
+{
+	struct drm_gpu_scheduler *sched;
+
+	sched = entity->rq->sched;
+	drm_sched_rq_remove_entity(entity->rq, entity);
+
+	/* Consumption of existing IBs wasn't completed. Forcefully
+	 * remove them here.
+	 */
+	if (spsc_queue_peek(&entity->job_queue)) {
+		struct drm_sched_job *job;
+		int r;
+
+		/* Park the kernel for a moment to make sure it isn't processing
+		 * our enity.
+		 */
+		kthread_park(sched->thread);
+		kthread_unpark(sched->thread);
+		if (entity->dependency) {
+			dma_fence_remove_callback(entity->dependency,
+						  &entity->cb);
+			dma_fence_put(entity->dependency);
+			entity->dependency = NULL;
+		}
+
+		while ((job = to_drm_sched_job(spsc_queue_pop(&entity->job_queue)))) {
+			struct drm_sched_fence *s_fence = job->s_fence;
+			drm_sched_fence_scheduled(s_fence);
+			dma_fence_set_error(&s_fence->finished, -ESRCH);
+
+			/*
+			 * When pipe is hanged by older entity, new entity might
+			 * not even have chance to submit it's first job to HW
+			 * and so entity->last_scheduled will remain NULL
+			 */
+			if (!entity->last_scheduled) {
+				drm_sched_entity_kill_jobs_cb(NULL, &job->finish_cb);
+			} else {
+				r = dma_fence_add_callback(entity->last_scheduled, &job->finish_cb,
+							   drm_sched_entity_kill_jobs_cb);
+				if (r == -ENOENT)
+					drm_sched_entity_kill_jobs_cb(NULL, &job->finish_cb);
+				else if (r)
+					DRM_ERROR("fence add callback failed (%d)\n", r);
+			}
+		}
+	}
+
+	dma_fence_put(entity->last_scheduled);
+	entity->last_scheduled = NULL;
+	kfree(entity->rq_list);
+}
+EXPORT_SYMBOL(drm_sched_entity_fini);
+
+/**
+ * drm_sched_entity_fini - Destroy a context entity
+ *
+ * @entity: scheduler entity
+ *
+ * Calls drm_sched_entity_do_release() and drm_sched_entity_cleanup()
+ */
+void drm_sched_entity_destroy(struct drm_sched_entity *entity)
+{
+	drm_sched_entity_flush(entity, MAX_WAIT_SCHED_ENTITY_Q_EMPTY);
+	drm_sched_entity_fini(entity);
+}
+EXPORT_SYMBOL(drm_sched_entity_destroy);
+
+static void drm_sched_entity_wakeup(struct dma_fence *f, struct dma_fence_cb *cb)
+{
+	struct drm_sched_entity *entity =
+		container_of(cb, struct drm_sched_entity, cb);
+	entity->dependency = NULL;
+	dma_fence_put(f);
+	drm_sched_wakeup(entity->rq->sched);
+}
+
+static void drm_sched_entity_clear_dep(struct dma_fence *f, struct dma_fence_cb *cb)
+{
+	struct drm_sched_entity *entity =
+		container_of(cb, struct drm_sched_entity, cb);
+	entity->dependency = NULL;
+	dma_fence_put(f);
+}
+
+/**
+ * drm_sched_entity_set_rq_priority - helper for drm_sched_entity_set_priority
+ */
+static void drm_sched_entity_set_rq_priority(struct drm_sched_rq **rq,
+					     enum drm_sched_priority priority)
+{
+	*rq = &(*rq)->sched->sched_rq[priority];
+}
+
+/**
+ * drm_sched_entity_set_priority - Sets priority of the entity
+ *
+ * @entity: scheduler entity
+ * @priority: scheduler priority
+ *
+ * Update the priority of runqueus used for the entity.
+ */
+void drm_sched_entity_set_priority(struct drm_sched_entity *entity,
+				   enum drm_sched_priority priority)
+{
+	unsigned int i;
+
+	spin_lock(&entity->rq_lock);
+
+	for (i = 0; i < entity->num_rq_list; ++i)
+		drm_sched_entity_set_rq_priority(&entity->rq_list[i], priority);
+
+	drm_sched_rq_remove_entity(entity->rq, entity);
+	drm_sched_entity_set_rq_priority(&entity->rq, priority);
+	drm_sched_rq_add_entity(entity->rq, entity);
+
+	spin_unlock(&entity->rq_lock);
+}
+EXPORT_SYMBOL(drm_sched_entity_set_priority);
+
+static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity)
+{
+	struct drm_gpu_scheduler *sched = entity->rq->sched;
+	struct dma_fence * fence = entity->dependency;
+	struct drm_sched_fence *s_fence;
+
+	if (fence->context == entity->fence_context ||
+	    fence->context == entity->fence_context + 1) {
+		/*
+		 * Fence is a scheduled/finished fence from a job
+		 * which belongs to the same entity, we can ignore
+		 * fences from ourself
+		 */
+		dma_fence_put(entity->dependency);
+		return false;
+	}
+
+	s_fence = to_drm_sched_fence(fence);
+	if (s_fence && s_fence->sched == sched) {
+
+		/*
+		 * Fence is from the same scheduler, only need to wait for
+		 * it to be scheduled
+		 */
+		fence = dma_fence_get(&s_fence->scheduled);
+		dma_fence_put(entity->dependency);
+		entity->dependency = fence;
+		if (!dma_fence_add_callback(fence, &entity->cb,
+					    drm_sched_entity_clear_dep))
+			return true;
+
+		/* Ignore it when it is already scheduled */
+		dma_fence_put(fence);
+		return false;
+	}
+
+	if (!dma_fence_add_callback(entity->dependency, &entity->cb,
+				    drm_sched_entity_wakeup))
+		return true;
+
+	dma_fence_put(entity->dependency);
+	return false;
+}
+
+struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity)
+{
+	struct drm_gpu_scheduler *sched = entity->rq->sched;
+	struct drm_sched_job *sched_job = to_drm_sched_job(
+						spsc_queue_peek(&entity->job_queue));
+
+	if (!sched_job)
+		return NULL;
+
+	while ((entity->dependency = sched->ops->dependency(sched_job, entity))) {
+		if (drm_sched_entity_add_dependency_cb(entity)) {
+
+			trace_drm_sched_job_wait_dep(sched_job, entity->dependency);
+			return NULL;
+		}
+	}
+
+	/* skip jobs from entity that marked guilty */
+	if (entity->guilty && atomic_read(entity->guilty))
+		dma_fence_set_error(&sched_job->s_fence->finished, -ECANCELED);
+
+	dma_fence_put(entity->last_scheduled);
+	entity->last_scheduled = dma_fence_get(&sched_job->s_fence->finished);
+
+	spsc_queue_pop(&entity->job_queue);
+	return sched_job;
+}
+
+/**
+ * drm_sched_entity_select_rq - select a new rq for the entity
+ *
+ * @entity: scheduler entity
+ *
+ * Check all prerequisites and select a new rq for the entity for load
+ * balancing.
+ */
+void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
+{
+	struct dma_fence *fence;
+	struct drm_sched_rq *rq;
+
+	if (!spsc_queue_count(&entity->job_queue) == 0 ||
+	    entity->num_rq_list <= 1)
+		return;
+
+	fence = READ_ONCE(entity->last_scheduled);
+	if (fence && !dma_fence_is_signaled(fence))
+		return;
+
+	rq = drm_sched_entity_get_free_sched(entity);
+	spin_lock(&entity->rq_lock);
+	drm_sched_rq_remove_entity(entity->rq, entity);
+	entity->rq = rq;
+	spin_unlock(&entity->rq_lock);
+}
+
+/**
+ * drm_sched_entity_push_job - Submit a job to the entity's job queue
+ *
+ * @sched_job: job to submit
+ * @entity: scheduler entity
+ *
+ * Note: To guarantee that the order of insertion to queue matches
+ * the job's fence sequence number this function should be
+ * called with drm_sched_job_init under common lock.
+ *
+ * Returns 0 for success, negative error code otherwise.
+ */
+void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
+			       struct drm_sched_entity *entity)
+{
+	bool first;
+
+	trace_drm_sched_job(sched_job, entity);
+	atomic_inc(&entity->rq->sched->num_jobs);
+	WRITE_ONCE(entity->last_user, current->group_leader);
+	first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node);
+
+	/* first job wakes up scheduler */
+	if (first) {
+		/* Add the entity to the run queue */
+		spin_lock(&entity->rq_lock);
+		drm_sched_rq_add_entity(entity->rq, entity);
+		spin_unlock(&entity->rq_lock);
+		drm_sched_wakeup(entity->rq->sched);
+	}
+}
+EXPORT_SYMBOL(drm_sched_entity_push_job);
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 22c0f88f7d8f..919ae572f775 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -288,6 +288,21 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
 		   uint32_t hw_submission, unsigned hang_limit, long timeout,
 		   const char *name);
 void drm_sched_fini(struct drm_gpu_scheduler *sched);
+int drm_sched_job_init(struct drm_sched_job *job,
+		       struct drm_sched_entity *entity,
+		       void *owner);
+void drm_sched_wakeup(struct drm_gpu_scheduler *sched);
+void drm_sched_hw_job_reset(struct drm_gpu_scheduler *sched,
+			    struct drm_sched_job *job);
+void drm_sched_job_recovery(struct drm_gpu_scheduler *sched);
+bool drm_sched_dependency_optimized(struct dma_fence* fence,
+				    struct drm_sched_entity *entity);
+void drm_sched_job_kickout(struct drm_sched_job *s_job);
+
+void drm_sched_rq_add_entity(struct drm_sched_rq *rq,
+			     struct drm_sched_entity *entity);
+void drm_sched_rq_remove_entity(struct drm_sched_rq *rq,
+				struct drm_sched_entity *entity);
 
 int drm_sched_entity_init(struct drm_sched_entity *entity,
 			  struct drm_sched_rq **rq_list,
@@ -296,22 +311,17 @@ int drm_sched_entity_init(struct drm_sched_entity *entity,
 long drm_sched_entity_flush(struct drm_sched_entity *entity, long timeout);
 void drm_sched_entity_fini(struct drm_sched_entity *entity);
 void drm_sched_entity_destroy(struct drm_sched_entity *entity);
+void drm_sched_entity_select_rq(struct drm_sched_entity *entity);
+struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity);
 void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
 			       struct drm_sched_entity *entity);
 void drm_sched_entity_set_priority(struct drm_sched_entity *entity,
 				   enum drm_sched_priority priority);
+bool drm_sched_entity_is_ready(struct drm_sched_entity *entity);
+
 struct drm_sched_fence *drm_sched_fence_create(
 	struct drm_sched_entity *s_entity, void *owner);
 void drm_sched_fence_scheduled(struct drm_sched_fence *fence);
 void drm_sched_fence_finished(struct drm_sched_fence *fence);
-int drm_sched_job_init(struct drm_sched_job *job,
-		       struct drm_sched_entity *entity,
-		       void *owner);
-void drm_sched_hw_job_reset(struct drm_gpu_scheduler *sched,
-			    struct drm_sched_job *job);
-void drm_sched_job_recovery(struct drm_gpu_scheduler *sched);
-bool drm_sched_dependency_optimized(struct dma_fence* fence,
-				    struct drm_sched_entity *entity);
-void drm_sched_job_kickout(struct drm_sched_job *s_job);
 
 #endif
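With the entity code split out, gpu_scheduler.h now spells out the whole
entity API in one place. A rough lifecycle sketch against the declarations
above (the my_* names and the job/ops setup are placeholders; real drivers
also implement drm_sched_backend_ops and embed drm_sched_job in their own
job structure, which is elided here):

	/* Hedged sketch of the entity API exported by gpu_scheduler.h. */
	static int my_submit(struct drm_sched_entity *entity,
			     struct drm_sched_rq **rqs, unsigned int num_rqs,
			     struct drm_sched_job *job, void *owner)
	{
		int r;

		r = drm_sched_entity_init(entity, rqs, num_rqs, NULL);
		if (r)
			return r;

		r = drm_sched_job_init(job, entity, owner);
		if (r) {
			/* flush queued work, then tear the entity down */
			drm_sched_entity_destroy(entity);
			return r;
		}

		/* queues the job and wakes the scheduler thread */
		drm_sched_entity_push_job(job, entity);
		return 0;
	}
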
From patchwork Tue Aug 14 08:12:25 2018
From: Christian König
To: dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org
Subject: [PATCH 3/4] drm/scheduler: cleanup entity coding style
Date: Tue, 14 Aug 2018 10:12:25 +0200
Message-Id: <20180814081226.76086-3-christian.koenig@amd.com>
In-Reply-To: <20180814081226.76086-1-christian.koenig@amd.com>
References: <20180814081226.76086-1-christian.koenig@amd.com>

Cleanup coding style in sched_entity.c

Signed-off-by: Christian König
---
 drivers/gpu/drm/scheduler/sched_entity.c | 167 ++++++++++++++++++++-----------
 1 file changed, 110 insertions(+), 57 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index 1053f27af9df..1416edb2642a 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -44,7 +44,7 @@
  * the entity
  *
  * Returns 0 on success or a negative error code on failure.
-*/
+ */
 int drm_sched_entity_init(struct drm_sched_entity *entity,
 			  struct drm_sched_rq **rq_list,
 			  unsigned int num_rq_list,
@@ -88,7 +88,7 @@ EXPORT_SYMBOL(drm_sched_entity_init);
  */
 static bool drm_sched_entity_is_idle(struct drm_sched_entity *entity)
 {
-	rmb();
+	rmb(); /* for list_empty to work without lock */
 
 	if (list_empty(&entity->list) ||
 	    spsc_queue_peek(&entity->job_queue) == NULL)
@@ -140,26 +140,15 @@ drm_sched_entity_get_free_sched(struct drm_sched_entity *entity)
 	return rq;
 }
 
-static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f,
-					  struct dma_fence_cb *cb)
-{
-	struct drm_sched_job *job = container_of(cb, struct drm_sched_job,
-						 finish_cb);
-	drm_sched_fence_finished(job->s_fence);
-	WARN_ON(job->s_fence->parent);
-	dma_fence_put(&job->s_fence->finished);
-	job->sched->ops->free_job(job);
-}
-
-
 /**
  * drm_sched_entity_flush - Flush a context entity
  *
  * @entity: scheduler entity
  * @timeout: time to wait in for Q to become empty in jiffies.
  *
- * Splitting drm_sched_entity_fini() into two functions, The first one does the waiting,
- * removes the entity from the runqueue and returns an error when the process was killed.
+ * Splitting drm_sched_entity_fini() into two functions, The first one does the
+ * waiting, removes the entity from the runqueue and returns an error when the
+ * process was killed.
  *
  * Returns the remaining time in jiffies left from the input timeout
  */
@@ -173,7 +162,7 @@ long drm_sched_entity_flush(struct drm_sched_entity *entity, long timeout)
 	/**
 	 * The client will not queue more IBs during this fini, consume existing
 	 * queued IBs or discard them on SIGKILL
-	*/
+	 */
 	if (current->flags & PF_EXITING) {
 		if (timeout)
 			ret = wait_event_timeout(
@@ -195,6 +184,65 @@ long drm_sched_entity_flush(struct drm_sched_entity *entity, long timeout)
 }
 EXPORT_SYMBOL(drm_sched_entity_flush);
 
+/**
+ * drm_sched_entity_kill_jobs - helper for drm_sched_entity_kill_jobs
+ *
+ * @f: signaled fence
+ * @cb: our callback structure
+ *
+ * Signal the scheduler finished fence when the entity in question is killed.
+ */
+static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f,
+					  struct dma_fence_cb *cb)
+{
+	struct drm_sched_job *job = container_of(cb, struct drm_sched_job,
+						 finish_cb);
+
+	drm_sched_fence_finished(job->s_fence);
+	WARN_ON(job->s_fence->parent);
+	dma_fence_put(&job->s_fence->finished);
+	job->sched->ops->free_job(job);
+}
+
+/**
+ * drm_sched_entity_kill_jobs - Make sure all remaining jobs are killed
+ *
+ * @entity: entity which is cleaned up
+ *
+ * Makes sure that all remaining jobs in an entity are killed before it is
+ * destroyed.
+ */
+static void drm_sched_entity_kill_jobs(struct drm_sched_entity *entity)
+{
+	struct drm_sched_job *job;
+	int r;
+
+	while ((job = to_drm_sched_job(spsc_queue_pop(&entity->job_queue)))) {
+		struct drm_sched_fence *s_fence = job->s_fence;
+
+		drm_sched_fence_scheduled(s_fence);
+		dma_fence_set_error(&s_fence->finished, -ESRCH);
+
+		/*
+		 * When pipe is hanged by older entity, new entity might
+		 * not even have chance to submit it's first job to HW
+		 * and so entity->last_scheduled will remain NULL
+		 */
+		if (!entity->last_scheduled) {
+			drm_sched_entity_kill_jobs_cb(NULL, &job->finish_cb);
+			continue;
+		}
+
+		r = dma_fence_add_callback(entity->last_scheduled,
+					   &job->finish_cb,
+					   drm_sched_entity_kill_jobs_cb);
+		if (r == -ENOENT)
+			drm_sched_entity_kill_jobs_cb(NULL, &job->finish_cb);
+		else if (r)
+			DRM_ERROR("fence add callback failed (%d)\n", r);
+	}
+}
+
 /**
  * drm_sched_entity_cleanup - Destroy a context entity
  *
@@ -215,9 +263,6 @@ void drm_sched_entity_fini(struct drm_sched_entity *entity)
 	 * remove them here.
 	 */
 	if (spsc_queue_peek(&entity->job_queue)) {
-		struct drm_sched_job *job;
-		int r;
-
 		/* Park the kernel for a moment to make sure it isn't processing
 		 * our enity.
 		 */
@@ -230,27 +275,7 @@ void drm_sched_entity_fini(struct drm_sched_entity *entity)
 			entity->dependency = NULL;
 		}
 
-		while ((job = to_drm_sched_job(spsc_queue_pop(&entity->job_queue)))) {
-			struct drm_sched_fence *s_fence = job->s_fence;
-			drm_sched_fence_scheduled(s_fence);
-			dma_fence_set_error(&s_fence->finished, -ESRCH);
-
-			/*
-			 * When pipe is hanged by older entity, new entity might
-			 * not even have chance to submit it's first job to HW
-			 * and so entity->last_scheduled will remain NULL
-			 */
-			if (!entity->last_scheduled) {
-				drm_sched_entity_kill_jobs_cb(NULL, &job->finish_cb);
-			} else {
-				r = dma_fence_add_callback(entity->last_scheduled, &job->finish_cb,
-							   drm_sched_entity_kill_jobs_cb);
-				if (r == -ENOENT)
-					drm_sched_entity_kill_jobs_cb(NULL, &job->finish_cb);
-				else if (r)
-					DRM_ERROR("fence add callback failed (%d)\n", r);
-			}
-		}
+		drm_sched_entity_kill_jobs(entity);
 	}
 
 	dma_fence_put(entity->last_scheduled);
@@ -273,21 +298,31 @@ void drm_sched_entity_destroy(struct drm_sched_entity *entity)
 }
 EXPORT_SYMBOL(drm_sched_entity_destroy);
 
-static void drm_sched_entity_wakeup(struct dma_fence *f, struct dma_fence_cb *cb)
+/**
+ * drm_sched_entity_clear_dep - callback to clear the entities dependency
+ */
+static void drm_sched_entity_clear_dep(struct dma_fence *f,
+				       struct dma_fence_cb *cb)
 {
 	struct drm_sched_entity *entity =
 		container_of(cb, struct drm_sched_entity, cb);
+
 	entity->dependency = NULL;
 	dma_fence_put(f);
-	drm_sched_wakeup(entity->rq->sched);
 }
 
-static void drm_sched_entity_clear_dep(struct dma_fence *f, struct dma_fence_cb *cb)
+/**
+ * drm_sched_entity_clear_dep - callback to clear the entities dependency and
+ * wake up scheduler
+ */
+static void drm_sched_entity_wakeup(struct dma_fence *f,
+				    struct dma_fence_cb *cb)
 {
 	struct drm_sched_entity *entity =
 		container_of(cb, struct drm_sched_entity, cb);
-	entity->dependency = NULL;
-	dma_fence_put(f);
+
+	drm_sched_entity_clear_dep(f, cb);
+	drm_sched_wakeup(entity->rq->sched);
 }
 
 /**
@@ -325,19 +360,27 @@ void drm_sched_entity_set_priority(struct drm_sched_entity *entity,
 }
 EXPORT_SYMBOL(drm_sched_entity_set_priority);
 
+/**
+ * drm_sched_entity_add_dependency_cb - add callback for the entities dependency
+ *
+ * @entity: entity with dependency
+ *
+ * Add a callback to the current dependency of the entity to wake up the
+ * scheduler when the entity becomes available.
+ */
 static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity)
 {
 	struct drm_gpu_scheduler *sched = entity->rq->sched;
-	struct dma_fence * fence = entity->dependency;
+	struct dma_fence *fence = entity->dependency;
 	struct drm_sched_fence *s_fence;
 
 	if (fence->context == entity->fence_context ||
-	    fence->context == entity->fence_context + 1) {
-		/*
-		 * Fence is a scheduled/finished fence from a job
-		 * which belongs to the same entity, we can ignore
-		 * fences from ourself
-		 */
+	    fence->context == entity->fence_context + 1) {
+		/*
+		 * Fence is a scheduled/finished fence from a job
+		 * which belongs to the same entity, we can ignore
+		 * fences from ourself
+		 */
 		dma_fence_put(entity->dependency);
 		return false;
 	}
@@ -369,19 +412,29 @@ static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity)
 	return false;
 }
 
+/**
+ * drm_sched_entity_pop_job - get a ready to be scheduled job from the entity
+ *
+ * @entity: entity to get the job from
+ *
+ * Process all dependencies and try to get one job from the entities queue.
+ */
 struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity)
 {
 	struct drm_gpu_scheduler *sched = entity->rq->sched;
-	struct drm_sched_job *sched_job = to_drm_sched_job(
-						spsc_queue_peek(&entity->job_queue));
+	struct drm_sched_job *sched_job;
 
+	sched_job = to_drm_sched_job(spsc_queue_peek(&entity->job_queue));
 	if (!sched_job)
 		return NULL;
 
-	while ((entity->dependency = sched->ops->dependency(sched_job, entity))) {
+	while ((entity->dependency =
+			sched->ops->dependency(sched_job, entity))) {
+
 		if (drm_sched_entity_add_dependency_cb(entity)) {
-
-			trace_drm_sched_job_wait_dep(sched_job, entity->dependency);
+			trace_drm_sched_job_wait_dep(sched_job,
+						     entity->dependency);
 			return NULL;
 		}
 	}
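One pattern worth calling out from this cleanup: the new
drm_sched_entity_kill_jobs() helper replaces an if/else inside the drain loop
with an early "continue", which is what buys back two indentation levels in
drm_sched_entity_fini(). The same transformation in isolation, as generic
illustrative code rather than scheduler code:

	/* Hedged sketch: flatten a nested else branch with "continue". */
	static void discard(int item) { /* handle the special case */ }
	static void process(int item) { /* handle the normal case */ }

	static void drain(const int *items, int n)
	{
		int i;

		for (i = 0; i < n; i++) {
			if (items[i] < 0) {
				discard(items[i]);
				continue;	/* was: } else { ... } one level deeper */
			}

			process(items[i]);
		}
	}
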
From patchwork Tue Aug 14 08:12:26 2018
From: Christian König
To: dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org
Subject: [PATCH 4/4] drm/scheduler: rename gpu_scheduler.c to sched_main.c
Date: Tue, 14 Aug 2018 10:12:26 +0200
Message-Id: <20180814081226.76086-4-christian.koenig@amd.com>
In-Reply-To: <20180814081226.76086-1-christian.koenig@amd.com>
References: <20180814081226.76086-1-christian.koenig@amd.com>

Better match the naming of the other components.

Signed-off-by: Christian König
---
 drivers/gpu/drm/scheduler/Makefile                          | 2 +-
 drivers/gpu/drm/scheduler/{gpu_scheduler.c => sched_main.c} | 0
 2 files changed, 1 insertion(+), 1 deletion(-)
 rename drivers/gpu/drm/scheduler/{gpu_scheduler.c => sched_main.c} (100%)

diff --git a/drivers/gpu/drm/scheduler/Makefile b/drivers/gpu/drm/scheduler/Makefile
index f23785d4b3c8..53863621829f 100644
--- a/drivers/gpu/drm/scheduler/Makefile
+++ b/drivers/gpu/drm/scheduler/Makefile
@@ -20,6 +20,6 @@
 # OTHER DEALINGS IN THE SOFTWARE.
 #
 #
-gpu-sched-y := gpu_scheduler.o sched_fence.o sched_entity.o
+gpu-sched-y := sched_main.o sched_fence.o sched_entity.o

 obj-$(CONFIG_DRM_SCHED) += gpu-sched.o
diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/sched_main.c
similarity index 100%
rename from drivers/gpu/drm/scheduler/gpu_scheduler.c
rename to drivers/gpu/drm/scheduler/sched_main.c
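The rename lands with 100% similarity, so the series leaves the directory
with objects that match their sources one to one. The resulting Makefile
rule, as constructed by patches 2 and 4 together, is:

	gpu-sched-y := sched_main.o sched_fence.o sched_entity.o
	obj-$(CONFIG_DRM_SCHED) += gpu-sched.o

Because git detects the pure rename, the file's history stays reachable with
"git log --follow -- drivers/gpu/drm/scheduler/sched_main.c".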