From patchwork Wed Feb 17 13:38:21 2016
X-Patchwork-Submitter: Varad Gautam
X-Patchwork-Id: 8339021
From: Varad Gautam <varadgautam@gmail.com>
To: dri-devel@lists.freedesktop.org
Subject: [PATCH] drm/vc4: improve throughput by pipelining binning and rendering jobs
Date: Wed, 17 Feb 2016 19:08:21 +0530
Message-Id: <1455716301-14586-1-git-send-email-varadgautam@gmail.com>
X-Mailer: git-send-email 2.6.2
List-Id: Direct Rendering Infrastructure - Development List
The hardware provides us with separate threads for binning and
rendering, and the existing model waits for both to complete before
submitting the next job. Splitting the binning and rendering
submissions reduces idle time and gives us an approximately 20-30%
speedup on several x11perf tests.

Thanks to anholt for suggesting this.

Signed-off-by: Varad Gautam <varadgautam@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
---
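The scheduling handoff is easiest to see outside the driver. Here is
a minimal, self-contained userspace sketch of the two-queue model
this patch moves to. It is illustrative only, not driver code: the
job/queue types, function names, and printfs are invented for the
example, and only the submit/move flow mirrors the patch.

/* pipeline_sketch.c -- build with: cc -o sketch pipeline_sketch.c */
#include <stdio.h>
#include <stddef.h>

struct job {
        int id;
        int uses_binner;                /* stands in for ct0ca != ct0ea */
        struct job *next;
};

struct queue {
        struct job *head, *tail;
};

static struct queue bin_q, render_q;    /* bin_job_list / render_job_list */

static void push(struct queue *q, struct job *j)
{
        j->next = NULL;
        if (q->tail)
                q->tail->next = j;
        else
                q->head = j;
        q->tail = j;
}

static struct job *pop(struct queue *q)
{
        struct job *j = q->head;

        if (j) {
                q->head = j->next;
                if (!q->head)
                        q->tail = NULL;
        }
        return j;
}

static void submit_next_render(void)
{
        if (render_q.head)
                printf("render thread starts job %d\n", render_q.head->id);
}

/* Like vc4_move_job_to_render(): only kick the render thread if it
 * was idle, otherwise the render frame done interrupt picks the job
 * up later.
 */
static void move_to_render(struct job *j)
{
        int was_empty = !render_q.head;

        push(&render_q, j);
        if (was_empty)
                submit_next_render();
}

/* Like vc4_submit_next_bin_job(): jobs with no binner work bypass
 * the binner and go straight to the render queue.
 */
static void submit_next_bin(void)
{
        while (bin_q.head && !bin_q.head->uses_binner)
                move_to_render(pop(&bin_q));
        if (bin_q.head)
                printf("binner starts job %d\n", bin_q.head->id);
}

int main(void)
{
        struct job jobs[] = { {0, 1}, {1, 0}, {2, 1} };
        int i;

        for (i = 0; i < 3; i++)         /* vc4_queue_submit() x3 */
                push(&bin_q, &jobs[i]);
        submit_next_bin();

        move_to_render(pop(&bin_q));    /* binning flush done for job 0 */
        submit_next_bin();              /* binner takes job 2 ...       */

        pop(&render_q);                 /* ... while job 0 finishes     */
        submit_next_render();           /* render frame done -> job 1   */
        return 0;
}

Running it prints the binner starting job 2 while job 0 is still on
the render queue -- the overlap that produces the speedup quoted
above. Job 1, which has no binner work, skips the binner entirely,
matching the ct0ca == ct0ea path below.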
diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
--- a/drivers/gpu/drm/vc4/vc4_drv.h
+++ b/drivers/gpu/drm/vc4/vc4_drv.h
[...]
 static inline struct vc4_exec_info *
-vc4_first_job(struct vc4_dev *vc4)
+vc4_first_bin_job(struct vc4_dev *vc4)
+{
+        if (list_empty(&vc4->bin_job_list))
+                return NULL;
+        return list_first_entry(&vc4->bin_job_list, struct vc4_exec_info, head);
+}
+
+static inline struct vc4_exec_info *
+vc4_first_render_job(struct vc4_dev *vc4)
 {
-        if (list_empty(&vc4->job_list))
+        if (list_empty(&vc4->render_job_list))
                 return NULL;
-        return list_first_entry(&vc4->job_list, struct vc4_exec_info, head);
+        return list_first_entry(&vc4->render_job_list,
+                                struct vc4_exec_info, head);
 }
 
 /**
@@ -394,7 +411,9 @@ int vc4_wait_seqno_ioctl(struct drm_device *dev, void *data,
                          struct drm_file *file_priv);
 int vc4_wait_bo_ioctl(struct drm_device *dev, void *data,
                       struct drm_file *file_priv);
-void vc4_submit_next_job(struct drm_device *dev);
+void vc4_submit_next_bin_job(struct drm_device *dev);
+void vc4_submit_next_render_job(struct drm_device *dev);
+void vc4_move_job_to_render(struct drm_device *dev, struct vc4_exec_info *exec);
 int vc4_wait_for_seqno(struct drm_device *dev, uint64_t seqno,
                        uint64_t timeout_ns, bool interruptible);
 void vc4_job_handle_completed(struct vc4_dev *vc4);
diff --git a/drivers/gpu/drm/vc4/vc4_gem.c b/drivers/gpu/drm/vc4/vc4_gem.c
index 48ce30a..b08f74b 100644
--- a/drivers/gpu/drm/vc4/vc4_gem.c
+++ b/drivers/gpu/drm/vc4/vc4_gem.c
@@ -140,10 +140,10 @@ vc4_save_hang_state(struct drm_device *dev)
         struct vc4_dev *vc4 = to_vc4_dev(dev);
         struct drm_vc4_get_hang_state *state;
         struct vc4_hang_state *kernel_state;
-        struct vc4_exec_info *exec;
+        struct vc4_exec_info *exec[2];
         struct vc4_bo *bo;
         unsigned long irqflags;
-        unsigned int i, unref_list_count;
+        unsigned int i, j, unref_list_count, prev_idx;
 
         kernel_state = kcalloc(1, sizeof(*kernel_state), GFP_KERNEL);
         if (!kernel_state)
@@ -152,37 +152,55 @@ vc4_save_hang_state(struct drm_device *dev)
         state = &kernel_state->user_state;
 
         spin_lock_irqsave(&vc4->job_lock, irqflags);
-        exec = vc4_first_job(vc4);
-        if (!exec) {
+        exec[0] = vc4_first_bin_job(vc4);
+        exec[1] = vc4_first_render_job(vc4);
+        if (!exec[0] && !exec[1]) {
                 spin_unlock_irqrestore(&vc4->job_lock, irqflags);
                 return;
         }
 
-        unref_list_count = 0;
-        list_for_each_entry(bo, &exec->unref_list, unref_head)
-                unref_list_count++;
+        /* Get the bos from both binner and renderer into hang state.
+         */
+        state->bo_count = 0;
+        for (i = 0; i < 2; i++) {
+                if (!exec[i])
+                        continue;
+
+                unref_list_count = 0;
+                list_for_each_entry(bo, &exec[i]->unref_list, unref_head)
+                        unref_list_count++;
+                state->bo_count += exec[i]->bo_count + unref_list_count;
+        }
+
+        kernel_state->bo = kcalloc(state->bo_count,
+                                   sizeof(*kernel_state->bo), GFP_ATOMIC);
 
-        state->bo_count = exec->bo_count + unref_list_count;
-        kernel_state->bo = kcalloc(state->bo_count, sizeof(*kernel_state->bo),
-                                   GFP_ATOMIC);
         if (!kernel_state->bo) {
                 spin_unlock_irqrestore(&vc4->job_lock, irqflags);
                 return;
         }
 
-        for (i = 0; i < exec->bo_count; i++) {
-                drm_gem_object_reference(&exec->bo[i]->base);
-                kernel_state->bo[i] = &exec->bo[i]->base;
-        }
+        prev_idx = 0;
+        for (i = 0; i < 2; i++) {
+                if (!exec[i])
+                        continue;
 
-        list_for_each_entry(bo, &exec->unref_list, unref_head) {
-                drm_gem_object_reference(&bo->base.base);
-                kernel_state->bo[i] = &bo->base.base;
-                i++;
+                for (j = 0; j < exec[i]->bo_count; j++) {
+                        drm_gem_object_reference(&exec[i]->bo[j]->base);
+                        kernel_state->bo[j + prev_idx] = &exec[i]->bo[j]->base;
+                }
+
+                list_for_each_entry(bo, &exec[i]->unref_list, unref_head) {
+                        drm_gem_object_reference(&bo->base.base);
+                        kernel_state->bo[j + prev_idx] = &bo->base.base;
+                        j++;
+                }
+                prev_idx += j;
         }
 
-        state->start_bin = exec->ct0ca;
-        state->start_render = exec->ct1ca;
+        if (exec[0])
+                state->start_bin = exec[0]->ct0ca;
+        if (exec[1])
+                state->start_render = exec[1]->ct1ca;
 
         spin_unlock_irqrestore(&vc4->job_lock, irqflags);
 
@@ -259,7 +277,7 @@ vc4_hangcheck_elapsed(unsigned long data)
         uint32_t ct0ca, ct1ca;
 
         /* If idle, we can stop watching for hangs. */
-        if (list_empty(&vc4->job_list))
+        if (list_empty(&vc4->bin_job_list) && list_empty(&vc4->render_job_list))
                 return;
 
         ct0ca = V3D_READ(V3D_CTNCA(0));
@@ -373,11 +391,13 @@ vc4_flush_caches(struct drm_device *dev)
  * The job_lock should be held during this.
  */
 void
-vc4_submit_next_job(struct drm_device *dev)
+vc4_submit_next_bin_job(struct drm_device *dev)
 {
         struct vc4_dev *vc4 = to_vc4_dev(dev);
-        struct vc4_exec_info *exec = vc4_first_job(vc4);
+        struct vc4_exec_info *exec;
 
+again:
+        exec = vc4_first_bin_job(vc4);
         if (!exec)
                 return;
 
@@ -387,11 +407,40 @@ vc4_submit_next_job(struct drm_device *dev)
         V3D_WRITE(V3D_BPOA, 0);
         V3D_WRITE(V3D_BPOS, 0);
 
-        if (exec->ct0ca != exec->ct0ea)
+        /* Either put the job in the binner if it uses the binner, or
+         * immediately move it to the to-be-rendered queue.
+         */
+        if (exec->ct0ca != exec->ct0ea) {
                 submit_cl(dev, 0, exec->ct0ca, exec->ct0ea);
+        } else {
+                vc4_move_job_to_render(dev, exec);
+                goto again;
+        }
+}
+
+void
+vc4_submit_next_render_job(struct drm_device *dev)
+{
+        struct vc4_dev *vc4 = to_vc4_dev(dev);
+        struct vc4_exec_info *exec = vc4_first_render_job(vc4);
+
+        if (!exec)
+                return;
+
         submit_cl(dev, 1, exec->ct1ca, exec->ct1ea);
 }
 
+void
+vc4_move_job_to_render(struct drm_device *dev, struct vc4_exec_info *exec)
+{
+        struct vc4_dev *vc4 = to_vc4_dev(dev);
+        bool was_empty = list_empty(&vc4->render_job_list);
+
+        list_move_tail(&exec->head, &vc4->render_job_list);
+        if (was_empty)
+                vc4_submit_next_render_job(dev);
+}
+
 static void
 vc4_update_bo_seqnos(struct vc4_exec_info *exec, uint64_t seqno)
 {
@@ -430,14 +479,14 @@ vc4_queue_submit(struct drm_device *dev, struct vc4_exec_info *exec)
         exec->seqno = seqno;
         vc4_update_bo_seqnos(exec, seqno);
 
-        list_add_tail(&exec->head, &vc4->job_list);
+        list_add_tail(&exec->head, &vc4->bin_job_list);
 
         /* If no job was executing, kick ours off.  Otherwise, it'll
-         * get started when the previous job's frame done interrupt
+         * get started when the previous job's flush done interrupt
          * occurs.
          */
-        if (vc4_first_job(vc4) == exec) {
-                vc4_submit_next_job(dev);
+        if (vc4_first_bin_job(vc4) == exec) {
+                vc4_submit_next_bin_job(dev);
                 vc4_queue_hangcheck(dev);
         }
 
@@ -828,7 +877,8 @@ vc4_gem_init(struct drm_device *dev)
 {
         struct vc4_dev *vc4 = to_vc4_dev(dev);
 
-        INIT_LIST_HEAD(&vc4->job_list);
+        INIT_LIST_HEAD(&vc4->bin_job_list);
+        INIT_LIST_HEAD(&vc4->render_job_list);
         INIT_LIST_HEAD(&vc4->job_done_list);
         INIT_LIST_HEAD(&vc4->seqno_cb_list);
         spin_lock_init(&vc4->job_lock);
diff --git a/drivers/gpu/drm/vc4/vc4_irq.c b/drivers/gpu/drm/vc4/vc4_irq.c
index b68060e..b02e728 100644
--- a/drivers/gpu/drm/vc4/vc4_irq.c
+++ b/drivers/gpu/drm/vc4/vc4_irq.c
@@ -30,6 +30,10 @@
  * disables that specific interrupt, and 0s written are ignored
  * (reading either one returns the set of enabled interrupts).
  *
+ * When we take a binning flush done interrupt, we need to submit the
+ * next frame for binning and move the finished frame to the render
+ * thread.
+ *
  * When we take a render frame interrupt, we need to wake the
  * processes waiting for some frame to be done, and get the next frame
  * submitted ASAP (so the hardware doesn't sit idle when there's work
@@ -44,6 +48,7 @@
 #include "vc4_regs.h"
 
 #define V3D_DRIVER_IRQS (V3D_INT_OUTOMEM | \
+                         V3D_INT_FLDONE | \
                          V3D_INT_FRDONE)
 
 DECLARE_WAIT_QUEUE_HEAD(render_wait);
@@ -77,7 +82,7 @@ vc4_overflow_mem_work(struct work_struct *work)
         unsigned long irqflags;
 
         spin_lock_irqsave(&vc4->job_lock, irqflags);
-        current_exec = vc4_first_job(vc4);
+        current_exec = vc4_first_bin_job(vc4);
         if (current_exec) {
                 vc4->overflow_mem->seqno = vc4->finished_seqno + 1;
                 list_add_tail(&vc4->overflow_mem->unref_head,
@@ -98,17 +103,43 @@
 }
 
 static void
-vc4_irq_finish_job(struct drm_device *dev)
+vc4_irq_finish_bin_job(struct drm_device *dev)
+{
+        struct vc4_dev *vc4 = to_vc4_dev(dev);
+        struct vc4_exec_info *exec = vc4_first_bin_job(vc4);
+
+        if (!exec)
+                return;
+
+        vc4_move_job_to_render(dev, exec);
+        vc4_submit_next_bin_job(dev);
+}
+
+static void
+vc4_cancel_bin_job(struct drm_device *dev)
+{
+        struct vc4_dev *vc4 = to_vc4_dev(dev);
+        struct vc4_exec_info *exec = vc4_first_bin_job(vc4);
+
+        if (!exec)
+                return;
+
+        list_move_tail(&exec->head, &vc4->bin_job_list);
+        vc4_submit_next_bin_job(dev);
+}
+
+static void
+vc4_irq_finish_render_job(struct drm_device *dev)
 {
         struct vc4_dev *vc4 = to_vc4_dev(dev);
-        struct vc4_exec_info *exec = vc4_first_job(vc4);
+        struct vc4_exec_info *exec = vc4_first_render_job(vc4);
 
         if (!exec)
                 return;
 
         vc4->finished_seqno++;
         list_move_tail(&exec->head, &vc4->job_done_list);
-        vc4_submit_next_job(dev);
+        vc4_submit_next_render_job(dev);
 
         wake_up_all(&vc4->job_wait_queue);
         schedule_work(&vc4->job_done_work);
@@ -125,9 +156,10 @@ vc4_irq(int irq, void *arg)
         barrier();
         intctl = V3D_READ(V3D_INTCTL);
 
-        /* Acknowledge the interrupts we're handling here. The render
-         * frame done interrupt will be cleared, while OUTOMEM will
-         * stay high until the underlying cause is cleared.
+        /* Acknowledge the interrupts we're handling here. The binner
+         * last flush / render frame done interrupt will be cleared,
+         * while OUTOMEM will stay high until the underlying cause is
+         * cleared.
          */
         V3D_WRITE(V3D_INTCTL, intctl);
 
@@ -138,9 +170,16 @@ vc4_irq(int irq, void *arg)
                 status = IRQ_HANDLED;
         }
 
+        if (intctl & V3D_INT_FLDONE) {
+                spin_lock(&vc4->job_lock);
+                vc4_irq_finish_bin_job(dev);
+                spin_unlock(&vc4->job_lock);
+                status = IRQ_HANDLED;
+        }
+
         if (intctl & V3D_INT_FRDONE) {
                 spin_lock(&vc4->job_lock);
-                vc4_irq_finish_job(dev);
+                vc4_irq_finish_render_job(dev);
                 spin_unlock(&vc4->job_lock);
                 status = IRQ_HANDLED;
         }
@@ -205,6 +244,7 @@ void vc4_irq_reset(struct drm_device *dev)
         V3D_WRITE(V3D_INTENA, V3D_DRIVER_IRQS);
 
         spin_lock_irqsave(&vc4->job_lock, irqflags);
-        vc4_irq_finish_job(dev);
+        vc4_cancel_bin_job(dev);
+        vc4_irq_finish_render_job(dev);
         spin_unlock_irqrestore(&vc4->job_lock, irqflags);
 }
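
For reference, the resulting interrupt-driven flow, assuming two
back-to-back jobs A and B that both have binner work (an illustrative
trace, not driver output):

  queue A:    A -> bin_job_list, binner starts A
  queue B:    B -> bin_job_list (binner busy with A)
  FLDONE(A):  A -> render_job_list, renderer starts A; binner starts B
  FLDONE(B):  B -> render_job_list (renderer still on A)
  FRDONE(A):  A -> job_done_list, renderer starts B, waiters woken
  FRDONE(B):  B -> job_done_list, both threads idle

Binning B overlaps rendering A, which is exactly the idle time the
old single job_list model left unused. On reset, vc4_cancel_bin_job()
requeues the interrupted bin job instead of completing it, so a job
whose binning never finished gets retried rather than reported done.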