From patchwork Mon Aug  6 08:30:12 2018
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 10556683
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Cc: Eero Tamminen
Date: Mon, 6 Aug 2018 09:30:12 +0100
Message-Id: <20180806083017.32215-1-chris@chris-wilson.co.uk>
X-Mailer: git-send-email 2.18.0
Subject: [Intel-gfx] [PATCH 1/6] drm/i915: Limit C-states when waiting for the active request

If we are waiting for the currently executing request, we have a good
idea that it will be completed in the very near future and so want to
cap the CPU_DMA_LATENCY to ensure that we wake up the client quickly.

v2: Not allowed to block in kmalloc after setting TASK_INTERRUPTIBLE.
v3: Avoid the blocking notifier as well for TASK_INTERRUPTIBLE
v4: Beautification?
v5: And ignore the preemptibility of queue_work before schedule.
v6: Use highpri wq to keep our pm_qos window as small as possible.
Testcase: igt/gem_sync/store-default
Signed-off-by: Chris Wilson
Cc: Tvrtko Ursulin
Cc: Joonas Lahtinen
Cc: Eero Tamminen
Cc: Francisco Jerez
Cc: Mika Kuoppala
Cc: Dmitry Rogozhkin
---
 drivers/gpu/drm/i915/i915_request.c | 59 +++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 5c2c93cbab12..67fd2ec75d78 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -1258,6 +1258,58 @@ static bool __i915_wait_request_check_and_reset(struct i915_request *request)
 	return true;
 }
 
+struct wait_dma_qos {
+	struct pm_qos_request req;
+	struct work_struct add, del;
+};
+
+static void __wait_dma_qos_add(struct work_struct *work)
+{
+	struct wait_dma_qos *qos = container_of(work, typeof(*qos), add);
+
+	pm_qos_add_request(&qos->req, PM_QOS_CPU_DMA_LATENCY, 50);
+}
+
+static void __wait_dma_qos_del(struct work_struct *work)
+{
+	struct wait_dma_qos *qos = container_of(work, typeof(*qos), del);
+
+	if (!cancel_work_sync(&qos->add))
+		pm_qos_remove_request(&qos->req);
+
+	kfree(qos);
+}
+
+static struct wait_dma_qos *wait_dma_qos_add(void)
+{
+	struct wait_dma_qos *qos;
+
+	/* Called under TASK_INTERRUPTIBLE, so not allowed to sleep/block. */
+	qos = kzalloc(sizeof(*qos), GFP_NOWAIT | __GFP_NOWARN);
+	if (!qos)
+		return NULL;
+
+	INIT_WORK(&qos->add, __wait_dma_qos_add);
+	INIT_WORK(&qos->del, __wait_dma_qos_del);
+
+	/*
+	 * Schedule the enabling work on the local cpu so that it should only
+	 * take effect if we actually sleep. If schedule() short circuits due to
+	 * our request already being completed, we should then be able to cancel
+	 * the work before it is even run.
+	 */
+	queue_work_on(raw_smp_processor_id(), system_highpri_wq, &qos->add);
+
+	return qos;
+}
+
+static void wait_dma_qos_del(struct wait_dma_qos *qos)
+{
+	/* Defer to worker so not incur extra latency for our woken client. */
+	if (qos)
+		queue_work(system_highpri_wq, &qos->del);
+}
+
 /**
  * i915_request_wait - wait until execution of request has finished
  * @rq: the request to wait upon
@@ -1286,6 +1338,7 @@ long i915_request_wait(struct i915_request *rq,
 	wait_queue_head_t *errq = &rq->i915->gpu_error.wait_queue;
 	DEFINE_WAIT_FUNC(reset, default_wake_function);
 	DEFINE_WAIT_FUNC(exec, default_wake_function);
+	struct wait_dma_qos *qos = NULL;
 	struct intel_wait wait;
 
 	might_sleep();
@@ -1363,6 +1416,11 @@ long i915_request_wait(struct i915_request *rq,
 			break;
 		}
 
+		if (!qos &&
+		    i915_seqno_passed(intel_engine_get_seqno(rq->engine),
+				      wait.seqno - 1))
+			qos = wait_dma_qos_add();
+
 		timeout = io_schedule_timeout(timeout);
 
 		if (intel_wait_complete(&wait) &&
@@ -1412,6 +1470,7 @@ long i915_request_wait(struct i915_request *rq,
 	if (flags & I915_WAIT_LOCKED)
 		remove_wait_queue(errq, &reset);
 	remove_wait_queue(&rq->execute, &exec);
+	wait_dma_qos_del(qos);
 	trace_i915_request_wait_end(rq);
 
 	return timeout;
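
For reference, the pm_qos interface used above (pre-v5.7 kernels) follows a
simple add/remove pattern: while a PM_QOS_CPU_DMA_LATENCY request of N
microseconds is registered, cpuidle avoids C-states whose exit latency
exceeds N, which is what keeps the sleeping waiter's wakeup cheap. The
sketch below is illustrative only and not part of the patch; the module
name is made up for the example and the 50 usec value is copied from the
patch.

/* Minimal sketch (not part of the patch): pre-v5.7 pm_qos usage. */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/pm_qos.h>

/* Illustrative request object; name chosen for the example only. */
static struct pm_qos_request example_cpu_latency_req;

static int __init example_init(void)
{
	/*
	 * Cap CPU wakeup (DMA) latency to 50 usec, as the patch does while a
	 * waiter sleeps on the currently executing request.  cpuidle will not
	 * pick C-states with a longer exit latency until the request is
	 * removed.
	 */
	pm_qos_add_request(&example_cpu_latency_req,
			   PM_QOS_CPU_DMA_LATENCY, 50);
	return 0;
}

static void __exit example_exit(void)
{
	/* Drop the cap so deep C-states become available again. */
	pm_qos_remove_request(&example_cpu_latency_req);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");

The patch cannot simply call pm_qos_add_request() from the waiter itself:
by that point the task has already set TASK_INTERRUPTIBLE and the pm_qos
update may block (see v2/v3 in the changelog), so the add is deferred to a
highpri work item on the local CPU and cancelled again if schedule()
short-circuits because the request already completed.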