From patchwork Thu May  3 06:36:58 2018
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 10377141
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Date: Thu,  3 May 2018 07:36:58 +0100
Message-Id: <20180503063757.22238-12-chris@chris-wilson.co.uk>
X-Mailer: git-send-email 2.17.0
In-Reply-To: <20180503063757.22238-1-chris@chris-wilson.co.uk>
References: <20180503063757.22238-1-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH 12/71] drm/i915: Split execlists/guc reset preparations

In the next patch, we will make the execlists reset prepare callback
take into account preemption by flushing the context-switch handler.
This is not applicable to the GuC submission backend, so split the two
into their own backend callbacks.
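
In other words, each submission backend ends up installing its own
reset.prepare hook instead of sharing one. A rough standalone sketch of
that shape (plain C with simplified stand-in types and illustrative
function names, not the driver's actual intel_engine_cs or hook bodies):

/*
 * Standalone sketch, not part of the patch: simplified stand-ins for
 * intel_engine_cs and engine->reset.prepare, showing the per-backend split.
 * The real hooks live in intel_lrc.c and intel_guc_submission.c.
 */
#include <stdio.h>

struct engine;

struct reset_ops {
	/* Backend-specific hook run before the engine reset itself. */
	void (*prepare)(struct engine *engine);
};

struct engine {
	const char *name;
	struct reset_ops reset;
};

/* Execlists backend: will additionally flush the context-switch handler. */
static void execlists_prepare(struct engine *engine)
{
	printf("%s: stop tasklet, flush context-switch handler\n", engine->name);
}

/* GuC backend: no context-switch handler, but drains the preempt workqueue. */
static void guc_prepare(struct engine *engine)
{
	printf("%s: stop tasklet, flush preemption workqueue\n", engine->name);
}

int main(void)
{
	struct engine rcs = { .name = "rcs0" };

	/* Default submission installs the execlists hook... */
	rcs.reset.prepare = execlists_prepare;
	rcs.reset.prepare(&rcs);

	/* ...and enabling GuC submission overrides it with its own. */
	rcs.reset.prepare = guc_prepare;
	rcs.reset.prepare(&rcs);

	return 0;
}

With the hook split per backend, the upcoming execlists-only flush of the
context-switch handler can live entirely in intel_lrc.c, while the GuC path
keeps its preempt-workqueue flush to itself.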
Signed-off-by: Chris Wilson
Cc: Michał Winiarski
CC: Michel Thierry
Cc: Jeff McGee
Reviewed-by: Jeff McGee
---
 drivers/gpu/drm/i915/intel_guc_submission.c | 41 +++++++++++++++++++++
 drivers/gpu/drm/i915/intel_lrc.c            | 11 +-----
 2 files changed, 42 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_guc_submission.c b/drivers/gpu/drm/i915/intel_guc_submission.c
index 9dbbe5b5390b..6dda87edb4b6 100644
--- a/drivers/gpu/drm/i915/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/intel_guc_submission.c
@@ -798,6 +798,46 @@ static void guc_submission_tasklet(unsigned long data)
 		guc_dequeue(engine);
 }
 
+static struct i915_request *
+guc_reset_prepare(struct intel_engine_cs *engine)
+{
+	struct intel_engine_execlists * const execlists = &engine->execlists;
+
+	GEM_TRACE("%s\n", engine->name);
+
+	/*
+	 * Prevent request submission to the hardware until we have
+	 * completed the reset in i915_gem_reset_finish(). If a request
+	 * is completed by one engine, it may then queue a request
+	 * to a second via its execlists->tasklet *just* as we are
+	 * calling engine->init_hw() and also writing the ELSP.
+	 * Turning off the execlists->tasklet until the reset is over
+	 * prevents the race.
+	 *
+	 * Note that this needs to be a single atomic operation on the
+	 * tasklet (flush existing tasks, prevent new tasks) to prevent
+	 * a race between reset and set-wedged. It is not, so we do the best
+	 * we can atm and make sure we don't lock the machine up in the more
+	 * common case of recursively being called from set-wedged from inside
+	 * i915_reset.
+	 */
+	if (!atomic_read(&execlists->tasklet.count))
+		tasklet_kill(&execlists->tasklet);
+	tasklet_disable(&execlists->tasklet);
+
+	/*
+	 * We're using worker to queue preemption requests from the tasklet in
+	 * GuC submission mode.
+	 * Even though tasklet was disabled, we may still have a worker queued.
+	 * Let's make sure that all workers scheduled before disabling the
+	 * tasklet are completed before continuing with the reset.
+	 */
+	if (engine->i915->guc.preempt_wq)
+		flush_workqueue(engine->i915->guc.preempt_wq);
+
+	return i915_gem_find_active_request(engine);
+}
+
 /*
  * Everything below here is concerned with setup & teardown, and is
  * therefore not part of the somewhat time-critical batch-submission
@@ -1258,6 +1298,7 @@ int intel_guc_submission_enable(struct intel_guc *guc)
 			&engine->execlists;
 
 		execlists->tasklet.func = guc_submission_tasklet;
+		engine->reset.prepare = guc_reset_prepare;
 		engine->park = guc_submission_park;
 		engine->unpark = guc_submission_unpark;
 
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index d23386823d94..5c29991c8183 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1856,16 +1856,6 @@ execlists_reset_prepare(struct intel_engine_cs *engine)
 		tasklet_kill(&execlists->tasklet);
 	tasklet_disable(&execlists->tasklet);
 
-	/*
-	 * We're using worker to queue preemption requests from the tasklet in
-	 * GuC submission mode.
-	 * Even though tasklet was disabled, we may still have a worker queued.
-	 * Let's make sure that all workers scheduled before disabling the
-	 * tasklet are completed before continuing with the reset.
-	 */
-	if (engine->i915->guc.preempt_wq)
-		flush_workqueue(engine->i915->guc.preempt_wq);
-
 	return i915_gem_find_active_request(engine);
 }
 
@@ -2270,6 +2260,7 @@ static void execlists_set_default_submission(struct intel_engine_cs *engine)
 	engine->cancel_requests = execlists_cancel_requests;
 	engine->schedule = execlists_schedule;
 	engine->execlists.tasklet.func = execlists_submission_tasklet;
+	engine->reset.prepare = execlists_reset_prepare;
 
 	engine->park = NULL;
 	engine->unpark = NULL;