From patchwork Wed Aug 14 09:26:36 2019
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 11093627
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Date: Wed, 14 Aug 2019 10:26:36 +0100
Message-Id: <20190814092643.1908-1-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH 1/8] drm/i915/execlists: Lift process_csb() out of the irq-off spinlock

If we only call process_csb() from the tasklet, we can push it out of the
irq-off spinlock, though we lose the ability to bypass ksoftirqd interrupt
processing on direct submission paths. The penalty is that schedule_out may
then be called concurrently with schedule_in, requiring us to handle the
usage count (baked into the low bits of the pointer itself) atomically. As
we do kick the tasklets (via local_bh_enable()) after our submission, there
is an opportunity to see whether we can pull the local softirq processing
back from ksoftirqd.
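[The counted-pointer scheme described above can be distilled into a few
lines. The sketch below is illustrative only: it uses C11 atomics and
invented names (sched_in/sched_out, struct engine) rather than the
driver's try_cmpxchg()/ptr_inc()/ptr_dec() helpers, so it can be compiled
and reasoned about outside the kernel. The low bits of the pointer hold
"extra users beyond the first"; the remaining bits hold the (suitably
aligned) engine pointer.]

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define COUNT_BITS 2
#define COUNT_MASK (((uintptr_t)1 << COUNT_BITS) - 1)

struct engine { int id; };

/*
 * sched_in() runs from a single submission thread; sched_out() may run
 * concurrently (e.g. from interrupt context), hence the cmpxchg loops.
 */
static void sched_in(_Atomic uintptr_t *inflight, struct engine *engine)
{
	uintptr_t old = atomic_load_explicit(inflight, memory_order_relaxed);

	do {
		if (!old) {
			/*
			 * First user: nothing can be scheduled out of an
			 * empty slot, so a plain store cannot be raced.
			 */
			atomic_store_explicit(inflight, (uintptr_t)engine,
					      memory_order_release);
			return;
		}
		/* Already in flight: bump the count held in the low bits. */
	} while (!atomic_compare_exchange_weak(inflight, &old, old + 1));
}

/* Returns true when the last user is gone and the slot is now empty. */
static bool sched_out(_Atomic uintptr_t *inflight)
{
	uintptr_t old = atomic_load_explicit(inflight, memory_order_relaxed);
	uintptr_t cur;

	do {
		cur = (old & COUNT_MASK) ? old - 1 : 0;
	} while (!atomic_compare_exchange_weak(inflight, &old, cur));

	return !cur;
}

[Note that, as in the patch, atomic_compare_exchange_weak() reloads the
old value on failure, mirroring try_cmpxchg() semantics.]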
Signed-off-by: Chris Wilson
---
 drivers/gpu/drm/i915/gt/intel_context_types.h |   4 +-
 drivers/gpu/drm/i915/gt/intel_engine_cs.c     |   2 +-
 drivers/gpu/drm/i915/gt/intel_lrc.c           | 130 +++++++++++-------
 drivers/gpu/drm/i915/i915_utils.h             |  20 ++-
 4 files changed, 94 insertions(+), 62 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h
index a632b20ec4d8..d8ce266c049f 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -41,9 +41,7 @@ struct intel_context {
 	struct intel_engine_cs *engine;
 	struct intel_engine_cs *inflight;
 #define intel_context_inflight(ce) ptr_mask_bits((ce)->inflight, 2)
-#define intel_context_inflight_count(ce)  ptr_unmask_bits((ce)->inflight, 2)
-#define intel_context_inflight_inc(ce) ptr_count_inc(&(ce)->inflight)
-#define intel_context_inflight_dec(ce) ptr_count_dec(&(ce)->inflight)
+#define intel_context_inflight_count(ce) ptr_unmask_bits((ce)->inflight, 2)

 	struct i915_address_space *vm;
 	struct i915_gem_context *gem_context;
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index d20750e420c0..abd4fde2e52d 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -1460,7 +1460,7 @@ int intel_enable_engine_stats(struct intel_engine_cs *engine)

 		for (port = execlists->pending; (rq = *port); port++) {
 			/* Exclude any contexts already counted in active */
-			if (intel_context_inflight_count(rq->hw_context) == 1)
+			if (!intel_context_inflight_count(rq->hw_context))
 				engine->stats.active++;
 		}
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 5c26c4ae139b..09d8cb8615cf 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -547,27 +547,39 @@ execlists_context_status_change(struct i915_request *rq, unsigned long status)
 				   status, rq);
 }

+static inline struct intel_engine_cs *
+__execlists_schedule_in(struct i915_request *rq)
+{
+	struct intel_engine_cs * const engine = rq->engine;
+	struct intel_context * const ce = rq->hw_context;
+
+	intel_context_get(ce);
+
+	intel_gt_pm_get(engine->gt);
+	execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_IN);
+	intel_engine_context_in(engine);
+
+	return engine;
+}
+
 static inline struct i915_request *
 execlists_schedule_in(struct i915_request *rq, int idx)
 {
-	struct intel_context *ce = rq->hw_context;
-	int count;
+	struct intel_context * const ce = rq->hw_context;
+	struct intel_engine_cs *old;

+	GEM_BUG_ON(!intel_engine_pm_is_awake(rq->engine));
 	trace_i915_request_in(rq, idx);

-	count = intel_context_inflight_count(ce);
-	if (!count) {
-		intel_context_get(ce);
-		ce->inflight = rq->engine;
-
-		intel_gt_pm_get(ce->inflight->gt);
-		execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_IN);
-		intel_engine_context_in(ce->inflight);
-	}
+	old = READ_ONCE(ce->inflight);
+	do {
+		if (!old) {
+			WRITE_ONCE(ce->inflight, __execlists_schedule_in(rq));
+			break;
+		}
+	} while (!try_cmpxchg(&ce->inflight, &old, ptr_inc(old)));

-	intel_context_inflight_inc(ce);
 	GEM_BUG_ON(intel_context_inflight(ce) != rq->engine);
-
 	return i915_request_get(rq);
 }

@@ -581,35 +593,45 @@ static void kick_siblings(struct i915_request *rq, struct intel_context *ce)
 }

 static inline void
-execlists_schedule_out(struct i915_request *rq)
+__execlists_schedule_out(struct i915_request *rq,
+			 struct intel_engine_cs * const engine)
 {
-	struct intel_context *ce = rq->hw_context;
+	struct intel_context * const ce = rq->hw_context;

-	GEM_BUG_ON(!intel_context_inflight_count(ce));
+	intel_engine_context_out(engine);
+	execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_OUT);
+	intel_gt_pm_put(engine->gt);

-	trace_i915_request_out(rq);
+	/*
+	 * If this is part of a virtual engine, its next request may
+	 * have been blocked waiting for access to the active context.
+	 * We have to kick all the siblings again in case we need to
+	 * switch (e.g. the next request is not runnable on this
+	 * engine). Hopefully, we will already have submitted the next
+	 * request before the tasklet runs and do not need to rebuild
+	 * each virtual tree and kick everyone again.
+	 */
+	if (ce->engine != engine)
+		kick_siblings(rq, ce);

-	intel_context_inflight_dec(ce);
-	if (!intel_context_inflight_count(ce)) {
-		intel_engine_context_out(ce->inflight);
-		execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_OUT);
-		intel_gt_pm_put(ce->inflight->gt);
+	intel_context_put(ce);
+}

-		/*
-		 * If this is part of a virtual engine, its next request may
-		 * have been blocked waiting for access to the active context.
-		 * We have to kick all the siblings again in case we need to
-		 * switch (e.g. the next request is not runnable on this
-		 * engine). Hopefully, we will already have submitted the next
-		 * request before the tasklet runs and do not need to rebuild
-		 * each virtual tree and kick everyone again.
-		 */
-		ce->inflight = NULL;
-		if (rq->engine != ce->engine)
-			kick_siblings(rq, ce);
+static inline void
+execlists_schedule_out(struct i915_request *rq)
+{
+	struct intel_context * const ce = rq->hw_context;
+	struct intel_engine_cs *cur, *old;

-		intel_context_put(ce);
-	}
+	trace_i915_request_out(rq);
+	GEM_BUG_ON(intel_context_inflight(ce) != rq->engine);
+
+	old = READ_ONCE(ce->inflight);
+	do
+		cur = ptr_unmask_bits(old, 2) ? ptr_dec(old) : NULL;
+	while (!try_cmpxchg(&ce->inflight, &old, cur));
+	if (!cur)
+		__execlists_schedule_out(rq, old);

 	i915_request_put(rq);
 }

@@ -684,6 +706,9 @@ assert_pending_valid(const struct intel_engine_execlists *execlists,

 	trace_ports(execlists, msg, execlists->pending);

+	if (!execlists->pending[0])
+		return false;
+
 	if (execlists->pending[execlists_num_ports(execlists)])
 		return false;

@@ -944,9 +969,21 @@ need_timeslice(struct intel_engine_cs *engine, const struct i915_request *rq)

 static bool enable_timeslice(struct intel_engine_cs *engine)
 {
-	struct i915_request *last = last_active(&engine->execlists);
+	struct i915_request * const *port;
+	int hint;
+
+	port = engine->execlists.active;
+	while (port[0] && i915_request_completed(port[0]))
+		port++;
+	if (!port[0])
+		return false;

-	return last && need_timeslice(engine, last);
+	hint = engine->execlists.queue_priority_hint;
+	if (port[1])
+		hint = max(rq_prio(port[1]), hint);
+
+	/* Compare the two end-points as an unlocked approximation */
+	return hint >= effective_prio(port[0]);
 }

 static void record_preemption(struct intel_engine_execlists *execlists)
@@ -1356,7 +1393,6 @@ static void process_csb(struct intel_engine_cs *engine)
 	const u8 num_entries = execlists->csb_size;
 	u8 head, tail;

-	lockdep_assert_held(&engine->active.lock);
 	GEM_BUG_ON(USES_GUC_SUBMISSION(engine->i915));

 	/*
@@ -1427,15 +1463,14 @@ static void process_csb(struct intel_engine_cs *engine)
 				       execlists->pending,
 				       execlists_num_ports(execlists) *
 				       sizeof(*execlists->pending));
-			execlists->pending[0] = NULL;
-
-			trace_ports(execlists, "promoted", execlists->active);

 			if (enable_timeslice(engine))
 				mod_timer(&execlists->timer, jiffies + 1);

 			if (!inject_preempt_hang(execlists))
 				ring_set_paused(engine, 0);
+
+			WRITE_ONCE(execlists->pending[0], NULL);
 			break;

 		case CSB_COMPLETE: /* port0 completed, advanced to port1 */
@@ -1479,8 +1514,6 @@ static void process_csb(struct intel_engine_cs *engine)
 static void __execlists_submission_tasklet(struct intel_engine_cs *const engine)
 {
 	lockdep_assert_held(&engine->active.lock);
-
-	process_csb(engine);
 	if (!engine->execlists.pending[0])
 		execlists_dequeue(engine);
 }
@@ -1494,9 +1527,12 @@ static void execlists_submission_tasklet(unsigned long data)
 	struct intel_engine_cs * const engine = (struct intel_engine_cs *)data;
 	unsigned long flags;

-	spin_lock_irqsave(&engine->active.lock, flags);
-	__execlists_submission_tasklet(engine);
-	spin_unlock_irqrestore(&engine->active.lock, flags);
+	process_csb(engine);
+	if (!engine->execlists.pending[0]) {
+		spin_lock_irqsave(&engine->active.lock, flags);
+		__execlists_submission_tasklet(engine);
+		spin_unlock_irqrestore(&engine->active.lock, flags);
+	}
 }

 static void execlists_submission_timer(struct timer_list *timer)
diff --git a/drivers/gpu/drm/i915/i915_utils.h b/drivers/gpu/drm/i915/i915_utils.h
index d652ba5d2320..562f756da421 100644
--- a/drivers/gpu/drm/i915/i915_utils.h
+++ b/drivers/gpu/drm/i915/i915_utils.h
@@ -161,17 +161,15 @@ __check_struct_size(size_t base, size_t arr, size_t count, size_t *size)
 	((typeof(ptr))((unsigned long)(ptr) | __bits));			\
 })

-#define ptr_count_dec(p_ptr) do {					\
-	typeof(p_ptr) __p = (p_ptr);					\
-	unsigned long __v = (unsigned long)(*__p);			\
-	*__p = (typeof(*p_ptr))(--__v);					\
-} while (0)
-
-#define ptr_count_inc(p_ptr) do {					\
-	typeof(p_ptr) __p = (p_ptr);					\
-	unsigned long __v = (unsigned long)(*__p);			\
-	*__p = (typeof(*p_ptr))(++__v);					\
-} while (0)
+#define ptr_dec(ptr) ({							\
+	unsigned long __v = (unsigned long)(ptr);			\
+	(typeof(ptr))(__v - 1);						\
+})
+
+#define ptr_inc(ptr) ({							\
+	unsigned long __v = (unsigned long)(ptr);			\
+	(typeof(ptr))(__v + 1);						\
+})

 #define page_mask_bits(ptr) ptr_mask_bits(ptr, PAGE_SHIFT)
 #define page_unmask_bits(ptr) ptr_unmask_bits(ptr, PAGE_SHIFT)

From patchwork Wed Aug 14 09:26:37 2019
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 11093629
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Date: Wed, 14 Aug 2019 10:26:37 +0100
Message-Id: <20190814092643.1908-2-chris@chris-wilson.co.uk>
In-Reply-To: <20190814092643.1908-1-chris@chris-wilson.co.uk>
References: <20190814092643.1908-1-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH 2/8] drm/i915/gt: Track timeline activeness in enter/exit

Lift moving the timeline to/from the active_list on enter/exit, in order
to shorten the span of active tracking compared to the existing pin/unpin.
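[The enter/exit pattern this patch introduces is worth distilling: the
object is added to the shared active list only on the 0->1 transition and
removed on 1->0, so nested enter/exit pairs are cheap and the list holds
exactly the set of timelines with requests in flight. The kernel-style
sketch below uses invented names (struct tracked/tracker) and assumes the
counter itself is serialised by the caller, as the patch's comment in
intel_timeline_types.h spells out.]

#include <linux/list.h>
#include <linux/mutex.h>

struct tracked {
	unsigned int active_count;	/* protected by caller serialisation */
	struct list_head link;
};

struct tracker {
	struct mutex mutex;		/* protects active_list */
	struct list_head active_list;
};

/* 0 -> 1: first request in flight, start tracking on the global list */
static void tracked_enter(struct tracker *t, struct tracked *tl)
{
	if (tl->active_count++)
		return;

	mutex_lock(&t->mutex);
	list_add(&tl->link, &t->active_list);
	mutex_unlock(&t->mutex);
}

/* 1 -> 0: last request retired; idle-only cleanup can be done here */
static void tracked_exit(struct tracker *t, struct tracked *tl)
{
	if (--tl->active_count)
		return;

	mutex_lock(&t->mutex);
	list_del(&tl->link);
	mutex_unlock(&t->mutex);
}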
Signed-off-by: Chris Wilson
Reviewed-by: Matthew Auld
---
 drivers/gpu/drm/i915/gem/i915_gem_pm.c        |  1 -
 drivers/gpu/drm/i915/gt/intel_context.c       |  2 +
 drivers/gpu/drm/i915/gt/intel_engine_pm.c     |  2 +
 drivers/gpu/drm/i915/gt/intel_lrc.c           |  4 +
 drivers/gpu/drm/i915/gt/intel_timeline.c      | 98 +++++++------------
 drivers/gpu/drm/i915/gt/intel_timeline.h      |  3 +-
 .../gpu/drm/i915/gt/intel_timeline_types.h    | 18 ++++
 drivers/gpu/drm/i915/gt/selftest_timeline.c   |  2 -
 8 files changed, 64 insertions(+), 66 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pm.c b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
index 17e3618241c5..92e53c25424c 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
@@ -37,7 +37,6 @@ static void i915_gem_park(struct drm_i915_private *i915)
 	for_each_engine(engine, i915, id)
 		call_idle_barriers(engine); /* cleanup after wedging */

-	intel_timelines_park(i915);
 	i915_vma_parked(i915);

 	i915_globals_park();
diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
index 77833f1558a9..9114953bf920 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -280,10 +280,12 @@ int __init i915_global_context_init(void)
 void intel_context_enter_engine(struct intel_context *ce)
 {
 	intel_engine_pm_get(ce->engine);
+	intel_timeline_enter(ce->timeline);
 }

 void intel_context_exit_engine(struct intel_context *ce)
 {
+	intel_timeline_exit(ce->timeline);
 	intel_engine_pm_put(ce->engine);
 }
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
index 49ad02c3720f..f3f0109f9e22 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
@@ -66,6 +66,8 @@ static bool switch_to_kernel_context(struct intel_engine_cs *engine)
 		/* Context switch failed, hope for the best! Maybe reset? */
 		return true;

+	intel_timeline_enter(rq->timeline);
+
 	/* Check again on the next retirement. */
 	engine->wakeref_serial = engine->serial + 1;

 	i915_request_add_active_barriers(rq);
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 09d8cb8615cf..116d7b3f946b 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -3255,6 +3255,8 @@ static void virtual_context_enter(struct intel_context *ce)

 	for (n = 0; n < ve->num_siblings; n++)
 		intel_engine_pm_get(ve->siblings[n]);
+
+	intel_timeline_enter(ce->timeline);
 }

 static void virtual_context_exit(struct intel_context *ce)
@@ -3262,6 +3264,8 @@ static void virtual_context_exit(struct intel_context *ce)
 	struct virtual_engine *ve = container_of(ce, typeof(*ve), context);
 	unsigned int n;

+	intel_timeline_exit(ce->timeline);
+
 	for (n = 0; n < ve->num_siblings; n++)
 		intel_engine_pm_put(ve->siblings[n]);
 }
diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
index 6daa9eb59e19..4af0b9801d91 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
+++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
@@ -278,64 +278,11 @@ void intel_timelines_init(struct drm_i915_private *i915)
 	timelines_init(&i915->gt);
 }

-static void timeline_add_to_active(struct intel_timeline *tl)
-{
-	struct intel_gt_timelines *gt = &tl->gt->timelines;
-
-	mutex_lock(&gt->mutex);
-	list_add(&tl->link, &gt->active_list);
-	mutex_unlock(&gt->mutex);
-}
-
-static void timeline_remove_from_active(struct intel_timeline *tl)
-{
-	struct intel_gt_timelines *gt = &tl->gt->timelines;
-
-	mutex_lock(&gt->mutex);
-	list_del(&tl->link);
-	mutex_unlock(&gt->mutex);
-}
-
-static void timelines_park(struct intel_gt *gt)
-{
-	struct intel_gt_timelines *timelines = &gt->timelines;
-	struct intel_timeline *timeline;
-
-	mutex_lock(&timelines->mutex);
-	list_for_each_entry(timeline, &timelines->active_list, link) {
-		/*
-		 * All known fences are completed so we can scrap
-		 * the current sync point tracking and start afresh,
-		 * any attempt to wait upon a previous sync point
-		 * will be skipped as the fence was signaled.
-		 */
-		i915_syncmap_free(&timeline->sync);
-	}
-	mutex_unlock(&timelines->mutex);
-}
-
-/**
- * intel_timelines_park - called when the driver idles
- * @i915: the drm_i915_private device
- *
- * When the driver is completely idle, we know that all of our sync points
- * have been signaled and our tracking is then entirely redundant. Any request
- * to wait upon an older sync point will be completed instantly as we know
- * the fence is signaled and therefore we will not even look them up in the
- * sync point map.
- */
-void intel_timelines_park(struct drm_i915_private *i915)
-{
-	timelines_park(&i915->gt);
-}
-
 void intel_timeline_fini(struct intel_timeline *timeline)
 {
 	GEM_BUG_ON(timeline->pin_count);
 	GEM_BUG_ON(!list_empty(&timeline->requests));

-	i915_syncmap_free(&timeline->sync);
-
 	if (timeline->hwsp_cacheline)
 		cacheline_free(timeline->hwsp_cacheline);
 	else
@@ -370,6 +317,7 @@ int intel_timeline_pin(struct intel_timeline *tl)
 	if (tl->pin_count++)
 		return 0;
 	GEM_BUG_ON(!tl->pin_count);
+	GEM_BUG_ON(tl->active_count);

 	err = i915_vma_pin(tl->hwsp_ggtt, 0, 0, PIN_GLOBAL | PIN_HIGH);
 	if (err)
@@ -380,7 +328,6 @@ int intel_timeline_pin(struct intel_timeline *tl)
 		offset_in_page(tl->hwsp_offset);

 	cacheline_acquire(tl->hwsp_cacheline);
-	timeline_add_to_active(tl);

 	return 0;

@@ -389,6 +336,40 @@ int intel_timeline_pin(struct intel_timeline *tl)
 	return err;
 }

+void intel_timeline_enter(struct intel_timeline *tl)
+{
+	struct intel_gt_timelines *timelines = &tl->gt->timelines;
+
+	GEM_BUG_ON(!tl->pin_count);
+	if (tl->active_count++)
+		return;
+	GEM_BUG_ON(!tl->active_count); /* overflow? */
+
+	mutex_lock(&timelines->mutex);
+	list_add(&tl->link, &timelines->active_list);
+	mutex_unlock(&timelines->mutex);
+}
+
+void intel_timeline_exit(struct intel_timeline *tl)
+{
+	struct intel_gt_timelines *timelines = &tl->gt->timelines;
+
+	GEM_BUG_ON(!tl->active_count);
+	if (--tl->active_count)
+		return;
+
+	mutex_lock(&timelines->mutex);
+	list_del(&tl->link);
+	mutex_unlock(&timelines->mutex);
+
+	/*
+	 * Since this timeline is idle, all barriers upon which we were waiting
+	 * must also be complete and so we can discard the last used barriers
+	 * without loss of information.
+	 */
+	i915_syncmap_free(&tl->sync);
+}
+
 static u32 timeline_advance(struct intel_timeline *tl)
 {
 	GEM_BUG_ON(!tl->pin_count);
@@ -546,16 +527,9 @@ void intel_timeline_unpin(struct intel_timeline *tl)
 	if (--tl->pin_count)
 		return;

-	timeline_remove_from_active(tl);
+	GEM_BUG_ON(tl->active_count);
 	cacheline_release(tl->hwsp_cacheline);

-	/*
-	 * Since this timeline is idle, all barriers upon which we were waiting
-	 * must also be complete and so we can discard the last used barriers
-	 * without loss of information.
-	 */
-	i915_syncmap_free(&tl->sync);
-
 	__i915_vma_unpin(tl->hwsp_ggtt);
 }
diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.h b/drivers/gpu/drm/i915/gt/intel_timeline.h
index e08cebf64833..f583af1ba18d 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline.h
+++ b/drivers/gpu/drm/i915/gt/intel_timeline.h
@@ -77,9 +77,11 @@ static inline bool intel_timeline_sync_is_later(struct intel_timeline *tl,
 }

 int intel_timeline_pin(struct intel_timeline *tl);
+void intel_timeline_enter(struct intel_timeline *tl);
 int intel_timeline_get_seqno(struct intel_timeline *tl,
 			     struct i915_request *rq,
 			     u32 *seqno);
+void intel_timeline_exit(struct intel_timeline *tl);
 void intel_timeline_unpin(struct intel_timeline *tl);

 int intel_timeline_read_hwsp(struct i915_request *from,
@@ -87,7 +89,6 @@ int intel_timeline_read_hwsp(struct i915_request *from,
 			     u32 *hwsp_offset);

 void intel_timelines_init(struct drm_i915_private *i915);
-void intel_timelines_park(struct drm_i915_private *i915);
 void intel_timelines_fini(struct drm_i915_private *i915);

 #endif
diff --git a/drivers/gpu/drm/i915/gt/intel_timeline_types.h b/drivers/gpu/drm/i915/gt/intel_timeline_types.h
index 9a71aea7a338..b1a9f0c54bf0 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_timeline_types.h
@@ -25,7 +25,25 @@ struct intel_timeline {

 	struct mutex mutex; /* protects the flow of requests */

+	/*
+	 * pin_count and active_count track essentially the same thing:
+	 * How many requests are in flight or may be under construction.
+	 *
+	 * We need two distinct counters so that we can assign different
+	 * lifetimes to the events for different use-cases. For example,
+	 * we want to permanently keep the timeline pinned for the kernel
+	 * context so that we can issue requests at any time without having
+	 * to acquire space in the GGTT. However, we want to keep tracking
+	 * the activity (to be able to detect when we become idle) along that
+	 * permanently pinned timeline and so end up requiring two counters.
+	 *
+	 * Note that the active_count is protected by the intel_timeline.mutex,
+	 * but the pin_count is protected by a combination of serialisation
+	 * from the intel_context caller plus internal atomicity.
+	 */
 	unsigned int pin_count;
+	unsigned int active_count;
+
 	const u32 *hwsp_seqno;
 	struct i915_vma *hwsp_ggtt;
 	u32 hwsp_offset;
diff --git a/drivers/gpu/drm/i915/gt/selftest_timeline.c b/drivers/gpu/drm/i915/gt/selftest_timeline.c
index f0a840030382..d54113697745 100644
--- a/drivers/gpu/drm/i915/gt/selftest_timeline.c
+++ b/drivers/gpu/drm/i915/gt/selftest_timeline.c
@@ -816,8 +816,6 @@ static int live_hwsp_recycle(void *arg)
 			if (err)
 				goto out;
-
-			intel_timelines_park(i915); /* Encourage recycling! */
 		} while (!__igt_timeout(end_time, NULL));
 	}

From patchwork Wed Aug 14 09:26:38 2019
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 11093569
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Date: Wed, 14 Aug 2019 10:26:38 +0100
Message-Id: <20190814092643.1908-3-chris@chris-wilson.co.uk>
In-Reply-To: <20190814092643.1908-1-chris@chris-wilson.co.uk>
References: <20190814092643.1908-1-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH 3/8] drm/i915/gt: Convert timeline tracking to spinlock

Convert the manipulation of the active timeline list to a spinlock so
that we can perform the updates from underneath a quick interrupt
callback.
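[A subtlety in this patch is the drop-the-lock-and-restart walk in
__intel_gt_unset_wedged(): we must sleep (dma_fence_default_wait())
while walking a spinlock-guarded list, so the lock is released around
the wait and the iteration restarted from the list head. The sketch
below is illustrative only, with invented names; the real code also
holds a reference on the request so it cannot vanish while unlocked.]

#include <linux/completion.h>
#include <linux/list.h>
#include <linux/spinlock.h>

struct entry {
	struct list_head link;
	struct completion done;	/* stand-in for the request fence */
};

static void drain_active(spinlock_t *lock, struct list_head *head)
{
	struct entry *e;

	spin_lock(lock);
	list_for_each_entry(e, head, link) {
		if (completion_done(&e->done))
			continue;

		spin_unlock(lock);	/* cannot sleep under a spinlock */
		wait_for_completion(&e->done);
		spin_lock(lock);

		/*
		 * The list may have changed while unlocked; point the
		 * cursor back at the head so that list_for_each_entry()
		 * restarts from the first entry. This is the same trick
		 * as the patch's list_entry(&timelines->active_list, ...).
		 */
		e = list_entry(head, typeof(*e), link);
	}
	spin_unlock(lock);
}

[Restarting is safe as long as entries guarantee forward progress, e.g.
completed entries are skipped or remove themselves from the list.]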
Signed-off-by: Chris Wilson
Reviewed-by: Matthew Auld
---
 drivers/gpu/drm/i915/gt/intel_gt_types.h |  2 +-
 drivers/gpu/drm/i915/gt/intel_reset.c    | 10 ++++++++--
 drivers/gpu/drm/i915/gt/intel_timeline.c | 12 +++++-------
 drivers/gpu/drm/i915/i915_gem.c          | 14 +++++++-------
 4 files changed, 21 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_gt_types.h b/drivers/gpu/drm/i915/gt/intel_gt_types.h
index adab4d2c29ac..143b2d78c2cc 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt_types.h
@@ -40,7 +40,7 @@ struct intel_gt {
 	struct intel_uc uc;

 	struct intel_gt_timelines {
-		struct mutex mutex; /* protects list */
+		spinlock_t lock; /* protects active_list */
 		struct list_head active_list;

 		/* Pack multiple timelines' seqnos into the same page */
diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c b/drivers/gpu/drm/i915/gt/intel_reset.c
index ec85740de942..077716442c90 100644
--- a/drivers/gpu/drm/i915/gt/intel_reset.c
+++ b/drivers/gpu/drm/i915/gt/intel_reset.c
@@ -811,7 +811,7 @@ static bool __intel_gt_unset_wedged(struct intel_gt *gt)
	 *
	 * No more can be submitted until we reset the wedged bit.
	 */
-	mutex_lock(&timelines->mutex);
+	spin_lock(&timelines->lock);
 	list_for_each_entry(tl, &timelines->active_list, link) {
 		struct i915_request *rq;

@@ -819,6 +819,8 @@ static bool __intel_gt_unset_wedged(struct intel_gt *gt)
 		if (!rq)
 			continue;

+		spin_unlock(&timelines->lock);
+
 		/*
 		 * All internal dependencies (i915_requests) will have
 		 * been flushed by the set-wedge, but we may be stuck waiting
@@ -828,8 +830,12 @@ static bool __intel_gt_unset_wedged(struct intel_gt *gt)
 		 */
 		dma_fence_default_wait(&rq->fence, false, MAX_SCHEDULE_TIMEOUT);
 		i915_request_put(rq);
+
+		/* Restart iteration after dropping lock */
+		spin_lock(&timelines->lock);
+		tl = list_entry(&timelines->active_list, typeof(*tl), link);
 	}
-	mutex_unlock(&timelines->mutex);
+	spin_unlock(&timelines->lock);

 	intel_gt_sanitize(gt, false);
diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
index 4af0b9801d91..355dfc52c804 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
+++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
@@ -266,7 +266,7 @@ static void timelines_init(struct intel_gt *gt)
 {
 	struct intel_gt_timelines *timelines = &gt->timelines;

-	mutex_init(&timelines->mutex);
+	spin_lock_init(&timelines->lock);
 	INIT_LIST_HEAD(&timelines->active_list);

 	spin_lock_init(&timelines->hwsp_lock);
@@ -345,9 +345,9 @@ void intel_timeline_enter(struct intel_timeline *tl)
 		return;
 	GEM_BUG_ON(!tl->active_count); /* overflow? */

-	mutex_lock(&timelines->mutex);
+	spin_lock(&timelines->lock);
 	list_add(&tl->link, &timelines->active_list);
-	mutex_unlock(&timelines->mutex);
+	spin_unlock(&timelines->lock);
 }

 void intel_timeline_exit(struct intel_timeline *tl)
@@ -358,9 +358,9 @@ void intel_timeline_exit(struct intel_timeline *tl)
 	if (--tl->active_count)
 		return;

-	mutex_lock(&timelines->mutex);
+	spin_lock(&timelines->lock);
 	list_del(&tl->link);
-	mutex_unlock(&timelines->mutex);
+	spin_unlock(&timelines->lock);

 	/*
	 * Since this timeline is idle, all barriers upon which we were waiting
@@ -548,8 +548,6 @@ static void timelines_fini(struct intel_gt *gt)

 	GEM_BUG_ON(!list_empty(&timelines->active_list));
 	GEM_BUG_ON(!list_empty(&timelines->hwsp_free_list));
-
-	mutex_destroy(&timelines->mutex);
 }

 void intel_timelines_fini(struct drm_i915_private *i915)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index b4c39a06fee5..07da7d5f855a 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -897,18 +897,18 @@ static long
 wait_for_timelines(struct drm_i915_private *i915,
 		   unsigned int flags, long timeout)
 {
-	struct intel_gt_timelines *gt = &i915->gt.timelines;
+	struct intel_gt_timelines *timelines = &i915->gt.timelines;
 	struct intel_timeline *tl;

-	mutex_lock(&gt->mutex);
-	list_for_each_entry(tl, &gt->active_list, link) {
+	spin_lock(&timelines->lock);
+	list_for_each_entry(tl, &timelines->active_list, link) {
 		struct i915_request *rq;

 		rq = i915_active_request_get_unlocked(&tl->last_request);
 		if (!rq)
 			continue;

-		mutex_unlock(&gt->mutex);
+		spin_unlock(&timelines->lock);

 		/*
		 * "Race-to-idle".
@@ -928,10 +928,10 @@ wait_for_timelines(struct drm_i915_private *i915,
 			return timeout;

 		/* restart after reacquiring the lock */
-		mutex_lock(&gt->mutex);
-		tl = list_entry(&gt->active_list, typeof(*tl), link);
+		spin_lock(&timelines->lock);
+		tl = list_entry(&timelines->active_list, typeof(*tl), link);
 	}
-	mutex_unlock(&gt->mutex);
+	spin_unlock(&timelines->lock);

 	return timeout;
 }

From patchwork Wed Aug 14 09:26:39 2019
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 11093589
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Date: Wed, 14 Aug 2019 10:26:39 +0100
Message-Id: <20190814092643.1908-4-chris@chris-wilson.co.uk>
In-Reply-To: <20190814092643.1908-1-chris@chris-wilson.co.uk>
References: <20190814092643.1908-1-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH 4/8] drm/i915/gt: Guard timeline pinning with its own mutex

In preparation for removing struct_mutex from around context retirement,
we need to make timeline pinning safe. Since multiple engines/contexts
can share a single timeline, it needs to be protected by a mutex.

Signed-off-by: Chris Wilson
Reviewed-by: Matthew Auld
---
 drivers/gpu/drm/i915/gt/intel_timeline.c       | 27 +++++++++----------
 .../gpu/drm/i915/gt/intel_timeline_types.h     |  2 +-
 drivers/gpu/drm/i915/gt/mock_engine.c          |  6 ++---
 3 files changed, 16 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
index 355dfc52c804..7b476cd55dac 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
+++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
@@ -211,9 +211,9 @@ int intel_timeline_init(struct intel_timeline *timeline,
 	void *vaddr;

 	kref_init(&timeline->kref);
+	atomic_set(&timeline->pin_count, 0);

 	timeline->gt = gt;
-	timeline->pin_count = 0;

 	timeline->has_initial_breadcrumb = !hwsp;
 	timeline->hwsp_cacheline = NULL;
@@ -280,7 +280,7 @@ void intel_timelines_init(struct drm_i915_private *i915)

 void intel_timeline_fini(struct intel_timeline *timeline)
 {
-	GEM_BUG_ON(timeline->pin_count);
+	GEM_BUG_ON(atomic_read(&timeline->pin_count));
 	GEM_BUG_ON(!list_empty(&timeline->requests));

 	if (timeline->hwsp_cacheline)
@@ -314,33 +314,31 @@ int intel_timeline_pin(struct intel_timeline *tl)
 {
 	int err;

-	if (tl->pin_count++)
+	if (atomic_add_unless(&tl->pin_count, 1, 0))
 		return 0;
-	GEM_BUG_ON(!tl->pin_count);
-	GEM_BUG_ON(tl->active_count);

 	err = i915_vma_pin(tl->hwsp_ggtt, 0, 0, PIN_GLOBAL | PIN_HIGH);
 	if (err)
-		goto unpin;
+		return err;

 	tl->hwsp_offset =
 		i915_ggtt_offset(tl->hwsp_ggtt) +
 		offset_in_page(tl->hwsp_offset);

 	cacheline_acquire(tl->hwsp_cacheline);
+	if (atomic_fetch_inc(&tl->pin_count)) {
+		cacheline_release(tl->hwsp_cacheline);
+		__i915_vma_unpin(tl->hwsp_ggtt);
+	}

 	return 0;
-
-unpin:
-	tl->pin_count = 0;
-	return err;
 }

 void intel_timeline_enter(struct intel_timeline *tl)
 {
 	struct intel_gt_timelines *timelines = &tl->gt->timelines;

-	GEM_BUG_ON(!tl->pin_count);
+	GEM_BUG_ON(!atomic_read(&tl->pin_count));
 	if (tl->active_count++)
 		return;
 	GEM_BUG_ON(!tl->active_count); /* overflow? */
@@ -372,7 +370,7 @@ void intel_timeline_exit(struct intel_timeline *tl)

 static u32 timeline_advance(struct intel_timeline *tl)
 {
-	GEM_BUG_ON(!tl->pin_count);
+	GEM_BUG_ON(!atomic_read(&tl->pin_count));
 	GEM_BUG_ON(tl->seqno & tl->has_initial_breadcrumb);

 	return tl->seqno += 1 + tl->has_initial_breadcrumb;
@@ -523,11 +521,10 @@ int intel_timeline_read_hwsp(struct i915_request *from,

 void intel_timeline_unpin(struct intel_timeline *tl)
 {
-	GEM_BUG_ON(!tl->pin_count);
-	if (--tl->pin_count)
+	GEM_BUG_ON(!atomic_read(&tl->pin_count));
+	if (!atomic_dec_and_test(&tl->pin_count))
 		return;

-	GEM_BUG_ON(tl->active_count);
 	cacheline_release(tl->hwsp_cacheline);

 	__i915_vma_unpin(tl->hwsp_ggtt);
diff --git a/drivers/gpu/drm/i915/gt/intel_timeline_types.h b/drivers/gpu/drm/i915/gt/intel_timeline_types.h
index b1a9f0c54bf0..2b1baf2fcc8e 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_timeline_types.h
@@ -41,7 +41,7 @@ struct intel_timeline {
	 * but the pin_count is protected by a combination of serialisation
	 * from the intel_context caller plus internal atomicity.
	 */
-	unsigned int pin_count;
+	atomic_t pin_count;
 	unsigned int active_count;

 	const u32 *hwsp_seqno;
diff --git a/drivers/gpu/drm/i915/gt/mock_engine.c b/drivers/gpu/drm/i915/gt/mock_engine.c
index a63dd8a42cd4..54a11dde3076 100644
--- a/drivers/gpu/drm/i915/gt/mock_engine.c
+++ b/drivers/gpu/drm/i915/gt/mock_engine.c
@@ -34,13 +34,13 @@

 static void mock_timeline_pin(struct intel_timeline *tl)
 {
-	tl->pin_count++;
+	atomic_inc(&tl->pin_count);
 }

 static void mock_timeline_unpin(struct intel_timeline *tl)
 {
-	GEM_BUG_ON(!tl->pin_count);
-	tl->pin_count--;
+	GEM_BUG_ON(!atomic_read(&tl->pin_count));
+	atomic_dec(&tl->pin_count);
 }

 static struct intel_ring *mock_ring(struct intel_engine_cs *engine)
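[The pinning idiom above (atomic_add_unless() fast path, expensive setup,
then atomic_fetch_inc() to detect a racing first pin) generalises. A
minimal sketch, with hypothetical obj_setup()/obj_teardown() standing in
for the vma pin and cacheline acquire; as the patch's comment notes, the
final transitions are additionally serialised by the caller:]

#include <linux/atomic.h>

struct obj {
	atomic_t pin_count;
};

/* hypothetical expensive setup/teardown, e.g. pinning a vma */
static int obj_setup(struct obj *o);
static void obj_teardown(struct obj *o);

static int obj_pin(struct obj *o)
{
	int err;

	/* fast path: take a reference iff the count is already non-zero */
	if (atomic_add_unless(&o->pin_count, 1, 0))
		return 0;

	/* slow path: do the expensive setup before publishing a count */
	err = obj_setup(o);
	if (err)
		return err;

	/* someone beat us to the first pin? back out our duplicate setup */
	if (atomic_fetch_inc(&o->pin_count))
		obj_teardown(o);

	return 0;
}

static void obj_unpin(struct obj *o)
{
	/* last reference gone: tear down the expensive state */
	if (atomic_dec_and_test(&o->pin_count))
		obj_teardown(o);
}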
From patchwork Wed Aug 14 09:26:40 2019
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 11093591
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Date: Wed, 14 Aug 2019 10:26:40 +0100
Message-Id: <20190814092643.1908-5-chris@chris-wilson.co.uk>
In-Reply-To: <20190814092643.1908-1-chris@chris-wilson.co.uk>
References: <20190814092643.1908-1-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH 5/8] drm/i915: Protect request retirement with timeline->mutex

Forgo the struct_mutex requirement for request retirement as we have
been transitioning over to only using the timeline->mutex for
controlling the lifetime of a request on that timeline.

Signed-off-by: Chris Wilson
Reviewed-by: Matthew Auld
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 183 ++++++++++--------
 drivers/gpu/drm/i915/gt/intel_context.h       |  18 +-
 drivers/gpu/drm/i915/gt/intel_engine_cs.c     |   1 -
 drivers/gpu/drm/i915/gt/intel_engine_types.h  |   3 -
 drivers/gpu/drm/i915/gt/intel_gt.c            |   2 -
 drivers/gpu/drm/i915/gt/intel_gt_types.h      |   2 -
 drivers/gpu/drm/i915/gt/intel_lrc.c           |   1 +
 drivers/gpu/drm/i915/gt/intel_ringbuffer.c    |  19 +-
 drivers/gpu/drm/i915/gt/mock_engine.c         |   1 -
 drivers/gpu/drm/i915/gt/selftest_context.c    |   9 +-
 drivers/gpu/drm/i915/i915_request.c           | 156 +++++++--------
 drivers/gpu/drm/i915/i915_request.h           |   3 -
 12 files changed, 209 insertions(+), 189 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 91512cc6d7a6..1263b02011f4 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -735,63 +735,6 @@ static int eb_select_context(struct i915_execbuffer *eb)
 	return 0;
 }

-static struct i915_request *__eb_wait_for_ring(struct intel_ring *ring)
-{
-	struct i915_request *rq;
-
-	/*
-	 * Completely unscientific finger-in-the-air estimates for suitable
-	 * maximum user request size (to avoid blocking) and then backoff.
-	 */
-	if (intel_ring_update_space(ring) >= PAGE_SIZE)
-		return NULL;
-
-	/*
-	 * Find a request that after waiting upon, there will be at least half
-	 * the ring available. The hysteresis allows us to compete for the
-	 * shared ring and should mean that we sleep less often prior to
-	 * claiming our resources, but not so long that the ring completely
-	 * drains before we can submit our next request.
-	 */
-	list_for_each_entry(rq, &ring->request_list, ring_link) {
-		if (__intel_ring_space(rq->postfix,
-				       ring->emit, ring->size) > ring->size / 2)
-			break;
-	}
-	if (&rq->ring_link == &ring->request_list)
-		return NULL; /* weird, we will check again later for real */
-
-	return i915_request_get(rq);
-}
-
-static int eb_wait_for_ring(const struct i915_execbuffer *eb)
-{
-	struct i915_request *rq;
-	int ret = 0;
-
-	/*
-	 * Apply a light amount of backpressure to prevent excessive hogs
-	 * from blocking waiting for space whilst holding struct_mutex and
-	 * keeping all of their resources pinned.
-	 */
-
-	rq = __eb_wait_for_ring(eb->context->ring);
-	if (rq) {
-		mutex_unlock(&eb->i915->drm.struct_mutex);
-
-		if (i915_request_wait(rq,
-				      I915_WAIT_INTERRUPTIBLE,
-				      MAX_SCHEDULE_TIMEOUT) < 0)
-			ret = -EINTR;
-
-		i915_request_put(rq);
-
-		mutex_lock(&eb->i915->drm.struct_mutex);
-	}
-
-	return ret;
-}
-
 static int eb_lookup_vmas(struct i915_execbuffer *eb)
 {
 	struct radix_tree_root *handles_vma = &eb->gem_context->handles_vma;
@@ -2132,10 +2075,75 @@ static const enum intel_engine_id user_ring_map[] = {
 	[I915_EXEC_VEBOX]	= VECS0
 };

-static int eb_pin_context(struct i915_execbuffer *eb, struct intel_context *ce)
+static struct i915_request *eb_throttle(struct intel_context *ce)
+{
+	struct intel_ring *ring = ce->ring;
+	struct intel_timeline *tl = ce->timeline;
+	struct i915_request *rq;
+
+	/*
+	 * Completely unscientific finger-in-the-air estimates for suitable
+	 * maximum user request size (to avoid blocking) and then backoff.
+	 */
+	if (intel_ring_update_space(ring) >= PAGE_SIZE)
+		return NULL;
+
+	/*
+	 * Find a request that after waiting upon, there will be at least half
+	 * the ring available. The hysteresis allows us to compete for the
+	 * shared ring and should mean that we sleep less often prior to
+	 * claiming our resources, but not so long that the ring completely
+	 * drains before we can submit our next request.
+	 */
+	list_for_each_entry(rq, &tl->requests, link) {
+		if (rq->ring != ring)
+			continue;
+
+		if (__intel_ring_space(rq->postfix,
+				       ring->emit, ring->size) > ring->size / 2)
+			break;
+	}
+	if (&rq->link == &tl->requests)
+		return NULL; /* weird, we will check again later for real */
+
+	return i915_request_get(rq);
+}
+
+static int
+__eb_pin_context(struct i915_execbuffer *eb, struct intel_context *ce)
 {
 	int err;

+	if (likely(atomic_inc_not_zero(&ce->pin_count)))
+		return 0;
+
+	err = mutex_lock_interruptible(&eb->i915->drm.struct_mutex);
+	if (err)
+		return err;
+
+	err = __intel_context_do_pin(ce);
+	mutex_unlock(&eb->i915->drm.struct_mutex);
+
+	return err;
+}
+
+static void
+__eb_unpin_context(struct i915_execbuffer *eb, struct intel_context *ce)
+{
+	if (likely(atomic_add_unless(&ce->pin_count, -1, 1)))
+		return;
+
+	mutex_lock(&eb->i915->drm.struct_mutex);
+	intel_context_unpin(ce);
+	mutex_unlock(&eb->i915->drm.struct_mutex);
+}
+
+static int __eb_pin_engine(struct i915_execbuffer *eb, struct intel_context *ce)
+{
+	struct intel_timeline *tl;
+	struct i915_request *rq;
+	int err;
+
 	/*
	 * ABI: Before userspace accesses the GPU (e.g. execbuffer), report
	 * EIO if the GPU is already wedged.
@@ -2149,7 +2157,7 @@ static int eb_pin_context(struct i915_execbuffer *eb, struct intel_context *ce)
	 * GGTT space, so do this first before we reserve a seqno for
	 * ourselves.
	 */
-	err = intel_context_pin(ce);
+	err = __eb_pin_context(eb, ce);
 	if (err)
 		return err;

@@ -2161,23 +2169,43 @@ static int eb_pin_context(struct i915_execbuffer *eb, struct intel_context *ce)
	 * until the timeline is idle, which in turn releases the wakeref
	 * taken on the engine, and the parent device.
	 */
-	err = intel_context_timeline_lock(ce);
-	if (err)
+	tl = intel_context_timeline_lock(ce);
+	if (IS_ERR(tl)) {
+		err = PTR_ERR(tl);
 		goto err_unpin;
+	}

 	intel_context_enter(ce);
-	intel_context_timeline_unlock(ce);
+	rq = eb_throttle(ce);
+
+	intel_context_timeline_unlock(tl);
+
+	if (rq) {
+		if (i915_request_wait(rq,
+				      I915_WAIT_INTERRUPTIBLE,
+				      MAX_SCHEDULE_TIMEOUT) < 0) {
+			i915_request_put(rq);
+			err = -EINTR;
+			goto err_exit;
+		}
+
+		i915_request_put(rq);
+	}

 	eb->engine = ce->engine;
 	eb->context = ce;
 	return 0;

+err_exit:
+	mutex_lock(&tl->mutex);
+	intel_context_exit(ce);
+	intel_context_timeline_unlock(tl);
 err_unpin:
-	intel_context_unpin(ce);
+	__eb_unpin_context(eb, ce);
 	return err;
 }

-static void eb_unpin_context(struct i915_execbuffer *eb)
+static void eb_unpin_engine(struct i915_execbuffer *eb)
 {
 	struct intel_context *ce = eb->context;
 	struct intel_timeline *tl = ce->timeline;
@@ -2186,7 +2214,7 @@ static void eb_unpin_context(struct i915_execbuffer *eb)
 	intel_context_exit(ce);
 	mutex_unlock(&tl->mutex);

-	intel_context_unpin(ce);
+	__eb_unpin_context(eb, ce);
 }

 static unsigned int
@@ -2231,9 +2259,9 @@ eb_select_legacy_ring(struct i915_execbuffer *eb,
 }

 static int
-eb_select_engine(struct i915_execbuffer *eb,
-		 struct drm_file *file,
-		 struct drm_i915_gem_execbuffer2 *args)
+eb_pin_engine(struct i915_execbuffer *eb,
+	      struct drm_file *file,
+	      struct drm_i915_gem_execbuffer2 *args)
 {
 	struct intel_context *ce;
 	unsigned int idx;
@@ -2248,7 +2276,7 @@ eb_select_engine(struct i915_execbuffer *eb,
 	if (IS_ERR(ce))
 		return PTR_ERR(ce);

-	err = eb_pin_context(eb, ce);
+	err = __eb_pin_engine(eb, ce);
 	intel_context_put(ce);

 	return err;
@@ -2466,16 +2494,12 @@ i915_gem_do_execbuffer(struct drm_device *dev,
 	if (unlikely(err))
 		goto err_destroy;

-	err = i915_mutex_lock_interruptible(dev);
-	if (err)
-		goto err_context;
-
-	err = eb_select_engine(&eb, file, args);
+	err = eb_pin_engine(&eb, file, args);
 	if (unlikely(err))
-		goto err_unlock;
+		goto err_context;

-	err = eb_wait_for_ring(&eb); /* may temporarily drop struct_mutex */
-	if (unlikely(err))
+	err = i915_mutex_lock_interruptible(dev);
+	if (err)
 		goto err_engine;

 	err = eb_relocate(&eb);
@@ -2633,10 +2657,9 @@ i915_gem_do_execbuffer(struct drm_device *dev,
 err_vma:
 	if (eb.exec)
 		eb_release_vmas(&eb);
-err_engine:
-	eb_unpin_context(&eb);
-err_unlock:
 	mutex_unlock(&dev->struct_mutex);
+err_engine:
+	eb_unpin_engine(&eb);
 err_context:
 	i915_gem_context_put(eb.gem_context);
 err_destroy:
diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h
index 9fa8b588f18e..053a1307ecb4 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.h
+++ b/drivers/gpu/drm/i915/gt/intel_context.h
@@ -12,6 +12,7 @@
 #include "i915_active.h"
 #include "intel_context_types.h"
 #include "intel_engine_types.h"
+#include "intel_timeline_types.h"

 void intel_context_init(struct intel_context *ce,
 			struct i915_gem_context *ctx,
@@ -118,17 +119,24 @@ static inline void intel_context_put(struct intel_context *ce)
 	kref_put(&ce->ref, ce->ops->destroy);
 }

-static inline int __must_check
+static inline struct intel_timeline *__must_check
 intel_context_timeline_lock(struct intel_context *ce)
 	__acquires(&ce->timeline->mutex)
 {
-	return mutex_lock_interruptible(&ce->timeline->mutex);
+	struct intel_timeline *tl = ce->timeline;
+	int err;
+
+	err = mutex_lock_interruptible(&tl->mutex);
+	if (err)
+		return ERR_PTR(err);
+
+	return tl;
 }

-static inline void intel_context_timeline_unlock(struct intel_context *ce)
-	__releases(&ce->timeline->mutex)
+static inline void intel_context_timeline_unlock(struct intel_timeline *tl)
+	__releases(&tl->mutex)
 {
-	mutex_unlock(&ce->timeline->mutex);
+	mutex_unlock(&tl->mutex);
 }

 int intel_context_prepare_remote_request(struct intel_context *ce,
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index abd4fde2e52d..ba457c1c7dc0 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -679,7 +679,6 @@ static int measure_breadcrumb_dw(struct intel_engine_cs *engine)
 			  engine->status_page.vma))
 		goto out_frame;

-	INIT_LIST_HEAD(&frame->ring.request_list);
 	frame->ring.vaddr = frame->cs;
 	frame->ring.size = sizeof(frame->cs);
 	frame->ring.effective_size = frame->ring.size;
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index a0f372807dd4..9965a32601d6 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -69,9 +69,6 @@ struct intel_ring {
 	struct i915_vma *vma;
 	void *vaddr;

-	struct list_head request_list;
-	struct list_head active_link;
-
 	/*
	 * As we have two types of rings, one global to the engine used
	 * by ringbuffer submission and those that are exclusive to a
diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c
index 914bd2db3bc7..d48ec9a76ed1 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt.c
@@ -15,8 +15,6 @@ void intel_gt_init_early(struct intel_gt *gt, struct drm_i915_private *i915)

 	spin_lock_init(&gt->irq_lock);

-	INIT_LIST_HEAD(&gt->active_rings);
-
 	INIT_LIST_HEAD(&gt->closed_vma);
 	spin_lock_init(&gt->closed_lock);
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_types.h b/drivers/gpu/drm/i915/gt/intel_gt_types.h
index 143b2d78c2cc..f882e2cf159c 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt_types.h
@@ -48,8 +48,6 @@ struct intel_gt {
 		struct list_head hwsp_free_list;
 	} timelines;

-	struct list_head active_rings;
-
 	struct intel_wakeref wakeref;

 	struct list_head closed_vma;
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 116d7b3f946b..565567caed6f 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1619,6 +1619,7 @@ static void execlists_context_unpin(struct intel_context *ce)
 {
 	i915_gem_context_unpin_hw_id(ce->gem_context);
 	i915_gem_object_unpin_map(ce->state->obj);
+	intel_ring_reset(ce->ring, ce->ring->tail);
 }

 static void
diff --git a/drivers/gpu/drm/i915/gt/intel_ringbuffer.c b/drivers/gpu/drm/i915/gt/intel_ringbuffer.c
index 409d764f8c6d..1d9c125b76d0 100644
--- a/drivers/gpu/drm/i915/gt/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/gt/intel_ringbuffer.c
@@ -1250,7 +1250,7 @@ void intel_ring_unpin(struct intel_ring *ring)
 		return;

 	/* Discard any unused bytes beyond that submitted to hw. */
-	intel_ring_reset(ring, ring->tail);
+	intel_ring_reset(ring, ring->emit);

 	i915_vma_unset_ggtt_write(vma);
 	if (i915_vma_is_map_and_fenceable(vma))
@@ -1311,7 +1311,6 @@ intel_engine_create_ring(struct intel_engine_cs *engine, int size)
 		return ERR_PTR(-ENOMEM);

 	kref_init(&ring->ref);
-	INIT_LIST_HEAD(&ring->request_list);
 	ring->size = size;

 	/* Workaround an erratum on the i830 which causes a hang if
@@ -1865,7 +1864,10 @@ static int ring_request_alloc(struct i915_request *request)
 	return 0;
 }

-static noinline int wait_for_space(struct intel_ring *ring, unsigned int bytes)
+static noinline int
+wait_for_space(struct intel_ring *ring,
+	       struct intel_timeline *tl,
+	       unsigned int bytes)
 {
 	struct i915_request *target;
 	long timeout;
@@ -1873,15 +1875,18 @@ static noinline int wait_for_space(struct intel_ring *ring, unsigned int bytes)
 	if (intel_ring_update_space(ring) >= bytes)
 		return 0;

-	GEM_BUG_ON(list_empty(&ring->request_list));
-	list_for_each_entry(target, &ring->request_list, ring_link) {
+	GEM_BUG_ON(list_empty(&tl->requests));
+	list_for_each_entry(target, &tl->requests, link) {
+		if (target->ring != ring)
+			continue;
+
 		/* Would completion of this request free enough space? */
 		if (bytes <= __intel_ring_space(target->postfix,
 						ring->emit, ring->size))
 			break;
 	}

-	if (WARN_ON(&target->ring_link == &ring->request_list))
+	if (GEM_WARN_ON(&target->link == &tl->requests))
 		return -ENOSPC;

 	timeout = i915_request_wait(target,
@@ -1948,7 +1953,7 @@ u32 *intel_ring_begin(struct i915_request *rq, unsigned int num_dwords)
		 */
		GEM_BUG_ON(!rq->reserved_space);

-		ret = wait_for_space(ring, total_bytes);
+		ret = wait_for_space(ring, rq->timeline, total_bytes);
 		if (unlikely(ret))
 			return ERR_PTR(ret);
 	}
diff --git a/drivers/gpu/drm/i915/gt/mock_engine.c b/drivers/gpu/drm/i915/gt/mock_engine.c
index 54a11dde3076..5d43cbc3f345 100644
--- a/drivers/gpu/drm/i915/gt/mock_engine.c
+++ b/drivers/gpu/drm/i915/gt/mock_engine.c
@@ -58,7 +58,6 @@ static struct intel_ring *mock_ring(struct intel_engine_cs *engine)
 	ring->vaddr = (void *)(ring + 1);
 	atomic_set(&ring->pin_count, 1);

-	INIT_LIST_HEAD(&ring->request_list);
 	intel_ring_update_space(ring);

 	return ring;
diff --git a/drivers/gpu/drm/i915/gt/selftest_context.c b/drivers/gpu/drm/i915/gt/selftest_context.c
index da9c49e2adaf..6fbc72bc290e 100644
--- a/drivers/gpu/drm/i915/gt/selftest_context.c
+++ b/drivers/gpu/drm/i915/gt/selftest_context.c
@@ -20,10 +20,13 @@ static int request_sync(struct i915_request *rq)
 	i915_request_add(rq);
 	timeout = i915_request_wait(rq, 0, HZ / 10);
-	if (timeout < 0)
+	if (timeout < 0) {
 		err = timeout;
-	else
+	} else {
+		mutex_lock(&rq->timeline->mutex);
 		i915_request_retire_upto(rq);
+		mutex_unlock(&rq->timeline->mutex);
+	}

 	i915_request_put(rq);

@@ -35,6 +38,7 @@ static int context_sync(struct intel_context *ce)
 	struct intel_timeline *tl = ce->timeline;
 	int err = 0;

+	mutex_lock(&tl->mutex);
 	do {
 		struct i915_request *rq;
 		long timeout;
@@ -55,6 +59,7 @@ static int context_sync(struct intel_context *ce)
 		i915_request_put(rq);
 	} while (!err);
+	mutex_unlock(&tl->mutex);

 	return err;
 }
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 4021334dd2c5..f744acbf5541 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -181,40 +181,6 @@ i915_request_remove_from_client(struct i915_request *request)
 	spin_unlock(&file_priv->mm.lock);
 }

-static void advance_ring(struct i915_request *request)
-{
-	struct intel_ring *ring = request->ring;
-	unsigned int tail;
-
-	/*
-	 * We know the GPU must have read the request to have
-	 * sent us the seqno + interrupt, so use the position
-	 * of tail of the request to update the last known position
-	 * of the GPU head.
-	 *
-	 * Note this requires that we are always called in request
-	 * completion order.
-	 */
-	GEM_BUG_ON(!list_is_first(&request->ring_link, &ring->request_list));
-	if (list_is_last(&request->ring_link, &ring->request_list)) {
-		/*
-		 * We may race here with execlists resubmitting this request
-		 * as we retire it. The resubmission will move the ring->tail
-		 * forwards (to request->wa_tail). We either read the
-		 * current value that was written to hw, or the value that
-		 * is just about to be. Either works, if we miss the last two
-		 * noops - they are safe to be replayed on a reset.
-		 */
-		tail = READ_ONCE(request->tail);
-		list_del(&ring->active_link);
-	} else {
-		tail = request->postfix;
-	}
-	list_del_init(&request->ring_link);
-
-	ring->head = tail;
-}
-
 static void free_capture_list(struct i915_request *request)
 {
 	struct i915_capture_list *capture;
@@ -232,7 +198,7 @@ static bool i915_request_retire(struct i915_request *rq)
 {
 	struct i915_active_request *active, *next;

-	lockdep_assert_held(&rq->i915->drm.struct_mutex);
+	lockdep_assert_held(&rq->timeline->mutex);
 	if (!i915_request_completed(rq))
 		return false;

@@ -244,7 +210,17 @@ static bool i915_request_retire(struct i915_request *rq)
 	GEM_BUG_ON(!i915_sw_fence_signaled(&rq->submit));
 	trace_i915_request_retire(rq);

-	advance_ring(rq);
+	/*
+	 * We know the GPU must have read the request to have
+	 * sent us the seqno + interrupt, so use the position
+	 * of tail of the request to update the last known position
+	 * of the GPU head.
+	 *
+	 * Note this requires that we are always called in request
+	 * completion order.
+	 */
+	GEM_BUG_ON(!list_is_first(&rq->link, &rq->timeline->requests));
+	rq->ring->head = rq->postfix;

 	/*
	 * Walk through the active list, calling retire on each. This allows
@@ -321,7 +297,7 @@ static bool i915_request_retire(struct i915_request *rq)

 void i915_request_retire_upto(struct i915_request *rq)
 {
-	struct intel_ring *ring = rq->ring;
+	struct intel_timeline * const tl = rq->timeline;
 	struct i915_request *tmp;

 	GEM_TRACE("%s fence %llx:%lld, current %d\n",
@@ -329,15 +305,11 @@ void i915_request_retire_upto(struct i915_request *rq)
		 rq->fence.context, rq->fence.seqno,
		 hwsp_seqno(rq));

-	lockdep_assert_held(&rq->i915->drm.struct_mutex);
+	lockdep_assert_held(&tl->mutex);
 	GEM_BUG_ON(!i915_request_completed(rq));

-	if (list_empty(&rq->ring_link))
-		return;
-
 	do {
-		tmp = list_first_entry(&ring->request_list,
-				       typeof(*tmp), ring_link);
+		tmp = list_first_entry(&tl->requests, typeof(*tmp), link);
 	} while (i915_request_retire(tmp) && tmp != rq);
 }

@@ -564,29 +536,28 @@ semaphore_notify(struct i915_sw_fence *fence, enum i915_sw_fence_notify state)
 	return NOTIFY_DONE;
 }

-static void ring_retire_requests(struct intel_ring *ring)
+static void retire_requests(struct intel_timeline *tl)
 {
 	struct i915_request *rq, *rn;

-	list_for_each_entry_safe(rq, rn, &ring->request_list, ring_link)
+	list_for_each_entry_safe(rq, rn, &tl->requests, link)
 		if (!i915_request_retire(rq))
 			break;
 }

 static noinline struct i915_request *
-request_alloc_slow(struct intel_context *ce, gfp_t gfp)
+request_alloc_slow(struct intel_timeline *tl, gfp_t gfp)
 {
-	struct intel_ring *ring = ce->ring;
 	struct i915_request *rq;

-	if (list_empty(&ring->request_list))
+	if (list_empty(&tl->requests))
 		goto out;

 	if (!gfpflags_allow_blocking(gfp))
 		goto out;

 	/* Move our oldest request to the slab-cache (if not in use!) */
-	rq = list_first_entry(&ring->request_list, typeof(*rq), ring_link);
+	rq = list_first_entry(&tl->requests, typeof(*rq), link);
 	i915_request_retire(rq);

 	rq = kmem_cache_alloc(global.slab_requests,
@@ -595,11 +566,11 @@ request_alloc_slow(struct intel_context *ce, gfp_t gfp)
 		return rq;

 	/* Ratelimit ourselves to prevent oom from malicious clients */
-	rq = list_last_entry(&ring->request_list, typeof(*rq), ring_link);
+	rq = list_last_entry(&tl->requests, typeof(*rq), link);
 	cond_synchronize_rcu(rq->rcustate);

 	/* Retire our old requests in the hope that we free some */
-	ring_retire_requests(ring);
+	retire_requests(tl);

 out:
 	return kmem_cache_alloc(global.slab_requests, gfp);
@@ -650,7 +621,7 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
 	rq = kmem_cache_alloc(global.slab_requests,
 			      gfp | __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
 	if (unlikely(!rq)) {
-		rq = request_alloc_slow(ce, gfp);
+		rq = request_alloc_slow(tl, gfp);
 		if (!rq) {
 			ret = -ENOMEM;
 			goto err_unreserve;
@@ -742,15 +713,15 @@ struct i915_request *
 i915_request_create(struct intel_context *ce)
 {
 	struct i915_request *rq;
-	int err;
+	struct intel_timeline *tl;

-	err = intel_context_timeline_lock(ce);
-	if (err)
-		return ERR_PTR(err);
+	tl = intel_context_timeline_lock(ce);
+	if (IS_ERR(tl))
+		return ERR_CAST(tl);

 	/* Move our oldest request to the slab-cache (if not in use!)
*/ - rq = list_first_entry(&ce->ring->request_list, typeof(*rq), ring_link); - if (!list_is_last(&rq->ring_link, &ce->ring->request_list)) + rq = list_first_entry(&tl->requests, typeof(*rq), link); + if (!list_is_last(&rq->link, &tl->requests)) i915_request_retire(rq); intel_context_enter(ce); @@ -760,22 +731,22 @@ i915_request_create(struct intel_context *ce) goto err_unlock; /* Check that we do not interrupt ourselves with a new request */ - rq->cookie = lockdep_pin_lock(&ce->timeline->mutex); + rq->cookie = lockdep_pin_lock(&tl->mutex); return rq; err_unlock: - intel_context_timeline_unlock(ce); + intel_context_timeline_unlock(tl); return rq; } static int i915_request_await_start(struct i915_request *rq, struct i915_request *signal) { - if (list_is_first(&signal->ring_link, &signal->ring->request_list)) + if (list_is_first(&signal->link, &signal->timeline->requests)) return 0; - signal = list_prev_entry(signal, ring_link); + signal = list_prev_entry(signal, link); if (intel_timeline_sync_is_later(rq->timeline, &signal->fence)) return 0; @@ -1155,7 +1126,6 @@ struct i915_request *__i915_request_commit(struct i915_request *rq) { struct intel_engine_cs *engine = rq->engine; struct intel_ring *ring = rq->ring; - struct i915_request *prev; u32 *cs; GEM_TRACE("%s fence %llx:%lld\n", @@ -1168,6 +1138,7 @@ struct i915_request *__i915_request_commit(struct i915_request *rq) */ GEM_BUG_ON(rq->reserved_space > ring->space); rq->reserved_space = 0; + rq->emitted_jiffies = jiffies; /* * Record the position of the start of the breadcrumb so that @@ -1179,14 +1150,7 @@ struct i915_request *__i915_request_commit(struct i915_request *rq) GEM_BUG_ON(IS_ERR(cs)); rq->postfix = intel_ring_offset(rq, cs); - prev = __i915_request_add_to_timeline(rq); - - list_add_tail(&rq->ring_link, &ring->request_list); - if (list_is_first(&rq->ring_link, &ring->request_list)) - list_add(&ring->active_link, &rq->i915->gt.active_rings); - rq->emitted_jiffies = jiffies; - - return prev; + return __i915_request_add_to_timeline(rq); } void __i915_request_queue(struct i915_request *rq, @@ -1214,10 +1178,11 @@ void __i915_request_queue(struct i915_request *rq, void i915_request_add(struct i915_request *rq) { struct i915_sched_attr attr = rq->gem_context->sched; + struct intel_timeline * const tl = rq->timeline; struct i915_request *prev; - lockdep_assert_held(&rq->timeline->mutex); - lockdep_unpin_lock(&rq->timeline->mutex, rq->cookie); + lockdep_assert_held(&tl->mutex); + lockdep_unpin_lock(&tl->mutex, rq->cookie); trace_i915_request_add(rq); @@ -1266,10 +1231,10 @@ void i915_request_add(struct i915_request *rq) * work on behalf of others -- but instead we should benefit from * improved resource management. (Well, that's the theory at least.) 
*/ - if (prev && i915_request_completed(prev)) + if (prev && i915_request_completed(prev) && prev->timeline == tl) i915_request_retire_upto(prev); - mutex_unlock(&rq->timeline->mutex); + mutex_unlock(&tl->mutex); } static unsigned long local_clock_us(unsigned int *cpu) @@ -1489,18 +1454,43 @@ long i915_request_wait(struct i915_request *rq, bool i915_retire_requests(struct drm_i915_private *i915) { - struct intel_ring *ring, *tmp; + struct intel_gt_timelines *timelines = &i915->gt.timelines; + struct intel_timeline *tl, *tn; + LIST_HEAD(free); + + spin_lock(&timelines->lock); + list_for_each_entry_safe(tl, tn, &timelines->active_list, link) { + if (!mutex_trylock(&tl->mutex)) + continue; - lockdep_assert_held(&i915->drm.struct_mutex); + intel_timeline_get(tl); + GEM_BUG_ON(!tl->active_count); + tl->active_count++; /* pin the list element */ + spin_unlock(&timelines->lock); - list_for_each_entry_safe(ring, tmp, - &i915->gt.active_rings, active_link) { - intel_ring_get(ring); /* last rq holds reference! */ - ring_retire_requests(ring); - intel_ring_put(ring); + retire_requests(tl); + + spin_lock(&timelines->lock); + + /* Restart iteration after dropping lock */ + list_safe_reset_next(tl, tn, link); + if (!--tl->active_count) + list_del(&tl->link); + + mutex_unlock(&tl->mutex); + + /* Defer the final release to after the spinlock */ + if (refcount_dec_and_test(&tl->kref.refcount)) { + GEM_BUG_ON(tl->active_count); + list_add(&tl->link, &free); + } } + spin_unlock(&timelines->lock); + + list_for_each_entry_safe(tl, tn, &free, link) + __intel_timeline_free(&tl->kref); - return !list_empty(&i915->gt.active_rings); + return !list_empty(&timelines->active_list); } #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h index fec1d5f17c94..8ac6e1226a56 100644 --- a/drivers/gpu/drm/i915/i915_request.h +++ b/drivers/gpu/drm/i915/i915_request.h @@ -223,9 +223,6 @@ struct i915_request { /** timeline->request entry for this request */ struct list_head link; - /** ring->request_list entry for this request */ - struct list_head ring_link; - struct drm_i915_file_private *file_priv; /** file_priv list entry for this request */ struct list_head client_link; From patchwork Wed Aug 14 09:26:41 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chris Wilson X-Patchwork-Id: 11093587 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 30BD513B1 for ; Wed, 14 Aug 2019 09:28:36 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1BA332871C for ; Wed, 14 Aug 2019 09:28:36 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 1044828722; Wed, 14 Aug 2019 09:28:36 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 8B6AD28720 for ; Wed, 14 Aug 2019 09:28:35 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by 
gabe.freedesktop.org (Postfix) with ESMTP id 1660C89F27; Wed, 14 Aug 2019 09:28:35 +0000 (UTC) X-Original-To: intel-gfx@lists.freedesktop.org Delivered-To: intel-gfx@lists.freedesktop.org Received: from fireflyinternet.com (mail.fireflyinternet.com [109.228.58.192]) by gabe.freedesktop.org (Postfix) with ESMTPS id EFD0A89E3B for ; Wed, 14 Aug 2019 09:28:32 +0000 (UTC) X-Default-Received-SPF: pass (skip=forwardok (res=PASS)) x-ip-name=78.156.65.138; Received: from haswell.alporthouse.com (unverified [78.156.65.138]) by fireflyinternet.com (Firefly Internet (M1)) with ESMTP id 18076223-1500050 for multiple; Wed, 14 Aug 2019 10:26:45 +0100 From: Chris Wilson To: intel-gfx@lists.freedesktop.org Date: Wed, 14 Aug 2019 10:26:41 +0100 Message-Id: <20190814092643.1908-6-chris@chris-wilson.co.uk> X-Mailer: git-send-email 2.23.0.rc1 In-Reply-To: <20190814092643.1908-1-chris@chris-wilson.co.uk> References: <20190814092643.1908-1-chris@chris-wilson.co.uk> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH 6/8] drm/i915/gt: Mark context->active_count as protected by timeline->mutex X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" X-Virus-Scanned: ClamAV using ClamSMTP We use timeline->mutex to protect modifications to context->active_count, and the associated enable/disable callbacks. Due to complications with the engine-pm barrier, there is a path where we used a "superlock" to provide serialised protection and so could not unconditionally assert with lockdep that it was always held. However, we can mark the mutex as taken (noting that we may be nested underneath ourselves), which means we can be reassured that the right timeline->mutex is always treated as held and let lockdep roam free.
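As a rough illustration of the trick (a sketch only, not part of the patch: the helper names here are invented, and it assumes a lockdep-enabled build, i.e. CONFIG_DEBUG_LOCK_ALLOC, where struct mutex carries a dep_map), marking a mutex as held amounts to annotating its lockdep map without actually touching the lock:

#include <linux/lockdep.h>
#include <linux/mutex.h>

/* Pretend, for lockdep's benefit, that we hold @mutex. */
static inline void timeline_mark_held(struct mutex *mutex)
{
	/* subclass 2: we may be nested underneath an outer acquisition */
	mutex_acquire(&mutex->dep_map, 2, 0, _THIS_IP_);
}

/* Pair with timeline_mark_held() once the serialised section ends. */
static inline void timeline_mark_released(struct mutex *mutex)
{
	mutex_release(&mutex->dep_map, 0, _THIS_IP_);
}

This mirrors what __timeline_mark_lock()/__timeline_mark_unlock() below do for the kernel context's timeline->mutex around request creation while parking, so the lockdep_assert_held(&ce->timeline->mutex) annotations remain satisfied on that path.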
Signed-off-by: Chris Wilson Cc: Tvrtko Ursulin --- drivers/gpu/drm/i915/gt/intel_context.h | 3 +++ drivers/gpu/drm/i915/gt/intel_context_types.h | 2 +- drivers/gpu/drm/i915/gt/intel_engine_pm.c | 14 ++++++++++++++ drivers/gpu/drm/i915/gt/intel_timeline.c | 4 ++++ drivers/gpu/drm/i915/i915_request.c | 3 ++- 5 files changed, 24 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h index 053a1307ecb4..dd742ac2fbdb 100644 --- a/drivers/gpu/drm/i915/gt/intel_context.h +++ b/drivers/gpu/drm/i915/gt/intel_context.h @@ -89,17 +89,20 @@ void intel_context_exit_engine(struct intel_context *ce); static inline void intel_context_enter(struct intel_context *ce) { + lockdep_assert_held(&ce->timeline->mutex); if (!ce->active_count++) ce->ops->enter(ce); } static inline void intel_context_mark_active(struct intel_context *ce) { + lockdep_assert_held(&ce->timeline->mutex); ++ce->active_count; } static inline void intel_context_exit(struct intel_context *ce) { + lockdep_assert_held(&ce->timeline->mutex); GEM_BUG_ON(!ce->active_count); if (!--ce->active_count) ce->ops->exit(ce); diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h index d8ce266c049f..bf9cedfccbf0 100644 --- a/drivers/gpu/drm/i915/gt/intel_context_types.h +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h @@ -59,7 +59,7 @@ struct intel_context { u32 *lrc_reg_state; u64 lrc_desc; - unsigned int active_count; /* notionally protected by timeline->mutex */ + unsigned int active_count; /* protected by timeline->mutex */ atomic_t pin_count; struct mutex pin_mutex; /* guards pinning and associated on-gpuing */ diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c index f3f0109f9e22..5f03f7dcad72 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c +++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c @@ -37,6 +37,16 @@ static int __engine_unpark(struct intel_wakeref *wf) return 0; } +static inline void __timeline_mark_lock(struct intel_context *ce) +{ + mutex_acquire(&ce->timeline->mutex.dep_map, 2, 0, _THIS_IP_); +} + +static inline void __timeline_mark_unlock(struct intel_context *ce) +{ + mutex_release(&ce->timeline->mutex.dep_map, 0, _THIS_IP_); +} + static bool switch_to_kernel_context(struct intel_engine_cs *engine) { struct i915_request *rq; @@ -61,6 +71,8 @@ static bool switch_to_kernel_context(struct intel_engine_cs *engine) * retiring the last request, thus all rings should be empty and * all timelines idle. */ + __timeline_mark_lock(engine->kernel_context); + rq = __i915_request_create(engine->kernel_context, GFP_NOWAIT); if (IS_ERR(rq)) /* Context switch failed, hope for the best! Maybe reset? 
*/ @@ -80,6 +92,8 @@ static bool switch_to_kernel_context(struct intel_engine_cs *engine) __intel_wakeref_defer_park(&engine->wakeref); __i915_request_queue(rq, NULL); + __timeline_mark_unlock(engine->kernel_context); + return false; } diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c index 7b476cd55dac..eafd94d5e211 100644 --- a/drivers/gpu/drm/i915/gt/intel_timeline.c +++ b/drivers/gpu/drm/i915/gt/intel_timeline.c @@ -338,6 +338,8 @@ void intel_timeline_enter(struct intel_timeline *tl) { struct intel_gt_timelines *timelines = &tl->gt->timelines; + lockdep_assert_held(&tl->mutex); + GEM_BUG_ON(!atomic_read(&tl->pin_count)); if (tl->active_count++) return; @@ -352,6 +354,8 @@ void intel_timeline_exit(struct intel_timeline *tl) { struct intel_gt_timelines *timelines = &tl->gt->timelines; + lockdep_assert_held(&tl->mutex); + GEM_BUG_ON(!tl->active_count); if (--tl->active_count) return; diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c index f744acbf5541..216ebd1bd980 100644 --- a/drivers/gpu/drm/i915/i915_request.c +++ b/drivers/gpu/drm/i915/i915_request.c @@ -1087,7 +1087,8 @@ __i915_request_add_to_timeline(struct i915_request *rq) * precludes optimising to use semaphores serialisation of a single * timeline across engines. */ - prev = rcu_dereference_protected(timeline->last_request.request, 1); + prev = rcu_dereference_protected(timeline->last_request.request, + lockdep_is_held(&timeline->mutex)); if (prev && !i915_request_completed(prev)) { if (is_power_of_2(prev->engine->mask | rq->engine->mask)) i915_sw_fence_await_sw_fence(&rq->submit, From patchwork Wed Aug 14 09:26:42 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chris Wilson X-Patchwork-Id: 11093573 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id F41571395 for ; Wed, 14 Aug 2019 09:27:10 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id DA4832871C for ; Wed, 14 Aug 2019 09:27:10 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id CE2D328722; Wed, 14 Aug 2019 09:27:10 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id C78F22871C for ; Wed, 14 Aug 2019 09:27:08 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 4034389AEA; Wed, 14 Aug 2019 09:27:06 +0000 (UTC) X-Original-To: intel-gfx@lists.freedesktop.org Delivered-To: intel-gfx@lists.freedesktop.org Received: from fireflyinternet.com (mail.fireflyinternet.com [109.228.58.192]) by gabe.freedesktop.org (Postfix) with ESMTPS id 6574C89AEA for ; Wed, 14 Aug 2019 09:27:04 +0000 (UTC) X-Default-Received-SPF: pass (skip=forwardok (res=PASS)) x-ip-name=78.156.65.138; Received: from haswell.alporthouse.com (unverified [78.156.65.138]) by fireflyinternet.com (Firefly Internet (M1)) with ESMTP id 18076225-1500050 for 
multiple; Wed, 14 Aug 2019 10:26:46 +0100 From: Chris Wilson To: intel-gfx@lists.freedesktop.org Date: Wed, 14 Aug 2019 10:26:42 +0100 Message-Id: <20190814092643.1908-7-chris@chris-wilson.co.uk> X-Mailer: git-send-email 2.23.0.rc1 In-Reply-To: <20190814092643.1908-1-chris@chris-wilson.co.uk> References: <20190814092643.1908-1-chris@chris-wilson.co.uk> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH 7/8] drm/i915: Extract intel_frontbuffer active tracking X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" X-Virus-Scanned: ClamAV using ClamSMTP Move the active tracking for the frontbuffer operations out of the i915_gem_object and into its own first-class (refcounted) object. In the process of detangling, we switch from low-level request tracking to the easier i915_active -- the plan being to avoid any potential atomic callbacks, since the frontbuffer tracking wishes to sleep as it flushes. Signed-off-by: Chris Wilson Reviewed-by: Matthew Auld --- Documentation/gpu/i915.rst | 3 - drivers/gpu/drm/i915/display/intel_display.c | 70 +++-- .../drm/i915/display/intel_display_types.h | 1 + drivers/gpu/drm/i915/display/intel_fbdev.c | 40 ++- .../gpu/drm/i915/display/intel_frontbuffer.c | 255 +++++++++++++----- .../gpu/drm/i915/display/intel_frontbuffer.h | 70 +++-- drivers/gpu/drm/i915/display/intel_overlay.c | 8 +- drivers/gpu/drm/i915/gem/i915_gem_clflush.c | 2 +- drivers/gpu/drm/i915/gem/i915_gem_domain.c | 14 +- drivers/gpu/drm/i915/gem/i915_gem_mman.c | 4 - drivers/gpu/drm/i915/gem/i915_gem_object.c | 27 +- drivers/gpu/drm/i915/gem/i915_gem_object.h | 2 +- .../gpu/drm/i915/gem/i915_gem_object_types.h | 8 +- drivers/gpu/drm/i915/i915_debugfs.c | 5 - drivers/gpu/drm/i915/i915_drv.h | 4 - drivers/gpu/drm/i915/i915_gem.c | 47 +--- drivers/gpu/drm/i915/i915_vma.c | 6 +- 17 files changed, 306 insertions(+), 260 deletions(-) diff --git a/Documentation/gpu/i915.rst b/Documentation/gpu/i915.rst index 0e322688be5c..3415255ad3dc 100644 --- a/Documentation/gpu/i915.rst +++ b/Documentation/gpu/i915.rst @@ -91,9 +91,6 @@ Frontbuffer Tracking .. kernel-doc:: drivers/gpu/drm/i915/display/intel_frontbuffer.c :internal: -..
kernel-doc:: drivers/gpu/drm/i915/i915_gem.c - :functions: i915_gem_track_fb - Display FIFO Underrun Reporting ------------------------------- diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c index 5b733e38eae3..c6a3f5753769 100644 --- a/drivers/gpu/drm/i915/display/intel_display.c +++ b/drivers/gpu/drm/i915/display/intel_display.c @@ -3049,12 +3049,13 @@ intel_alloc_initial_plane_obj(struct intel_crtc *crtc, { struct drm_device *dev = crtc->base.dev; struct drm_i915_private *dev_priv = to_i915(dev); - struct drm_i915_gem_object *obj = NULL; struct drm_mode_fb_cmd2 mode_cmd = { 0 }; struct drm_framebuffer *fb = &plane_config->fb->base; u32 base_aligned = round_down(plane_config->base, PAGE_SIZE); u32 size_aligned = round_up(plane_config->base + plane_config->size, PAGE_SIZE); + struct drm_i915_gem_object *obj; + bool ret = false; size_aligned -= base_aligned; @@ -3096,7 +3097,7 @@ intel_alloc_initial_plane_obj(struct intel_crtc *crtc, break; default: MISSING_CASE(plane_config->tiling); - return false; + goto out; } mode_cmd.pixel_format = fb->format->format; @@ -3108,16 +3109,15 @@ intel_alloc_initial_plane_obj(struct intel_crtc *crtc, if (intel_framebuffer_init(to_intel_framebuffer(fb), obj, &mode_cmd)) { DRM_DEBUG_KMS("intel fb init failed\n"); - goto out_unref_obj; + goto out; } DRM_DEBUG_KMS("initial plane fb obj %p\n", obj); - return true; - -out_unref_obj: + ret = true; +out: i915_gem_object_put(obj); - return false; + return ret; } static void @@ -3174,6 +3174,12 @@ static void intel_plane_disable_noatomic(struct intel_crtc *crtc, intel_disable_plane(plane, crtc_state); } +static struct intel_frontbuffer * +to_intel_frontbuffer(struct drm_framebuffer *fb) +{ + return fb ? to_intel_framebuffer(fb)->frontbuffer : NULL; +} + static void intel_find_initial_plane_obj(struct intel_crtc *intel_crtc, struct intel_initial_plane_config *plane_config) @@ -3181,7 +3187,6 @@ intel_find_initial_plane_obj(struct intel_crtc *intel_crtc, struct drm_device *dev = intel_crtc->base.dev; struct drm_i915_private *dev_priv = to_i915(dev); struct drm_crtc *c; - struct drm_i915_gem_object *obj; struct drm_plane *primary = intel_crtc->base.primary; struct drm_plane_state *plane_state = primary->state; struct intel_plane *intel_plane = to_intel_plane(primary); @@ -3257,8 +3262,7 @@ intel_find_initial_plane_obj(struct intel_crtc *intel_crtc, return; } - obj = intel_fb_obj(fb); - intel_fb_obj_flush(obj, ORIGIN_DIRTYFB); + intel_frontbuffer_flush(to_intel_frontbuffer(fb), ORIGIN_DIRTYFB); plane_state->src_x = 0; plane_state->src_y = 0; @@ -3273,14 +3277,14 @@ intel_find_initial_plane_obj(struct intel_crtc *intel_crtc, intel_state->base.src = drm_plane_state_src(plane_state); intel_state->base.dst = drm_plane_state_dest(plane_state); - if (i915_gem_object_is_tiled(obj)) + if (plane_config->tiling) dev_priv->preserve_bios_swizzle = true; plane_state->fb = fb; plane_state->crtc = &intel_crtc->base; atomic_or(to_intel_plane(primary)->frontbuffer_bit, - &obj->frontbuffer_bits); + &to_intel_frontbuffer(fb)->bits); } static int skl_max_plane_width(const struct drm_framebuffer *fb, @@ -14132,9 +14136,9 @@ static void intel_atomic_track_fbs(struct intel_atomic_state *state) for_each_oldnew_intel_plane_in_state(state, plane, old_plane_state, new_plane_state, i) - i915_gem_track_fb(intel_fb_obj(old_plane_state->base.fb), - intel_fb_obj(new_plane_state->base.fb), - plane->frontbuffer_bit); + intel_frontbuffer_track(to_intel_frontbuffer(old_plane_state->base.fb), + 
to_intel_frontbuffer(new_plane_state->base.fb), + plane->frontbuffer_bit); } static int intel_atomic_commit(struct drm_device *dev, @@ -14418,7 +14422,7 @@ intel_prepare_plane_fb(struct drm_plane *plane, return ret; fb_obj_bump_render_priority(obj); - intel_fb_obj_flush(obj, ORIGIN_DIRTYFB); + intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_DIRTYFB); if (!new_state->fence) { /* implicit fencing */ struct dma_fence *fence; @@ -14681,13 +14685,12 @@ intel_legacy_cursor_update(struct drm_plane *plane, struct drm_modeset_acquire_ctx *ctx) { struct drm_i915_private *dev_priv = to_i915(crtc->dev); - int ret; struct drm_plane_state *old_plane_state, *new_plane_state; struct intel_plane *intel_plane = to_intel_plane(plane); - struct drm_framebuffer *old_fb; struct intel_crtc_state *crtc_state = to_intel_crtc_state(crtc->state); struct intel_crtc_state *new_crtc_state; + int ret; /* * When crtc is inactive or there is a modeset pending, @@ -14755,11 +14758,10 @@ intel_legacy_cursor_update(struct drm_plane *plane, if (ret) goto out_unlock; - intel_fb_obj_flush(intel_fb_obj(fb), ORIGIN_FLIP); - - old_fb = old_plane_state->fb; - i915_gem_track_fb(intel_fb_obj(old_fb), intel_fb_obj(fb), - intel_plane->frontbuffer_bit); + intel_frontbuffer_flush(to_intel_frontbuffer(fb), ORIGIN_FLIP); + intel_frontbuffer_track(to_intel_frontbuffer(old_plane_state->fb), + to_intel_frontbuffer(fb), + intel_plane->frontbuffer_bit); /* Swap plane state */ plane->state = new_plane_state; @@ -15540,15 +15542,9 @@ static void intel_setup_outputs(struct drm_i915_private *dev_priv) static void intel_user_framebuffer_destroy(struct drm_framebuffer *fb) { struct intel_framebuffer *intel_fb = to_intel_framebuffer(fb); - struct drm_i915_gem_object *obj = intel_fb_obj(fb); drm_framebuffer_cleanup(fb); - - i915_gem_object_lock(obj); - WARN_ON(!obj->framebuffer_references--); - i915_gem_object_unlock(obj); - - i915_gem_object_put(obj); + intel_frontbuffer_put(intel_fb->frontbuffer); kfree(intel_fb); } @@ -15576,7 +15572,7 @@ static int intel_user_framebuffer_dirty(struct drm_framebuffer *fb, struct drm_i915_gem_object *obj = intel_fb_obj(fb); i915_gem_object_flush_if_display(obj); - intel_fb_obj_flush(obj, ORIGIN_DIRTYFB); + intel_frontbuffer_flush(to_intel_frontbuffer(fb), ORIGIN_DIRTYFB); return 0; } @@ -15598,8 +15594,11 @@ static int intel_framebuffer_init(struct intel_framebuffer *intel_fb, int ret = -EINVAL; int i; + intel_fb->frontbuffer = intel_frontbuffer_get(obj); + if (!intel_fb->frontbuffer) + return -ENOMEM; + i915_gem_object_lock(obj); - obj->framebuffer_references++; tiling = i915_gem_object_get_tiling(obj); stride = i915_gem_object_get_stride(obj); i915_gem_object_unlock(obj); @@ -15716,9 +15715,7 @@ static int intel_framebuffer_init(struct intel_framebuffer *intel_fb, return 0; err: - i915_gem_object_lock(obj); - obj->framebuffer_references--; - i915_gem_object_unlock(obj); + intel_frontbuffer_put(intel_fb->frontbuffer); return ret; } @@ -15736,8 +15733,7 @@ intel_user_framebuffer_create(struct drm_device *dev, return ERR_PTR(-ENOENT); fb = intel_framebuffer_create(obj, &mode_cmd); - if (IS_ERR(fb)) - i915_gem_object_put(obj); + i915_gem_object_put(obj); return fb; } diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h index a88ec9aa9ca0..3c1a5f3e1d22 100644 --- a/drivers/gpu/drm/i915/display/intel_display_types.h +++ b/drivers/gpu/drm/i915/display/intel_display_types.h @@ -84,6 +84,7 @@ enum intel_broadcast_rgb { struct intel_framebuffer { struct 
drm_framebuffer base; + struct intel_frontbuffer *frontbuffer; struct intel_rotation_info rot_info; /* for each plane in the normal GTT view */ diff --git a/drivers/gpu/drm/i915/display/intel_fbdev.c b/drivers/gpu/drm/i915/display/intel_fbdev.c index 2d74c33a504d..d59eee5c5d9c 100644 --- a/drivers/gpu/drm/i915/display/intel_fbdev.c +++ b/drivers/gpu/drm/i915/display/intel_fbdev.c @@ -47,13 +47,14 @@ #include "intel_fbdev.h" #include "intel_frontbuffer.h" -static void intel_fbdev_invalidate(struct intel_fbdev *ifbdev) +static struct intel_frontbuffer *to_frontbuffer(struct intel_fbdev *ifbdev) { - struct drm_i915_gem_object *obj = intel_fb_obj(&ifbdev->fb->base); - unsigned int origin = - ifbdev->vma_flags & PLANE_HAS_FENCE ? ORIGIN_GTT : ORIGIN_CPU; + return ifbdev->fb->frontbuffer; +} - intel_fb_obj_invalidate(obj, origin); +static void intel_fbdev_invalidate(struct intel_fbdev *ifbdev) +{ + intel_frontbuffer_invalidate(to_frontbuffer(ifbdev), ORIGIN_CPU); } static int intel_fbdev_set_par(struct fb_info *info) @@ -120,7 +121,7 @@ static int intelfb_alloc(struct drm_fb_helper *helper, struct drm_i915_private *dev_priv = to_i915(dev); struct drm_mode_fb_cmd2 mode_cmd = {}; struct drm_i915_gem_object *obj; - int size, ret; + int size; /* we don't do packed 24bpp */ if (sizes->surface_bpp == 24) @@ -147,24 +148,16 @@ static int intelfb_alloc(struct drm_fb_helper *helper, obj = i915_gem_object_create_shmem(dev_priv, size); if (IS_ERR(obj)) { DRM_ERROR("failed to allocate framebuffer\n"); - ret = PTR_ERR(obj); - goto err; + return PTR_ERR(obj); } fb = intel_framebuffer_create(obj, &mode_cmd); - if (IS_ERR(fb)) { - ret = PTR_ERR(fb); - goto err_obj; - } + i915_gem_object_put(obj); + if (IS_ERR(fb)) + return PTR_ERR(fb); ifbdev->fb = to_intel_framebuffer(fb); - return 0; - -err_obj: - i915_gem_object_put(obj); -err: - return ret; } static int intelfb_create(struct drm_fb_helper *helper, @@ -180,7 +173,6 @@ static int intelfb_create(struct drm_fb_helper *helper, const struct i915_ggtt_view view = { .type = I915_GGTT_VIEW_NORMAL, }; - struct drm_framebuffer *fb; intel_wakeref_t wakeref; struct fb_info *info; struct i915_vma *vma; @@ -226,8 +218,7 @@ static int intelfb_create(struct drm_fb_helper *helper, goto out_unlock; } - fb = &ifbdev->fb->base; - intel_fb_obj_flush(intel_fb_obj(fb), ORIGIN_DIRTYFB); + intel_frontbuffer_flush(to_frontbuffer(ifbdev), ORIGIN_DIRTYFB); info = drm_fb_helper_alloc_fbi(helper); if (IS_ERR(info)) { @@ -236,7 +227,7 @@ static int intelfb_create(struct drm_fb_helper *helper, goto out_unpin; } - ifbdev->helper.fb = fb; + ifbdev->helper.fb = &ifbdev->fb->base; info->fbops = &intelfb_ops; @@ -263,13 +254,14 @@ static int intelfb_create(struct drm_fb_helper *helper, * If the object is stolen however, it will be full of whatever * garbage was left in there. 
*/ - if (intel_fb_obj(fb)->stolen && !prealloc) + if (vma->obj->stolen && !prealloc) memset_io(info->screen_base, 0, info->screen_size); /* Use default scratch pixmap (info->pixmap.flags = FB_PIXMAP_SYSTEM) */ DRM_DEBUG_KMS("allocated %dx%d fb: 0x%08x\n", - fb->width, fb->height, i915_ggtt_offset(vma)); + ifbdev->fb->base.width, ifbdev->fb->base.height, + i915_ggtt_offset(vma)); ifbdev->vma = vma; ifbdev->vma_flags = flags; diff --git a/drivers/gpu/drm/i915/display/intel_frontbuffer.c b/drivers/gpu/drm/i915/display/intel_frontbuffer.c index 9cda88e41d29..719379774fa5 100644 --- a/drivers/gpu/drm/i915/display/intel_frontbuffer.c +++ b/drivers/gpu/drm/i915/display/intel_frontbuffer.c @@ -30,11 +30,11 @@ * Many features require us to track changes to the currently active * frontbuffer, especially rendering targeted at the frontbuffer. * - * To be able to do so GEM tracks frontbuffers using a bitmask for all possible - * frontbuffer slots through i915_gem_track_fb(). The function in this file are - * then called when the contents of the frontbuffer are invalidated, when - * frontbuffer rendering has stopped again to flush out all the changes and when - * the frontbuffer is exchanged with a flip. Subsystems interested in + * To be able to do so we track frontbuffers using a bitmask for all possible + * frontbuffer slots through intel_frontbuffer_track(). The functions in this + * file are then called when the contents of the frontbuffer are invalidated, + * when frontbuffer rendering has stopped again to flush out all the changes + * and when the frontbuffer is exchanged with a flip. Subsystems interested in * frontbuffer changes (e.g. PSR, FBC, DRRS) should directly put their callbacks * into the relevant places and filter for the frontbuffer slots that they are * interested int. @@ -63,28 +63,9 @@ #include "intel_frontbuffer.h" #include "intel_psr.h" -void __intel_fb_obj_invalidate(struct drm_i915_gem_object *obj, - enum fb_op_origin origin, - unsigned int frontbuffer_bits) -{ - struct drm_i915_private *dev_priv = to_i915(obj->base.dev); - - if (origin == ORIGIN_CS) { - spin_lock(&dev_priv->fb_tracking.lock); - dev_priv->fb_tracking.busy_bits |= frontbuffer_bits; - dev_priv->fb_tracking.flip_bits &= ~frontbuffer_bits; - spin_unlock(&dev_priv->fb_tracking.lock); - } - - might_sleep(); - intel_psr_invalidate(dev_priv, frontbuffer_bits, origin); - intel_edp_drrs_invalidate(dev_priv, frontbuffer_bits); - intel_fbc_invalidate(dev_priv, frontbuffer_bits, origin); -} - /** - * intel_frontbuffer_flush - flush frontbuffer - * @dev_priv: i915 device + * frontbuffer_flush - flush frontbuffer + * @i915: i915 device * @frontbuffer_bits: frontbuffer plane tracking bits * @origin: which operation caused the flush * @@ -94,45 +75,27 @@ void __intel_fb_obj_invalidate(struct drm_i915_gem_object *obj, * * Can be called without any locks held. 
*/ -static void intel_frontbuffer_flush(struct drm_i915_private *dev_priv, - unsigned frontbuffer_bits, - enum fb_op_origin origin) +static void frontbuffer_flush(struct drm_i915_private *i915, + unsigned int frontbuffer_bits, + enum fb_op_origin origin) { /* Delay flushing when rings are still busy.*/ - spin_lock(&dev_priv->fb_tracking.lock); - frontbuffer_bits &= ~dev_priv->fb_tracking.busy_bits; - spin_unlock(&dev_priv->fb_tracking.lock); + spin_lock(&i915->fb_tracking.lock); + frontbuffer_bits &= ~i915->fb_tracking.busy_bits; + spin_unlock(&i915->fb_tracking.lock); if (!frontbuffer_bits) return; might_sleep(); - intel_edp_drrs_flush(dev_priv, frontbuffer_bits); - intel_psr_flush(dev_priv, frontbuffer_bits, origin); - intel_fbc_flush(dev_priv, frontbuffer_bits, origin); -} - -void __intel_fb_obj_flush(struct drm_i915_gem_object *obj, - enum fb_op_origin origin, - unsigned int frontbuffer_bits) -{ - struct drm_i915_private *dev_priv = to_i915(obj->base.dev); - - if (origin == ORIGIN_CS) { - spin_lock(&dev_priv->fb_tracking.lock); - /* Filter out new bits since rendering started. */ - frontbuffer_bits &= dev_priv->fb_tracking.busy_bits; - dev_priv->fb_tracking.busy_bits &= ~frontbuffer_bits; - spin_unlock(&dev_priv->fb_tracking.lock); - } - - if (frontbuffer_bits) - intel_frontbuffer_flush(dev_priv, frontbuffer_bits, origin); + intel_edp_drrs_flush(i915, frontbuffer_bits); + intel_psr_flush(i915, frontbuffer_bits, origin); + intel_fbc_flush(i915, frontbuffer_bits, origin); } /** * intel_frontbuffer_flip_prepare - prepare asynchronous frontbuffer flip - * @dev_priv: i915 device + * @i915: i915 device * @frontbuffer_bits: frontbuffer plane tracking bits * * This function gets called after scheduling a flip on @obj. The actual @@ -142,19 +105,19 @@ void __intel_fb_obj_flush(struct drm_i915_gem_object *obj, * * Can be called without any locks held. */ -void intel_frontbuffer_flip_prepare(struct drm_i915_private *dev_priv, +void intel_frontbuffer_flip_prepare(struct drm_i915_private *i915, unsigned frontbuffer_bits) { - spin_lock(&dev_priv->fb_tracking.lock); - dev_priv->fb_tracking.flip_bits |= frontbuffer_bits; + spin_lock(&i915->fb_tracking.lock); + i915->fb_tracking.flip_bits |= frontbuffer_bits; /* Remove stale busy bits due to the old buffer. */ - dev_priv->fb_tracking.busy_bits &= ~frontbuffer_bits; - spin_unlock(&dev_priv->fb_tracking.lock); + i915->fb_tracking.busy_bits &= ~frontbuffer_bits; + spin_unlock(&i915->fb_tracking.lock); } /** * intel_frontbuffer_flip_complete - complete asynchronous frontbuffer flip - * @dev_priv: i915 device + * @i915: i915 device * @frontbuffer_bits: frontbuffer plane tracking bits * * This function gets called after the flip has been latched and will complete @@ -162,23 +125,22 @@ void intel_frontbuffer_flip_prepare(struct drm_i915_private *dev_priv, * * Can be called without any locks held. */ -void intel_frontbuffer_flip_complete(struct drm_i915_private *dev_priv, +void intel_frontbuffer_flip_complete(struct drm_i915_private *i915, unsigned frontbuffer_bits) { - spin_lock(&dev_priv->fb_tracking.lock); + spin_lock(&i915->fb_tracking.lock); /* Mask any cancelled flips. 
*/ - frontbuffer_bits &= dev_priv->fb_tracking.flip_bits; - dev_priv->fb_tracking.flip_bits &= ~frontbuffer_bits; - spin_unlock(&dev_priv->fb_tracking.lock); + frontbuffer_bits &= i915->fb_tracking.flip_bits; + i915->fb_tracking.flip_bits &= ~frontbuffer_bits; + spin_unlock(&i915->fb_tracking.lock); if (frontbuffer_bits) - intel_frontbuffer_flush(dev_priv, - frontbuffer_bits, ORIGIN_FLIP); + frontbuffer_flush(i915, frontbuffer_bits, ORIGIN_FLIP); } /** * intel_frontbuffer_flip - synchronous frontbuffer flip - * @dev_priv: i915 device + * @i915: i915 device * @frontbuffer_bits: frontbuffer plane tracking bits * * This function gets called after scheduling a flip on @obj. This is for @@ -187,13 +149,160 @@ void intel_frontbuffer_flip_complete(struct drm_i915_private *dev_priv, * * Can be called without any locks held. */ -void intel_frontbuffer_flip(struct drm_i915_private *dev_priv, +void intel_frontbuffer_flip(struct drm_i915_private *i915, unsigned frontbuffer_bits) { - spin_lock(&dev_priv->fb_tracking.lock); + spin_lock(&i915->fb_tracking.lock); /* Remove stale busy bits due to the old buffer. */ - dev_priv->fb_tracking.busy_bits &= ~frontbuffer_bits; - spin_unlock(&dev_priv->fb_tracking.lock); + i915->fb_tracking.busy_bits &= ~frontbuffer_bits; + spin_unlock(&i915->fb_tracking.lock); - intel_frontbuffer_flush(dev_priv, frontbuffer_bits, ORIGIN_FLIP); + frontbuffer_flush(i915, frontbuffer_bits, ORIGIN_FLIP); +} + +void __intel_fb_invalidate(struct intel_frontbuffer *front, + enum fb_op_origin origin, + unsigned int frontbuffer_bits) +{ + struct drm_i915_private *i915 = to_i915(front->obj->base.dev); + + if (origin == ORIGIN_CS) { + spin_lock(&i915->fb_tracking.lock); + i915->fb_tracking.busy_bits |= frontbuffer_bits; + i915->fb_tracking.flip_bits &= ~frontbuffer_bits; + spin_unlock(&i915->fb_tracking.lock); + } + + might_sleep(); + intel_psr_invalidate(i915, frontbuffer_bits, origin); + intel_edp_drrs_invalidate(i915, frontbuffer_bits); + intel_fbc_invalidate(i915, frontbuffer_bits, origin); +} + +void __intel_fb_flush(struct intel_frontbuffer *front, + enum fb_op_origin origin, + unsigned int frontbuffer_bits) +{ + struct drm_i915_private *i915 = to_i915(front->obj->base.dev); + + if (origin == ORIGIN_CS) { + spin_lock(&i915->fb_tracking.lock); + /* Filter out new bits since rendering started. 
*/ + frontbuffer_bits &= i915->fb_tracking.busy_bits; + i915->fb_tracking.busy_bits &= ~frontbuffer_bits; + spin_unlock(&i915->fb_tracking.lock); + } + + if (frontbuffer_bits) + frontbuffer_flush(i915, frontbuffer_bits, origin); +} + +static int frontbuffer_active(struct i915_active *ref) +{ + struct intel_frontbuffer *front = + container_of(ref, typeof(*front), write); + + kref_get(&front->ref); + return 0; +} + +static void frontbuffer_retire(struct i915_active *ref) +{ + struct intel_frontbuffer *front = + container_of(ref, typeof(*front), write); + + intel_frontbuffer_flush(front, ORIGIN_CS); + intel_frontbuffer_put(front); +} + +static void frontbuffer_release(struct kref *ref) + __releases(&to_i915(front->obj->base.dev)->fb_tracking.lock) +{ + struct intel_frontbuffer *front = + container_of(ref, typeof(*front), ref); + + front->obj->frontbuffer = NULL; + spin_unlock(&to_i915(front->obj->base.dev)->fb_tracking.lock); + + i915_gem_object_put(front->obj); + kfree(front); +} + +struct intel_frontbuffer * +intel_frontbuffer_get(struct drm_i915_gem_object *obj) +{ + struct drm_i915_private *i915 = to_i915(obj->base.dev); + struct intel_frontbuffer *front; + + spin_lock(&i915->fb_tracking.lock); + front = obj->frontbuffer; + if (front) + kref_get(&front->ref); + spin_unlock(&i915->fb_tracking.lock); + if (front) + return front; + + front = kmalloc(sizeof(*front), GFP_KERNEL); + if (!front) + return NULL; + + front->obj = obj; + kref_init(&front->ref); + atomic_set(&front->bits, 0); + i915_active_init(i915, &front->write, + frontbuffer_active, frontbuffer_retire); + + spin_lock(&i915->fb_tracking.lock); + if (obj->frontbuffer) { + kfree(front); + front = obj->frontbuffer; + kref_get(&front->ref); + } else { + i915_gem_object_get(obj); + obj->frontbuffer = front; + } + spin_unlock(&i915->fb_tracking.lock); + + return front; +} + +void intel_frontbuffer_put(struct intel_frontbuffer *front) +{ + kref_put_lock(&front->ref, + frontbuffer_release, + &to_i915(front->obj->base.dev)->fb_tracking.lock); +} + +/** + * intel_frontbuffer_track - update frontbuffer tracking + * @old: current buffer for the frontbuffer slots + * @new: new buffer for the frontbuffer slots + * @frontbuffer_bits: bitmask of frontbuffer slots + * + * This updates the frontbuffer tracking bits @frontbuffer_bits by clearing them + * from @old and setting them in @new. Both @old and @new can be NULL. + */ +void intel_frontbuffer_track(struct intel_frontbuffer *old, + struct intel_frontbuffer *new, + unsigned int frontbuffer_bits) +{ + /* + * Control of individual bits within the mask are guarded by + * the owning plane->mutex, i.e. we can never see concurrent + * manipulation of individual bits. But since the bitfield as a whole + * is updated using RMW, we need to use atomics in order to update + * the bits. 
+ */ + BUILD_BUG_ON(INTEL_FRONTBUFFER_BITS_PER_PIPE * I915_MAX_PIPES > + BITS_PER_TYPE(atomic_t)); + + if (old) { + WARN_ON(!(atomic_read(&old->bits) & frontbuffer_bits)); + atomic_andnot(frontbuffer_bits, &old->bits); + } + + if (new) { + WARN_ON(atomic_read(&new->bits) & frontbuffer_bits); + atomic_or(frontbuffer_bits, &new->bits); + } } diff --git a/drivers/gpu/drm/i915/display/intel_frontbuffer.h b/drivers/gpu/drm/i915/display/intel_frontbuffer.h index 5727320c8084..adc64d61a4a5 100644 --- a/drivers/gpu/drm/i915/display/intel_frontbuffer.h +++ b/drivers/gpu/drm/i915/display/intel_frontbuffer.h @@ -24,7 +24,10 @@ #ifndef __INTEL_FRONTBUFFER_H__ #define __INTEL_FRONTBUFFER_H__ -#include "gem/i915_gem_object.h" +#include +#include + +#include "i915_active.h" struct drm_i915_private; struct drm_i915_gem_object; @@ -37,23 +40,30 @@ enum fb_op_origin { ORIGIN_DIRTYFB, }; -void intel_frontbuffer_flip_prepare(struct drm_i915_private *dev_priv, +struct intel_frontbuffer { + struct kref ref; + atomic_t bits; + struct i915_active write; + struct drm_i915_gem_object *obj; +}; + +void intel_frontbuffer_flip_prepare(struct drm_i915_private *i915, unsigned frontbuffer_bits); -void intel_frontbuffer_flip_complete(struct drm_i915_private *dev_priv, +void intel_frontbuffer_flip_complete(struct drm_i915_private *i915, unsigned frontbuffer_bits); -void intel_frontbuffer_flip(struct drm_i915_private *dev_priv, +void intel_frontbuffer_flip(struct drm_i915_private *i915, unsigned frontbuffer_bits); -void __intel_fb_obj_invalidate(struct drm_i915_gem_object *obj, - enum fb_op_origin origin, - unsigned int frontbuffer_bits); -void __intel_fb_obj_flush(struct drm_i915_gem_object *obj, - enum fb_op_origin origin, - unsigned int frontbuffer_bits); +struct intel_frontbuffer * +intel_frontbuffer_get(struct drm_i915_gem_object *obj); + +void __intel_fb_invalidate(struct intel_frontbuffer *front, + enum fb_op_origin origin, + unsigned int frontbuffer_bits); /** - * intel_fb_obj_invalidate - invalidate frontbuffer object - * @obj: GEM object to invalidate + * intel_frontbuffer_invalidate - invalidate frontbuffer object + * @front: GEM object to invalidate * @origin: which operation caused the invalidation * * This function gets called every time rendering on the given object starts and @@ -62,37 +72,53 @@ void __intel_fb_obj_flush(struct drm_i915_gem_object *obj, * until the rendering completes or a flip on this frontbuffer plane is * scheduled. */ -static inline bool intel_fb_obj_invalidate(struct drm_i915_gem_object *obj, - enum fb_op_origin origin) +static inline bool intel_frontbuffer_invalidate(struct intel_frontbuffer *front, + enum fb_op_origin origin) { unsigned int frontbuffer_bits; - frontbuffer_bits = atomic_read(&obj->frontbuffer_bits); + if (!front) + return false; + + frontbuffer_bits = atomic_read(&front->bits); if (!frontbuffer_bits) return false; - __intel_fb_obj_invalidate(obj, origin, frontbuffer_bits); + __intel_fb_invalidate(front, origin, frontbuffer_bits); return true; } +void __intel_fb_flush(struct intel_frontbuffer *front, + enum fb_op_origin origin, + unsigned int frontbuffer_bits); + /** - * intel_fb_obj_flush - flush frontbuffer object - * @obj: GEM object to flush + * intel_frontbuffer_flush - flush frontbuffer object + * @front: GEM object to flush * @origin: which operation caused the flush * * This function gets called every time rendering on the given object has * completed and frontbuffer caching can be started again. 
*/ -static inline void intel_fb_obj_flush(struct drm_i915_gem_object *obj, - enum fb_op_origin origin) +static inline void intel_frontbuffer_flush(struct intel_frontbuffer *front, + enum fb_op_origin origin) { unsigned int frontbuffer_bits; - frontbuffer_bits = atomic_read(&obj->frontbuffer_bits); + if (!front) + return; + + frontbuffer_bits = atomic_read(&front->bits); if (!frontbuffer_bits) return; - __intel_fb_obj_flush(obj, origin, frontbuffer_bits); + __intel_fb_flush(front, origin, frontbuffer_bits); } +void intel_frontbuffer_track(struct intel_frontbuffer *old, + struct intel_frontbuffer *new, + unsigned int frontbuffer_bits); + +void intel_frontbuffer_put(struct intel_frontbuffer *front); + #endif /* __INTEL_FRONTBUFFER_H__ */ diff --git a/drivers/gpu/drm/i915/display/intel_overlay.c b/drivers/gpu/drm/i915/display/intel_overlay.c index 4f78586ee05e..e1248eace0e1 100644 --- a/drivers/gpu/drm/i915/display/intel_overlay.c +++ b/drivers/gpu/drm/i915/display/intel_overlay.c @@ -281,9 +281,9 @@ static void intel_overlay_flip_prepare(struct intel_overlay *overlay, WARN_ON(overlay->old_vma); - i915_gem_track_fb(overlay->vma ? overlay->vma->obj : NULL, - vma ? vma->obj : NULL, - INTEL_FRONTBUFFER_OVERLAY(pipe)); + intel_frontbuffer_track(overlay->vma ? overlay->vma->obj->frontbuffer : NULL, + vma ? vma->obj->frontbuffer : NULL, + INTEL_FRONTBUFFER_OVERLAY(pipe)); intel_frontbuffer_flip_prepare(overlay->i915, INTEL_FRONTBUFFER_OVERLAY(pipe)); @@ -768,7 +768,7 @@ static int intel_overlay_do_put_image(struct intel_overlay *overlay, ret = PTR_ERR(vma); goto out_pin_section; } - intel_fb_obj_flush(new_bo, ORIGIN_DIRTYFB); + intel_frontbuffer_flush(new_bo->frontbuffer, ORIGIN_DIRTYFB); ret = i915_vma_put_fence(vma); if (ret) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_clflush.c b/drivers/gpu/drm/i915/gem/i915_gem_clflush.c index 175dc14acf14..724242ec3972 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_clflush.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_clflush.c @@ -49,7 +49,7 @@ static void __i915_do_clflush(struct drm_i915_gem_object *obj) { GEM_BUG_ON(!i915_gem_object_has_pages(obj)); drm_clflush_sg(obj->mm.pages); - intel_fb_obj_flush(obj, ORIGIN_CPU); + intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_CPU); } static void i915_clflush_work(struct work_struct *work) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c index 2e3ce2a69653..a1afc2690e9e 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c @@ -551,13 +551,6 @@ i915_gem_object_set_to_cpu_domain(struct drm_i915_gem_object *obj, bool write) return 0; } -static inline enum fb_op_origin -fb_write_origin(struct drm_i915_gem_object *obj, unsigned int domain) -{ - return (domain == I915_GEM_DOMAIN_GTT ? - obj->frontbuffer_ggtt_origin : ORIGIN_CPU); -} - /** * Called when user space prepares to use an object with the CPU, either * through the mmap ioctl's mapping or a GTT mapping. 
@@ -661,9 +654,8 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data, i915_gem_object_unlock(obj); - if (write_domain != 0) - intel_fb_obj_invalidate(obj, - fb_write_origin(obj, write_domain)); + if (write_domain) + intel_frontbuffer_invalidate(obj->frontbuffer, ORIGIN_CPU); out_unpin: i915_gem_object_unpin_pages(obj); @@ -783,7 +775,7 @@ int i915_gem_object_prepare_write(struct drm_i915_gem_object *obj, } out: - intel_fb_obj_invalidate(obj, ORIGIN_CPU); + intel_frontbuffer_invalidate(obj->frontbuffer, ORIGIN_CPU); obj->mm.dirty = true; /* return with the pages pinned */ return 0; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c index 1e7311493530..48c2cbe9b278 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c @@ -101,9 +101,6 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data, up_write(&mm->mmap_sem); if (IS_ERR_VALUE(addr)) goto err; - - /* This may race, but that's ok, it only gets set */ - WRITE_ONCE(obj->frontbuffer_ggtt_origin, ORIGIN_CPU); } i915_gem_object_put(obj); @@ -283,7 +280,6 @@ vm_fault_t i915_gem_fault(struct vm_fault *vmf) * Userspace is now writing through an untracked VMA, abandon * all hope that the hardware is able to track future writes. */ - obj->frontbuffer_ggtt_origin = ORIGIN_CPU; vma = i915_gem_object_ggtt_pin(obj, &view, 0, 0, flags); if (IS_ERR(vma) && !view.type) { diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c index 67dc61e02c9f..d7855dc5a5c5 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c @@ -46,16 +46,6 @@ void i915_gem_object_free(struct drm_i915_gem_object *obj) return kmem_cache_free(global.slab_objects, obj); } -static void -frontbuffer_retire(struct i915_active_request *active, - struct i915_request *request) -{ - struct drm_i915_gem_object *obj = - container_of(active, typeof(*obj), frontbuffer_write); - - intel_fb_obj_flush(obj, ORIGIN_CS); -} - void i915_gem_object_init(struct drm_i915_gem_object *obj, const struct drm_i915_gem_object_ops *ops) { @@ -72,10 +62,6 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj, obj->ops = ops; - obj->frontbuffer_ggtt_origin = ORIGIN_GTT; - i915_active_request_init(&obj->frontbuffer_write, - NULL, frontbuffer_retire); - obj->mm.madv = I915_MADV_WILLNEED; INIT_RADIX_TREE(&obj->mm.get_page.radix, GFP_KERNEL | __GFP_NOWARN); mutex_init(&obj->mm.get_page.lock); @@ -187,7 +173,6 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915, GEM_BUG_ON(atomic_read(&obj->bind_count)); GEM_BUG_ON(obj->userfault_count); - GEM_BUG_ON(atomic_read(&obj->frontbuffer_bits)); GEM_BUG_ON(!list_empty(&obj->lut_list)); atomic_set(&obj->mm.pages_pin_count, 0); @@ -230,6 +215,8 @@ void i915_gem_free_object(struct drm_gem_object *gem_obj) struct drm_i915_gem_object *obj = to_intel_bo(gem_obj); struct drm_i915_private *i915 = to_i915(obj->base.dev); + GEM_BUG_ON(i915_gem_object_is_framebuffer(obj)); + /* * Before we free the object, make sure any pure RCU-only * read-side critical sections are complete, e.g. @@ -261,13 +248,6 @@ void i915_gem_free_object(struct drm_gem_object *gem_obj) queue_work(i915->wq, &i915->mm.free_work); } -static inline enum fb_op_origin -fb_write_origin(struct drm_i915_gem_object *obj, unsigned int domain) -{ - return (domain == I915_GEM_DOMAIN_GTT ? 
- obj->frontbuffer_ggtt_origin : ORIGIN_CPU; -} - static bool gpu_write_needs_clflush(struct drm_i915_gem_object *obj) { return !(obj->cache_level == I915_CACHE_NONE || @@ -290,8 +270,7 @@ i915_gem_object_flush_write_domain(struct drm_i915_gem_object *obj, for_each_ggtt_vma(vma, obj) intel_gt_flush_ggtt_writes(vma->vm->gt); - intel_fb_obj_flush(obj, - fb_write_origin(obj, I915_GEM_DOMAIN_GTT)); + intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_CPU); for_each_ggtt_vma(vma, obj) { if (vma->iomap) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index 74bcd580ccc3..5efb9936e05b 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -161,7 +161,7 @@ i915_gem_object_needs_async_cancel(const struct drm_i915_gem_object *obj) static inline bool i915_gem_object_is_framebuffer(const struct drm_i915_gem_object *obj) { - return READ_ONCE(obj->framebuffer_references); + return READ_ONCE(obj->frontbuffer); } static inline unsigned int diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h index d474c6ac4100..ede0eb4218a8 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h @@ -13,6 +13,7 @@ #include "i915_selftest.h" struct drm_i915_gem_object; +struct intel_frontbuffer; /* * struct i915_lut_handle tracks the fast lookups from handle to vma used @@ -141,9 +142,7 @@ struct drm_i915_gem_object { */ u16 write_domain; - atomic_t frontbuffer_bits; - unsigned int frontbuffer_ggtt_origin; /* write once */ - struct i915_active_request frontbuffer_write; + struct intel_frontbuffer *frontbuffer; /** Current tiling stride for the object, if it's tiled. */ unsigned int tiling_and_stride; @@ -224,9 +223,6 @@ struct drm_i915_gem_object { bool quirked:1; } mm; - /** References from framebuffers, locks out tiling changes. */ - unsigned int framebuffer_references; - /** Record of address bit 17 of each page at last unbind.
*/ unsigned long *bit_17; diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c index ef4f84e060ee..e7ce739fe545 100644 --- a/drivers/gpu/drm/i915/i915_debugfs.c +++ b/drivers/gpu/drm/i915/i915_debugfs.c @@ -138,7 +138,6 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) struct drm_i915_private *dev_priv = to_i915(obj->base.dev); struct intel_engine_cs *engine; struct i915_vma *vma; - unsigned int frontbuffer_bits; int pin_count = 0; seq_printf(m, "%pK: %c%c%c%c %8zdKiB %02x %02x %s%s%s", @@ -228,10 +227,6 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) engine = i915_gem_object_last_write_engine(obj); if (engine) seq_printf(m, " (%s)", engine->name); - - frontbuffer_bits = atomic_read(&obj->frontbuffer_bits); - if (frontbuffer_bits) - seq_printf(m, " (frontbuffer: 0x%03x)", frontbuffer_bits); } struct file_stats { diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index c4406a60f3e4..0527ede6e0f2 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -2355,10 +2355,6 @@ int i915_gem_mmap_gtt(struct drm_file *file_priv, struct drm_device *dev, u32 handle, u64 *offset); int i915_gem_mmap_gtt_version(void); -void i915_gem_track_fb(struct drm_i915_gem_object *old, - struct drm_i915_gem_object *new, - unsigned frontbuffer_bits); - int __must_check i915_gem_set_global_seqno(struct drm_device *dev, u32 seqno); static inline u32 i915_reset_count(struct i915_gpu_error *error) diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index 07da7d5f855a..3f0d88cb47dd 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -139,17 +139,19 @@ i915_gem_phys_pwrite(struct drm_i915_gem_object *obj, void *vaddr = obj->phys_handle->vaddr + args->offset; char __user *user_data = u64_to_user_ptr(args->data_ptr); - /* We manually control the domain here and pretend that it + /* + * We manually control the domain here and pretend that it * remains coherent i.e. in the GTT domain, like shmem_pwrite. 
*/ - intel_fb_obj_invalidate(obj, ORIGIN_CPU); + intel_frontbuffer_invalidate(obj->frontbuffer, ORIGIN_CPU); + if (copy_from_user(vaddr, user_data, args->size)) return -EFAULT; drm_clflush_virt_range(vaddr, args->size); intel_gt_chipset_flush(&to_i915(obj->base.dev)->gt); - intel_fb_obj_flush(obj, ORIGIN_CPU); + intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_CPU); return 0; } @@ -594,7 +596,7 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj, goto out_unpin; } - intel_fb_obj_invalidate(obj, ORIGIN_CPU); + intel_frontbuffer_invalidate(obj->frontbuffer, ORIGIN_CPU); user_data = u64_to_user_ptr(args->data_ptr); offset = args->offset; @@ -636,7 +638,7 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj, user_data += page_length; offset += page_length; } - intel_fb_obj_flush(obj, ORIGIN_CPU); + intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_CPU); i915_gem_object_unlock_fence(obj, fence); out_unpin: @@ -729,7 +731,7 @@ i915_gem_shmem_pwrite(struct drm_i915_gem_object *obj, offset = 0; } - intel_fb_obj_flush(obj, ORIGIN_CPU); + intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_CPU); i915_gem_object_unlock_fence(obj, fence); return ret; @@ -1763,39 +1765,6 @@ int i915_gem_open(struct drm_i915_private *i915, struct drm_file *file) return ret; } -/** - * i915_gem_track_fb - update frontbuffer tracking - * @old: current GEM buffer for the frontbuffer slots - * @new: new GEM buffer for the frontbuffer slots - * @frontbuffer_bits: bitmask of frontbuffer slots - * - * This updates the frontbuffer tracking bits @frontbuffer_bits by clearing them - * from @old and setting them in @new. Both @old and @new can be NULL. - */ -void i915_gem_track_fb(struct drm_i915_gem_object *old, - struct drm_i915_gem_object *new, - unsigned frontbuffer_bits) -{ - /* Control of individual bits within the mask are guarded by - * the owning plane->mutex, i.e. we can never see concurrent - * manipulation of individual bits. But since the bitfield as a whole - * is updated using RMW, we need to use atomics in order to update - * the bits. - */ - BUILD_BUG_ON(INTEL_FRONTBUFFER_BITS_PER_PIPE * I915_MAX_PIPES > - BITS_PER_TYPE(atomic_t)); - - if (old) { - WARN_ON(!(atomic_read(&old->frontbuffer_bits) & frontbuffer_bits)); - atomic_andnot(frontbuffer_bits, &old->frontbuffer_bits); - } - - if (new) { - WARN_ON(atomic_read(&new->frontbuffer_bits) & frontbuffer_bits); - atomic_or(frontbuffer_bits, &new->frontbuffer_bits); - } -} - #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) #include "selftests/mock_gem_device.c" #include "selftests/i915_gem.c" diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c index cc0848ef5df0..a32243951b29 100644 --- a/drivers/gpu/drm/i915/i915_vma.c +++ b/drivers/gpu/drm/i915/i915_vma.c @@ -908,8 +908,10 @@ int i915_vma_move_to_active(struct i915_vma *vma, return err; if (flags & EXEC_OBJECT_WRITE) { - if (intel_fb_obj_invalidate(obj, ORIGIN_CS)) - __i915_active_request_set(&obj->frontbuffer_write, rq); + if (intel_frontbuffer_invalidate(obj->frontbuffer, ORIGIN_CS)) + i915_active_ref(&obj->frontbuffer->write, + rq->fence.context, + rq); dma_resv_add_excl_fence(vma->resv, &rq->fence); obj->write_domain = I915_GEM_DOMAIN_RENDER;
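Every CPU write path touched above follows the same bracket around the actual copy: invalidate the frontbuffer before writing, flush it afterwards. A distilled sketch of that pattern, where write_payload() is a hypothetical stand-in for the copy step and obj->frontbuffer is assumed to be NULL (and tolerated by the helpers) for objects that were never scanned out:

/*
 * Sketch of the invalidate/flush bracketing used by the pwrite paths.
 * write_payload() is hypothetical; the helpers are assumed to accept
 * a NULL frontbuffer for ordinary, never-displayed objects.
 */
static int cpu_write_tracked(struct drm_i915_gem_object *obj, void *data)
{
	int err;

	/* The display engine's cached view is about to go stale. */
	intel_frontbuffer_invalidate(obj->frontbuffer, ORIGIN_CPU);

	err = write_payload(obj, data);		/* hypothetical copy step */

	/* Let PSR/FBC and friends pick the new content back up. */
	intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_CPU);

	return err;
}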
From patchwork Wed Aug 14 09:26:43 2019 From: Chris Wilson To: intel-gfx@lists.freedesktop.org Date: Wed, 14 Aug 2019 10:26:43 +0100 Message-Id: <20190814092643.1908-8-chris@chris-wilson.co.uk> In-Reply-To: <20190814092643.1908-1-chris@chris-wilson.co.uk> References: <20190814092643.1908-1-chris@chris-wilson.co.uk> Subject: [Intel-gfx] [PATCH 8/8] drm/i915: Markup expected timeline locks for i915_active As every i915_active_request should be serialised by a dedicated lock, i915_active consists of a tree of locks; one for each node. Mark up each i915_active_request with the lock that is supposed to be guarding it so that we can verify that the serialised updates are indeed serialised. Signed-off-by: Chris Wilson ---
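The annotation itself boils down to a small debug-only pattern: record the expected lock next to the tracked pointer, then lockdep-assert it on every update. A minimal sketch under the same CONFIG_DRM_I915_DEBUG_GEM guard as the patch (struct tracked_slot and its helpers are illustrative names, not driver code):

#include <linux/kernel.h>
#include <linux/lockdep.h>
#include <linux/mutex.h>

struct tracked_slot {
	void *payload;
#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM)
	struct mutex *lock;	/* the lock expected to serialise updates */
#endif
};

static inline void tracked_slot_init(struct tracked_slot *slot,
				     struct mutex *lock)
{
	slot->payload = NULL;
#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM)
	slot->lock = lock;	/* recorded once at init, checked on update */
#endif
}

static inline void tracked_slot_set(struct tracked_slot *slot, void *p)
{
#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM)
	lockdep_assert_held(slot->lock);	/* catch unserialised updates */
#endif
	slot->payload = p;
}

Debug builds trap any update made without the named lock held; production builds pay nothing, since the extra pointer compiles away.

 drivers/gpu/drm/i915/display/intel_overlay.c | 2 +- .../gpu/drm/i915/gem/i915_gem_client_blt.c | 2 +- drivers/gpu/drm/i915/gem/i915_gem_context.c | 2 +- drivers/gpu/drm/i915/gt/intel_context.c | 11 +++-------- drivers/gpu/drm/i915/gt/intel_engine_pool.h | 2 +- drivers/gpu/drm/i915/gt/intel_timeline.c | 7 +++---- drivers/gpu/drm/i915/gt/selftest_timeline.c | 4 ++++ .../gpu/drm/i915/gt/selftests/mock_timeline.c | 2 +- drivers/gpu/drm/i915/i915_active.c | 19 +++++++++++++++---- drivers/gpu/drm/i915/i915_active.h | 12 ++++++++++-- drivers/gpu/drm/i915/i915_active_types.h | 3 +++ drivers/gpu/drm/i915/i915_vma.c | 4 ++-- drivers/gpu/drm/i915/selftests/i915_active.c | 3 +-- 13 files changed, 46 insertions(+), 27 deletions(-) diff --git a/drivers/gpu/drm/i915/display/intel_overlay.c b/drivers/gpu/drm/i915/display/intel_overlay.c index e1248eace0e1..eca41c4a5aa6 100644 --- a/drivers/gpu/drm/i915/display/intel_overlay.c +++ b/drivers/gpu/drm/i915/display/intel_overlay.c @@ -230,7 +230,7 @@ alloc_request(struct intel_overlay *overlay, void (*fn)(struct intel_overlay *)) if (IS_ERR(rq)) return rq; - err = i915_active_ref(&overlay->last_flip, rq->fence.context, rq); + err = i915_active_ref(&overlay->last_flip, rq->timeline, rq); if (err) { i915_request_add(rq); return ERR_PTR(err); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_client_blt.c b/drivers/gpu/drm/i915/gem/i915_gem_client_blt.c index 6a4e84157bf2..818ac6915bc5 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_client_blt.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_client_blt.c @@ -211,7 +211,7 @@ static void clear_pages_worker(struct work_struct *work) * keep track of the GPU activity within this vma/request, and * propagate the signal from the request to w->dma. */ - err = i915_active_ref(&vma->active, rq->fence.context, rq); + err = i915_active_ref(&vma->active, rq->timeline, rq); if (err) goto out_request; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c index a6b0cb714292..cd1fd2e5423a 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c @@ -908,7 +908,7 @@ static int context_barrier_task(struct i915_gem_context *ctx, if (emit) err = emit(rq, data); if (err == 0) - err = i915_active_ref(&cb->base, rq->fence.context, rq); + err = i915_active_ref(&cb->base, rq->timeline, rq); i915_request_add(rq); if (err) diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c index 9114953bf920..f55691d151ae 100644 --- a/drivers/gpu/drm/i915/gt/intel_context.c +++ b/drivers/gpu/drm/i915/gt/intel_context.c @@ -306,10 +306,10 @@ int intel_context_prepare_remote_request(struct intel_context *ce, /* Queue this switch after current activity by this context.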
*/ err = i915_active_request_set(&tl->last_request, rq); + mutex_unlock(&tl->mutex); if (err) - goto unlock; + return err; } - lockdep_assert_held(&tl->mutex); /* * Guarantee context image and the timeline remains pinned until the @@ -319,12 +319,7 @@ int intel_context_prepare_remote_request(struct intel_context *ce, * words transfer the pinned ce object to tracked active request. */ GEM_BUG_ON(i915_active_is_idle(&ce->active)); - err = i915_active_ref(&ce->active, rq->fence.context, rq); - -unlock: - if (rq->timeline != tl) - mutex_unlock(&tl->mutex); - return err; + return i915_active_ref(&ce->active, rq->timeline, rq); } struct i915_request *intel_context_create_request(struct intel_context *ce) diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pool.h b/drivers/gpu/drm/i915/gt/intel_engine_pool.h index f7a0a660c1c9..8d069efd9457 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_pool.h +++ b/drivers/gpu/drm/i915/gt/intel_engine_pool.h @@ -18,7 +18,7 @@ static inline int intel_engine_pool_mark_active(struct intel_engine_pool_node *node, struct i915_request *rq) { - return i915_active_ref(&node->active, rq->fence.context, rq); + return i915_active_ref(&node->active, rq->timeline, rq); } static inline void diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c index eafd94d5e211..02fbe11b671b 100644 --- a/drivers/gpu/drm/i915/gt/intel_timeline.c +++ b/drivers/gpu/drm/i915/gt/intel_timeline.c @@ -254,7 +254,7 @@ int intel_timeline_init(struct intel_timeline *timeline, mutex_init(&timeline->mutex); - INIT_ACTIVE_REQUEST(&timeline->last_request); + INIT_ACTIVE_REQUEST(&timeline->last_request, &timeline->mutex); INIT_LIST_HEAD(&timeline->requests); i915_syncmap_init(&timeline->sync); @@ -440,8 +440,7 @@ __intel_timeline_get_seqno(struct intel_timeline *tl, * free it after the current request is retired, which ensures that * all writes into the cacheline from previous requests are complete. 
*/ - err = i915_active_ref(&tl->hwsp_cacheline->active, - tl->fence_context, rq); + err = i915_active_ref(&tl->hwsp_cacheline->active, tl, rq); if (err) goto err_cacheline; @@ -492,7 +491,7 @@ int intel_timeline_get_seqno(struct intel_timeline *tl, static int cacheline_ref(struct intel_timeline_cacheline *cl, struct i915_request *rq) { - return i915_active_ref(&cl->active, rq->fence.context, rq); + return i915_active_ref(&cl->active, rq->timeline, rq); } int intel_timeline_read_hwsp(struct i915_request *from, diff --git a/drivers/gpu/drm/i915/gt/selftest_timeline.c b/drivers/gpu/drm/i915/gt/selftest_timeline.c index d54113697745..321481403165 100644 --- a/drivers/gpu/drm/i915/gt/selftest_timeline.c +++ b/drivers/gpu/drm/i915/gt/selftest_timeline.c @@ -689,7 +689,9 @@ static int live_hwsp_wrap(void *arg) tl->seqno = -4u; + mutex_lock_nested(&tl->mutex, SINGLE_DEPTH_NESTING); err = intel_timeline_get_seqno(tl, rq, &seqno[0]); + mutex_unlock(&tl->mutex); if (err) { i915_request_add(rq); goto out; @@ -704,7 +706,9 @@ static int live_hwsp_wrap(void *arg) } hwsp_seqno[0] = tl->hwsp_seqno; + mutex_lock_nested(&tl->mutex, SINGLE_DEPTH_NESTING); err = intel_timeline_get_seqno(tl, rq, &seqno[1]); + mutex_unlock(&tl->mutex); if (err) { i915_request_add(rq); goto out; diff --git a/drivers/gpu/drm/i915/gt/selftests/mock_timeline.c b/drivers/gpu/drm/i915/gt/selftests/mock_timeline.c index 5c549205828a..598170efcaf6 100644 --- a/drivers/gpu/drm/i915/gt/selftests/mock_timeline.c +++ b/drivers/gpu/drm/i915/gt/selftests/mock_timeline.c @@ -15,7 +15,7 @@ void mock_timeline_init(struct intel_timeline *timeline, u64 context) mutex_init(&timeline->mutex); - INIT_ACTIVE_REQUEST(&timeline->last_request); + INIT_ACTIVE_REQUEST(&timeline->last_request, &timeline->mutex); INIT_LIST_HEAD(&timeline->requests); i915_syncmap_init(&timeline->sync); diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c index 2439c4f62ad8..df6164591702 100644 --- a/drivers/gpu/drm/i915/i915_active.c +++ b/drivers/gpu/drm/i915/i915_active.c @@ -169,10 +169,11 @@ node_retire(struct i915_active_request *base, struct i915_request *rq) } static struct i915_active_request * -active_instance(struct i915_active *ref, u64 idx) +active_instance(struct i915_active *ref, struct intel_timeline *tl) { struct active_node *node, *prealloc; struct rb_node **p, *parent; + u64 idx = tl->fence_context; /* * We track the most recently used timeline to skip a rbtree search @@ -211,7 +212,7 @@ active_instance(struct i915_active *ref, u64 idx) } node = prealloc; - i915_active_request_init(&node->base, NULL, node_retire); + i915_active_request_init(&node->base, &tl->mutex, NULL, node_retire); node->ref = ref; node->timeline = idx; @@ -294,18 +295,20 @@ __active_del_barrier(struct i915_active *ref, struct active_node *node) } int i915_active_ref(struct i915_active *ref, - u64 timeline, + struct intel_timeline *tl, struct i915_request *rq) { struct i915_active_request *active; int err; + lockdep_assert_held(&tl->mutex); + /* Prevent reaping in case we malloc/wait while building the tree */ err = i915_active_acquire(ref); if (err) return err; - active = active_instance(ref, timeline); + active = active_instance(ref, tl); if (!active) { err = -ENOMEM; goto out; @@ -596,6 +599,10 @@ int i915_active_acquire_preallocate_barrier(struct i915_active *ref, goto unwind; } +#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM) + node->base.lock = + &engine->kernel_context->timeline->mutex; +#endif RCU_INIT_POINTER(node->base.request, NULL); node->base.retire 
= node_retire; node->timeline = idx; @@ -701,6 +708,10 @@ int i915_active_request_set(struct i915_active_request *active, { int err; +#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM) + lockdep_assert_held(active->lock); +#endif + /* Must maintain ordering wrt previous active requests */ err = i915_request_await_active_request(rq, active); if (err) diff --git a/drivers/gpu/drm/i915/i915_active.h b/drivers/gpu/drm/i915/i915_active.h index f6d730cf2fe6..f95058f99057 100644 --- a/drivers/gpu/drm/i915/i915_active.h +++ b/drivers/gpu/drm/i915/i915_active.h @@ -58,15 +58,20 @@ void i915_active_retire_noop(struct i915_active_request *active, */ static inline void i915_active_request_init(struct i915_active_request *active, + struct mutex *lock, struct i915_request *rq, i915_active_retire_fn retire) { RCU_INIT_POINTER(active->request, rq); INIT_LIST_HEAD(&active->link); active->retire = retire ?: i915_active_retire_noop; +#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM) + active->lock = lock; +#endif } -#define INIT_ACTIVE_REQUEST(name) i915_active_request_init((name), NULL, NULL) +#define INIT_ACTIVE_REQUEST(name, lock) \ + i915_active_request_init((name), (lock), NULL, NULL) /** * i915_active_request_set - updates the tracker to watch the current request @@ -81,6 +86,9 @@ static inline void __i915_active_request_set(struct i915_active_request *active, struct i915_request *request) { +#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM) + lockdep_assert_held(active->lock); +#endif list_move(&active->link, &request->active_list); rcu_assign_pointer(active->request, request); } @@ -362,7 +370,7 @@ void __i915_active_init(struct drm_i915_private *i915, } while (0) int i915_active_ref(struct i915_active *ref, - u64 timeline, + struct intel_timeline *tl, struct i915_request *rq); int i915_active_wait(struct i915_active *ref); diff --git a/drivers/gpu/drm/i915/i915_active_types.h b/drivers/gpu/drm/i915/i915_active_types.h index ae3ee441c114..d857bd12aa7e 100644 --- a/drivers/gpu/drm/i915/i915_active_types.h +++ b/drivers/gpu/drm/i915/i915_active_types.h @@ -24,6 +24,9 @@ struct i915_active_request { struct i915_request __rcu *request; struct list_head link; i915_active_retire_fn retire; +#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM) + struct mutex *lock; +#endif }; struct active_node; diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c index a32243951b29..73b02234affe 100644 --- a/drivers/gpu/drm/i915/i915_vma.c +++ b/drivers/gpu/drm/i915/i915_vma.c @@ -903,14 +903,14 @@ int i915_vma_move_to_active(struct i915_vma *vma, * add the active reference first and queue for it to be dropped * *last*. */ - err = i915_active_ref(&vma->active, rq->fence.context, rq); + err = i915_active_ref(&vma->active, rq->timeline, rq); if (unlikely(err)) return err; if (flags & EXEC_OBJECT_WRITE) { if (intel_frontbuffer_invalidate(obj->frontbuffer, ORIGIN_CS)) i915_active_ref(&obj->frontbuffer->write, - rq->fence.context, + rq->timeline, rq); dma_resv_add_excl_fence(vma->resv, &rq->fence); diff --git a/drivers/gpu/drm/i915/selftests/i915_active.c b/drivers/gpu/drm/i915/selftests/i915_active.c index e5cd5d47e380..77d844ac8b71 100644 --- a/drivers/gpu/drm/i915/selftests/i915_active.c +++ b/drivers/gpu/drm/i915/selftests/i915_active.c @@ -110,8 +110,7 @@ __live_active_setup(struct drm_i915_private *i915) submit, GFP_KERNEL); if (err >= 0) - err = i915_active_ref(&active->base, - rq->fence.context, rq); + err = i915_active_ref(&active->base, rq->timeline, rq); i915_request_add(rq); if (err) { pr_err("Failed to track active ref!\n");
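Taken together, the new contract at every caller is: hold the request's timeline mutex (held in any case from i915_request_create() until i915_request_add() while the request is under construction) and pass that timeline to i915_active_ref(), which can now assert it rather than trust its callers. A hedged sketch of the resulting idiom, with track_request() as an illustrative wrapper rather than driver code:

static int track_request(struct i915_active *ref, struct i915_request *rq)
{
	/*
	 * rq->timeline->mutex is held for the whole of request
	 * construction, so i915_active_ref() can verify the
	 * serialisation it relies on instead of trusting callers.
	 */
	lockdep_assert_held(&rq->timeline->mutex);
	return i915_active_ref(ref, rq->timeline, rq);
}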