From: Imre Deak <imre.deak@intel.com>
To: intel-gfx@lists.freedesktop.org
Date: Fri, 14 Aug 2015 18:35:27 +0300
Message-Id: <1439566527-28692-1-git-send-email-imre.deak@intel.com>
In-Reply-To: <1439555937-8016-2-git-send-email-imre.deak@intel.com>
References: <1439555937-8016-2-git-send-email-imre.deak@intel.com>
Cc: Jani Nikula, Mika Kuoppala
Subject: [Intel-gfx] [PATCH v3 1/2] drm/i915/bxt: work around HW coherency issue when accessing GPU seqno

By running igt/store_dword_loop_render on BXT we can hit a coherency
problem where the seqno written at GPU command completion time is not
seen by the CPU. This results in __i915_wait_request seeing the stale
seqno and not completing the request (not considering the lost
interrupt/GPU reset mechanism).

I also verified that this isn't a case of a lost interrupt, or that the
command didn't complete somehow: when the coherency issue occurred I
read the seqno via an uncached GTT mapping too. While the cached
version of the seqno still showed the stale value, the one read via the
uncached mapping was the correct one.

Work around this issue by clflushing the corresponding CPU cacheline
following any store of the seqno and preceding any reading of it. When
reading it, do this only when the caller expects a coherent view.

v2:
- fix using the proper logical && instead of a bitwise & (Jani, Mika)
- limit the workaround to the A stepping; on later steppings this HW
  issue is fixed

v3:
- use a separate get_seqno/set_seqno vfunc (Chris)

Testcase: igt/store_dword_loop_render
Signed-off-by: Imre Deak
Reviewed-by: Chris Wilson
---
 drivers/gpu/drm/i915/intel_lrc.c        | 64 ++++++++++++++++++++++++++++-----
 drivers/gpu/drm/i915/intel_ringbuffer.h |  7 ++++
 2 files changed, 63 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 138964a..1269d57 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1699,6 +1699,34 @@ static void gen8_set_seqno(struct intel_engine_cs *ring, u32 seqno)
 	intel_write_status_page(ring, I915_GEM_HWS_INDEX, seqno);
 }
 
+static u32 bxt_a_get_seqno(struct intel_engine_cs *ring, bool lazy_coherency)
+{
+
+	/*
+	 * On BXT A steppings there is a HW coherency issue whereby the
+	 * MI_STORE_DATA_IMM storing the completed request's seqno
+	 * occasionally doesn't invalidate the CPU cache. Work around this by
+	 * clflushing the corresponding cacheline whenever the caller wants
+	 * the coherency to be guaranteed. Note that this cacheline is known
+	 * to be clean at this point, since we only write it in
+	 * bxt_a_set_seqno(), where we also do a clflush after the write. So
+	 * this clflush in practice becomes an invalidate operation.
+	 */
+
+	if (!lazy_coherency)
+		intel_flush_status_page(ring, I915_GEM_HWS_INDEX);
+
+	return intel_read_status_page(ring, I915_GEM_HWS_INDEX);
+}
+
+static void bxt_a_set_seqno(struct intel_engine_cs *ring, u32 seqno)
+{
+	intel_write_status_page(ring, I915_GEM_HWS_INDEX, seqno);
+
+	/* See bxt_a_get_seqno() explaining the reason for the clflush. */
+	intel_flush_status_page(ring, I915_GEM_HWS_INDEX);
+}
+
 static int gen8_emit_request(struct drm_i915_gem_request *request)
 {
 	struct intel_ringbuffer *ringbuf = request->ringbuf;
@@ -1868,8 +1896,13 @@ static int logical_render_ring_init(struct drm_device *dev)
 	ring->init_hw = gen8_init_render_ring;
 	ring->init_context = gen8_init_rcs_context;
 	ring->cleanup = intel_fini_pipe_control;
-	ring->get_seqno = gen8_get_seqno;
-	ring->set_seqno = gen8_set_seqno;
+	if (IS_BROXTON(dev) && INTEL_REVID(dev) < BXT_REVID_B0) {
+		ring->get_seqno = bxt_a_get_seqno;
+		ring->set_seqno = bxt_a_set_seqno;
+	} else {
+		ring->get_seqno = gen8_get_seqno;
+		ring->set_seqno = gen8_set_seqno;
+	}
 	ring->emit_request = gen8_emit_request;
 	ring->emit_flush = gen8_emit_flush_render;
 	ring->irq_get = gen8_logical_ring_get_irq;
@@ -1915,8 +1948,13 @@ static int logical_bsd_ring_init(struct drm_device *dev)
 		GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VCS1_IRQ_SHIFT;
 
 	ring->init_hw = gen8_init_common_ring;
-	ring->get_seqno = gen8_get_seqno;
-	ring->set_seqno = gen8_set_seqno;
+	if (IS_BROXTON(dev) && INTEL_REVID(dev) < BXT_REVID_B0) {
+		ring->get_seqno = bxt_a_get_seqno;
+		ring->set_seqno = bxt_a_set_seqno;
+	} else {
+		ring->get_seqno = gen8_get_seqno;
+		ring->set_seqno = gen8_set_seqno;
+	}
 	ring->emit_request = gen8_emit_request;
 	ring->emit_flush = gen8_emit_flush;
 	ring->irq_get = gen8_logical_ring_get_irq;
@@ -1965,8 +2003,13 @@ static int logical_blt_ring_init(struct drm_device *dev)
 		GT_CONTEXT_SWITCH_INTERRUPT << GEN8_BCS_IRQ_SHIFT;
 
 	ring->init_hw = gen8_init_common_ring;
-	ring->get_seqno = gen8_get_seqno;
-	ring->set_seqno = gen8_set_seqno;
+	if (IS_BROXTON(dev) && INTEL_REVID(dev) < BXT_REVID_B0) {
+		ring->get_seqno = bxt_a_get_seqno;
+		ring->set_seqno = bxt_a_set_seqno;
+	} else {
+		ring->get_seqno = gen8_get_seqno;
+		ring->set_seqno = gen8_set_seqno;
+	}
 	ring->emit_request = gen8_emit_request;
 	ring->emit_flush = gen8_emit_flush;
 	ring->irq_get = gen8_logical_ring_get_irq;
@@ -1990,8 +2033,13 @@ static int logical_vebox_ring_init(struct drm_device *dev)
 		GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VECS_IRQ_SHIFT;
 
 	ring->init_hw = gen8_init_common_ring;
-	ring->get_seqno = gen8_get_seqno;
-	ring->set_seqno = gen8_set_seqno;
+	if (IS_BROXTON(dev) && INTEL_REVID(dev) < BXT_REVID_B0) {
+		ring->get_seqno = bxt_a_get_seqno;
+		ring->set_seqno = bxt_a_set_seqno;
+	} else {
+		ring->get_seqno = gen8_get_seqno;
+		ring->set_seqno = gen8_set_seqno;
+	}
 	ring->emit_request = gen8_emit_request;
 	ring->emit_flush = gen8_emit_flush;
 	ring->irq_get = gen8_logical_ring_get_irq;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 2e85fda..95b0b4b 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -377,6 +377,13 @@ intel_ring_sync_index(struct intel_engine_cs *ring,
 	return idx;
 }
 
+static inline void
+intel_flush_status_page(struct intel_engine_cs *ring, int reg)
+{
+	drm_clflush_virt_range(&ring->status_page.page_addr[reg],
+			       sizeof(uint32_t));
+}
+
 static inline u32
 intel_read_status_page(struct intel_engine_cs *ring, int reg)
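
For reference, below is a minimal user-space analogue of the flush-after-write /
flush-before-read pattern the patch applies to the hardware status page. It is
only an illustrative sketch, not part of the patch: the names seqno_slot,
example_set_seqno() and example_get_seqno() are made up for the example, and it
assumes an x86 build where the SSE2 _mm_clflush/_mm_mfence intrinsics are
available; the kernel code above uses drm_clflush_virt_range() instead.

/*
 * Illustrative sketch only: mirror the bxt_a_set_seqno()/bxt_a_get_seqno()
 * pattern on an ordinary variable, assuming x86 with SSE2 intrinsics.
 */
#include <emmintrin.h>	/* _mm_clflush, _mm_mfence */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t seqno_slot;	/* stands in for the status page entry */

static void example_set_seqno(uint32_t seqno)
{
	seqno_slot = seqno;
	_mm_mfence();			/* order the store before the flush */
	_mm_clflush(&seqno_slot);	/* write back and evict the cacheline */
}

static uint32_t example_get_seqno(bool lazy_coherency)
{
	/* Only pay for the cacheline flush when the caller needs coherency. */
	if (!lazy_coherency) {
		_mm_clflush(&seqno_slot);
		_mm_mfence();		/* wait for the flush to complete */
	}

	return seqno_slot;
}

int main(void)
{
	example_set_seqno(42);
	printf("seqno = %u\n", example_get_seqno(false));
	return 0;
}

As in the patch, the write side always flushes after storing, so the line is
known to be clean and the read-side flush effectively acts as an invalidate.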