From patchwork Sat Jan 16 09:46:17 2016
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 8048791
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Cc: Mika Kuoppala
Date: Sat, 16 Jan 2016 09:46:17 +0000
Message-Id: <1452937580-3625-3-git-send-email-chris@chris-wilson.co.uk>
X-Mailer: git-send-email 2.7.0.rc3
In-Reply-To: <1452937580-3625-1-git-send-email-chris@chris-wilson.co.uk>
References: <1452868545-19586-1-git-send-email-chris@chris-wilson.co.uk>
	<1452937580-3625-1-git-send-email-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH v2 3/6] drm/i915: Use ordered seqno write interrupt generation on gen8+ execlists
List-Id: Intel graphics driver community testing & development
Broadwell and later currently use the same unordered command sequence to
update the seqno in the HWS status page and then assert the user
interrupt. We should apply the w/a from legacy (where we do an mmio read
to delay the seqno read until after the interrupt), but this is not
enough to enforce coherent seqno visibility on Skylake. Rather than
search for the proper post-interrupt seqno barrier, use a strongly
ordered command sequence to write the seqno, then assert the user
interrupt from the ring.

Signed-off-by: Chris Wilson
Cc: Mika Kuoppala
---
 drivers/gpu/drm/i915/intel_lrc.c        | 50 +++++++++++++++++++++++++++++----
 drivers/gpu/drm/i915/intel_ringbuffer.h |  1 +
 2 files changed, 45 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 3ed4ab7f571e..7b5b3180bed9 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1818,7 +1818,6 @@ static int gen8_emit_request(struct drm_i915_gem_request *request)
 {
 	struct intel_ringbuffer *ringbuf = request->ringbuf;
 	struct intel_engine_cs *ring = ringbuf->ring;
-	u32 cmd;
 	int ret;
 
 	/*
@@ -1830,13 +1829,12 @@ static int gen8_emit_request(struct drm_i915_gem_request *request)
 	if (ret)
 		return ret;
 
-	cmd = MI_STORE_DWORD_IMM_GEN4;
-	cmd |= MI_GLOBAL_GTT;
-
-	intel_logical_ring_emit(ringbuf, cmd);
 	intel_logical_ring_emit(ringbuf,
+				(MI_FLUSH_DW + 1) | MI_FLUSH_DW_OP_STOREDW);
+	intel_logical_ring_emit(ringbuf,
+				MI_FLUSH_DW_USE_GTT |
 				(ring->status_page.gfx_addr +
-				(I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT)));
+				 I915_GEM_HWS_INDEX_ADDR));
 	intel_logical_ring_emit(ringbuf, 0);
 	intel_logical_ring_emit(ringbuf, i915_gem_request_get_seqno(request));
 	intel_logical_ring_emit(ringbuf, MI_USER_INTERRUPT);
@@ -1854,6 +1852,45 @@ static int gen8_emit_request(struct drm_i915_gem_request *request)
 	return 0;
 }
 
+static int gen8_emit_request_render(struct drm_i915_gem_request *request)
+{
+	struct intel_ringbuffer *ringbuf = request->ringbuf;
+	struct intel_engine_cs *ring = ringbuf->ring;
+	int ret;
+
+	/*
+	 * Reserve space for 2 NOOPs at the end of each request to be
+	 * used as a workaround for not being allowed to do lite
+	 * restore with HEAD==TAIL (WaIdleLiteRestore).
+	 */
+	ret = intel_logical_ring_begin(request, 8);
+	if (ret)
+		return ret;
+
+	intel_logical_ring_emit(ringbuf, GFX_OP_PIPE_CONTROL(5));
+	intel_logical_ring_emit(ringbuf,
+				(PIPE_CONTROL_GLOBAL_GTT_IVB |
+				 PIPE_CONTROL_CS_STALL |
+				 PIPE_CONTROL_QW_WRITE));
+	intel_logical_ring_emit(ringbuf,
+				ring->status_page.gfx_addr +
+				I915_GEM_HWS_INDEX_ADDR);
+	intel_logical_ring_emit(ringbuf, 0);
+	intel_logical_ring_emit(ringbuf, i915_gem_request_get_seqno(request));
+	intel_logical_ring_emit(ringbuf, MI_USER_INTERRUPT);
+	intel_logical_ring_advance_and_submit(request);
+
+	/*
+	 * Here we add two extra NOOPs as padding to avoid
+	 * lite restore of a context with HEAD==TAIL.
+	 */
+	intel_logical_ring_emit(ringbuf, MI_NOOP);
+	intel_logical_ring_emit(ringbuf, MI_NOOP);
+	intel_logical_ring_advance(ringbuf);
+
+	return 0;
+}
+
 static int intel_lr_context_render_state_init(struct drm_i915_gem_request *req)
 {
 	struct render_state so;
@@ -2034,6 +2071,7 @@ static int logical_render_ring_init(struct drm_device *dev)
 	ring->init_context = gen8_init_rcs_context;
 	ring->cleanup = intel_fini_pipe_control;
 	ring->emit_flush = gen8_emit_flush_render;
+	ring->emit_request = gen8_emit_request_render;
 
 	ring->dev = dev;
 
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 8fb02b21e75d..e1797d42054c 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -424,6 +424,7 @@ intel_write_status_page(struct intel_engine_cs *ring,
  * The area from dword 0x30 to 0x3ff is available for driver usage.
  */
 #define I915_GEM_HWS_INDEX		0x30
+#define I915_GEM_HWS_INDEX_ADDR (I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT)
 #define I915_GEM_HWS_SCRATCH_INDEX	0x40
 #define I915_GEM_HWS_SCRATCH_ADDR (I915_GEM_HWS_SCRATCH_INDEX << MI_STORE_DWORD_INDEX_SHIFT)
 