From patchwork Sat May 10 03:59:42 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 4146451
From: Ben Widawsky
To: Intel GFX
Cc: Ben Widawsky
Date: Fri, 9 May 2014 20:59:42 -0700
Message-Id: <1399694391-3935-48-git-send-email-benjamin.widawsky@intel.com>
X-Mailer: git-send-email 1.9.2
In-Reply-To: <1399694391-3935-1-git-send-email-benjamin.widawsky@intel.com>
References: <1399694391-3935-1-git-send-email-benjamin.widawsky@intel.com>
Subject: [Intel-gfx] [PATCH 47/56] drm/i915/bdw: 4 level page tables

The map is easy: it is the same register as PDP descriptor 0, but it has
only one entry. The mapping code is also now trivial, thanks to all of
the prep patches.

Signed-off-by: Ben Widawsky
---
 drivers/gpu/drm/i915/i915_gem_gtt.c | 53 +++++++++++++++++++++++++++++++++----
 drivers/gpu/drm/i915/i915_gem_gtt.h |  4 ++-
 drivers/gpu/drm/i915/i915_reg.h     |  1 +
 3 files changed, 52 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 3478bf5..15e61d8 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -528,9 +528,9 @@ static int gen8_write_pdp(struct intel_ring_buffer *ring,
 	return 0;
 }
 
-static int gen8_mm_switch(struct i915_hw_ppgtt *ppgtt,
-			  struct intel_ring_buffer *ring,
-			  bool synchronous)
+static int gen8_legacy_mm_switch(struct i915_hw_ppgtt *ppgtt,
+				 struct intel_ring_buffer *ring,
+				 bool synchronous)
 {
 	int i, ret;
 
@@ -547,6 +547,13 @@ static int gen8_mm_switch(struct i915_hw_ppgtt *ppgtt,
 	return 0;
 }
 
+static int gen8_48b_mm_switch(struct i915_hw_ppgtt *ppgtt,
+			      struct intel_ring_buffer *ring,
+			      bool synchronous)
+{
+	return gen8_write_pdp(ring, 0, ppgtt->pml4.daddr, synchronous);
+}
+
 static void gen8_ppgtt_clear_range(struct i915_address_space *vm,
 				   uint64_t start,
 				   uint64_t length,
@@ -674,6 +681,7 @@ static void gen8_map_pagetable_range(struct i915_address_space *vm,
 	kunmap_atomic(pagedir);
 }
 
+
 static void gen8_map_pagedir(struct i915_pagedir *pd,
 			     struct i915_pagetab *pt,
 			     int entry,
@@ -693,6 +701,35 @@ static void gen8_unmap_pagetable(struct i915_hw_ppgtt *ppgtt,
 	gen8_map_pagedir(pd, ppgtt->scratch_pt, pde, ppgtt->base.dev);
 }
 
+static void gen8_map_page_directory(struct i915_pagedirpo *pdp,
+				    struct i915_pagedir *pd,
+				    int index,
+				    struct drm_device *dev)
+{
+	gen8_ppgtt_pdpe_t *pagedirpo;
+	gen8_ppgtt_pdpe_t pdpe;
+
+	if (!HAS_48B_PPGTT(dev))
+		return;
+
+	pagedirpo = kmap_atomic(pdp->page);
+	pdpe = gen8_pde_encode(dev, pd->daddr, I915_CACHE_LLC);
+	pagedirpo[index] = pdpe;
+	kunmap_atomic(pagedirpo);
+}
+
+static void gen8_map_page_directory_pointer(struct i915_pml4 *pml4,
+					    struct i915_pagedirpo *pdp,
+					    int index,
+					    struct drm_device *dev)
+{
+	gen8_ppgtt_pml4e_t *pagemap = kmap_atomic(pml4->page);
+	gen8_ppgtt_pml4e_t pml4e = gen8_pde_encode(dev, pdp->daddr, I915_CACHE_LLC);
+
+	BUG_ON(!HAS_48B_PPGTT(dev));
+
+	pagemap[index] = pml4e;
+	kunmap_atomic(pagemap);
+}
+
 static void gen8_teardown_va_range_3lvl(struct i915_address_space *vm,
 					struct i915_pagedirpo *pdp,
 					uint64_t start, uint64_t length)
@@ -1065,6 +1102,7 @@ static int gen8_alloc_va_range_3lvl(struct i915_address_space *vm,
 		set_bit(pdpe, pdp->used_pdpes);
 
 		gen8_map_pagetable_range(vm, pd, start, length);
+		gen8_map_page_directory(pdp, pd, pdpe, dev);
 	}
 
 	free_gen8_temp_bitmaps(new_page_dirs, new_page_tables, pdpes);
@@ -1132,6 +1170,8 @@ static int gen8_alloc_va_range_4lvl(struct i915_address_space *vm,
 		ret = gen8_alloc_va_range_3lvl(vm, pdp, start, length);
 		if (ret)
 			goto err_out;
+
+		gen8_map_page_directory_pointer(pml4, pdp, pml4e, vm->dev);
 	}
 
 	WARN(bitmap_weight(pml4->used_pml4es, GEN8_PML4ES_PER_PML4) > 2,
@@ -1201,6 +1241,7 @@ static int gen8_ppgtt_init_common(struct i915_hw_ppgtt *ppgtt, uint64_t size)
 			free_pt_scratch(ppgtt->scratch_pd, ppgtt->base.dev);
 			return ret;
 		}
+		ppgtt->switch_mm = gen8_48b_mm_switch;
 	} else {
 		int ret = __pdp_init(&ppgtt->pdp, false);
 		if (ret) {
@@ -1208,7 +1249,7 @@ static int gen8_ppgtt_init_common(struct i915_hw_ppgtt *ppgtt, uint64_t size)
 			return ret;
 		}
 
-		ppgtt->switch_mm = gen8_mm_switch;
+		ppgtt->switch_mm = gen8_legacy_mm_switch;
 		trace_i915_pagedirpo_alloc(&ppgtt->base, 0, 0, GEN8_PML4E_SHIFT);
 	}
 
@@ -1235,6 +1276,7 @@ static int gen8_aliasing_ppgtt_init(struct i915_hw_ppgtt *ppgtt)
 		return ret;
 	}
 
+	/* FIXME: PML4 */
 	gen8_for_each_pdpe(pd, pdp, start, size, temp, pdpe)
 		gen8_map_pagetable_range(&ppgtt->base, pd, start, size);
 
@@ -1472,8 +1514,9 @@ static int gen8_ppgtt_enable(struct i915_hw_ppgtt *ppgtt)
 	int j, ret;
 
 	for_each_ring(ring, dev_priv, j) {
+		u32 four_level = HAS_48B_PPGTT(dev) ? GEN8_GFX_PPGTT_64B : 0;
 		I915_WRITE(RING_MODE_GEN7(ring),
-			   _MASKED_BIT_ENABLE(GFX_PPGTT_ENABLE));
+			   _MASKED_BIT_ENABLE(GFX_PPGTT_ENABLE | four_level));
 
 		/* We promise to do a switch later with FULL PPGTT. If this is
 		 * aliasing, this is the one and only switch we'll do */
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index 0e5cd58..3904ae5 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -36,7 +36,9 @@
 typedef uint32_t gen6_gtt_pte_t;
 typedef uint64_t gen8_gtt_pte_t;
-typedef gen8_gtt_pte_t gen8_ppgtt_pde_t;
+typedef gen8_gtt_pte_t gen8_ppgtt_pde_t;
+typedef gen8_ppgtt_pde_t gen8_ppgtt_pdpe_t;
+typedef gen8_ppgtt_pdpe_t gen8_ppgtt_pml4e_t;
 
 /* GEN Agnostic defines */
 #define I915_PAGE_SIZE 4096
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index bc34250..d8ee8ed 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -969,6 +969,7 @@ enum punit_power_well {
 #define GFX_REPLAY_MODE		(1<<11)
 #define GFX_PSMI_GRANULARITY	(1<<10)
 #define GFX_PPGTT_ENABLE	(1<<9)
+#define GEN8_GFX_PPGTT_64B	(1<<7)
 
 #define VLV_DISPLAY_BASE 0x180000
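P.S. For readers following along outside the driver tree: the core idea of the
patch is that the legacy (3-level) switch programs four PDP descriptors, while
the 48b (4-level) switch collapses to a single write of the PML4 base address
into the PDP0 descriptor. A minimal standalone sketch of that idea, using
invented stand-in types (`fake_ring`, `fake_ppgtt`, `fake_write_pdp` are
illustrative only, not driver API):

```c
#include <stdint.h>

/* Stand-in types for illustration only; the real driver uses
 * struct i915_hw_ppgtt and struct intel_ring_buffer. */
struct fake_ring {
	uint64_t pdp[4];	/* models the four PDP descriptor registers */
};

struct fake_ppgtt {
	uint64_t pd_daddr[4];	/* legacy: one page-directory address per PDP */
	uint64_t pml4_daddr;	/* 48b: a single PML4 base address */
};

/* Models gen8_write_pdp(): load one PDP descriptor register. */
static int fake_write_pdp(struct fake_ring *ring, int entry, uint64_t addr)
{
	ring->pdp[entry] = addr;
	return 0;
}

/* Legacy (3-level) switch: program all four PDP descriptors. */
static int fake_legacy_mm_switch(struct fake_ppgtt *ppgtt,
				 struct fake_ring *ring)
{
	for (int i = 3; i >= 0; i--) {
		int ret = fake_write_pdp(ring, i, ppgtt->pd_daddr[i]);
		if (ret)
			return ret;
	}
	return 0;
}

/* 48b (4-level) switch: the same register as PDP descriptor 0,
 * but only one entry -- the PML4 address. */
static int fake_48b_mm_switch(struct fake_ppgtt *ppgtt,
			      struct fake_ring *ring)
{
	return fake_write_pdp(ring, 0, ppgtt->pml4_daddr);
}
```

This is why gen8_48b_mm_switch in the diff above is a one-liner while
gen8_legacy_mm_switch keeps its loop.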