From patchwork Fri May 22 17:05:04 2015
X-Patchwork-Submitter: Mika Kuoppala
X-Patchwork-Id: 6466771
From: Mika Kuoppala
To: intel-gfx@lists.freedesktop.org
Cc: miku@iki.fi
Date: Fri, 22 May 2015 20:05:04 +0300
Message-Id: <1432314314-23530-12-git-send-email-mika.kuoppala@intel.com>
In-Reply-To: <1432314314-23530-1-git-send-email-mika.kuoppala@intel.com>
References: <1432314314-23530-1-git-send-email-mika.kuoppala@intel.com>
Subject: [Intel-gfx] [PATCH 11/21] drm/i915/gtt: Introduce fill_page_dma()

When we set up page directories and tables, we point the entries to the
next level scratch structure. Make this generic by introducing
fill_page_dma(), which maps, fills and flushes the page. We also need a
32-bit variant for legacy gens.
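For reference, here is a minimal user-space sketch (illustrative only, not
part of the patch; fill_page(), fill_page_32() and the static buffer are
stand-ins for the kernel helpers) of the packing the 32-bit variant does:
the 32-bit entry is replicated into both halves of a 64-bit word so the same
64-bit fill loop can serve both gen8 and legacy gen6/7 page tables.

	/*
	 * Standalone illustration of the value packing used by the
	 * 32-bit variant: replicate the 32-bit entry into both halves
	 * of a 64-bit word, then reuse the 64-bit fill loop.
	 */
	#include <assert.h>
	#include <stdint.h>

	#define ENTRIES_PER_PAGE 512	/* 4096 bytes / sizeof(uint64_t) */

	static void fill_page(uint64_t *vaddr, uint64_t val)
	{
		int i;

		for (i = 0; i < ENTRIES_PER_PAGE; i++)
			vaddr[i] = val;
	}

	static void fill_page_32(uint64_t *vaddr, uint32_t val32)
	{
		uint64_t v = val32;

		v = v << 32 | val32;	/* same packing as fill_page_dma_32() */

		fill_page(vaddr, v);
	}

	int main(void)
	{
		static uint64_t page[ENTRIES_PER_PAGE];

		fill_page_32(page, 0xdeadbeef);

		/* every 64-bit qword now carries the 32-bit value twice */
		assert(page[0] == 0xdeadbeefdeadbeefULL);
		assert(page[ENTRIES_PER_PAGE - 1] == 0xdeadbeefdeadbeefULL);

		return 0;
	}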
v2: Fix flushes and handle valleyview (Ville)

Signed-off-by: Mika Kuoppala
---
 drivers/gpu/drm/i915/i915_gem_gtt.c | 71 +++++++++++++++++++------------------
 1 file changed, 37 insertions(+), 34 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index f747bd3..d020b5e 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -330,6 +330,31 @@ static void cleanup_page_dma(struct drm_device *dev, struct i915_page_dma *p)
 	memset(p, 0, sizeof(*p));
 }
 
+static void fill_page_dma(struct drm_device *dev, struct i915_page_dma *p,
+			  const uint64_t val)
+{
+	int i;
+	uint64_t * const vaddr = kmap_atomic(p->page);
+
+	for (i = 0; i < 512; i++)
+		vaddr[i] = val;
+
+	if (!HAS_LLC(dev) && !IS_VALLEYVIEW(dev))
+		drm_clflush_virt_range(vaddr, PAGE_SIZE);
+
+	kunmap_atomic(vaddr);
+}
+
+static void fill_page_dma_32(struct drm_device *dev, struct i915_page_dma *p,
+			     const uint32_t val32)
+{
+	uint64_t v = val32;
+
+	v = v << 32 | val32;
+
+	fill_page_dma(dev, p, v);
+}
+
 static void free_pt(struct drm_device *dev, struct i915_page_table *pt)
 {
 	cleanup_page_dma(dev, &pt->base);
@@ -340,19 +365,11 @@ static void free_pt(struct drm_device *dev, struct i915_page_table *pt)
 static void gen8_initialize_pt(struct i915_address_space *vm,
 			       struct i915_page_table *pt)
 {
-	gen8_pte_t *pt_vaddr, scratch_pte;
-	int i;
-
-	pt_vaddr = kmap_atomic(pt->base.page);
-	scratch_pte = gen8_pte_encode(vm->scratch.addr,
-				      I915_CACHE_LLC, true);
+	gen8_pte_t scratch_pte;
 
-	for (i = 0; i < GEN8_PTES; i++)
-		pt_vaddr[i] = scratch_pte;
+	scratch_pte = gen8_pte_encode(vm->scratch.addr, I915_CACHE_LLC, true);
 
-	if (!HAS_LLC(vm->dev))
-		drm_clflush_virt_range(pt_vaddr, PAGE_SIZE);
-	kunmap_atomic(pt_vaddr);
+	fill_page_dma(vm->dev, &pt->base, scratch_pte);
 }
 
 static struct i915_page_table *alloc_pt(struct drm_device *dev)
@@ -585,20 +602,13 @@ static void gen8_initialize_pd(struct i915_address_space *vm,
 			       struct i915_page_directory *pd)
 {
 	struct i915_hw_ppgtt *ppgtt =
-			container_of(vm, struct i915_hw_ppgtt, base);
-	gen8_pde_t *page_directory;
-	struct i915_page_table *pt;
-	int i;
+		container_of(vm, struct i915_hw_ppgtt, base);
+	gen8_pde_t scratch_pde;
 
-	page_directory = kmap_atomic(pd->base.page);
-	pt = ppgtt->scratch_pt;
-	for (i = 0; i < I915_PDES; i++)
-		/* Map the PDE to the page table */
-		__gen8_do_map_pt(page_directory + i, pt, vm->dev);
+	scratch_pde = gen8_pde_encode(vm->dev, ppgtt->scratch_pt->base.daddr,
+				      I915_CACHE_LLC);
 
-	if (!HAS_LLC(vm->dev))
-		drm_clflush_virt_range(page_directory, PAGE_SIZE);
-	kunmap_atomic(page_directory);
+	fill_page_dma(vm->dev, &pd->base, scratch_pde);
 }
 
 static void gen8_free_page_tables(struct i915_page_directory *pd, struct drm_device *dev)
@@ -1292,22 +1302,15 @@ static void gen6_ppgtt_insert_entries(struct i915_address_space *vm,
 }
 
 static void gen6_initialize_pt(struct i915_address_space *vm,
-		struct i915_page_table *pt)
+			       struct i915_page_table *pt)
 {
-	gen6_pte_t *pt_vaddr, scratch_pte;
-	int i;
+	gen6_pte_t scratch_pte;
 
 	WARN_ON(vm->scratch.addr == 0);
 
-	scratch_pte = vm->pte_encode(vm->scratch.addr,
-			I915_CACHE_LLC, true, 0);
-
-	pt_vaddr = kmap_atomic(pt->base.page);
-
-	for (i = 0; i < GEN6_PTES; i++)
-		pt_vaddr[i] = scratch_pte;
+	scratch_pte = vm->pte_encode(vm->scratch.addr, I915_CACHE_LLC, true, 0);
 
-	kunmap_atomic(pt_vaddr);
+	fill_page_dma_32(vm->dev, &pt->base, scratch_pte);
 }
 
 static int gen6_alloc_va_range(struct i915_address_space *vm,