From patchwork Thu May 21 14:37:39 2015
X-Patchwork-Submitter: Mika Kuoppala
X-Patchwork-Id: 6455681
From: Mika Kuoppala
To: intel-gfx@lists.freedesktop.org
Cc: miku@iki.fi
Date: Thu, 21 May 2015 17:37:39 +0300
Message-Id: <1432219068-25391-12-git-send-email-mika.kuoppala@intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1432219068-25391-1-git-send-email-mika.kuoppala@intel.com>
References: <1432219068-25391-1-git-send-email-mika.kuoppala@intel.com>
Subject: [Intel-gfx] [PATCH 11/20] drm/i915/gtt: Introduce fill_page_dma()

When we set up page directories and tables, we point the entries to the
next level scratch structure. Make this generic by introducing
fill_page_dma() which maps and flushes. We also need a 32 bit variant
for legacy gens.
Signed-off-by: Mika Kuoppala
---
 drivers/gpu/drm/i915/i915_gem_gtt.c | 61 +++++++++++++++++++------------------
 1 file changed, 31 insertions(+), 30 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 5175eb8..a3ee710 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -330,6 +330,27 @@ static void cleanup_page_dma(struct drm_device *dev, struct i915_page_dma *p)
 	memset(p, 0, sizeof(*p));
 }
 
+static void fill_page_dma(struct drm_device *dev, struct i915_page_dma *p,
+			  const uint64_t val)
+{
+	int i;
+	uint64_t * const vaddr = kmap_atomic(p->page);
+
+	for (i = 0; i < 512; i++)
+		vaddr[i] = val;
+
+	kunmap_atomic(vaddr);
+}
+
+static void fill_page_dma_32(struct drm_device *dev, struct i915_page_dma *p,
+			     const uint32_t val32)
+{
+	uint64_t v = val32;
+	v = v << 32 | val32;
+
+	fill_page_dma(dev, p, v);
+}
+
 static void free_pt(struct drm_device *dev, struct i915_page_table *pt)
 {
 	cleanup_page_dma(dev, &pt->base);
@@ -340,19 +361,12 @@ static void free_pt(struct drm_device *dev, struct i915_page_table *pt)
 static void gen8_initialize_pt(struct i915_address_space *vm,
 			       struct i915_page_table *pt)
 {
-	gen8_pte_t *pt_vaddr, scratch_pte;
-	int i;
+	gen8_pte_t scratch_pte;
 
-	pt_vaddr = kmap_atomic(pt->base.page);
 	scratch_pte = gen8_pte_encode(vm->scratch.addr,
 				      I915_CACHE_LLC, true);
 
-	for (i = 0; i < GEN8_PTES; i++)
-		pt_vaddr[i] = scratch_pte;
-
-	if (!HAS_LLC(vm->dev))
-		drm_clflush_virt_range(pt_vaddr, PAGE_SIZE);
-	kunmap_atomic(pt_vaddr);
+	fill_page_dma(vm->dev, &pt->base, scratch_pte);
 }
 
 static struct i915_page_table *alloc_pt(struct drm_device *dev)
@@ -585,20 +599,13 @@ static void gen8_initialize_pd(struct i915_address_space *vm,
 			       struct i915_page_directory *pd)
 {
 	struct i915_hw_ppgtt *ppgtt =
-		container_of(vm, struct i915_hw_ppgtt, base);
-	gen8_pde_t *page_directory;
-	struct i915_page_table *pt;
-	int i;
+			container_of(vm, struct i915_hw_ppgtt, base);
+	gen8_pde_t scratch_pde;
 
-	page_directory = kmap_atomic(pd->base.page);
-	pt = ppgtt->scratch_pt;
-	for (i = 0; i < I915_PDES; i++)
-		/* Map the PDE to the page table */
-		__gen8_do_map_pt(page_directory + i, pt, vm->dev);
+	scratch_pde = gen8_pde_encode(vm->dev, ppgtt->scratch_pt->base.daddr,
+				      I915_CACHE_LLC);
 
-	if (!HAS_LLC(vm->dev))
-		drm_clflush_virt_range(page_directory, PAGE_SIZE);
-	kunmap_atomic(page_directory);
+	fill_page_dma(vm->dev, &pd->base, scratch_pde);
 }
 
 static void gen8_free_page_tables(struct i915_page_directory *pd, struct drm_device *dev)
@@ -1242,22 +1249,16 @@ static void gen6_ppgtt_insert_entries(struct i915_address_space *vm,
 }
 
 static void gen6_initialize_pt(struct i915_address_space *vm,
-		struct i915_page_table *pt)
+			       struct i915_page_table *pt)
 {
-	gen6_pte_t *pt_vaddr, scratch_pte;
-	int i;
+	gen6_pte_t scratch_pte;
 
 	WARN_ON(vm->scratch.addr == 0);
 
 	scratch_pte = vm->pte_encode(vm->scratch.addr,
 			I915_CACHE_LLC, true, 0);
 
-	pt_vaddr = kmap_atomic(pt->base.page);
-
-	for (i = 0; i < GEN6_PTES; i++)
-		pt_vaddr[i] = scratch_pte;
-
-	kunmap_atomic(pt_vaddr);
+	fill_page_dma_32(vm->dev, &pt->base, scratch_pte);
 }
 
 static int gen6_alloc_va_range(struct i915_address_space *vm,
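[Editor's illustration, not part of the patch: a minimal standalone sketch of
the replication trick fill_page_dma_32() uses above. The names fill_page and
fill_page_32, the fixed 4 KiB PAGE_SIZE and the main() harness are assumptions
for the sake of a self-contained userspace example; no i915 structures or
kmap_atomic() are involved.]

/*
 * Replicate a 32-bit scratch entry into both halves of a 64-bit word so a
 * 4 KiB page can be filled with 512 64-bit stores, mirroring the shape of
 * fill_page_dma()/fill_page_dma_32() in the patch. Illustrative names only.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096

static void fill_page(uint64_t *vaddr, uint64_t val)
{
	int i;

	/* 4096 bytes / 8 bytes per store = 512 iterations */
	for (i = 0; i < PAGE_SIZE / 8; i++)
		vaddr[i] = val;
}

static void fill_page_32(uint64_t *vaddr, uint32_t val32)
{
	uint64_t v = val32;

	/* duplicate the 32-bit value into the high and low dwords */
	v = v << 32 | val32;
	fill_page(vaddr, v);
}

int main(void)
{
	static uint64_t page[PAGE_SIZE / 8];

	fill_page_32(page, 0xdeadbeef);

	/* prints 0xdeadbeefdeadbeef: both 32-bit slots hold the value */
	printf("page[0] = 0x%016llx\n", (unsigned long long)page[0]);
	return 0;
}

The point of the 32-bit variant is that gen6/gen7 PTEs are 32 bits wide, so
the scratch PTE has to be doubled up before the page can be written with the
same 64-bit fill loop the gen8 path uses.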