From patchwork Sat May 25 19:26:52 2013
From: Ben Widawsky
To: Intel GFX
Date: Sat, 25 May 2013 12:26:52 -0700
Message-Id: <1369510028-3343-19-git-send-email-ben@bwidawsk.net>
In-Reply-To: <1369510028-3343-1-git-send-email-ben@bwidawsk.net>
References: <1369510028-3343-1-git-send-email-ben@bwidawsk.net>
Cc: Ben Widawsky
Subject: [Intel-gfx] [PATCH 18/34] drm/i915: Drop dev from pte_encode

The original pte_encode function needed the dev argument so we could do
platform specific handling via IS_GENX, etc. With the merging of the pte
encoding functions there should never be a need to quirk away gen
specific details.

The patch doesn't do much, but it makes the upcoming reworks in
gtt/ppgtt/mm slightly (albeit, ever so slightly) easier.
CC: Kenneth Graunke
Signed-off-by: Ben Widawsky
Reviewed-by: Kenneth Graunke
---
 drivers/gpu/drm/i915/i915_drv.h     |  6 ++----
 drivers/gpu/drm/i915/i915_gem_gtt.c | 21 ++++++++-------------
 2 files changed, 10 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 307c062..a94f128 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -437,8 +437,7 @@ struct i915_gtt {
 				   struct sg_table *st,
 				   unsigned int pg_start,
 				   enum i915_cache_level cache_level);
-	gen6_gtt_pte_t (*pte_encode)(struct drm_device *dev,
-				     dma_addr_t addr,
+	gen6_gtt_pte_t (*pte_encode)(dma_addr_t addr,
 				     enum i915_cache_level level);
 };
 #define gtt_total_entries(gtt) ((gtt).total >> PAGE_SHIFT)
@@ -459,8 +458,7 @@ struct i915_hw_ppgtt {
 			       struct sg_table *st,
 			       unsigned int pg_start,
 			       enum i915_cache_level cache_level);
-	gen6_gtt_pte_t (*pte_encode)(struct drm_device *dev,
-				     dma_addr_t addr,
+	gen6_gtt_pte_t (*pte_encode)(dma_addr_t addr,
 				     enum i915_cache_level level);
 	int (*enable)(struct drm_device *dev);
 	void (*cleanup)(struct i915_hw_ppgtt *ppgtt);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 521e1ef..391cec9 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -45,8 +45,7 @@
 #define GEN6_PTE_CACHE_LLC_MLC		(3 << 1)
 #define GEN6_PTE_ADDR_ENCODE(addr)	GEN6_GTT_ADDR_ENCODE(addr)
 
-static gen6_gtt_pte_t gen6_pte_encode(struct drm_device *dev,
-				      dma_addr_t addr,
+static gen6_gtt_pte_t gen6_pte_encode(dma_addr_t addr,
 				      enum i915_cache_level level)
 {
 	gen6_gtt_pte_t pte = GEN6_PTE_VALID;
@@ -72,8 +71,7 @@ static gen6_gtt_pte_t gen6_pte_encode(struct drm_device *dev,
 #define BYT_PTE_WRITEABLE		(1 << 1)
 #define BYT_PTE_SNOOPED_BY_CPU_CACHES	(1 << 2)
 
-static gen6_gtt_pte_t byt_pte_encode(struct drm_device *dev,
-				     dma_addr_t addr,
+static gen6_gtt_pte_t byt_pte_encode(dma_addr_t addr,
 				     enum i915_cache_level level)
 {
 	gen6_gtt_pte_t pte = GEN6_PTE_VALID;
@@ -90,8 +88,7 @@ static gen6_gtt_pte_t byt_pte_encode(struct drm_device *dev,
 	return pte;
 }
 
-static gen6_gtt_pte_t hsw_pte_encode(struct drm_device *dev,
-				     dma_addr_t addr,
+static gen6_gtt_pte_t hsw_pte_encode(dma_addr_t addr,
 				     enum i915_cache_level level)
 {
 	gen6_gtt_pte_t pte = GEN6_PTE_VALID;
@@ -194,8 +191,7 @@ static void gen6_ppgtt_clear_range(struct i915_hw_ppgtt *ppgtt,
 	unsigned first_pte = first_entry % I915_PPGTT_PT_ENTRIES;
 	unsigned last_pte, i;
 
-	scratch_pte = ppgtt->pte_encode(ppgtt->dev,
-					dev_priv->gtt.scratch.addr,
+	scratch_pte = ppgtt->pte_encode(dev_priv->gtt.scratch.addr,
 					I915_CACHE_LLC);
 
 	while (num_entries) {
@@ -231,8 +227,7 @@ static void gen6_ppgtt_insert_entries(struct i915_hw_ppgtt *ppgtt,
 			dma_addr_t page_addr;
 
 			page_addr = sg_page_iter_dma_address(&sg_iter);
-			pt_vaddr[act_pte] = ppgtt->pte_encode(ppgtt->dev, page_addr,
-							      cache_level);
+			pt_vaddr[act_pte] = ppgtt->pte_encode(page_addr, cache_level);
 			if (++act_pte == I915_PPGTT_PT_ENTRIES) {
 				kunmap_atomic(pt_vaddr);
 				act_pt++;
@@ -482,7 +477,7 @@ static void gen6_ggtt_insert_entries(struct drm_device *dev,
 
 	for_each_sg_page(st->sgl, &sg_iter, st->nents, 0) {
 		addr = sg_page_iter_dma_address(&sg_iter);
-		iowrite32(dev_priv->gtt.pte_encode(dev, addr, level),
+		iowrite32(dev_priv->gtt.pte_encode(addr, level),
 			  &gtt_entries[i]);
 		i++;
 	}
@@ -495,7 +490,7 @@ static void gen6_ggtt_insert_entries(struct drm_device *dev,
 	 */
 	if (i != 0)
 		WARN_ON(readl(&gtt_entries[i-1])
-			!= dev_priv->gtt.pte_encode(dev, addr, level));
+			!= dev_priv->gtt.pte_encode(addr, level));
 
 	/* This next bit makes the above posting read even more important. We
 	 * want to flush the TLBs only after we're certain all the PTE updates
@@ -520,7 +515,7 @@ static void gen6_ggtt_clear_range(struct drm_device *dev,
 		 first_entry, num_entries, max_entries))
 		num_entries = max_entries;
 
-	scratch_pte = dev_priv->gtt.pte_encode(dev, dev_priv->gtt.scratch.addr,
+	scratch_pte = dev_priv->gtt.pte_encode(dev_priv->gtt.scratch.addr,
 					       I915_CACHE_LLC);
 	for (i = 0; i < num_entries; i++)
 		iowrite32(scratch_pte, &gtt_base[i]);
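
For readers following the series: with dev gone from the encode callbacks, the
per-platform choice happens once, when the pte_encode pointer is filled in
during GTT setup, rather than inside every encode call. A rough sketch of that
selection follows; the wrapper function name is made up for illustration and
the exact probe path in the driver may differ, but the platform macros and the
encode functions are the ones touched by this patch.

	/* Illustrative sketch only: example_pick_pte_encode() is a hypothetical
	 * helper. The platform check is done once here, so hsw/byt/gen6_pte_encode
	 * no longer need a drm_device argument of their own.
	 */
	static void example_pick_pte_encode(struct drm_device *dev)
	{
		struct drm_i915_private *dev_priv = dev->dev_private;

		if (IS_HASWELL(dev))
			dev_priv->gtt.pte_encode = hsw_pte_encode;
		else if (IS_VALLEYVIEW(dev))
			dev_priv->gtt.pte_encode = byt_pte_encode;
		else
			dev_priv->gtt.pte_encode = gen6_pte_encode;
	}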