From patchwork Mon Aug 3 08:53:27 2015
X-Patchwork-Submitter: Michel Thierry
X-Patchwork-Id: 6928051
From: Michel Thierry
To: intel-gfx@lists.freedesktop.org
Cc: Akash Goel
Date: Mon, 3 Aug 2015 09:53:27 +0100
Message-Id: <1438592007-3354-1-git-send-email-michel.thierry@intel.com>
In-Reply-To: <1438187043-34267-11-git-send-email-michel.thierry@intel.com>
References: <1438187043-34267-11-git-send-email-michel.thierry@intel.com>
X-Mailer: git-send-email 2.5.0
Subject: [Intel-gfx] [PATCH v9 10/19] drm/i915/gen8: Add 4 level support in insert_entries and clear_range

When 48b is enabled, gen8_ppgtt_insert_entries needs to read the Page Map
Level 4 (PML4) before it selects which Page Directory Pointer (PDP) it will
write to. Similarly, gen8_ppgtt_clear_range needs to get the correct PDP/PD
range. (An illustrative sketch of this 4-level address decode is appended
after the diff below.)

This patch was inspired by Ben's "Depend exclusively on map and unmap_vma".

v2: Rebase after s/page_tables/page_table/.
v3: Remove unnecessary pdpe loop in gen8_ppgtt_clear_range_4lvl and use
clamp_pdp in gen8_ppgtt_insert_entries (Akash).
v4: Merge gen8_ppgtt_clear_range_4lvl into gen8_ppgtt_clear_range to
maintain symmetry with gen8_ppgtt_insert_entries (Akash).
v5: Do not mix pages and bytes in insert_entries (Akash).
v6: Prevent overflow in sg_nents << PAGE_SHIFT when inserting 4GB at once.
v7: Rebase after Mika's ppgtt cleanup / scratch merge patch series.
Use the gen8_px_index functions, and remove the unnecessary number-of-pages
parameter in insert_pte_entries.
v8: Change gen8_ppgtt_clear_pte_range to stop at the PDP boundary, instead
of adding an extra clamp function; remove unnecessary pdp_start/pdp_len
variables (Akash).
v9: Use pages->orig_nents instead of sg_nents(pages->sgl) to get the
length (Akash).
v10: Remove the pdp warning check that had been kept in
gen8_ppgtt_insert_pte_entries until this commit (Akash).

Reviewed-by: Akash Goel (v9)
Cc: Akash Goel
Signed-off-by: Michel Thierry
---
 drivers/gpu/drm/i915/i915_gem_gtt.c | 52 +++++++++++++++++++++++++------------
 1 file changed, 36 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 31fc672..d5ae5de 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -681,9 +681,9 @@ static void gen8_ppgtt_clear_pte_range(struct i915_address_space *vm,
 	struct i915_hw_ppgtt *ppgtt =
 		container_of(vm, struct i915_hw_ppgtt, base);
 	gen8_pte_t *pt_vaddr;
-	unsigned pdpe = start >> GEN8_PDPE_SHIFT & GEN8_PDPE_MASK;
-	unsigned pde = start >> GEN8_PDE_SHIFT & GEN8_PDE_MASK;
-	unsigned pte = start >> GEN8_PTE_SHIFT & GEN8_PTE_MASK;
+	unsigned pdpe = gen8_pdpe_index(start);
+	unsigned pde = gen8_pde_index(start);
+	unsigned pte = gen8_pte_index(start);
 	unsigned num_entries = length >> PAGE_SHIFT;
 	unsigned last_pte, i;
 
@@ -722,7 +722,8 @@ static void gen8_ppgtt_clear_pte_range(struct i915_address_space *vm,
 
 		pte = 0;
 		if (++pde == I915_PDES) {
-			pdpe++;
+			if (++pdpe == I915_PDPES_PER_PDP(vm->dev))
+				break;
 			pde = 0;
 		}
 	}
@@ -735,12 +736,21 @@ static void gen8_ppgtt_clear_range(struct i915_address_space *vm,
 {
 	struct i915_hw_ppgtt *ppgtt =
 		container_of(vm, struct i915_hw_ppgtt, base);
-	struct i915_page_directory_pointer *pdp = &ppgtt->pdp; /* FIXME: 48b */
-
 	gen8_pte_t scratch_pte = gen8_pte_encode(px_dma(vm->scratch_page),
 						 I915_CACHE_LLC, use_scratch);
 
-	gen8_ppgtt_clear_pte_range(vm, pdp, start, length, scratch_pte);
+	if (!USES_FULL_48BIT_PPGTT(vm->dev)) {
+		gen8_ppgtt_clear_pte_range(vm, &ppgtt->pdp, start, length,
+					   scratch_pte);
+	} else {
+		uint64_t templ4, pml4e;
+		struct i915_page_directory_pointer *pdp;
+
+		gen8_for_each_pml4e(pdp, &ppgtt->pml4, start, length, templ4, pml4e) {
+			gen8_ppgtt_clear_pte_range(vm, pdp, start, length,
+						   scratch_pte);
+		}
+	}
 }
 
 static void
@@ -753,16 +763,13 @@ gen8_ppgtt_insert_pte_entries(struct i915_address_space *vm,
 	struct i915_hw_ppgtt *ppgtt =
 		container_of(vm, struct i915_hw_ppgtt, base);
 	gen8_pte_t *pt_vaddr;
-	unsigned pdpe = start >> GEN8_PDPE_SHIFT & GEN8_PDPE_MASK;
-	unsigned pde = start >> GEN8_PDE_SHIFT & GEN8_PDE_MASK;
-	unsigned pte = start >> GEN8_PTE_SHIFT & GEN8_PTE_MASK;
+	unsigned pdpe = gen8_pdpe_index(start);
+	unsigned pde = gen8_pde_index(start);
+	unsigned pte = gen8_pte_index(start);
 
 	pt_vaddr = NULL;
 
 	while (__sg_page_iter_next(sg_iter)) {
-		if (WARN_ON(pdpe >= GEN8_LEGACY_PDPES))
-			break;
-
 		if (pt_vaddr == NULL) {
 			struct i915_page_directory *pd = pdp->page_directory[pdpe];
 			struct i915_page_table *pt = pd->page_table[pde];
@@ -776,7 +783,8 @@ gen8_ppgtt_insert_pte_entries(struct i915_address_space *vm,
 			kunmap_px(ppgtt, pt_vaddr);
 			pt_vaddr = NULL;
 			if (++pde == I915_PDES) {
-				pdpe++;
+				if (++pdpe == I915_PDPES_PER_PDP(vm->dev))
+					break;
 				pde = 0;
 			}
 			pte = 0;
@@ -795,11 +803,23 @@ static void gen8_ppgtt_insert_entries(struct i915_address_space *vm,
 {
 	struct i915_hw_ppgtt *ppgtt =
 		container_of(vm, struct i915_hw_ppgtt, base);
-	struct i915_page_directory_pointer *pdp = &ppgtt->pdp; /* FIXME: 48b */
 	struct sg_page_iter sg_iter;
 
 	__sg_page_iter_start(&sg_iter, pages->sgl, sg_nents(pages->sgl), 0);
-	gen8_ppgtt_insert_pte_entries(vm, pdp, &sg_iter, start, cache_level);
+
+	if (!USES_FULL_48BIT_PPGTT(vm->dev)) {
+		gen8_ppgtt_insert_pte_entries(vm, &ppgtt->pdp, &sg_iter, start,
+					      cache_level);
+	} else {
+		struct i915_page_directory_pointer *pdp;
+		uint64_t templ4, pml4e;
+		uint64_t length = (uint64_t)pages->orig_nents << PAGE_SHIFT;
+
+		gen8_for_each_pml4e(pdp, &ppgtt->pml4, start, length, templ4, pml4e) {
+			gen8_ppgtt_insert_pte_entries(vm, pdp, &sg_iter,
+						      start, cache_level);
+		}
+	}
 }
 
 static void gen8_free_page_tables(struct drm_device *dev,
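
For readers unfamiliar with the gen8 4-level layout, the standalone sketch
below shows how a 48-bit PPGTT address decomposes into the PML4E/PDPE/PDE/PTE
indices that the functions above walk. It is an illustration only, not driver
code: the SKETCH_* names and the decode() helper are invented for this
example, and the shift/mask values assume the usual gen8 layout (512 entries
per level, 4 KiB pages); they simply mirror what the driver's
gen8_pml4e_index()/gen8_pdpe_index()/gen8_pde_index()/gen8_pte_index()
helpers compute.

/*
 * Standalone sketch (not driver code): how a 48-bit gen8 PPGTT address
 * splits into the four page-table indices walked by the patch above.
 * The SKETCH_* values assume 512 entries per level and 4 KiB pages.
 */
#include <stdint.h>
#include <stdio.h>

#define SKETCH_PAGE_SHIFT   12		/* 4 KiB pages */
#define SKETCH_PDE_SHIFT    21		/* 512 PTEs per page table */
#define SKETCH_PDPE_SHIFT   30		/* 512 PDEs per page directory */
#define SKETCH_PML4E_SHIFT  39		/* 512 PDPEs per pointer page */
#define SKETCH_INDEX_MASK   0x1ffULL	/* 9 index bits per level */

struct gen8_indices {
	unsigned pml4e, pdpe, pde, pte;
};

static struct gen8_indices decode(uint64_t addr)
{
	struct gen8_indices ix = {
		.pml4e = (unsigned)((addr >> SKETCH_PML4E_SHIFT) & SKETCH_INDEX_MASK),
		.pdpe  = (unsigned)((addr >> SKETCH_PDPE_SHIFT)  & SKETCH_INDEX_MASK),
		.pde   = (unsigned)((addr >> SKETCH_PDE_SHIFT)   & SKETCH_INDEX_MASK),
		.pte   = (unsigned)((addr >> SKETCH_PAGE_SHIFT)  & SKETCH_INDEX_MASK),
	};
	return ix;
}

int main(void)
{
	/*
	 * An offset at the 4 GiB mark decodes to pml4e=0, pdpe=4: it falls
	 * outside a 4-entry legacy PDP, which is why insert_entries and
	 * clear_range must select the PDP through the PML4 when 48b is on.
	 */
	uint64_t addr = 1ULL << 32;
	struct gen8_indices ix = decode(addr);

	printf("pml4e=%u pdpe=%u pde=%u pte=%u\n",
	       ix.pml4e, ix.pdpe, ix.pde, ix.pte);
	return 0;
}

For the 4 GiB offset used in main(), the decode yields pml4e=0, pdpe=4,
pde=0, pte=0; a pdpe of 4 cannot be addressed through the legacy 4-entry
PDP, which is exactly the case the gen8_for_each_pml4e() walk in the patch
handles.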