From patchwork Tue Feb 1 10:41:19 2022
X-Patchwork-Submitter: Ramalingam C
X-Patchwork-Id: 12731520
From: Ramalingam C
To: dri-devel, intel-gfx
Cc: Matthew Auld, Thomas Hellström
Date: Tue, 1 Feb 2022 16:11:19 +0530
Message-Id: <20220201104132.3050-7-ramalingam.c@intel.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20220201104132.3050-1-ramalingam.c@intel.com>
References: <20220201104132.3050-1-ramalingam.c@intel.com>
Subject: [Intel-gfx] [PATCH v5 06/19] drm/i915/gtt: add xehpsdv_ppgtt_insert_entry

From: Matthew Auld

If this is LMEM then we get a 32 entry PT, with each PTE pointing to
some 64K block of memory, otherwise it's just the usual 512 entry PT.
This very much assumes the caller knows what they are doing.
Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Cc: Ramalingam C
Reviewed-by: Ramalingam C
---
 drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 50 ++++++++++++++++++++++++++--
 1 file changed, 48 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
index 62471730266c..f574da00eff1 100644
--- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
+++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
@@ -715,13 +715,56 @@ static void gen8_ppgtt_insert_entry(struct i915_address_space *vm,
 		gen8_pdp_for_page_index(vm, idx);
 	struct i915_page_directory *pd =
 		i915_pd_entry(pdp, gen8_pd_index(idx, 2));
+	struct i915_page_table *pt = i915_pt_entry(pd, gen8_pd_index(idx, 1));
 	gen8_pte_t *vaddr;
 
-	vaddr = px_vaddr(i915_pt_entry(pd, gen8_pd_index(idx, 1)));
+	GEM_BUG_ON(pt->is_compact);
+
+	vaddr = px_vaddr(pt);
 	vaddr[gen8_pd_index(idx, 0)] = gen8_pte_encode(addr, level, flags);
 	clflush_cache_range(&vaddr[gen8_pd_index(idx, 0)], sizeof(*vaddr));
 }
 
+static void __xehpsdv_ppgtt_insert_entry_lm(struct i915_address_space *vm,
+					    dma_addr_t addr,
+					    u64 offset,
+					    enum i915_cache_level level,
+					    u32 flags)
+{
+	u64 idx = offset >> GEN8_PTE_SHIFT;
+	struct i915_page_directory * const pdp =
+		gen8_pdp_for_page_index(vm, idx);
+	struct i915_page_directory *pd =
+		i915_pd_entry(pdp, gen8_pd_index(idx, 2));
+	struct i915_page_table *pt = i915_pt_entry(pd, gen8_pd_index(idx, 1));
+	gen8_pte_t *vaddr;
+
+	GEM_BUG_ON(!IS_ALIGNED(addr, SZ_64K));
+	GEM_BUG_ON(!IS_ALIGNED(offset, SZ_64K));
+
+	if (!pt->is_compact) {
+		vaddr = px_vaddr(pd);
+		vaddr[gen8_pd_index(idx, 1)] |= GEN12_PDE_64K;
+		pt->is_compact = true;
+	}
+
+	vaddr = px_vaddr(pt);
+	vaddr[gen8_pd_index(idx, 0) / 16] = gen8_pte_encode(addr, level, flags);
+}
+
+static void xehpsdv_ppgtt_insert_entry(struct i915_address_space *vm,
+				       dma_addr_t addr,
+				       u64 offset,
+				       enum i915_cache_level level,
+				       u32 flags)
+{
+	if (flags & PTE_LM)
+		return __xehpsdv_ppgtt_insert_entry_lm(vm, addr, offset,
+						       level, flags);
+
+	return gen8_ppgtt_insert_entry(vm, addr, offset, level, flags);
+}
+
 static int gen8_init_scratch(struct i915_address_space *vm)
 {
 	u32 pte_flags;
@@ -921,7 +964,10 @@ struct i915_ppgtt *gen8_ppgtt_create(struct intel_gt *gt,
 	ppgtt->vm.bind_async_flags = I915_VMA_LOCAL_BIND;
 
 	ppgtt->vm.insert_entries = gen8_ppgtt_insert;
-	ppgtt->vm.insert_page = gen8_ppgtt_insert_entry;
+	if (HAS_64K_PAGES(gt->i915))
+		ppgtt->vm.insert_page = xehpsdv_ppgtt_insert_entry;
+	else
+		ppgtt->vm.insert_page = gen8_ppgtt_insert_entry;
 	ppgtt->vm.allocate_va_range = gen8_ppgtt_alloc;
 	ppgtt->vm.clear_range = gen8_ppgtt_clear;
 	ppgtt->vm.foreach = gen8_ppgtt_foreach;
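
For reference, below is a minimal standalone sketch (not driver code) of the
index arithmetic behind the compact PT described in the commit message. The
names pt_index_4k()/pt_index_64k() and the PTE_SHIFT/LVL_BITS constants are
illustrative assumptions mirroring the usual gen8 layout (4K pages, 9 index
bits per level), not i915 API; the point is that the 2M range normally
resolved by 512 PTEs is covered by 32 PTEs in the 64K case, which is where
the "/ 16" (512 / 32) in __xehpsdv_ppgtt_insert_entry_lm() comes from.

/* Standalone sketch of the 4K vs. compact-64K PT index arithmetic. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_SHIFT	12				/* 4K page */
#define LVL_BITS	9				/* 512 entries per level */
#define LVL_MASK	((1ull << LVL_BITS) - 1)

static uint64_t pt_index_4k(uint64_t offset)
{
	/* lowest 9 index bits pick one of 512 PTEs, each covering 4K */
	return (offset >> PTE_SHIFT) & LVL_MASK;
}

static uint64_t pt_index_64k(uint64_t offset)
{
	/* compact PT: 32 PTEs covering 64K each, i.e. the 4K index / 16 */
	return pt_index_4k(offset) / 16;
}

int main(void)
{
	uint64_t offset = 3 * 0x10000;	/* third 64K block within a 2M range */

	assert(pt_index_4k(offset) == 48);	/* 512-entry view */
	assert(pt_index_64k(offset) == 3);	/* 32-entry compact view */
	printf("4K index %llu, compact 64K index %llu\n",
	       (unsigned long long)pt_index_4k(offset),
	       (unsigned long long)pt_index_64k(offset));
	return 0;
}

The same relationship motivates the alignment asserts in the patch: in the
compact case both the DMA address and the GTT offset must be 64K aligned,
since each PTE now maps a whole 64K block rather than a 4K page.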