From patchwork Tue Oct 10 18:44:12 2023
X-Patchwork-Submitter: "Cavitt, Jonathan"
X-Patchwork-Id: 13415866
From: Jonathan Cavitt
To: intel-gfx@lists.freedesktop.org
Cc: andi.shyti@intel.com, jonathan.cavitt@intel.com, saurabhg.gupta@intel.com, nirmoy.das@intel.com
Date: Tue, 10 Oct 2023 11:44:12 -0700
Message-Id: <20231010184423.2118908-2-jonathan.cavitt@intel.com>
In-Reply-To: <20231010184423.2118908-1-jonathan.cavitt@intel.com>
Subject: [Intel-gfx] [RFC PATCH 01/10] drm/i915: Add GuC TLB Invalidation device info flags

Add a device info flag indicating whether GuC TLB invalidation is
enabled.
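For context, the GGTT invalidation change later in this series (patch 03)
consumes the flag roughly as follows; this is a condensed sketch of that
later hunk, not code added by this patch:

	if (HAS_GUC_TLB_INVALIDATION(i915) && intel_guc_is_ready(&gt->uc.guc))
		guc_ggtt_ct_invalidate(gt);	/* GuC-based invalidation */
	else if (GRAPHICS_VER(i915) >= 12)
		intel_uncore_write_fw(gt->uncore, GEN12_GUC_TLB_INV_CR,
				      GEN12_GUC_TLB_INV_CR_INVALIDATE);
	else
		intel_uncore_write_fw(gt->uncore, GEN8_GTCR, GEN8_GTCR_INVALIDATE);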
Signed-off-by: Jonathan Cavitt
---
 drivers/gpu/drm/i915/i915_drv.h          | 2 ++
 drivers/gpu/drm/i915/intel_device_info.h | 1 +
 2 files changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index cb60fc9cf8737..6a2a78c61f212 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -794,6 +794,8 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
 #define HAS_GUC_DEPRIVILEGE(i915) \
 	(INTEL_INFO(i915)->has_guc_deprivilege)
 
+#define HAS_GUC_TLB_INVALIDATION(i915) (INTEL_INFO(i915)->has_guc_tlb_invalidation)
+
 #define HAS_3D_PIPELINE(i915)	(INTEL_INFO(i915)->has_3d_pipeline)
 
 #define HAS_ONE_EU_PER_FUSE_BIT(i915)	(INTEL_INFO(i915)->has_one_eu_per_fuse_bit)
diff --git a/drivers/gpu/drm/i915/intel_device_info.h b/drivers/gpu/drm/i915/intel_device_info.h
index 39817490b13fd..eba2f0b919c87 100644
--- a/drivers/gpu/drm/i915/intel_device_info.h
+++ b/drivers/gpu/drm/i915/intel_device_info.h
@@ -153,6 +153,7 @@ enum intel_ppgtt_type {
 	func(has_heci_pxp); \
 	func(has_heci_gscfi); \
 	func(has_guc_deprivilege); \
+	func(has_guc_tlb_invalidation); \
 	func(has_l3_ccs_read); \
 	func(has_l3_dpf); \
 	func(has_llc); \

From patchwork Tue Oct 10 18:44:14 2023
X-Patchwork-Submitter: "Cavitt, Jonathan"
X-Patchwork-Id: 13415865
From: Jonathan Cavitt
To: intel-gfx@lists.freedesktop.org
Cc: andi.shyti@intel.com, jonathan.cavitt@intel.com, saurabhg.gupta@intel.com, nirmoy.das@intel.com
Date: Tue, 10 Oct 2023 11:44:14 -0700
Message-Id: <20231010184423.2118908-4-jonathan.cavitt@intel.com>
In-Reply-To: <20231010184423.2118908-1-jonathan.cavitt@intel.com>
Subject: [Intel-gfx] [PATCH dii-client 2/2] drm/i915: Use selective tlb invalidations where supported

For platforms supporting selective TLB invalidations, we don't need to
do a full TLB invalidation.  Instead, do a range-based TLB invalidation
for every unbind of a purged VMA that belongs to an active VM.

Signed-off-by: Prathap Kumar Valsan
Cc: Niranjana Vishwanathapura
Cc: Fei Yang
Signed-off-by: Mauro Carvalho Chehab
Signed-off-by: Jonathan Cavitt
---
 drivers/gpu/drm/i915/gt/intel_ppgtt.c |  2 +-
 drivers/gpu/drm/i915/i915_vma.c       | 14 +++++++++-----
 drivers/gpu/drm/i915/i915_vma.h       |  3 ++-
 3 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_ppgtt.c b/drivers/gpu/drm/i915/gt/intel_ppgtt.c
index d07a4f97b9434..b43dae3cbd59f 100644
--- a/drivers/gpu/drm/i915/gt/intel_ppgtt.c
+++ b/drivers/gpu/drm/i915/gt/intel_ppgtt.c
@@ -211,7 +211,7 @@ void ppgtt_unbind_vma(struct i915_address_space *vm,
 		return;
 
 	vm->clear_range(vm, vma_res->start, vma_res->vma_size);
-	vma_invalidate_tlb(vm, vma_res->tlb);
+	vma_invalidate_tlb(vm, vma_res->tlb, vma_res->start, vma_res->vma_size);
 }
 
 static unsigned long pd_count(u64 size, int shift)
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index d09aad34ba37f..cb05d794f0d0f 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -1339,7 +1339,8 @@ I915_SELFTEST_EXPORT int i915_vma_get_pages(struct i915_vma *vma)
 	return err;
 }
 
-void vma_invalidate_tlb(struct i915_address_space *vm, u32 *tlb)
+void vma_invalidate_tlb(struct i915_address_space *vm, u32 *tlb,
+			u64 start, u64 size)
 {
 	struct intel_gt *gt;
 	int id;
@@ -1355,9 +1356,11 @@ void vma_invalidate_tlb(struct i915_address_space *vm, u32 *tlb)
 	 * the most recent TLB invalidation seqno, and if we have not yet
 	 * flushed the TLBs upon release, perform a full invalidation.
 	 */
-	for_each_gt(gt, vm->i915, id)
-		WRITE_ONCE(tlb[id],
-			   intel_gt_next_invalidate_tlb_full(gt));
+	for_each_gt(gt, vm->i915, id) {
+		if (!intel_gt_invalidate_tlb_range(gt, start, size))
+			WRITE_ONCE(tlb[id],
+				   intel_gt_next_invalidate_tlb_full(gt));
+	}
 }
 
 static void __vma_put_pages(struct i915_vma *vma, unsigned int count)
@@ -2041,7 +2044,8 @@ struct dma_fence *__i915_vma_evict(struct i915_vma *vma, bool async)
 		dma_fence_put(unbind_fence);
 		unbind_fence = NULL;
 	}
-	vma_invalidate_tlb(vma->vm, vma->obj->mm.tlb);
+	vma_invalidate_tlb(vma->vm, vma->obj->mm.tlb,
+			   vma->node.start, vma->size);
 }
 
 /*
diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h
index e356dfb883d34..5a604aad55dfe 100644
--- a/drivers/gpu/drm/i915/i915_vma.h
+++ b/drivers/gpu/drm/i915/i915_vma.h
@@ -260,7 +260,8 @@ bool i915_vma_misplaced(const struct i915_vma *vma,
 			u64 size, u64 alignment, u64 flags);
 void __i915_vma_set_map_and_fenceable(struct i915_vma *vma);
 void i915_vma_revoke_mmap(struct i915_vma *vma);
-void vma_invalidate_tlb(struct i915_address_space *vm, u32 *tlb);
+void vma_invalidate_tlb(struct i915_address_space *vm, u32 *tlb,
+			u64 start, u64 size);
 struct dma_fence *__i915_vma_evict(struct i915_vma *vma, bool async);
 int __i915_vma_unbind(struct i915_vma *vma);
 int __must_check i915_vma_unbind(struct i915_vma *vma);

From patchwork Tue Oct 10 18:44:16 2023
X-Patchwork-Submitter: "Cavitt, Jonathan"
X-Patchwork-Id: 13415873
From: Jonathan Cavitt
To: intel-gfx@lists.freedesktop.org
Cc: andi.shyti@intel.com, jonathan.cavitt@intel.com, saurabhg.gupta@intel.com, nirmoy.das@intel.com
Date: Tue, 10 Oct 2023 11:44:16 -0700
Message-Id: <20231010184423.2118908-6-jonathan.cavitt@intel.com>
In-Reply-To: <20231010184423.2118908-1-jonathan.cavitt@intel.com>
Subject: [Intel-gfx] [RFC PATCH 03/10] drm/i915: Define and use GuC and CTB TLB invalidation routines

From: Prathap Kumar Valsan

The GuC firmware has defined the interface for Translation Look-Aside
Buffer (TLB) invalidation.  We should use this interface when
invalidating the engine and GuC TLBs.

Add additional functionality to intel_gt_invalidate_tlb, invalidating
the GuC TLBs and falling back to GT invalidation when the GuC is
disabled.

The invalidation is done by sending a request directly to the GuC
tlb_lookup that invalidates the table.  The invalidation is submitted as
a wait request and is completed from the CT event handler, which means
this TLB invalidation path cannot be used when the CT is not enabled.
If the request is not fulfilled within two seconds, treat that as an
error in the invalidation: it indicates either a lost request or a
severe GuC overload.

With this new invalidation routine, we can perform GuC-based GGTT
invalidations.  GuC-based GGTT invalidation is incompatible with MMIO
invalidation, so we should not perform MMIO invalidation when GuC-based
GGTT invalidation is expected.

The additional complexity incurred in this patch will be necessary for
range-based TLB invalidations, which will be platformed in the future.
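For orientation, the request/response flow that guc_send_invalidate_tlb()
implements in the diff below, shown as a condensed sketch.  The function
name is only for illustration; the serial-slot fallback under memory
pressure and the wedged/suspended-GT handling are left out, and the
two-second wait is written here simply as 2 * HZ rather than the helper
the patch uses:

static int tlb_invalidation_flow_sketch(struct intel_guc *guc, u32 *action, u32 len)
{
	struct intel_guc_tlb_wait wq;			/* per-request waiter */
	DEFINE_WAIT_FUNC(wait, woken_wake_function);
	u32 seqno;
	int err;

	init_waitqueue_head(&wq.wq);

	/* 1) Allocate a seqno and publish the waiter in guc->tlb_lookup. */
	err = xa_alloc_cyclic_irq(&guc->tlb_lookup, &seqno, &wq,
				  xa_limit_32b, &guc->next_seqno, GFP_KERNEL);
	if (err < 0)
		return err;

	action[1] = seqno;				/* the G2H reply echoes this id */
	add_wait_queue(&wq.wq, &wait);

	/* 2) Send the H2G action; we must not block on the GPU in reclaim. */
	err = intel_guc_send_busy_loop(guc, action, len,
				       G2H_LEN_DW_INVALIDATE_TLB, true);

	/*
	 * 3) Sleep until the INTEL_GUC_ACTION_TLB_INVALIDATION_DONE handler
	 *    (intel_guc_tlb_invalidation_done) wakes this waiter.
	 */
	if (!err && !wait_woken(&wait, TASK_UNINTERRUPTIBLE, 2 * HZ))
		err = -ETIME;

	remove_wait_queue(&wq.wq, &wait);
	xa_erase_irq(&guc->tlb_lookup, seqno);
	return err;
}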
Signed-off-by: Prathap Kumar Valsan Signed-off-by: Bruce Chang Signed-off-by: Chris Wilson Signed-off-by: Umesh Nerlige Ramappa Signed-off-by: Jonathan Cavitt Signed-off-by: Aravind Iddamsetty Signed-off-by: Fei Yang CC: Andi Shyti --- drivers/gpu/drm/i915/gt/intel_ggtt.c | 34 +++- drivers/gpu/drm/i915/gt/intel_tlb.c | 16 +- .../gpu/drm/i915/gt/uc/abi/guc_actions_abi.h | 33 ++++ drivers/gpu/drm/i915/gt/uc/intel_guc.h | 22 +++ drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 4 + drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h | 1 + .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 186 +++++++++++++++++- 7 files changed, 284 insertions(+), 12 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt.c b/drivers/gpu/drm/i915/gt/intel_ggtt.c index 4d7d88b92632b..a1f7bdc602996 100644 --- a/drivers/gpu/drm/i915/gt/intel_ggtt.c +++ b/drivers/gpu/drm/i915/gt/intel_ggtt.c @@ -206,22 +206,38 @@ static void gen8_ggtt_invalidate(struct i915_ggtt *ggtt) intel_uncore_write_fw(uncore, GFX_FLSH_CNTL_GEN6, GFX_FLSH_CNTL_EN); } +static void guc_ggtt_ct_invalidate(struct intel_gt *gt) +{ + struct intel_uncore *uncore = gt->uncore; + intel_wakeref_t wakeref; + + with_intel_runtime_pm_if_active(uncore->rpm, wakeref) { + struct intel_guc *guc = >->uc.guc; + + intel_guc_invalidate_tlb_guc(guc); + } +} + static void guc_ggtt_invalidate(struct i915_ggtt *ggtt) { struct drm_i915_private *i915 = ggtt->vm.i915; + struct intel_gt *gt; - gen8_ggtt_invalidate(ggtt); - - if (GRAPHICS_VER(i915) >= 12) { - struct intel_gt *gt; + if (!HAS_GUC_TLB_INVALIDATION(i915)) + gen8_ggtt_invalidate(ggtt); - list_for_each_entry(gt, &ggtt->gt_list, ggtt_link) + list_for_each_entry(gt, &ggtt->gt_list, ggtt_link) { + if (HAS_GUC_TLB_INVALIDATION(i915) && + intel_guc_is_ready(>->uc.guc)) { + guc_ggtt_ct_invalidate(gt); + } else if (GRAPHICS_VER(i915) >= 12) { intel_uncore_write_fw(gt->uncore, GEN12_GUC_TLB_INV_CR, GEN12_GUC_TLB_INV_CR_INVALIDATE); - } else { - intel_uncore_write_fw(ggtt->vm.gt->uncore, - GEN8_GTCR, GEN8_GTCR_INVALIDATE); + } else { + intel_uncore_write_fw(gt->uncore, + GEN8_GTCR, GEN8_GTCR_INVALIDATE); + } } } @@ -1243,7 +1259,7 @@ static int gen8_gmch_probe(struct i915_ggtt *ggtt) ggtt->vm.raw_insert_page = gen8_ggtt_insert_page; } - if (intel_uc_wants_guc(&ggtt->vm.gt->uc)) + if (intel_uc_wants_guc_submission(&ggtt->vm.gt->uc)) ggtt->invalidate = guc_ggtt_invalidate; else ggtt->invalidate = gen8_ggtt_invalidate; diff --git a/drivers/gpu/drm/i915/gt/intel_tlb.c b/drivers/gpu/drm/i915/gt/intel_tlb.c index 139608c30d978..4bb13d1890e37 100644 --- a/drivers/gpu/drm/i915/gt/intel_tlb.c +++ b/drivers/gpu/drm/i915/gt/intel_tlb.c @@ -12,6 +12,7 @@ #include "intel_gt_print.h" #include "intel_gt_regs.h" #include "intel_tlb.h" +#include "uc/intel_guc.h" /* * HW architecture suggest typical invalidation time at 40us, @@ -131,11 +132,24 @@ void intel_gt_invalidate_tlb_full(struct intel_gt *gt, u32 seqno) return; with_intel_gt_pm_if_awake(gt, wakeref) { + struct intel_guc *guc = >->uc.guc; + mutex_lock(>->tlb.invalidate_lock); if (tlb_seqno_passed(gt, seqno)) goto unlock; - mmio_invalidate_full(gt); + if (HAS_GUC_TLB_INVALIDATION(gt->i915)) { + /* + * Only perform GuC TLB invalidation if GuC is ready. + * The only time GuC could not be ready is on GT reset, + * which would clobber all the TLBs anyways, making + * any TLB invalidation path here unnecessary. 
+ */ + if (intel_guc_is_ready(guc)) + intel_guc_invalidate_tlb_engines(guc); + } else { + mmio_invalidate_full(gt); + } write_seqcount_invalidate(>->tlb.seqno); unlock: diff --git a/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h b/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h index f359bef046e0b..33f253410d0c8 100644 --- a/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h +++ b/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h @@ -138,6 +138,8 @@ enum intel_guc_action { INTEL_GUC_ACTION_REGISTER_CONTEXT_MULTI_LRC = 0x4601, INTEL_GUC_ACTION_CLIENT_SOFT_RESET = 0x5507, INTEL_GUC_ACTION_SET_ENG_UTIL_BUFF = 0x550A, + INTEL_GUC_ACTION_TLB_INVALIDATION = 0x7000, + INTEL_GUC_ACTION_TLB_INVALIDATION_DONE = 0x7001, INTEL_GUC_ACTION_STATE_CAPTURE_NOTIFICATION = 0x8002, INTEL_GUC_ACTION_NOTIFY_FLUSH_LOG_BUFFER_TO_FILE = 0x8003, INTEL_GUC_ACTION_NOTIFY_CRASH_DUMP_POSTED = 0x8004, @@ -181,4 +183,35 @@ enum intel_guc_state_capture_event_status { #define INTEL_GUC_STATE_CAPTURE_EVENT_STATUS_MASK 0x000000FF +#define INTEL_GUC_TLB_INVAL_TYPE_MASK REG_GENMASK(7, 0) +#define INTEL_GUC_TLB_INVAL_MODE_MASK REG_GENMASK(11, 8) +#define INTEL_GUC_TLB_INVAL_FLUSH_CACHE REG_BIT(31) + +enum intel_guc_tlb_invalidation_type { + INTEL_GUC_TLB_INVAL_ENGINES = 0x0, + INTEL_GUC_TLB_INVAL_GUC = 0x3, +}; + +/* + * 0: Heavy mode of Invalidation: + * The pipeline of the engine(s) for which the invalidation is targeted to is + * blocked, and all the in-flight transactions are guaranteed to be Globally + * Observed before completing the TLB invalidation + * 1: Lite mode of Invalidation: + * TLBs of the targeted engine(s) are immediately invalidated. + * In-flight transactions are NOT guaranteed to be Globally Observed before + * completing TLB invalidation. + * Light Invalidation Mode is to be used only when + * it can be guaranteed (by SW) that the address translations remain invariant + * for the in-flight transactions across the TLB invalidation. In other words, + * this mode can be used when the TLB invalidation is intended to clear out the + * stale cached translations that are no longer in use. Light Invalidation Mode + * is much faster than the Heavy Invalidation Mode, as it does not wait for the + * in-flight transactions to be GOd. + */ +enum intel_guc_tlb_inval_mode { + INTEL_GUC_TLB_INVAL_MODE_HEAVY = 0x0, + INTEL_GUC_TLB_INVAL_MODE_LITE = 0x1, +}; + #endif /* _ABI_GUC_ACTIONS_ABI_H */ diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h index 818c8c146fd47..f5ede14b18aae 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h @@ -79,6 +79,18 @@ struct intel_guc { */ atomic_t outstanding_submission_g2h; + /** @tlb_lookup: xarray to store all pending TLB invalidation requests */ + struct xarray tlb_lookup; + + /** + * @serial_slot: id to the initial waiter created in tlb_lookup, + * which is used only when failed to allocate new waiter. + */ + u32 serial_slot; + + /** @next_seqno: the next id (sequence no.) to allocate. */ + u32 next_seqno; + /** @interrupts: pointers to GuC interrupt-managing functions. 
*/ struct { bool enabled; @@ -297,6 +309,11 @@ struct intel_guc { #define GUC_SUBMIT_VER(guc) MAKE_GUC_VER_STRUCT((guc)->submission_version) #define GUC_FIRMWARE_VER(guc) MAKE_GUC_VER_STRUCT((guc)->fw.file_selected.ver) +struct intel_guc_tlb_wait { + struct wait_queue_head wq; + bool busy; +}; + static inline struct intel_guc *log_to_guc(struct intel_guc_log *log) { return container_of(log, struct intel_guc, log); @@ -419,6 +436,11 @@ static inline bool intel_guc_is_supported(struct intel_guc *guc) return intel_uc_fw_is_supported(&guc->fw); } +int intel_guc_invalidate_tlb_engines(struct intel_guc *guc); +int intel_guc_invalidate_tlb_guc(struct intel_guc *guc); +int intel_guc_tlb_invalidation_done(struct intel_guc *guc, + const u32 *payload, u32 len); + static inline bool intel_guc_is_wanted(struct intel_guc *guc) { return intel_uc_fw_is_enabled(&guc->fw); diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c index c33210ead1ef7..8114b12ac91e3 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c @@ -1115,6 +1115,9 @@ static int ct_process_request(struct intel_guc_ct *ct, struct ct_incoming_msg *r case INTEL_GUC_ACTION_NOTIFY_EXCEPTION: ret = intel_guc_crash_process_msg(guc, action); break; + case INTEL_GUC_ACTION_TLB_INVALIDATION_DONE: + ret = intel_guc_tlb_invalidation_done(guc, payload, len); + break; default: ret = -EOPNOTSUPP; break; @@ -1186,6 +1189,7 @@ static int ct_handle_event(struct intel_guc_ct *ct, struct ct_incoming_msg *requ switch (action) { case INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_DONE: case INTEL_GUC_ACTION_DEREGISTER_CONTEXT_DONE: + case INTEL_GUC_ACTION_TLB_INVALIDATION_DONE: g2h_release_space(ct, request->size); } diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h index 123ad75d2eb28..8ae1846431da7 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h @@ -22,6 +22,7 @@ /* Payload length only i.e. don't include G2H header length */ #define G2H_LEN_DW_SCHED_CONTEXT_MODE_SET 2 #define G2H_LEN_DW_DEREGISTER_CONTEXT 1 +#define G2H_LEN_DW_INVALIDATE_TLB 1 #define GUC_CONTEXT_DISABLE 0 #define GUC_CONTEXT_ENABLE 1 diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index 2cce5ec1ff00d..e9854652c2b52 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -1798,9 +1798,11 @@ static void __guc_reset_context(struct intel_context *ce, intel_engine_mask_t st void intel_guc_submission_reset(struct intel_guc *guc, intel_engine_mask_t stalled) { + struct intel_guc_tlb_wait *wait; struct intel_context *ce; unsigned long index; unsigned long flags; + unsigned long i; if (unlikely(!guc_submission_initialized(guc))) { /* Reset called during driver load? GuC not yet initialised! */ @@ -1826,6 +1828,15 @@ void intel_guc_submission_reset(struct intel_guc *guc, intel_engine_mask_t stall /* GuC is blown away, drop all references to contexts */ xa_destroy(&guc->context_lookup); + + /* + * The full GT reset will have cleared the TLB caches and flushed the + * G2H message queue; we can release all the blocked waiters. 
+ */ + xa_lock_irq(&guc->tlb_lookup); + xa_for_each(&guc->tlb_lookup, i, wait) + wake_up(&wait->wq); + xa_unlock_irq(&guc->tlb_lookup); } static void guc_cancel_context_requests(struct intel_context *ce) @@ -1948,6 +1959,46 @@ void intel_guc_submission_reset_finish(struct intel_guc *guc) static void destroyed_worker_func(struct work_struct *w); static void reset_fail_worker_func(struct work_struct *w); +static int init_tlb_lookup(struct intel_guc *guc) +{ + struct intel_guc_tlb_wait *wait; + int err; + + if (!HAS_GUC_TLB_INVALIDATION(guc_to_gt(guc)->i915)) + return 0; + + xa_init_flags(&guc->tlb_lookup, XA_FLAGS_ALLOC); + + wait = kzalloc(sizeof(*wait), GFP_KERNEL); + if (!wait) + return -ENOMEM; + + init_waitqueue_head(&wait->wq); + + /* Preallocate a shared id for use under memory pressure. */ + err = xa_alloc_cyclic_irq(&guc->tlb_lookup, &guc->serial_slot, wait, + xa_limit_32b, &guc->next_seqno, GFP_KERNEL); + if (err < 0) { + kfree(wait); + return err; + } + + return 0; +} + +static void fini_tlb_lookup(struct intel_guc *guc) +{ + struct intel_guc_tlb_wait *wait; + + if (!HAS_GUC_TLB_INVALIDATION(guc_to_gt(guc)->i915)) + return; + + wait = xa_load(&guc->tlb_lookup, guc->serial_slot); + kfree(wait); + + xa_destroy(&guc->tlb_lookup); +} + /* * Set up the memory resources to be shared with the GuC (via the GGTT) * at firmware loading time. @@ -1966,11 +2017,15 @@ int intel_guc_submission_init(struct intel_guc *guc) return ret; } + ret = init_tlb_lookup(guc); + if (ret) + goto destroy_pool; + guc->submission_state.guc_ids_bitmap = bitmap_zalloc(NUMBER_MULTI_LRC_GUC_ID(guc), GFP_KERNEL); if (!guc->submission_state.guc_ids_bitmap) { ret = -ENOMEM; - goto destroy_pool; + goto destroy_tlb; } guc->timestamp.ping_delay = (POLL_TIME_CLKS / gt->clock_frequency + 1) * HZ; @@ -1979,9 +2034,10 @@ int intel_guc_submission_init(struct intel_guc *guc) return 0; +destroy_tlb: + fini_tlb_lookup(guc); destroy_pool: guc_lrc_desc_pool_destroy_v69(guc); - return ret; } @@ -1994,6 +2050,7 @@ void intel_guc_submission_fini(struct intel_guc *guc) guc_lrc_desc_pool_destroy_v69(guc); i915_sched_engine_put(guc->sched_engine); bitmap_free(guc->submission_state.guc_ids_bitmap); + fini_tlb_lookup(guc); guc->submission_initialized = false; } @@ -4624,6 +4681,131 @@ g2h_context_lookup(struct intel_guc *guc, u32 ctx_id) return ce; } +static void wait_wake_outstanding_tlb_g2h(struct intel_guc *guc, u32 seqno) +{ + struct intel_guc_tlb_wait *wait; + unsigned long flags; + + xa_lock_irqsave(&guc->tlb_lookup, flags); + wait = xa_load(&guc->tlb_lookup, seqno); + + if (wait) + wake_up(&wait->wq); + else + guc_dbg(guc, + "Stale TLB invalidation response with seqno %d\n", seqno); + + xa_unlock_irqrestore(&guc->tlb_lookup, flags); +} + +int intel_guc_tlb_invalidation_done(struct intel_guc *guc, + const u32 *payload, u32 len) +{ + wait_wake_outstanding_tlb_g2h(guc, payload[0]); + return 0; +} + +static long must_wait_woken(struct wait_queue_entry *wq_entry, long timeout) +{ + /* + * This is equivalent to wait_woken() with the exception that + * we do not wake up early if the kthread task has been completed. + * As we are called from page reclaim in any task context, + * we may be invoked from stopped kthreads, but we *must* + * complete the wait from the HW. 
+ */ + do { + set_current_state(TASK_UNINTERRUPTIBLE); + if (wq_entry->flags & WQ_FLAG_WOKEN) + break; + + timeout = schedule_timeout(timeout); + } while (timeout); + __set_current_state(TASK_RUNNING); + + /* See wait_woken() and woken_wake_function() */ + smp_store_mb(wq_entry->flags, wq_entry->flags & ~WQ_FLAG_WOKEN); + + return timeout; +} + +static int guc_send_invalidate_tlb(struct intel_guc *guc, + enum intel_guc_tlb_invalidation_type type) +{ + struct intel_guc_tlb_wait _wq, *wq = &_wq; + DEFINE_WAIT_FUNC(wait, woken_wake_function); + int err; + u32 seqno; + u32 action[] = { + INTEL_GUC_ACTION_TLB_INVALIDATION, + 0, + REG_FIELD_PREP(INTEL_GUC_TLB_INVAL_TYPE_MASK, type) | + REG_FIELD_PREP(INTEL_GUC_TLB_INVAL_MODE_MASK, + INTEL_GUC_TLB_INVAL_MODE_HEAVY) | + INTEL_GUC_TLB_INVAL_FLUSH_CACHE, + }; + u32 size = ARRAY_SIZE(action); + + init_waitqueue_head(&_wq.wq); + + if (xa_alloc_cyclic_irq(&guc->tlb_lookup, &seqno, wq, + xa_limit_32b, &guc->next_seqno, + GFP_ATOMIC | __GFP_NOWARN) < 0) { + /* Under severe memory pressure? Serialise TLB allocations */ + xa_lock_irq(&guc->tlb_lookup); + wq = xa_load(&guc->tlb_lookup, guc->serial_slot); + wait_event_lock_irq(wq->wq, + !READ_ONCE(wq->busy), + guc->tlb_lookup.xa_lock); + /* + * Update wq->busy under lock to ensure only one waiter can + * issue the TLB invalidation command using the serial slot at a + * time. The condition is set to true before releasing the lock + * so that other caller continue to wait until woken up again. + */ + wq->busy = true; + xa_unlock_irq(&guc->tlb_lookup); + + seqno = guc->serial_slot; + } + + action[1] = seqno; + + add_wait_queue(&wq->wq, &wait); + + /* + * This is a critical reclaim path and thus we must loop here: + * We cannot block for anything that is on the GPU. + */ + err = intel_guc_send_busy_loop(guc, action, size, G2H_LEN_DW_INVALIDATE_TLB, true); + if (err) + goto out; + + if (!must_wait_woken(&wait, intel_guc_ct_expected_delay(&guc->ct))) { + guc_err(guc, + "TLB invalidation response timed out for seqno %u\n", seqno); + err = -ETIME; + } +out: + remove_wait_queue(&wq->wq, &wait); + if (seqno != guc->serial_slot) + xa_erase_irq(&guc->tlb_lookup, seqno); + + return err; +} + +/* Full TLB invalidation */ +int intel_guc_invalidate_tlb_engines(struct intel_guc *guc) +{ + return guc_send_invalidate_tlb(guc, INTEL_GUC_TLB_INVAL_ENGINES); +} + +/* GuC TLB Invalidation: Invalidate the TLB's of GuC itself. 
+ */
+int intel_guc_invalidate_tlb_guc(struct intel_guc *guc)
+{
+	return guc_send_invalidate_tlb(guc, INTEL_GUC_TLB_INVAL_GUC);
+}
+
 int intel_guc_deregister_done_process_msg(struct intel_guc *guc,
					   const u32 *msg, u32 len)

From patchwork Tue Oct 10 18:44:17 2023
X-Patchwork-Submitter: "Cavitt, Jonathan"
X-Patchwork-Id: 13415871
From: Jonathan Cavitt
To: intel-gfx@lists.freedesktop.org
Cc: andi.shyti@intel.com, jonathan.cavitt@intel.com, saurabhg.gupta@intel.com, nirmoy.das@intel.com
Date: Tue, 10 Oct 2023 11:44:17 -0700
Message-Id: <20231010184423.2118908-7-jonathan.cavitt@intel.com>
In-Reply-To: <20231010184423.2118908-1-jonathan.cavitt@intel.com>
Subject: [Intel-gfx] [RFC PATCH 04/10] drm/i915: No TLB invalidation on suspended GT

If the GT is suspended, don't allow submission of new TLB invalidation
requests, and cancel all pending requests.  The TLB entries will be
invalidated either during GuC reload or on system resume.
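Condensed from the diffs below: on suspend the driver simply releases any
blocked invalidation waiters, and on resume it invalidates both the engine
and GuC TLBs again once the GuC has been reloaded:

	/* intel_uc_suspend(): nothing will answer a pending invalidation */
	wake_up_all_tlb_invalidate(guc);

	/* __uc_resume(): start from clean TLBs after the GuC reload */
	if (HAS_GUC_TLB_INVALIDATION(gt->i915)) {
		intel_guc_invalidate_tlb_engines(guc);
		intel_guc_invalidate_tlb_guc(guc);
	}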
Signed-off-by: Fei Yang Signed-off-by: Jonathan Cavitt CC: John Harrison --- drivers/gpu/drm/i915/gt/uc/intel_guc.h | 1 + .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 21 +++++++++++++------ drivers/gpu/drm/i915/gt/uc/intel_uc.c | 7 +++++++ 3 files changed, 23 insertions(+), 6 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h index f5ede14b18aae..3fbf4b33ce139 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h @@ -537,4 +537,5 @@ void intel_guc_dump_time_info(struct intel_guc *guc, struct drm_printer *p); int intel_guc_sched_disable_gucid_threshold_max(struct intel_guc *guc); +void wake_up_all_tlb_invalidate(struct intel_guc *guc); #endif diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index e9854652c2b52..b9c168ea57270 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -1796,13 +1796,25 @@ static void __guc_reset_context(struct intel_context *ce, intel_engine_mask_t st intel_context_put(parent); } -void intel_guc_submission_reset(struct intel_guc *guc, intel_engine_mask_t stalled) +void wake_up_all_tlb_invalidate(struct intel_guc *guc) { struct intel_guc_tlb_wait *wait; + unsigned long i; + + if (!HAS_GUC_TLB_INVALIDATION(guc_to_gt(guc)->i915)) + return; + + xa_lock_irq(&guc->tlb_lookup); + xa_for_each(&guc->tlb_lookup, i, wait) + wake_up(&wait->wq); + xa_unlock_irq(&guc->tlb_lookup); +} + +void intel_guc_submission_reset(struct intel_guc *guc, intel_engine_mask_t stalled) +{ struct intel_context *ce; unsigned long index; unsigned long flags; - unsigned long i; if (unlikely(!guc_submission_initialized(guc))) { /* Reset called during driver load? GuC not yet initialised! */ @@ -1833,10 +1845,7 @@ void intel_guc_submission_reset(struct intel_guc *guc, intel_engine_mask_t stall * The full GT reset will have cleared the TLB caches and flushed the * G2H message queue; we can release all the blocked waiters. 
 	 */
-	xa_lock_irq(&guc->tlb_lookup);
-	xa_for_each(&guc->tlb_lookup, i, wait)
-		wake_up(&wait->wq);
-	xa_unlock_irq(&guc->tlb_lookup);
+	wake_up_all_tlb_invalidate(guc);
 }
 
 static void guc_cancel_context_requests(struct intel_context *ce)
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
index 98b103375b7ab..750cb63503dd7 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
@@ -688,6 +688,8 @@ void intel_uc_suspend(struct intel_uc *uc)
 	/* flush the GSC worker */
 	intel_gsc_uc_flush_work(&uc->gsc);
 
+	wake_up_all_tlb_invalidate(guc);
+
 	if (!intel_guc_is_ready(guc)) {
 		guc->interrupts.enabled = false;
 		return;
@@ -736,6 +738,11 @@ static int __uc_resume(struct intel_uc *uc, bool enable_communication)
 
 	intel_gsc_uc_resume(&uc->gsc);
 
+	if (HAS_GUC_TLB_INVALIDATION(gt->i915)) {
+		intel_guc_invalidate_tlb_engines(guc);
+		intel_guc_invalidate_tlb_guc(guc);
+	}
+
 	return 0;
 }

From patchwork Tue Oct 10 18:44:18 2023
X-Patchwork-Submitter: "Cavitt, Jonathan"
X-Patchwork-Id: 13415868
From: Jonathan Cavitt
To: intel-gfx@lists.freedesktop.org
Cc: andi.shyti@intel.com, jonathan.cavitt@intel.com, saurabhg.gupta@intel.com, nirmoy.das@intel.com
Date: Tue, 10 Oct 2023 11:44:18 -0700
Message-Id: <20231010184423.2118908-8-jonathan.cavitt@intel.com>
In-Reply-To: <20231010184423.2118908-1-jonathan.cavitt@intel.com>
Subject: [Intel-gfx] [RFC PATCH 05/10] drm/i915: No TLB invalidation on wedged GT
It is not an error for GuC TLB invalidations to fail when the GT is
wedged or disabled, so do not treat a wait failure as an error in
guc_send_invalidate_tlb.

Signed-off-by: Fei Yang
Signed-off-by: Jonathan Cavitt
CC: John Harrison
---
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index b9c168ea57270..c3c45d3b9e89b 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -32,6 +32,7 @@
 
 #include "i915_drv.h"
 #include "i915_reg.h"
+#include "i915_irq.h"
 #include "i915_trace.h"
 
 /**
@@ -1941,6 +1942,12 @@ void intel_guc_submission_cancel_requests(struct intel_guc *guc)
 
 	/* GuC is blown away, drop all references to contexts */
 	xa_destroy(&guc->context_lookup);
+
+	/*
+	 * Wedged GT won't respond to any TLB invalidation request. Simply
+	 * release all the blocked waiters.
+	 */
+	wake_up_all_tlb_invalidate(guc);
 }
 
 void intel_guc_submission_reset_finish(struct intel_guc *guc)
@@ -4738,6 +4745,14 @@ static long must_wait_woken(struct wait_queue_entry *wq_entry, long timeout)
 	return timeout;
 }
 
+static bool intel_gt_is_enabled(const struct intel_gt *gt)
+{
+	/* Check if GT is wedged or suspended */
+	if (intel_gt_is_wedged(gt) || !intel_irqs_enabled(gt->i915))
+		return false;
+	return true;
+}
+
 static int guc_send_invalidate_tlb(struct intel_guc *guc,
 				   enum intel_guc_tlb_invalidation_type type)
 {
@@ -4790,7 +4805,8 @@ static int guc_send_invalidate_tlb(struct intel_guc *guc,
 	if (err)
 		goto out;
 
-	if (!must_wait_woken(&wait, intel_guc_ct_expected_delay(&guc->ct))) {
+	if (intel_gt_is_enabled(guc_to_gt(guc)) &&
+	    !must_wait_woken(&wait, intel_guc_ct_expected_delay(&guc->ct))) {
 		guc_err(guc,
 			"TLB invalidation response timed out for seqno %u\n", seqno);
 		err = -ETIME;

From patchwork Tue Oct 10 18:44:19 2023
X-Patchwork-Submitter: "Cavitt, Jonathan"
X-Patchwork-Id: 13415869
From: Jonathan Cavitt
To: intel-gfx@lists.freedesktop.org
Cc: andi.shyti@intel.com, jonathan.cavitt@intel.com, saurabhg.gupta@intel.com, nirmoy.das@intel.com
Date: Tue, 10 Oct 2023 11:44:19 -0700
Message-Id: <20231010184423.2118908-9-jonathan.cavitt@intel.com>
In-Reply-To: <20231010184423.2118908-1-jonathan.cavitt@intel.com>
Subject: [Intel-gfx] [RFC PATCH 06/10] drm/i915/gt: Increase sleep in gt_tlb selftest sanitycheck

For the gt_tlb live selftest, when operating on the GSC engine, increase
the timeout from 10 ms to 200 ms because the GSC engine is a bit slower
than the rest.  Additionally, increase the default timeout from 10 ms to
20 ms because an msleep() shorter than 20 ms can sleep for up to 20 ms.

Signed-off-by: Jonathan Cavitt
---
 drivers/gpu/drm/i915/gt/selftest_tlb.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_tlb.c b/drivers/gpu/drm/i915/gt/selftest_tlb.c
index 7e41f69fc818f..24beb94aa7a37 100644
--- a/drivers/gpu/drm/i915/gt/selftest_tlb.c
+++ b/drivers/gpu/drm/i915/gt/selftest_tlb.c
@@ -136,8 +136,15 @@ pte_tlbinv(struct intel_context *ce,
 	i915_request_get(rq);
 	i915_request_add(rq);
 
-	/* Short sleep to sanitycheck the batch is spinning before we begin */
-	msleep(10);
+	/*
+	 * Short sleep to sanitycheck the batch is spinning before we begin.
+	 * FIXME: Why is GSC so slow?
+	 */
+	if (ce->engine->class == OTHER_CLASS)
+		msleep(200);
+	else
+		msleep(20);
+
 	if (va == vb) {
 		if (!i915_request_completed(rq)) {
 			pr_err("%s(%s): Semaphore sanitycheck failed %llx, with alignment %llx, using PTE size %x (phys %x, sg %x)\n",

From patchwork Tue Oct 10 18:44:20 2023
X-Patchwork-Submitter: "Cavitt, Jonathan"
X-Patchwork-Id: 13415872
From: Jonathan Cavitt
To: intel-gfx@lists.freedesktop.org
Cc: andi.shyti@intel.com, jonathan.cavitt@intel.com, saurabhg.gupta@intel.com, nirmoy.das@intel.com
Date: Tue, 10 Oct 2023 11:44:20 -0700
Message-Id: <20231010184423.2118908-10-jonathan.cavitt@intel.com>
In-Reply-To: <20231010184423.2118908-1-jonathan.cavitt@intel.com>
Subject: [Intel-gfx] [RFC PATCH 07/10] drm/i915: Enable GuC TLB invalidations for MTL

Enable GuC TLB invalidations for MTL.  Though more platforms than just
MTL support GuC TLB invalidations, MTL is presently the only platform
that requires it for any purpose, so only enable it there for now to
minimize cross-platform impact.
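If another platform later needs the same behaviour, opting in would
presumably just mean setting the same flag in its intel_device_info, by
analogy with the mtl_info change below; the structure name here is a
hypothetical placeholder, not something added by this patch:

static const struct intel_device_info hypothetical_future_info = {
	/* ... other platform fields ... */
	.has_guc_deprivilege = 1,
	.has_guc_tlb_invalidation = 1,	/* opt in to GuC-based TLB invalidation */
};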
Signed-off-by: Jonathan Cavitt
---
 drivers/gpu/drm/i915/i915_pci.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
index df7c261410f79..d4b51ececbb12 100644
--- a/drivers/gpu/drm/i915/i915_pci.c
+++ b/drivers/gpu/drm/i915/i915_pci.c
@@ -829,6 +829,7 @@ static const struct intel_device_info mtl_info = {
 	.has_flat_ccs = 0,
 	.has_gmd_id = 1,
 	.has_guc_deprivilege = 1,
+	.has_guc_tlb_invalidation = 1,
 	.has_llc = 0,
 	.has_mslice_steering = 0,
 	.has_snoop = 1,

From patchwork Tue Oct 10 18:44:21 2023
X-Patchwork-Submitter: "Cavitt, Jonathan"
X-Patchwork-Id: 13415874
From: Jonathan Cavitt
To: intel-gfx@lists.freedesktop.org
Cc: andi.shyti@intel.com, jonathan.cavitt@intel.com, saurabhg.gupta@intel.com, nirmoy.das@intel.com
Date: Tue, 10 Oct 2023 11:44:21 -0700
Message-Id: <20231010184423.2118908-11-jonathan.cavitt@intel.com>
In-Reply-To: <20231010184423.2118908-1-jonathan.cavitt@intel.com>
Subject: [Intel-gfx] [RFC PATCH 08/10] drm/i915: Define GuC Based TLB invalidation routines

From: Prathap Kumar Valsan

The
GuC firmware has defined the interface for selective TLB invalidation support. This patch adds routines to interface with GuC. Signed-off-by: Prathap Kumar Valsan CC: Matthew Brost --- .../gpu/drm/i915/gt/uc/abi/guc_actions_abi.h | 2 + drivers/gpu/drm/i915/gt/uc/intel_guc.h | 11 ++ .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 105 +++++++++++++++--- 3 files changed, 105 insertions(+), 13 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h b/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h index 33f253410d0c8..7bb710fcd9087 100644 --- a/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h +++ b/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h @@ -189,6 +189,8 @@ enum intel_guc_state_capture_event_status { enum intel_guc_tlb_invalidation_type { INTEL_GUC_TLB_INVAL_ENGINES = 0x0, + INTEL_GUC_TLB_INVAL_PAGE_SELECTIVE = 0x1, + INTEL_GUC_TLB_INVAL_PAGE_SELECTIVE_CTX = 0x2, INTEL_GUC_TLB_INVAL_GUC = 0x3, }; diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h index 3fbf4b33ce139..369fd2be1c725 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h @@ -431,6 +431,17 @@ int intel_guc_allocate_and_map_vma(struct intel_guc *guc, u32 size, int intel_guc_self_cfg32(struct intel_guc *guc, u16 key, u32 value); int intel_guc_self_cfg64(struct intel_guc *guc, u16 key, u64 value); +int intel_guc_g2g_register(struct intel_guc *guc); + +int intel_guc_invalidate_tlb_full(struct intel_guc *guc); +int intel_guc_invalidate_tlb_page_selective(struct intel_guc *guc, + enum intel_guc_tlb_inval_mode mode, + u64 start, u64 length); +int intel_guc_invalidate_tlb_page_selective_ctx(struct intel_guc *guc, + enum intel_guc_tlb_inval_mode mode, + u64 start, u64 length, u32 ctxid); +int intel_guc_invalidate_tlb_guc(struct intel_guc *guc); + static inline bool intel_guc_is_supported(struct intel_guc *guc) { return intel_uc_fw_is_supported(&guc->fw); diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index c3c45d3b9e89b..8c31000525b59 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -4753,22 +4753,12 @@ static bool intel_gt_is_enabled(const struct intel_gt *gt) return true; } -static int guc_send_invalidate_tlb(struct intel_guc *guc, - enum intel_guc_tlb_invalidation_type type) +static int guc_send_invalidate_tlb(struct intel_guc *guc, u32 *action, u32 size) { struct intel_guc_tlb_wait _wq, *wq = &_wq; DEFINE_WAIT_FUNC(wait, woken_wake_function); int err; u32 seqno; - u32 action[] = { - INTEL_GUC_ACTION_TLB_INVALIDATION, - 0, - REG_FIELD_PREP(INTEL_GUC_TLB_INVAL_TYPE_MASK, type) | - REG_FIELD_PREP(INTEL_GUC_TLB_INVAL_MODE_MASK, - INTEL_GUC_TLB_INVAL_MODE_HEAVY) | - INTEL_GUC_TLB_INVAL_FLUSH_CACHE, - }; - u32 size = ARRAY_SIZE(action); init_waitqueue_head(&_wq.wq); @@ -4822,13 +4812,102 @@ static int guc_send_invalidate_tlb(struct intel_guc *guc, /* Full TLB invalidation */ int intel_guc_invalidate_tlb_engines(struct intel_guc *guc) { - return guc_send_invalidate_tlb(guc, INTEL_GUC_TLB_INVAL_ENGINES); + u32 action[] = { + INTEL_GUC_ACTION_TLB_INVALIDATION, + 0, + REG_FIELD_PREP(INTEL_GUC_TLB_INVAL_TYPE_MASK, + INTEL_GUC_TLB_INVAL_ENGINES) | + REG_FIELD_PREP(INTEL_GUC_TLB_INVAL_MODE_MASK, + INTEL_GUC_TLB_INVAL_MODE_HEAVY) | + INTEL_GUC_TLB_INVAL_FLUSH_CACHE, + }; + u32 size = ARRAY_SIZE(action); + return guc_send_invalidate_tlb(guc, action, size); +} + +/* + * Selective TLB Invalidation for 
Address Range: + * TLB's in the Address Range is Invalidated across all engines. + */ +int intel_guc_invalidate_tlb_page_selective(struct intel_guc *guc, + enum intel_guc_tlb_inval_mode mode, + u64 start, u64 length) +{ + u64 vm_total = BIT_ULL(RUNTIME_INFO(guc_to_gt(guc)->i915)->ppgtt_size); + + /* + * For page selective invalidations, this specifies the number of contiguous + * PPGTT pages that needs to be invalidated. + */ + u32 address_mask = length >= vm_total ? 0 : ilog2(length) - ilog2(SZ_4K); + u32 action[] = { + INTEL_GUC_ACTION_TLB_INVALIDATION, + 0, + REG_FIELD_PREP(INTEL_GUC_TLB_INVAL_TYPE_MASK, + INTEL_GUC_TLB_INVAL_PAGE_SELECTIVE) | + REG_FIELD_PREP(INTEL_GUC_TLB_INVAL_MODE_MASK, mode) | + INTEL_GUC_TLB_INVAL_FLUSH_CACHE, + 0, + length >= vm_total ? 1 : lower_32_bits(start), + upper_32_bits(start), + address_mask, + }; + + GEM_BUG_ON(length < SZ_4K); + GEM_BUG_ON(!is_power_of_2(length)); + GEM_BUG_ON(!IS_ALIGNED(start, length)); + GEM_BUG_ON(range_overflows(start, length, vm_total)); + + return guc_send_invalidate_tlb(guc, action, ARRAY_SIZE(action)); +} + +/* + * Selective TLB Invalidation for Context: + * Invalidates all TLB's for a specific context across all engines. + */ +int intel_guc_invalidate_tlb_page_selective_ctx(struct intel_guc *guc, + enum intel_guc_tlb_inval_mode mode, + u64 start, u64 length, u32 ctxid) +{ + u64 vm_total = BIT_ULL(RUNTIME_INFO(guc_to_gt(guc)->i915)->ppgtt_size); + u32 address_mask = (ilog2(length) - ilog2(I915_GTT_PAGE_SIZE_4K)); + u32 full_range = vm_total == length; + u32 action[] = { + INTEL_GUC_ACTION_TLB_INVALIDATION, + 0, + REG_FIELD_PREP(INTEL_GUC_TLB_INVAL_TYPE_MASK, + INTEL_GUC_TLB_INVAL_PAGE_SELECTIVE_CTX) | + REG_FIELD_PREP(INTEL_GUC_TLB_INVAL_MODE_MASK, mode) | + INTEL_GUC_TLB_INVAL_FLUSH_CACHE, + ctxid, + full_range ? full_range : lower_32_bits(start), + full_range ? 0 : upper_32_bits(start), + full_range ? 0 : address_mask, + }; + + GEM_BUG_ON(length < SZ_4K); + GEM_BUG_ON(!is_power_of_2(length)); + GEM_BUG_ON(length & GENMASK(ilog2(SZ_16M) - 1, ilog2(SZ_2M) + 1)); + GEM_BUG_ON(!IS_ALIGNED(start, length)); + GEM_BUG_ON(range_overflows(start, length, vm_total)); + + return guc_send_invalidate_tlb(guc, action, ARRAY_SIZE(action)); } /* GuC TLB Invalidation: Invalidate the TLB's of GuC itself. 
*/ int intel_guc_invalidate_tlb_guc(struct intel_guc *guc) { - return guc_send_invalidate_tlb(guc, INTEL_GUC_TLB_INVAL_GUC); + u32 action[] = { + INTEL_GUC_ACTION_TLB_INVALIDATION, + 0, + REG_FIELD_PREP(INTEL_GUC_TLB_INVAL_TYPE_MASK, + INTEL_GUC_TLB_INVAL_GUC) | + REG_FIELD_PREP(INTEL_GUC_TLB_INVAL_MODE_MASK, + INTEL_GUC_TLB_INVAL_MODE_HEAVY) | + INTEL_GUC_TLB_INVAL_FLUSH_CACHE, + }; + u32 size = ARRAY_SIZE(action); + return guc_send_invalidate_tlb(guc, action, size); } int intel_guc_deregister_done_process_msg(struct intel_guc *guc, From patchwork Tue Oct 10 18:44:22 2023 From: Jonathan Cavitt To: intel-gfx@lists.freedesktop.org Date: Tue, 10 Oct 2023 11:44:22 -0700 Message-Id: <20231010184423.2118908-12-jonathan.cavitt@intel.com> In-Reply-To: <20231010184423.2118908-1-jonathan.cavitt@intel.com> References: <20231010184423.2118908-1-jonathan.cavitt@intel.com> Subject: [Intel-gfx] [RFC PATCH 09/10] drm/i915: Add generic interface for tlb invalidation Cc: andi.shyti@intel.com, jonathan.cavitt@intel.com, saurabhg.gupta@intel.com, nirmoy.das@intel.com From: Prathap
Kumar Valsan Support both selective and full TLB invalidations. When the GuC is enabled, TLB invalidations are issued over the GuC CT interface; otherwise, the MMIO interface is used. Signed-off-by: Prathap Kumar Valsan CC: Niranjana Vishwanathapura CC: Fei Yang Signed-off-by: Jonathan Cavitt --- drivers/gpu/drm/i915/gt/intel_gt_regs.h | 8 ++ drivers/gpu/drm/i915/gt/intel_tlb.c | 52 +++++++++++ drivers/gpu/drm/i915/gt/intel_tlb.h | 1 + drivers/gpu/drm/i915/gt/selftest_tlb.c | 88 +++++++++++++++++++ .../drm/i915/selftests/i915_mock_selftests.h | 1 + 5 files changed, 150 insertions(+) diff --git a/drivers/gpu/drm/i915/gt/intel_gt_regs.h b/drivers/gpu/drm/i915/gt/intel_gt_regs.h index eecd0a87a6478..f2ca1c26ecde5 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt_regs.h +++ b/drivers/gpu/drm/i915/gt/intel_gt_regs.h @@ -1124,6 +1124,14 @@ #define GEN12_GAM_DONE _MMIO(0xcf68) +#define XEHPSDV_TLB_INV_DESC0 _MMIO(0xcf7c) +#define XEHPSDV_TLB_INV_DESC0_ADDR_LO REG_GENMASK(31, 12) +#define XEHPSDV_TLB_INV_DESC0_ADDR_MASK REG_GENMASK(8, 3) +#define XEHPSDV_TLB_INV_DESC0_G REG_GENMASK(2, 1) +#define XEHPSDV_TLB_INV_DESC0_VALID REG_BIT(0) +#define XEHPSDV_TLB_INV_DESC1 _MMIO(0xcf80) +#define XEHPSDV_TLB_INV_DESC0_ADDR_HI REG_GENMASK(31, 0) + #define GEN7_HALF_SLICE_CHICKEN1 _MMIO(0xe100) /* IVB GT1 + VLV */ #define GEN8_HALF_SLICE_CHICKEN1 MCR_REG(0xe100) #define GEN7_MAX_PS_THREAD_DEP (8 << 12) diff --git a/drivers/gpu/drm/i915/gt/intel_tlb.c b/drivers/gpu/drm/i915/gt/intel_tlb.c index 4bb13d1890e37..c31fd0875ac4f 100644 --- a/drivers/gpu/drm/i915/gt/intel_tlb.c +++ b/drivers/gpu/drm/i915/gt/intel_tlb.c @@ -157,6 +157,58 @@ void intel_gt_invalidate_tlb_full(struct intel_gt *gt, u32 seqno) } } +static u64 tlb_page_selective_size(u64 *addr, u64 length) +{ + const u64 end = *addr + length; + u64 start; + + /* + * The minimum invalidation size that the hardware expects for a 2MB page is + * 16MB. + */ + length = max_t(u64, roundup_pow_of_two(length), SZ_4K); + if (length >= SZ_2M) + length = max_t(u64, SZ_16M, length); + + /* + * We need to invalidate at a higher granularity if the start address is + * not aligned to the length. When start is not aligned with length, we + * need to find a length large enough to create an address mask covering + * the required range.
+ */ + start = round_down(*addr, length); + while (start + length < end) { + length <<= 1; + start = round_down(*addr, length); + } + + *addr = start; + return length; +} + +bool intel_gt_invalidate_tlb_range(struct intel_gt *gt, + u64 start, u64 length) +{ + struct intel_guc *guc = &gt->uc.guc; + intel_wakeref_t wakeref; + u64 size, vm_total; + bool ret = true; + + if (intel_gt_is_wedged(gt)) + return true; + + vm_total = BIT_ULL(RUNTIME_INFO(gt->i915)->ppgtt_size); + /* Align start and length */ + size = min_t(u64, vm_total, tlb_page_selective_size(&start, length)); + + with_intel_gt_pm_if_awake(gt, wakeref) + ret = intel_guc_invalidate_tlb_page_selective(guc, + INTEL_GUC_TLB_INVAL_MODE_HEAVY, + start, size) == 0; + + return ret; +} + void intel_gt_init_tlb(struct intel_gt *gt) { mutex_init(&gt->tlb.invalidate_lock); diff --git a/drivers/gpu/drm/i915/gt/intel_tlb.h b/drivers/gpu/drm/i915/gt/intel_tlb.h index 337327af92ac4..9e5fc40c2b08e 100644 --- a/drivers/gpu/drm/i915/gt/intel_tlb.h +++ b/drivers/gpu/drm/i915/gt/intel_tlb.h @@ -12,6 +12,7 @@ #include "intel_gt_types.h" void intel_gt_invalidate_tlb_full(struct intel_gt *gt, u32 seqno); +bool intel_gt_invalidate_tlb_range(struct intel_gt *gt, u64 start, u64 length); void intel_gt_init_tlb(struct intel_gt *gt); void intel_gt_fini_tlb(struct intel_gt *gt); diff --git a/drivers/gpu/drm/i915/gt/selftest_tlb.c b/drivers/gpu/drm/i915/gt/selftest_tlb.c index 24beb94aa7a37..29f137a6e0362 100644 --- a/drivers/gpu/drm/i915/gt/selftest_tlb.c +++ b/drivers/gpu/drm/i915/gt/selftest_tlb.c @@ -382,10 +382,45 @@ static int invalidate_full(void *arg) return err; } +static void tlbinv_range(struct i915_address_space *vm, u64 addr, u64 length) +{ + if (!intel_gt_invalidate_tlb_range(vm->gt, addr, length)) + pr_err("range invalidate failed\n"); +} + +static bool has_invalidate_range(struct intel_gt *gt) +{ + intel_wakeref_t wf; + bool result = false; + + with_intel_gt_pm(gt, wf) + result = intel_gt_invalidate_tlb_range(gt, 0, gt->vm->total); + + return result; +} + +static int invalidate_range(void *arg) +{ + struct intel_gt *gt = arg; + int err; + + if (!has_invalidate_range(gt)) + return 0; + + err = mem_tlbinv(gt, create_smem, tlbinv_range); + if (err == 0) + err = mem_tlbinv(gt, create_lmem, tlbinv_range); + if (err == -ENODEV || err == -ENXIO) + err = 0; + + return err; +} + int intel_tlb_live_selftests(struct drm_i915_private *i915) { static const struct i915_subtest tests[] = { SUBTEST(invalidate_full), + SUBTEST(invalidate_range), }; struct intel_gt *gt; unsigned int i; @@ -403,3 +438,56 @@ int intel_tlb_live_selftests(struct drm_i915_private *i915) return 0; } + +static int tlb_page_size(void *arg) +{ + int start, size, offset; + + for (start = 0; start < 57; start++) { + for (size = 0; size <= 57 - start; size++) { + for (offset = 0; offset <= size; offset++) { + u64 len = BIT(size); + u64 addr = BIT(start) + len - BIT(offset); + u64 expected_start = addr; + u64 expected_end = addr + len - 1; + int err = 0; + + if (addr + len < addr) + continue; + + len = tlb_page_selective_size(&addr, len); + if (addr > expected_start) { + pr_err("(start:%d, size:%d, offset:%d, range:[%llx, %llx]) invalidate range:[%llx + %llx] after start:%llx\n", + start, size, offset, + expected_start, expected_end, + addr, len, + expected_start); + err = -EINVAL; + } + + if (addr + len < expected_end) { + pr_err("(start:%d, size:%d, offset:%d, range:[%llx, %llx]) invalidate range:[%llx + %llx] before end:%llx\n", + start, size, offset, + expected_start, expected_end, + addr, len,
+ expected_end); + err = -EINVAL; + } + + if (err) + return err; + } + } + } + + return 0; +} + +int intel_tlb_mock_selftests(void) +{ + static const struct i915_subtest tests[] = { + SUBTEST(tlb_page_size), + }; + + return i915_subtests(tests, NULL); +} diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h index 0c22e0fc9059c..3e00cd2b6e53c 100644 --- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h +++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h @@ -21,6 +21,7 @@ selftest(fence, i915_sw_fence_mock_selftests) selftest(scatterlist, scatterlist_mock_selftests) selftest(syncmap, i915_syncmap_mock_selftests) selftest(uncore, intel_uncore_mock_selftests) +selftest(tlb, intel_tlb_mock_selftests) selftest(ring, intel_ring_mock_selftests) selftest(engine, intel_engine_cs_mock_selftests) selftest(timelines, intel_timeline_mock_selftests) From patchwork Tue Oct 10 18:44:23 2023 From: Jonathan Cavitt To: intel-gfx@lists.freedesktop.org Date: Tue, 10 Oct 2023 11:44:23 -0700 Message-Id: <20231010184423.2118908-13-jonathan.cavitt@intel.com> In-Reply-To: <20231010184423.2118908-1-jonathan.cavitt@intel.com> References: <20231010184423.2118908-1-jonathan.cavitt@intel.com> Subject: [Intel-gfx] [RFC PATCH 10/10] drm/i915: Use selective tlb invalidations where supported
Cc: andi.shyti@intel.com, jonathan.cavitt@intel.com, saurabhg.gupta@intel.com, nirmoy.das@intel.com On platforms that support selective TLB invalidations, we don't need to do a full TLB invalidation. Instead, do a range-based TLB invalidation for every unbind of a purged VMA that belongs to an active VM. Signed-off-by: Prathap Kumar Valsan Cc: Niranjana Vishwanathapura Cc: Fei Yang Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Jonathan Cavitt --- drivers/gpu/drm/i915/gt/intel_ppgtt.c | 2 +- drivers/gpu/drm/i915/i915_vma.c | 14 +++++++++----- drivers/gpu/drm/i915/i915_vma.h | 3 ++- 3 files changed, 12 insertions(+), 7 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_ppgtt.c b/drivers/gpu/drm/i915/gt/intel_ppgtt.c index d07a4f97b9434..b43dae3cbd59f 100644 --- a/drivers/gpu/drm/i915/gt/intel_ppgtt.c +++ b/drivers/gpu/drm/i915/gt/intel_ppgtt.c @@ -211,7 +211,7 @@ void ppgtt_unbind_vma(struct i915_address_space *vm, return; vm->clear_range(vm, vma_res->start, vma_res->vma_size); - vma_invalidate_tlb(vm, vma_res->tlb); + vma_invalidate_tlb(vm, vma_res->tlb, vma_res->start, vma_res->vma_size); } static unsigned long pd_count(u64 size, int shift) diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c index d09aad34ba37f..cb05d794f0d0f 100644 --- a/drivers/gpu/drm/i915/i915_vma.c +++ b/drivers/gpu/drm/i915/i915_vma.c @@ -1339,7 +1339,8 @@ I915_SELFTEST_EXPORT int i915_vma_get_pages(struct i915_vma *vma) return err; } -void vma_invalidate_tlb(struct i915_address_space *vm, u32 *tlb) +void vma_invalidate_tlb(struct i915_address_space *vm, u32 *tlb, + u64 start, u64 size) { struct intel_gt *gt; int id; @@ -1355,9 +1356,11 @@ void vma_invalidate_tlb(struct i915_address_space *vm, u32 *tlb) * the most recent TLB invalidation seqno, and if we have not yet * flushed the TLBs upon release, perform a full invalidation. */ - for_each_gt(gt, vm->i915, id) - WRITE_ONCE(tlb[id], - intel_gt_next_invalidate_tlb_full(gt)); + for_each_gt(gt, vm->i915, id) { + if (!intel_gt_invalidate_tlb_range(gt, start, size)) + WRITE_ONCE(tlb[id], + intel_gt_next_invalidate_tlb_full(gt)); + } } static void __vma_put_pages(struct i915_vma *vma, unsigned int count) @@ -2041,7 +2044,8 @@ struct dma_fence *__i915_vma_evict(struct i915_vma *vma, bool async) dma_fence_put(unbind_fence); unbind_fence = NULL; } - vma_invalidate_tlb(vma->vm, vma->obj->mm.tlb); + vma_invalidate_tlb(vma->vm, vma->obj->mm.tlb, + vma->node.start, vma->size); } /* diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h index e356dfb883d34..5a604aad55dfe 100644 --- a/drivers/gpu/drm/i915/i915_vma.h +++ b/drivers/gpu/drm/i915/i915_vma.h @@ -260,7 +260,8 @@ bool i915_vma_misplaced(const struct i915_vma *vma, u64 size, u64 alignment, u64 flags); void __i915_vma_set_map_and_fenceable(struct i915_vma *vma); void i915_vma_revoke_mmap(struct i915_vma *vma); -void vma_invalidate_tlb(struct i915_address_space *vm, u32 *tlb); +void vma_invalidate_tlb(struct i915_address_space *vm, u32 *tlb, + u64 start, u64 size); struct dma_fence *__i915_vma_evict(struct i915_vma *vma, bool async); int __must_check i915_vma_unbind(struct i915_vma *vma);
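
For readers following the range-rounding rules in the series: the sketch below is a small, standalone userspace model of the rounding that tlb_page_selective_size() performs before a range is handed to intel_guc_invalidate_tlb_page_selective(). It is illustrative only and not part of the patches; the helper name page_selective_size(), the main() harness, and the sample addresses are invented for the example, while the SZ_* values and the 16M-minimum rule for 2M pages mirror the code above.

/*
 * Standalone sketch (assumption-labelled): models the window rounding of
 * tlb_page_selective_size() from intel_tlb.c. Not kernel code.
 */
#include <stdio.h>
#include <stdint.h>

#define SZ_4K  0x1000ULL
#define SZ_2M  0x200000ULL
#define SZ_16M 0x1000000ULL

/* Smallest power of two >= x (returns 1 for x == 0) */
static uint64_t roundup_pow_of_two_u64(uint64_t x)
{
	uint64_t r = 1;

	while (r < x)
		r <<= 1;
	return r;
}

/* Same rounding rules as tlb_page_selective_size() in the patch */
static uint64_t page_selective_size(uint64_t *addr, uint64_t length)
{
	const uint64_t end = *addr + length;
	uint64_t start;

	/* Round the length up to a power of two, at least 4K */
	length = roundup_pow_of_two_u64(length);
	if (length < SZ_4K)
		length = SZ_4K;
	/* A 2M page needs at least a 16M invalidation window */
	if (length >= SZ_2M && length < SZ_16M)
		length = SZ_16M;

	/* Grow the window until an aligned start still covers [addr, end) */
	start = *addr & ~(length - 1);
	while (start + length < end) {
		length <<= 1;
		start = *addr & ~(length - 1);
	}

	*addr = start;
	return length;
}

int main(void)
{
	uint64_t addr = 0x12345000ULL;
	uint64_t size = page_selective_size(&addr, 0x3000ULL);

	printf("invalidate [%#llx, %#llx)\n",
	       (unsigned long long)addr,
	       (unsigned long long)(addr + size));
	return 0;
}

Feeding an unaligned 12KiB request at 0x12345000 through this model yields a 16KiB window starting at 0x12344000, which is the property the mock selftest (tlb_page_size) asserts: the returned window never starts after the requested start and never ends before the requested end.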