From patchwork Wed May 31 23:54:09 2023 X-Patchwork-Submitter: Daniele Ceraolo Spurio X-Patchwork-Id: 13262866 From: Daniele Ceraolo Spurio To: intel-gfx@lists.freedesktop.org Subject: [PATCH v5 1/7] drm/i915/uc: perma-pin firmwares Date: Wed, 31 May 2023 16:54:09 -0700 Message-Id: <20230531235415.1467475-2-daniele.ceraolospurio@intel.com> In-Reply-To: <20230531235415.1467475-1-daniele.ceraolospurio@intel.com> References: <20230531235415.1467475-1-daniele.ceraolospurio@intel.com> Cc: John Harrison , Daniele Ceraolo Spurio , Alan Previn , dri-devel@lists.freedesktop.org Now that each FW has its own reserved area, we can keep them always pinned and skip the pin/unpin dance on reset. This will make things easier for the 2-step HuC authentication, which requires the FW to be pinned in GGTT after the xfer is completed. Since the vma is now valid for a long time and not just for the quick pin-load-unpin dance, the name "dummy" is no longer appropriate and has been replaced with vma_res. All the functions have also been updated to operate on vma_res for consistency.
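As a minimal sketch of the lifecycle described above, the idea is: bind the firmware into its reserved GGTT slot once at init, do nothing on GT reset, re-insert the PTEs on resume (the binding bypasses the allocator, so the generic resume helper does not know about it), and unbind only at fini. The sketch below is illustrative C only; the names (fw_slot, fw_slot_*) are hypothetical, and the real implementation is in uc_fw_bind_ggtt(), uc_fw_unbind_ggtt() and intel_uc_fw_resume_mapping() in the diff that follows.

#include <stdbool.h>

/*
 * Illustrative only, not i915 code: a simplified model of the perma-pin
 * scheme. Each firmware owns a fixed, reserved GGTT slot; the mapping is
 * created once, survives resets, is re-inserted on resume and is torn
 * down only at driver unload.
 */
struct fw_slot {
	unsigned long long ggtt_start;	/* reserved per-FW offset */
	unsigned long long size;
	bool mapped;
};

static void fw_slot_insert_ptes(struct fw_slot *slot)
{
	/* stand-in for writing the GGTT PTEs for [ggtt_start, ggtt_start + size) */
}

/* Bind once at init; nothing needs to happen on GT reset. */
static void fw_slot_bind(struct fw_slot *slot)
{
	fw_slot_insert_ptes(slot);
	slot->mapped = true;
}

/*
 * The PTEs were written behind the allocator's back, so the common
 * GGTT-resume path cannot restore them: re-insert them explicitly.
 */
static void fw_slot_resume(struct fw_slot *slot)
{
	if (slot->mapped)
		fw_slot_insert_ptes(slot);
}

/* Unbind only at driver unload (fini). */
static void fw_slot_unbind(struct fw_slot *slot)
{
	/* stand-in for clearing the PTE range */
	slot->mapped = false;
}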
Given that we pin the vma behind the allocator's back (which is ok because we do the pinning in an area that was previously reserved for this purpose), we do need to explicitly re-pin on resume because the automated helper won't cover us. v2: better comments and commit message, s/dummy/vma_res/ Signed-off-by: Daniele Ceraolo Spurio Cc: Alan Previn Cc: John Harrison Reviewed-by: John Harrison --- drivers/gpu/drm/i915/gt/intel_ggtt.c | 3 ++ drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.c | 7 +++- drivers/gpu/drm/i915/gt/uc/intel_guc.c | 2 +- drivers/gpu/drm/i915/gt/uc/intel_huc.c | 2 +- drivers/gpu/drm/i915/gt/uc/intel_uc.c | 8 ++++ drivers/gpu/drm/i915/gt/uc/intel_uc.h | 2 + drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c | 50 ++++++++++++++++------- drivers/gpu/drm/i915/gt/uc/intel_uc_fw.h | 22 ++++++---- 8 files changed, 71 insertions(+), 25 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt.c b/drivers/gpu/drm/i915/gt/intel_ggtt.c index 2a7942fac798..f4e8aa8246e8 100644 --- a/drivers/gpu/drm/i915/gt/intel_ggtt.c +++ b/drivers/gpu/drm/i915/gt/intel_ggtt.c @@ -1326,6 +1326,9 @@ void i915_ggtt_resume(struct i915_ggtt *ggtt) ggtt->vm.scratch_range(&ggtt->vm, ggtt->error_capture.start, ggtt->error_capture.size); + list_for_each_entry(gt, &ggtt->gt_list, ggtt_link) + intel_uc_resume_mappings(&gt->uc); + ggtt->invalidate(ggtt); if (flush) diff --git a/drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.c b/drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.c index fb0984f875f9..b26f493f86fa 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.c @@ -90,7 +90,12 @@ void intel_gsc_uc_init_early(struct intel_gsc_uc *gsc) { struct intel_gt *gt = gsc_uc_to_gt(gsc); - intel_uc_fw_init_early(&gsc->fw, INTEL_UC_FW_TYPE_GSC); + /* + * GSC FW needs to be copied to a dedicated memory allocations for + * loading (see gsc->local), so we don't need to GGTT map the FW image + * itself into GGTT.
+ */ + intel_uc_fw_init_early(&gsc->fw, INTEL_UC_FW_TYPE_GSC, false); INIT_WORK(&gsc->work, gsc_work); /* we can arrive here from i915_driver_early_probe for primary diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc.c index c9f20385f6a0..2eb891b270ae 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c @@ -164,7 +164,7 @@ void intel_guc_init_early(struct intel_guc *guc) struct intel_gt *gt = guc_to_gt(guc); struct drm_i915_private *i915 = gt->i915; - intel_uc_fw_init_early(&guc->fw, INTEL_UC_FW_TYPE_GUC); + intel_uc_fw_init_early(&guc->fw, INTEL_UC_FW_TYPE_GUC, true); intel_guc_ct_init_early(&guc->ct); intel_guc_log_init_early(&guc->log); intel_guc_submission_init_early(guc); diff --git a/drivers/gpu/drm/i915/gt/uc/intel_huc.c b/drivers/gpu/drm/i915/gt/uc/intel_huc.c index 04724ff56ded..268e036f8f28 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_huc.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_huc.c @@ -276,7 +276,7 @@ void intel_huc_init_early(struct intel_huc *huc) struct drm_i915_private *i915 = huc_to_gt(huc)->i915; struct intel_gt *gt = huc_to_gt(huc); - intel_uc_fw_init_early(&huc->fw, INTEL_UC_FW_TYPE_HUC); + intel_uc_fw_init_early(&huc->fw, INTEL_UC_FW_TYPE_HUC, true); /* * we always init the fence as already completed, even if HuC is not diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c b/drivers/gpu/drm/i915/gt/uc/intel_uc.c index c8b9cbb7ba3a..1e7f5cc9d550 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c @@ -700,6 +700,12 @@ void intel_uc_suspend(struct intel_uc *uc) } } +static void __uc_resume_mappings(struct intel_uc *uc) +{ + intel_uc_fw_resume_mapping(&uc->guc.fw); + intel_uc_fw_resume_mapping(&uc->huc.fw); +} + static int __uc_resume(struct intel_uc *uc, bool enable_communication) { struct intel_guc *guc = &uc->guc; @@ -767,4 +773,6 @@ static const struct intel_uc_ops uc_ops_on = { .init_hw = __uc_init_hw, .fini_hw = __uc_fini_hw, + + .resume_mappings = __uc_resume_mappings, }; diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.h b/drivers/gpu/drm/i915/gt/uc/intel_uc.h index d585524d94de..014bb7d83689 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_uc.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.h @@ -24,6 +24,7 @@ struct intel_uc_ops { void (*fini)(struct intel_uc *uc); int (*init_hw)(struct intel_uc *uc); void (*fini_hw)(struct intel_uc *uc); + void (*resume_mappings)(struct intel_uc *uc); }; struct intel_uc { @@ -114,6 +115,7 @@ intel_uc_ops_function(init, init, int, 0); intel_uc_ops_function(fini, fini, void, ); intel_uc_ops_function(init_hw, init_hw, int, 0); intel_uc_ops_function(fini_hw, fini_hw, void, ); +intel_uc_ops_function(resume_mappings, resume_mappings, void, ); #undef intel_uc_ops_function #endif diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c index dc5c96c503a9..31776c279f32 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c @@ -471,12 +471,14 @@ static void __uc_fw_user_override(struct drm_i915_private *i915, struct intel_uc * intel_uc_fw_init_early - initialize the uC object and select the firmware * @uc_fw: uC firmware * @type: type of uC + * @needs_ggtt_mapping: whether the FW needs to be GGTT mapped for loading * * Initialize the state of our uC object and relevant tracking and select the * firmware to fetch and load. 
*/ void intel_uc_fw_init_early(struct intel_uc_fw *uc_fw, - enum intel_uc_fw_type type) + enum intel_uc_fw_type type, + bool needs_ggtt_mapping) { struct intel_gt *gt = ____uc_fw_to_gt(uc_fw, type); struct drm_i915_private *i915 = gt->i915; @@ -490,6 +492,7 @@ void intel_uc_fw_init_early(struct intel_uc_fw *uc_fw, GEM_BUG_ON(uc_fw->file_selected.path); uc_fw->type = type; + uc_fw->needs_ggtt_mapping = needs_ggtt_mapping; if (HAS_GT_UC(i915)) { if (!validate_fw_table_type(i915, type)) { @@ -755,7 +758,7 @@ static int try_firmware_load(struct intel_uc_fw *uc_fw, const struct firmware ** if (err) return err; - if ((*fw)->size > INTEL_UC_RSVD_GGTT_PER_FW) { + if (uc_fw->needs_ggtt_mapping && (*fw)->size > INTEL_UC_RSVD_GGTT_PER_FW) { gt_err(gt, "%s firmware %s: size (%zuKB) exceeds max supported size (%uKB)\n", intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path, (*fw)->size / SZ_1K, INTEL_UC_RSVD_GGTT_PER_FW / SZ_1K); @@ -940,29 +943,32 @@ static void uc_fw_bind_ggtt(struct intel_uc_fw *uc_fw) { struct drm_i915_gem_object *obj = uc_fw->obj; struct i915_ggtt *ggtt = __uc_fw_to_gt(uc_fw)->ggtt; - struct i915_vma_resource *dummy = &uc_fw->dummy; + struct i915_vma_resource *vma_res = &uc_fw->vma_res; u32 pte_flags = 0; - dummy->start = uc_fw_ggtt_offset(uc_fw); - dummy->node_size = obj->base.size; - dummy->bi.pages = obj->mm.pages; + if (!uc_fw->needs_ggtt_mapping) + return; + + vma_res->start = uc_fw_ggtt_offset(uc_fw); + vma_res->node_size = obj->base.size; + vma_res->bi.pages = obj->mm.pages; GEM_BUG_ON(!i915_gem_object_has_pinned_pages(obj)); /* uc_fw->obj cache domains were not controlled across suspend */ if (i915_gem_object_has_struct_page(obj)) - drm_clflush_sg(dummy->bi.pages); + drm_clflush_sg(vma_res->bi.pages); if (i915_gem_object_is_lmem(obj)) pte_flags |= PTE_LM; if (ggtt->vm.raw_insert_entries) - ggtt->vm.raw_insert_entries(&ggtt->vm, dummy, + ggtt->vm.raw_insert_entries(&ggtt->vm, vma_res, i915_gem_get_pat_index(ggtt->vm.i915, I915_CACHE_NONE), pte_flags); else - ggtt->vm.insert_entries(&ggtt->vm, dummy, + ggtt->vm.insert_entries(&ggtt->vm, vma_res, i915_gem_get_pat_index(ggtt->vm.i915, I915_CACHE_NONE), pte_flags); @@ -970,11 +976,13 @@ static void uc_fw_bind_ggtt(struct intel_uc_fw *uc_fw) static void uc_fw_unbind_ggtt(struct intel_uc_fw *uc_fw) { - struct drm_i915_gem_object *obj = uc_fw->obj; struct i915_ggtt *ggtt = __uc_fw_to_gt(uc_fw)->ggtt; - u64 start = uc_fw_ggtt_offset(uc_fw); + struct i915_vma_resource *vma_res = &uc_fw->vma_res; + + if (!vma_res->node_size) + return; - ggtt->vm.clear_range(&ggtt->vm, start, obj->base.size); + ggtt->vm.clear_range(&ggtt->vm, vma_res->start, vma_res->node_size); } static int uc_fw_xfer(struct intel_uc_fw *uc_fw, u32 dst_offset, u32 dma_flags) @@ -991,7 +999,7 @@ static int uc_fw_xfer(struct intel_uc_fw *uc_fw, u32 dst_offset, u32 dma_flags) intel_uncore_forcewake_get(uncore, FORCEWAKE_ALL); /* Set the source address for the uCode */ - offset = uc_fw_ggtt_offset(uc_fw); + offset = uc_fw->vma_res.start; GEM_BUG_ON(upper_32_bits(offset) & 0xFFFF0000); intel_uncore_write_fw(uncore, DMA_ADDR_0_LOW, lower_32_bits(offset)); intel_uncore_write_fw(uncore, DMA_ADDR_0_HIGH, upper_32_bits(offset)); @@ -1065,9 +1073,7 @@ int intel_uc_fw_upload(struct intel_uc_fw *uc_fw, u32 dst_offset, u32 dma_flags) return -ENOEXEC; /* Call custom loader */ - uc_fw_bind_ggtt(uc_fw); err = uc_fw_xfer(uc_fw, dst_offset, dma_flags); - uc_fw_unbind_ggtt(uc_fw); if (err) goto fail; @@ -1171,6 +1177,8 @@ int intel_uc_fw_init(struct intel_uc_fw *uc_fw) goto 
out_unpin; } + uc_fw_bind_ggtt(uc_fw); + return 0; out_unpin: @@ -1181,6 +1189,7 @@ int intel_uc_fw_init(struct intel_uc_fw *uc_fw) void intel_uc_fw_fini(struct intel_uc_fw *uc_fw) { + uc_fw_unbind_ggtt(uc_fw); uc_fw_rsa_data_destroy(uc_fw); if (i915_gem_object_has_pinned_pages(uc_fw->obj)) @@ -1189,6 +1198,17 @@ void intel_uc_fw_fini(struct intel_uc_fw *uc_fw) intel_uc_fw_change_status(uc_fw, INTEL_UC_FIRMWARE_AVAILABLE); } +void intel_uc_fw_resume_mapping(struct intel_uc_fw *uc_fw) +{ + if (!intel_uc_fw_is_available(uc_fw)) + return; + + if (!i915_gem_object_has_pinned_pages(uc_fw->obj)) + return; + + uc_fw_bind_ggtt(uc_fw); +} + /** * intel_uc_fw_cleanup_fetch - cleanup uC firmware * @uc_fw: uC firmware diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.h b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.h index 6ba00e6b3975..2be9470eb712 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.h @@ -99,13 +99,19 @@ struct intel_uc_fw { struct drm_i915_gem_object *obj; /** - * @dummy: A vma used in binding the uc fw to ggtt. We can't define this - * vma on the stack as it can lead to a stack overflow, so we define it - * here. Safe to have 1 copy per uc fw because the binding is single - * threaded as it done during driver load (inherently single threaded) - * or during a GT reset (mutex guarantees single threaded). + * @needs_ggtt_mapping: indicates whether the fw object needs to be + * pinned to ggtt. If true, the fw is pinned at init time and unpinned + * during driver unload. */ - struct i915_vma_resource dummy; + bool needs_ggtt_mapping; + + /** + * @vma_res: A vma resource used in binding the uc fw to ggtt. The fw is + * pinned in a reserved area of the ggtt (above the maximum address + * usable by GuC); therefore, we can't use the normal vma functions to + * do the pinning and we instead use this resource to do so. 
+ */ + struct i915_vma_resource vma_res; struct i915_vma *rsa_data; u32 rsa_size; @@ -282,12 +288,14 @@ static inline u32 intel_uc_fw_get_upload_size(struct intel_uc_fw *uc_fw) } void intel_uc_fw_init_early(struct intel_uc_fw *uc_fw, - enum intel_uc_fw_type type); + enum intel_uc_fw_type type, + bool needs_ggtt_mapping); int intel_uc_fw_fetch(struct intel_uc_fw *uc_fw); void intel_uc_fw_cleanup_fetch(struct intel_uc_fw *uc_fw); int intel_uc_fw_upload(struct intel_uc_fw *uc_fw, u32 offset, u32 dma_flags); int intel_uc_fw_init(struct intel_uc_fw *uc_fw); void intel_uc_fw_fini(struct intel_uc_fw *uc_fw); +void intel_uc_fw_resume_mapping(struct intel_uc_fw *uc_fw); size_t intel_uc_fw_copy_rsa(struct intel_uc_fw *uc_fw, void *dst, u32 max_len); int intel_uc_fw_mark_load_failed(struct intel_uc_fw *uc_fw, int err); void intel_uc_fw_dump(const struct intel_uc_fw *uc_fw, struct drm_printer *p); From patchwork Wed May 31 23:54:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Daniele Ceraolo Spurio X-Patchwork-Id: 13262865 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 89A9EC77B7A for ; Wed, 31 May 2023 23:54:51 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id D507D10E1FC; Wed, 31 May 2023 23:54:33 +0000 (UTC) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by gabe.freedesktop.org (Postfix) with ESMTPS id 7250310E0DB; Wed, 31 May 2023 23:54:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685577266; x=1717113266; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=iegpz9UWNbmWbJXamW9iQ0IoB0xfyd2aYsfjooy0kxA=; b=PamBsnjanRIn40S5VuWBIAzXUJZBTbTGgK6J7wDHoSZuILkvXvBaYJ09 rht6Jsn17xVBqrhjzj5xrdojpwrkA1RcCB3NoKFakqccHYebE/deagsNg w4fR3Wzrmf1pSSPT+OwstvqyZkbab93XeiIl/g+XYWF9wCXlrRtSCknUp 5zMQAUvEU2OFLKNx1mbnCv+W51sOkeXIqclEY9XrdBQgtwrBM/Oa1bNhA 4XKD1QJIvhNdfs+mDn1zx92bVefRYmIUqL0O1pxkVeAnOG096Gl7BVuHv qDM18Fm0lC4oO57R8A77XFDii/eKXvxmHQO3j8V6YC5KcvDTK7NEB0RcC w==; X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="335747025" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="335747025" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 May 2023 16:54:26 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="707109923" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="707109923" Received: from valcore-skull-1.fm.intel.com ([10.1.27.19]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 May 2023 16:54:25 -0700 From: Daniele Ceraolo Spurio To: intel-gfx@lists.freedesktop.org Subject: [PATCH v5 2/7] drm/i915/huc: Parse the GSC-enabled HuC binary Date: Wed, 31 May 2023 16:54:10 -0700 Message-Id: <20230531235415.1467475-3-daniele.ceraolospurio@intel.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230531235415.1467475-1-daniele.ceraolospurio@intel.com> References: <20230531235415.1467475-1-daniele.ceraolospurio@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org 
X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: John Harrison , Daniele Ceraolo Spurio , Alan Previn , dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" The new binaries that support the 2-step authentication contain the legacy-style binary, which we can use for loading the HuC via DMA. To find out where this is located in the image, we need to parse the manifest of the GSC-enabled HuC binary. The manifest consist of a partition header followed by entries, one of which contains the offset we're looking for. Note that the DG2 GSC binary contains entries with the same names, but it doesn't contain a full legacy binary, so we need to skip assigning the dma offset in that case (which we can do by checking the ccs). Also, since we're now parsing the entries, we can extract the HuC version that way instead of using hardcoded offsets. Note that the GSC binary uses the same structures in its binary header, so they've been added in their own header file. v2: fix structure names to match meu defines (s/CPT/CPD/), update commit message, check ccs validity, drop old version location defines. v3: drop references to the MEU tool to reduce confusion, fix log (John) v4: fix log for real (John) Signed-off-by: Daniele Ceraolo Spurio Cc: Alan Previn Cc: John Harrison Reviewed-by: Alan Previn #v2 Reviewed-by: John Harrison --- .../drm/i915/gt/uc/intel_gsc_binary_headers.h | 74 ++++++++++ drivers/gpu/drm/i915/gt/uc/intel_huc.c | 11 +- drivers/gpu/drm/i915/gt/uc/intel_huc_fw.c | 136 ++++++++++++++++++ drivers/gpu/drm/i915/gt/uc/intel_huc_fw.h | 5 +- drivers/gpu/drm/i915/gt/uc/intel_huc_print.h | 21 +++ drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c | 72 +++++----- drivers/gpu/drm/i915/gt/uc/intel_uc_fw.h | 2 + drivers/gpu/drm/i915/gt/uc/intel_uc_fw_abi.h | 6 - 8 files changed, 274 insertions(+), 53 deletions(-) create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_gsc_binary_headers.h create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_huc_print.h diff --git a/drivers/gpu/drm/i915/gt/uc/intel_gsc_binary_headers.h b/drivers/gpu/drm/i915/gt/uc/intel_gsc_binary_headers.h new file mode 100644 index 000000000000..714f0c256118 --- /dev/null +++ b/drivers/gpu/drm/i915/gt/uc/intel_gsc_binary_headers.h @@ -0,0 +1,74 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright © 2023 Intel Corporation + */ + +#ifndef _INTEL_GSC_BINARY_HEADERS_H_ +#define _INTEL_GSC_BINARY_HEADERS_H_ + +#include + +/* Code partition directory (CPD) structures */ +struct intel_gsc_cpd_header_v2 { + u32 header_marker; +#define INTEL_GSC_CPD_HEADER_MARKER 0x44504324 + + u32 num_of_entries; + u8 header_version; + u8 entry_version; + u8 header_length; /* in bytes */ + u8 flags; + u32 partition_name; + u32 crc32; +} __packed; + +struct intel_gsc_cpd_entry { + u8 name[12]; + + /* + * Bits 0-24: offset from the beginning of the code partition + * Bit 25: huffman compressed + * Bits 26-31: reserved + */ + u32 offset; +#define INTEL_GSC_CPD_ENTRY_OFFSET_MASK GENMASK(24, 0) +#define INTEL_GSC_CPD_ENTRY_HUFFMAN_COMP BIT(25) + + /* + * Module/Item length, in bytes. For Huffman-compressed modules, this + * refers to the uncompressed size. For software-compressed modules, + * this refers to the compressed size. 
+ */ + u32 length; + + u8 reserved[4]; +} __packed; + +struct intel_gsc_version { + u16 major; + u16 minor; + u16 hotfix; + u16 build; +} __packed; + +struct intel_gsc_manifest_header { + u32 header_type; /* 0x4 for manifest type */ + u32 header_length; /* in dwords */ + u32 header_version; + u32 flags; + u32 vendor; + u32 date; + u32 size; /* In dwords, size of entire manifest (header + extensions) */ + u32 header_id; + u32 internal_data; + struct intel_gsc_version fw_version; + u32 security_version; + struct intel_gsc_version meu_kit_version; + u32 meu_manifest_version; + u8 general_data[4]; + u8 reserved3[56]; + u32 modulus_size; /* in dwords */ + u32 exponent_size; /* in dwords */ +} __packed; + +#endif diff --git a/drivers/gpu/drm/i915/gt/uc/intel_huc.c b/drivers/gpu/drm/i915/gt/uc/intel_huc.c index 268e036f8f28..6d795438b3e4 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_huc.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_huc.c @@ -6,23 +6,14 @@ #include #include "gt/intel_gt.h" -#include "gt/intel_gt_print.h" #include "intel_guc_reg.h" #include "intel_huc.h" +#include "intel_huc_print.h" #include "i915_drv.h" #include #include -#define huc_printk(_huc, _level, _fmt, ...) \ - gt_##_level(huc_to_gt(_huc), "HuC: " _fmt, ##__VA_ARGS__) -#define huc_err(_huc, _fmt, ...) huc_printk((_huc), err, _fmt, ##__VA_ARGS__) -#define huc_warn(_huc, _fmt, ...) huc_printk((_huc), warn, _fmt, ##__VA_ARGS__) -#define huc_notice(_huc, _fmt, ...) huc_printk((_huc), notice, _fmt, ##__VA_ARGS__) -#define huc_info(_huc, _fmt, ...) huc_printk((_huc), info, _fmt, ##__VA_ARGS__) -#define huc_dbg(_huc, _fmt, ...) huc_printk((_huc), dbg, _fmt, ##__VA_ARGS__) -#define huc_probe_error(_huc, _fmt, ...) huc_printk((_huc), probe_error, _fmt, ##__VA_ARGS__) - /** * DOC: HuC * diff --git a/drivers/gpu/drm/i915/gt/uc/intel_huc_fw.c b/drivers/gpu/drm/i915/gt/uc/intel_huc_fw.c index 534b0aa43316..3a9d81899a78 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_huc_fw.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_huc_fw.c @@ -5,11 +5,147 @@ #include "gt/intel_gsc.h" #include "gt/intel_gt.h" +#include "intel_gsc_binary_headers.h" #include "intel_huc.h" #include "intel_huc_fw.h" +#include "intel_huc_print.h" #include "i915_drv.h" #include "pxp/intel_pxp_huc.h" +static void get_version_from_gsc_manifest(struct intel_uc_fw_ver *ver, const void *data) +{ + const struct intel_gsc_manifest_header *manifest = data; + + ver->major = manifest->fw_version.major; + ver->minor = manifest->fw_version.minor; + ver->patch = manifest->fw_version.hotfix; +} + +static bool css_valid(const void *data, size_t size) +{ + const struct uc_css_header *css = data; + + if (unlikely(size < sizeof(struct uc_css_header))) + return false; + + if (css->module_type != 0x6) + return false; + + if (css->module_vendor != PCI_VENDOR_ID_INTEL) + return false; + + return true; +} + +static inline u32 entry_offset(const struct intel_gsc_cpd_entry *entry) +{ + return entry->offset & INTEL_GSC_CPD_ENTRY_OFFSET_MASK; +} + +int intel_huc_fw_get_binary_info(struct intel_uc_fw *huc_fw, const void *data, size_t size) +{ + struct intel_huc *huc = container_of(huc_fw, struct intel_huc, fw); + const struct intel_gsc_cpd_header_v2 *header = data; + const struct intel_gsc_cpd_entry *entry; + size_t min_size = sizeof(*header); + int i; + + if (!huc_fw->loaded_via_gsc) { + huc_err(huc, "Invalid FW type for GSC header parsing!\n"); + return -EINVAL; + } + + if (size < sizeof(*header)) { + huc_err(huc, "FW too small! 
%zu < %zu\n", size, min_size); + return -ENODATA; + } + + /* + * The GSC-enabled HuC binary starts with a directory header, followed + * by a series of entries. Each entry is identified by a name and + * points to a specific section of the binary containing the relevant + * data. The entries we're interested in are: + * - "HUCP.man": points to the GSC manifest header for the HuC, which + * contains the version info. + * - "huc_fw": points to the legacy-style binary that can be used for + * load via the DMA. This entry only contains a valid CSS + * on binaries for platforms that support 2-step HuC load + * via dma and auth via GSC (like MTL). + * + * -------------------------------------------------- + * [ intel_gsc_cpd_header_v2 ] + * -------------------------------------------------- + * [ intel_gsc_cpd_entry[] ] + * [ entry1 ] + * [ ... ] + * [ entryX ] + * [ "HUCP.man" ] + * [ ... ] + * [ offset >----------------------------]------o + * [ ... ] | + * [ entryY ] | + * [ "huc_fw" ] | + * [ ... ] | + * [ offset >----------------------------]----------o + * -------------------------------------------------- | | + * | | + * -------------------------------------------------- | | + * [ intel_gsc_manifest_header ]<-----o | + * [ ... ] | + * [ intel_gsc_version fw_version ] | + * [ ... ] | + * -------------------------------------------------- | + * | + * -------------------------------------------------- | + * [ data[] ]<---------o + * [ ... ] + * [ ... ] + * -------------------------------------------------- + */ + + if (header->header_marker != INTEL_GSC_CPD_HEADER_MARKER) { + huc_err(huc, "invalid marker for CPD header: 0x%08x!\n", + header->header_marker); + return -EINVAL; + } + + /* we only have binaries with header v2 and entry v1 for now */ + if (header->header_version != 2 || header->entry_version != 1) { + huc_err(huc, "invalid CPD header/entry version %u:%u!\n", + header->header_version, header->entry_version); + return -EINVAL; + } + + if (header->header_length < sizeof(struct intel_gsc_cpd_header_v2)) { + huc_err(huc, "invalid CPD header length %u!\n", + header->header_length); + return -EINVAL; + } + + min_size = header->header_length + sizeof(*entry) * header->num_of_entries; + if (size < min_size) { + huc_err(huc, "FW too small! 
%zu < %zu\n", size, min_size); + return -ENODATA; + } + + entry = data + header->header_length; + + for (i = 0; i < header->num_of_entries; i++, entry++) { + if (strcmp(entry->name, "HUCP.man") == 0) + get_version_from_gsc_manifest(&huc_fw->file_selected.ver, + data + entry_offset(entry)); + + if (strcmp(entry->name, "huc_fw") == 0) { + u32 offset = entry_offset(entry); + + if (offset < size && css_valid(data + offset, size - offset)) + huc_fw->dma_start_offset = offset; + } + } + + return 0; +} + int intel_huc_fw_load_and_auth_via_gsc(struct intel_huc *huc) { int ret; diff --git a/drivers/gpu/drm/i915/gt/uc/intel_huc_fw.h b/drivers/gpu/drm/i915/gt/uc/intel_huc_fw.h index db42e238b45f..0999ffe6f962 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_huc_fw.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_huc_fw.h @@ -7,8 +7,11 @@ #define _INTEL_HUC_FW_H_ struct intel_huc; +struct intel_uc_fw; + +#include int intel_huc_fw_load_and_auth_via_gsc(struct intel_huc *huc); int intel_huc_fw_upload(struct intel_huc *huc); - +int intel_huc_fw_get_binary_info(struct intel_uc_fw *huc_fw, const void *data, size_t size); #endif diff --git a/drivers/gpu/drm/i915/gt/uc/intel_huc_print.h b/drivers/gpu/drm/i915/gt/uc/intel_huc_print.h new file mode 100644 index 000000000000..915d310ee1df --- /dev/null +++ b/drivers/gpu/drm/i915/gt/uc/intel_huc_print.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright © 2023 Intel Corporation + */ + +#ifndef __INTEL_HUC_PRINT__ +#define __INTEL_HUC_PRINT__ + +#include "gt/intel_gt.h" +#include "gt/intel_gt_print.h" + +#define huc_printk(_huc, _level, _fmt, ...) \ + gt_##_level(huc_to_gt(_huc), "HuC: " _fmt, ##__VA_ARGS__) +#define huc_err(_huc, _fmt, ...) huc_printk((_huc), err, _fmt, ##__VA_ARGS__) +#define huc_warn(_huc, _fmt, ...) huc_printk((_huc), warn, _fmt, ##__VA_ARGS__) +#define huc_notice(_huc, _fmt, ...) huc_printk((_huc), notice, _fmt, ##__VA_ARGS__) +#define huc_info(_huc, _fmt, ...) huc_printk((_huc), info, _fmt, ##__VA_ARGS__) +#define huc_dbg(_huc, _fmt, ...) huc_printk((_huc), dbg, _fmt, ##__VA_ARGS__) +#define huc_probe_error(_huc, _fmt, ...) 
huc_printk((_huc), probe_error, _fmt, ##__VA_ARGS__) + +#endif /* __INTEL_HUC_PRINT__ */ diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c index 31776c279f32..ec0b3d214af1 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c @@ -548,33 +548,6 @@ static void __force_fw_fetch_failures(struct intel_uc_fw *uc_fw, int e) } } -static int check_gsc_manifest(struct intel_gt *gt, - const struct firmware *fw, - struct intel_uc_fw *uc_fw) -{ - u32 *dw = (u32 *)fw->data; - u32 version_hi, version_lo; - size_t min_size; - - /* Check the size of the blob before examining buffer contents */ - min_size = sizeof(u32) * (HUC_GSC_VERSION_LO_DW + 1); - if (unlikely(fw->size < min_size)) { - gt_warn(gt, "%s firmware %s: invalid size: %zu < %zu\n", - intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path, - fw->size, min_size); - return -ENODATA; - } - - version_hi = dw[HUC_GSC_VERSION_HI_DW]; - version_lo = dw[HUC_GSC_VERSION_LO_DW]; - - uc_fw->file_selected.ver.major = FIELD_GET(HUC_GSC_MAJOR_VER_HI_MASK, version_hi); - uc_fw->file_selected.ver.minor = FIELD_GET(HUC_GSC_MINOR_VER_HI_MASK, version_hi); - uc_fw->file_selected.ver.patch = FIELD_GET(HUC_GSC_PATCH_VER_LO_MASK, version_lo); - - return 0; -} - static void uc_unpack_css_version(struct intel_uc_fw_ver *ver, u32 css_value) { /* Get version numbers from the CSS header */ @@ -631,22 +604,22 @@ static void guc_read_css_info(struct intel_uc_fw *uc_fw, struct uc_css_header *c uc_fw->private_data_size = css->private_data_size; } -static int check_ccs_header(struct intel_gt *gt, - const struct firmware *fw, - struct intel_uc_fw *uc_fw) +static int __check_ccs_header(struct intel_gt *gt, + const void *fw_data, size_t fw_size, + struct intel_uc_fw *uc_fw) { struct uc_css_header *css; size_t size; /* Check the size of the blob before examining buffer contents */ - if (unlikely(fw->size < sizeof(struct uc_css_header))) { + if (unlikely(fw_size < sizeof(struct uc_css_header))) { gt_warn(gt, "%s firmware %s: invalid size: %zu < %zu\n", intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path, - fw->size, sizeof(struct uc_css_header)); + fw_size, sizeof(struct uc_css_header)); return -ENODATA; } - css = (struct uc_css_header *)fw->data; + css = (struct uc_css_header *)fw_data; /* Check integrity of size values inside CSS header */ size = (css->header_size_dw - css->key_size_dw - css->modulus_size_dw - @@ -654,7 +627,7 @@ static int check_ccs_header(struct intel_gt *gt, if (unlikely(size != sizeof(struct uc_css_header))) { gt_warn(gt, "%s firmware %s: unexpected header size: %zu != %zu\n", intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path, - fw->size, sizeof(struct uc_css_header)); + fw_size, sizeof(struct uc_css_header)); return -EPROTO; } @@ -666,10 +639,10 @@ static int check_ccs_header(struct intel_gt *gt, /* At least, it should have header, uCode and RSA. Size of all three. 
*/ size = sizeof(struct uc_css_header) + uc_fw->ucode_size + uc_fw->rsa_size; - if (unlikely(fw->size < size)) { + if (unlikely(fw_size < size)) { gt_warn(gt, "%s firmware %s: invalid size: %zu < %zu\n", intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path, - fw->size, size); + fw_size, size); return -ENOEXEC; } @@ -690,6 +663,33 @@ static int check_ccs_header(struct intel_gt *gt, return 0; } +static int check_gsc_manifest(struct intel_gt *gt, + const struct firmware *fw, + struct intel_uc_fw *uc_fw) +{ + if (uc_fw->type != INTEL_UC_FW_TYPE_HUC) { + gt_err(gt, "trying to GSC-parse a non-HuC binary"); + return -EINVAL; + } + + intel_huc_fw_get_binary_info(uc_fw, fw->data, fw->size); + + if (uc_fw->dma_start_offset) { + u32 delta = uc_fw->dma_start_offset; + + __check_ccs_header(gt, fw->data + delta, fw->size - delta, uc_fw); + } + + return 0; +} + +static int check_ccs_header(struct intel_gt *gt, + const struct firmware *fw, + struct intel_uc_fw *uc_fw) +{ + return __check_ccs_header(gt, fw->data, fw->size, uc_fw); +} + static bool is_ver_8bit(struct intel_uc_fw_ver *ver) { return ver->major < 0xFF && ver->minor < 0xFF && ver->patch < 0xFF; diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.h b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.h index 2be9470eb712..b3daba9526eb 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.h @@ -118,6 +118,8 @@ struct intel_uc_fw { u32 ucode_size; u32 private_data_size; + u32 dma_start_offset; + bool loaded_via_gsc; }; diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw_abi.h b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw_abi.h index 646fa8aa6cf1..7fe405126249 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw_abi.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw_abi.h @@ -84,10 +84,4 @@ struct uc_css_header { } __packed; static_assert(sizeof(struct uc_css_header) == 128); -#define HUC_GSC_VERSION_HI_DW 44 -#define HUC_GSC_MAJOR_VER_HI_MASK (0xFF << 0) -#define HUC_GSC_MINOR_VER_HI_MASK (0xFF << 16) -#define HUC_GSC_VERSION_LO_DW 45 -#define HUC_GSC_PATCH_VER_LO_MASK (0xFF << 0) - #endif /* _INTEL_UC_FW_ABI_H */ From patchwork Wed May 31 23:54:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Daniele Ceraolo Spurio X-Patchwork-Id: 13262863 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id C0A5FC7EE2D for ; Wed, 31 May 2023 23:54:47 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 45C9210E1ED; Wed, 31 May 2023 23:54:32 +0000 (UTC) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by gabe.freedesktop.org (Postfix) with ESMTPS id 54BD110E0DB; Wed, 31 May 2023 23:54:27 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685577267; x=1717113267; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=CKuxOGZeoy9ZNIohsbp4ppff45FQJIhNbrTc82ki1fM=; b=l1QqtpG7DT9RkOYB+7BKDPiizcvNe64ss8b+bkd5d+9W4Tk3B3n1UUmj hy44HYrOAqA5X9jv1L8mfghxfwttST142JQU7HWMlITVNloc/QkgAgAe0 CKQt/966eSV5Fqvf1zCjP/AAFytEqi8Ly+bZz0ewbWxHljVhynNIMhZ1j 
afXkTejJhewwBsi/ydyFB2kfeVIctN39LeSnhQ9ha59p/bbpz97d0YTQa w/zC7RXvKjtC8rJGEbA1Wy/xYQT990o/xQBwsTiyt5LGKf70JsZb1aByG RKpwwWdFNT8T1CexHGWAD0/a5SYRkLfRCt4mH0P9tuqlFs+gWuLrMhIBY w==; X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="335747033" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="335747033" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 May 2023 16:54:27 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="707109938" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="707109938" Received: from valcore-skull-1.fm.intel.com ([10.1.27.19]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 May 2023 16:54:26 -0700 From: Daniele Ceraolo Spurio To: intel-gfx@lists.freedesktop.org Subject: [PATCH v5 3/7] drm/i915/huc: Load GSC-enabled HuC via DMA xfer if the fuse says so Date: Wed, 31 May 2023 16:54:11 -0700 Message-Id: <20230531235415.1467475-4-daniele.ceraolospurio@intel.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230531235415.1467475-1-daniele.ceraolospurio@intel.com> References: <20230531235415.1467475-1-daniele.ceraolospurio@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: John Harrison , Daniele Ceraolo Spurio , Alan Previn , dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" In the previous patch we extracted the offset of the legacy-style HuC binary located within the GSC-enabled blob, so now we can use that to load the HuC via DMA if the fuse is set that way. Note that we now need to differentiate between "GSC-enabled binary" and "loaded by GSC", so the former case has been renamed to "has GSC headers" for clarity, while the latter is now based on the fuse instead of the binary format. This way, all the legacy load paths are automatically taken (including the auth by GuC) without having to implement further code changes. v2: s/is_meu_binary/has_gsc_headers/, clearer logs (John) v3: split check for GSC access, better comments (John) Signed-off-by: Daniele Ceraolo Spurio Cc: Alan Previn Cc: John Harrison Reviewed-by: John Harrison --- drivers/gpu/drm/i915/gt/uc/intel_huc.c | 49 +++++++++++++++++------ drivers/gpu/drm/i915/gt/uc/intel_huc.h | 4 +- drivers/gpu/drm/i915/gt/uc/intel_huc_fw.c | 2 +- drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c | 12 +++--- drivers/gpu/drm/i915/gt/uc/intel_uc_fw.h | 2 +- 5 files changed, 47 insertions(+), 22 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/uc/intel_huc.c b/drivers/gpu/drm/i915/gt/uc/intel_huc.c index 6d795438b3e4..27c5e41fa84c 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_huc.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_huc.c @@ -298,31 +298,54 @@ void intel_huc_init_early(struct intel_huc *huc) static int check_huc_loading_mode(struct intel_huc *huc) { struct intel_gt *gt = huc_to_gt(huc); - bool fw_needs_gsc = intel_huc_is_loaded_by_gsc(huc); - bool hw_uses_gsc = false; + bool gsc_enabled = huc->fw.has_gsc_headers; /* * The fuse for HuC load via GSC is only valid on platforms that have * GuC deprivilege. 
*/ if (HAS_GUC_DEPRIVILEGE(gt->i915)) - hw_uses_gsc = intel_uncore_read(gt->uncore, GUC_SHIM_CONTROL2) & - GSC_LOADS_HUC; + huc->loaded_via_gsc = intel_uncore_read(gt->uncore, GUC_SHIM_CONTROL2) & + GSC_LOADS_HUC; - if (fw_needs_gsc != hw_uses_gsc) { - huc_err(huc, "mismatch between FW (%s) and HW (%s) load modes\n", - HUC_LOAD_MODE_STRING(fw_needs_gsc), HUC_LOAD_MODE_STRING(hw_uses_gsc)); + if (huc->loaded_via_gsc && !gsc_enabled) { + huc_err(huc, "HW requires a GSC-enabled blob, but we found a legacy one\n"); return -ENOEXEC; } - /* make sure we can access the GSC via the mei driver if we need it */ - if (!(IS_ENABLED(CONFIG_INTEL_MEI_PXP) && IS_ENABLED(CONFIG_INTEL_MEI_GSC)) && - fw_needs_gsc) { - huc_info(huc, "can't load due to missing MEI modules\n"); - return -EIO; + /* + * On newer platforms we have GSC-enabled binaries but we load the HuC + * via DMA. To do so we need to find the location of the legacy-style + * binary inside the GSC-enabled one, which we do at fetch time. Make + * sure that we were able to do so if the fuse says we need to load via + * DMA and the binary is GSC-enabled. + */ + if (!huc->loaded_via_gsc && gsc_enabled && !huc->fw.dma_start_offset) { + huc_err(huc, "HW in DMA mode, but we have an incompatible GSC-enabled blob\n"); + return -ENOEXEC; + } + + /* + * If the HuC is loaded via GSC, we need to be able to access the GSC. + * On DG2 this is done via the mei components, while on newer platforms + * it is done via the GSCCS, + */ + if (huc->loaded_via_gsc) { + if (IS_DG2(gt->i915)) { + if (!IS_ENABLED(CONFIG_INTEL_MEI_PXP) || + !IS_ENABLED(CONFIG_INTEL_MEI_GSC)) { + huc_info(huc, "can't load due to missing mei modules\n"); + return -EIO; + } + } else { + if (!HAS_ENGINE(gt, GSC0)){ + huc_info(huc, "can't load due to missing GSCCS\n"); + return -EIO; + } + } } - huc_dbg(huc, "loaded by GSC = %s\n", str_yes_no(fw_needs_gsc)); + huc_dbg(huc, "loaded by GSC = %s\n", str_yes_no(huc->loaded_via_gsc)); return 0; } diff --git a/drivers/gpu/drm/i915/gt/uc/intel_huc.h b/drivers/gpu/drm/i915/gt/uc/intel_huc.h index 0789184d81a2..112f0dce4702 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_huc.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_huc.h @@ -39,6 +39,8 @@ struct intel_huc { struct notifier_block nb; enum intel_huc_delayed_load_status status; } delayed_load; + + bool loaded_via_gsc; }; int intel_huc_sanitize(struct intel_huc *huc); @@ -73,7 +75,7 @@ static inline bool intel_huc_is_used(struct intel_huc *huc) static inline bool intel_huc_is_loaded_by_gsc(const struct intel_huc *huc) { - return huc->fw.loaded_via_gsc; + return huc->loaded_via_gsc; } static inline bool intel_huc_wait_required(struct intel_huc *huc) diff --git a/drivers/gpu/drm/i915/gt/uc/intel_huc_fw.c b/drivers/gpu/drm/i915/gt/uc/intel_huc_fw.c index 3a9d81899a78..89a887d33b77 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_huc_fw.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_huc_fw.c @@ -50,7 +50,7 @@ int intel_huc_fw_get_binary_info(struct intel_uc_fw *huc_fw, const void *data, s size_t min_size = sizeof(*header); int i; - if (!huc_fw->loaded_via_gsc) { + if (!huc_fw->has_gsc_headers) { huc_err(huc, "Invalid FW type for GSC header parsing!\n"); return -EINVAL; } diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c index ec0b3d214af1..a1c8a982479f 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c @@ -186,7 +186,7 @@ struct __packed uc_fw_blob { u8 major; u8 minor; u8 patch; - bool loaded_via_gsc; + bool has_gsc_headers; 
}; #define UC_FW_BLOB_BASE(major_, minor_, patch_, path_) \ @@ -197,7 +197,7 @@ struct __packed uc_fw_blob { #define UC_FW_BLOB_NEW(major_, minor_, patch_, gsc_, path_) \ { UC_FW_BLOB_BASE(major_, minor_, patch_, path_) \ - .legacy = false, .loaded_via_gsc = gsc_ } + .legacy = false, .has_gsc_headers = gsc_ } #define UC_FW_BLOB_OLD(major_, minor_, patch_, path_) \ { UC_FW_BLOB_BASE(major_, minor_, patch_, path_) \ @@ -310,7 +310,7 @@ __uc_fw_auto_select(struct drm_i915_private *i915, struct intel_uc_fw *uc_fw) uc_fw->file_wanted.ver.major = blob->major; uc_fw->file_wanted.ver.minor = blob->minor; uc_fw->file_wanted.ver.patch = blob->patch; - uc_fw->loaded_via_gsc = blob->loaded_via_gsc; + uc_fw->has_gsc_headers = blob->has_gsc_headers; found = true; break; } @@ -737,7 +737,7 @@ static int check_fw_header(struct intel_gt *gt, if (uc_fw->type == INTEL_UC_FW_TYPE_GSC) return 0; - if (uc_fw->loaded_via_gsc) + if (uc_fw->has_gsc_headers) err = check_gsc_manifest(gt, fw, uc_fw); else err = check_ccs_header(gt, fw, uc_fw); @@ -999,7 +999,7 @@ static int uc_fw_xfer(struct intel_uc_fw *uc_fw, u32 dst_offset, u32 dma_flags) intel_uncore_forcewake_get(uncore, FORCEWAKE_ALL); /* Set the source address for the uCode */ - offset = uc_fw->vma_res.start; + offset = uc_fw->vma_res.start + uc_fw->dma_start_offset; GEM_BUG_ON(upper_32_bits(offset) & 0xFFFF0000); intel_uncore_write_fw(uncore, DMA_ADDR_0_LOW, lower_32_bits(offset)); intel_uncore_write_fw(uncore, DMA_ADDR_0_HIGH, upper_32_bits(offset)); @@ -1238,7 +1238,7 @@ size_t intel_uc_fw_copy_rsa(struct intel_uc_fw *uc_fw, void *dst, u32 max_len) { struct intel_memory_region *mr = uc_fw->obj->mm.region; u32 size = min_t(u32, uc_fw->rsa_size, max_len); - u32 offset = sizeof(struct uc_css_header) + uc_fw->ucode_size; + u32 offset = uc_fw->dma_start_offset + sizeof(struct uc_css_header) + uc_fw->ucode_size; struct sgt_iter iter; size_t count = 0; int idx; diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.h b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.h index b3daba9526eb..054f02811971 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.h @@ -120,7 +120,7 @@ struct intel_uc_fw { u32 dma_start_offset; - bool loaded_via_gsc; + bool has_gsc_headers; }; /* From patchwork Wed May 31 23:54:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Daniele Ceraolo Spurio X-Patchwork-Id: 13262861 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 8CF1EC7EE2D for ; Wed, 31 May 2023 23:54:42 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 1114F10E129; Wed, 31 May 2023 23:54:32 +0000 (UTC) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by gabe.freedesktop.org (Postfix) with ESMTPS id 59D5410E0F9; Wed, 31 May 2023 23:54:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685577268; x=1717113268; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=+ztP8igmaMYPj9wa+3/6YesKPU88eGl8vIbj3OI22MA=; b=E4OpL/2biqhDvfLlvlD2uyNn41I72oz7/Llz92fkAGUgVvwEAvEqnEr9 
EJI0sktlJTWDfMob9+WrfmbMHiU7pkHT5eAYzAE3I9qJnk7AhZjgKVmCj dr9Mqo1x9iVsIrw6D2IpMm8WjCaiFIiD6MpbPWIOUBzdpIHtRn6Tpyn4n RfWUB9BGx1Hsi5MiuR/jONmYrVObJWRGz8naMrwREUMu/e42Ft+0agx3K ElA1m4E+K4UNu0Nrfrp0MrlK9a1JKStTFgqbHKvqQMvaHYsL8lUu6KQnd wf8B5OKICxlubGgC88Eg806b1bum63TU6F5LVAcmmYNd6tREW4RzPjL9X Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="335747045" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="335747045" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 May 2023 16:54:28 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="707109950" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="707109950" Received: from valcore-skull-1.fm.intel.com ([10.1.27.19]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 May 2023 16:54:27 -0700 From: Daniele Ceraolo Spurio To: intel-gfx@lists.freedesktop.org Subject: [PATCH v5 4/7] drm/i915/huc: differentiate the 2 steps of the MTL HuC auth flow Date: Wed, 31 May 2023 16:54:12 -0700 Message-Id: <20230531235415.1467475-5-daniele.ceraolospurio@intel.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230531235415.1467475-1-daniele.ceraolospurio@intel.com> References: <20230531235415.1467475-1-daniele.ceraolospurio@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: John Harrison , Daniele Ceraolo Spurio , Alan Previn , dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Before we add the second step of the MTL HuC auth (via GSC), we need to have the ability to differentiate between them. To do so, the huc authentication check is duplicated for GuC and GSC auth, with GSC-enabled binaries being considered fully authenticated only after the GSC auth step. To report the difference between the 2 auth steps, a new case is added to the HuC getparam. This way, the clear media driver can start submitting before full auth, as partial auth is enough for those workloads. v2: fix authentication status check for DG2 v3: add a better comment at the top of the HuC file to explain the different approaches to load and auth (John) v4: update call to intel_huc_is_authenticated in the pxp code to check for GSC authentication v5: drop references to meu and esclamation mark in huc_auth print (John) Signed-off-by: Daniele Ceraolo Spurio Cc: Alan Previn Cc: John Harrison Reviewed-by: Alan Previn #v2 Reviewed-by: John Harrison --- drivers/gpu/drm/i915/gt/uc/intel_huc.c | 111 ++++++++++++++++----- drivers/gpu/drm/i915/gt/uc/intel_huc.h | 16 ++- drivers/gpu/drm/i915/gt/uc/intel_huc_fw.c | 4 +- drivers/gpu/drm/i915/i915_reg.h | 3 + drivers/gpu/drm/i915/pxp/intel_pxp_gsccs.c | 2 +- include/uapi/drm/i915_drm.h | 3 +- 6 files changed, 104 insertions(+), 35 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/uc/intel_huc.c b/drivers/gpu/drm/i915/gt/uc/intel_huc.c index 27c5e41fa84c..73efdb027082 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_huc.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_huc.c @@ -10,6 +10,7 @@ #include "intel_huc.h" #include "intel_huc_print.h" #include "i915_drv.h" +#include "i915_reg.h" #include #include @@ -22,15 +23,23 @@ * capabilities by adding HuC specific commands to batch buffers. 
* * The kernel driver is only responsible for loading the HuC firmware and - * triggering its security authentication, which is performed by the GuC on - * older platforms and by the GSC on newer ones. For the GuC to correctly - * perform the authentication, the HuC binary must be loaded before the GuC one. + * triggering its security authentication. This is done differently depending + * on the platform: + * - older platforms (from Gen9 to most Gen12s): the load is performed via DMA + * and the authentication via GuC + * - DG2: load and authentication are both performed via GSC. + * - MTL and newer platforms: the load is performed via DMA (same as with + * not-DG2 older platforms), while the authentication is done in 2-steps, + * a first auth for clear-media workloads via GuC and a second one for all + * workloads via GSC. + * On platforms where the GuC does the authentication, to correctly do so the + * HuC binary must be loaded before the GuC one. * Loading the HuC is optional; however, not using the HuC might negatively * impact power usage and/or performance of media workloads, depending on the * use-cases. * HuC must be reloaded on events that cause the WOPCM to lose its contents - * (S3/S4, FLR); GuC-authenticated HuC must also be reloaded on GuC/GT reset, - * while GSC-managed HuC will survive that. + * (S3/S4, FLR); on older platforms the HuC must also be reloaded on GuC/GT + * reset, while on newer ones it will survive that. * * See https://github.com/intel/media-driver for the latest details on HuC * functionality. @@ -106,7 +115,7 @@ static enum hrtimer_restart huc_delayed_load_timer_callback(struct hrtimer *hrti { struct intel_huc *huc = container_of(hrtimer, struct intel_huc, delayed_load.timer); - if (!intel_huc_is_authenticated(huc)) { + if (!intel_huc_is_authenticated(huc, INTEL_HUC_AUTH_BY_GSC)) { if (huc->delayed_load.status == INTEL_HUC_WAITING_ON_GSC) huc_notice(huc, "timed out waiting for MEI GSC\n"); else if (huc->delayed_load.status == INTEL_HUC_WAITING_ON_PXP) @@ -124,7 +133,7 @@ static void huc_delayed_load_start(struct intel_huc *huc) { ktime_t delay; - GEM_BUG_ON(intel_huc_is_authenticated(huc)); + GEM_BUG_ON(intel_huc_is_authenticated(huc, INTEL_HUC_AUTH_BY_GSC)); /* * On resume we don't have to wait for MEI-GSC to be re-probed, but we @@ -284,13 +293,23 @@ void intel_huc_init_early(struct intel_huc *huc) } if (GRAPHICS_VER(i915) >= 11) { - huc->status.reg = GEN11_HUC_KERNEL_LOAD_INFO; - huc->status.mask = HUC_LOAD_SUCCESSFUL; - huc->status.value = HUC_LOAD_SUCCESSFUL; + huc->status[INTEL_HUC_AUTH_BY_GUC].reg = GEN11_HUC_KERNEL_LOAD_INFO; + huc->status[INTEL_HUC_AUTH_BY_GUC].mask = HUC_LOAD_SUCCESSFUL; + huc->status[INTEL_HUC_AUTH_BY_GUC].value = HUC_LOAD_SUCCESSFUL; } else { - huc->status.reg = HUC_STATUS2; - huc->status.mask = HUC_FW_VERIFIED; - huc->status.value = HUC_FW_VERIFIED; + huc->status[INTEL_HUC_AUTH_BY_GUC].reg = HUC_STATUS2; + huc->status[INTEL_HUC_AUTH_BY_GUC].mask = HUC_FW_VERIFIED; + huc->status[INTEL_HUC_AUTH_BY_GUC].value = HUC_FW_VERIFIED; + } + + if (IS_DG2(i915)) { + huc->status[INTEL_HUC_AUTH_BY_GSC].reg = GEN11_HUC_KERNEL_LOAD_INFO; + huc->status[INTEL_HUC_AUTH_BY_GSC].mask = HUC_LOAD_SUCCESSFUL; + huc->status[INTEL_HUC_AUTH_BY_GSC].value = HUC_LOAD_SUCCESSFUL; + } else { + huc->status[INTEL_HUC_AUTH_BY_GSC].reg = HECI_FWSTS5(MTL_GSC_HECI1_BASE); + huc->status[INTEL_HUC_AUTH_BY_GSC].mask = HECI_FWSTS5_HUC_AUTH_DONE; + huc->status[INTEL_HUC_AUTH_BY_GSC].value = HECI_FWSTS5_HUC_AUTH_DONE; } } @@ -397,28 +416,38 @@ void 
intel_huc_suspend(struct intel_huc *huc) delayed_huc_load_complete(huc); } -int intel_huc_wait_for_auth_complete(struct intel_huc *huc) +static const char *auth_mode_string(struct intel_huc *huc, + enum intel_huc_authentication_type type) +{ + bool partial = huc->fw.has_gsc_headers && type == INTEL_HUC_AUTH_BY_GUC; + + return partial ? "clear media" : "all workloads"; +} + +int intel_huc_wait_for_auth_complete(struct intel_huc *huc, + enum intel_huc_authentication_type type) { struct intel_gt *gt = huc_to_gt(huc); int ret; ret = __intel_wait_for_register(gt->uncore, - huc->status.reg, - huc->status.mask, - huc->status.value, + huc->status[type].reg, + huc->status[type].mask, + huc->status[type].value, 2, 50, NULL); /* mark the load process as complete even if the wait failed */ delayed_huc_load_complete(huc); if (ret) { - huc_err(huc, "firmware not verified %pe\n", ERR_PTR(ret)); + huc_err(huc, "firmware not verified for %s: %pe\n", + auth_mode_string(huc, type), ERR_PTR(ret)); intel_uc_fw_change_status(&huc->fw, INTEL_UC_FIRMWARE_LOAD_FAIL); return ret; } intel_uc_fw_change_status(&huc->fw, INTEL_UC_FIRMWARE_RUNNING); - huc_info(huc, "authenticated!\n"); + huc_info(huc, "authenticated for %s\n", auth_mode_string(huc, type)); return 0; } @@ -458,7 +487,7 @@ int intel_huc_auth(struct intel_huc *huc) } /* Check authentication status, it should be done by now */ - ret = intel_huc_wait_for_auth_complete(huc); + ret = intel_huc_wait_for_auth_complete(huc, INTEL_HUC_AUTH_BY_GUC); if (ret) goto fail; @@ -469,16 +498,29 @@ int intel_huc_auth(struct intel_huc *huc) return ret; } -bool intel_huc_is_authenticated(struct intel_huc *huc) +bool intel_huc_is_authenticated(struct intel_huc *huc, + enum intel_huc_authentication_type type) { struct intel_gt *gt = huc_to_gt(huc); intel_wakeref_t wakeref; u32 status = 0; with_intel_runtime_pm(gt->uncore->rpm, wakeref) - status = intel_uncore_read(gt->uncore, huc->status.reg); + status = intel_uncore_read(gt->uncore, huc->status[type].reg); - return (status & huc->status.mask) == huc->status.value; + return (status & huc->status[type].mask) == huc->status[type].value; +} + +static bool huc_is_fully_authenticated(struct intel_huc *huc) +{ + struct intel_uc_fw *huc_fw = &huc->fw; + + if (!huc_fw->has_gsc_headers) + return intel_huc_is_authenticated(huc, INTEL_HUC_AUTH_BY_GUC); + else if (intel_huc_is_loaded_by_gsc(huc) || HAS_ENGINE(huc_to_gt(huc), GSC0)) + return intel_huc_is_authenticated(huc, INTEL_HUC_AUTH_BY_GSC); + else + return false; } /** @@ -493,7 +535,9 @@ bool intel_huc_is_authenticated(struct intel_huc *huc) */ int intel_huc_check_status(struct intel_huc *huc) { - switch (__intel_uc_fw_status(&huc->fw)) { + struct intel_uc_fw *huc_fw = &huc->fw; + + switch (__intel_uc_fw_status(huc_fw)) { case INTEL_UC_FIRMWARE_NOT_SUPPORTED: return -ENODEV; case INTEL_UC_FIRMWARE_DISABLED: @@ -510,7 +554,17 @@ int intel_huc_check_status(struct intel_huc *huc) break; } - return intel_huc_is_authenticated(huc); + /* + * GSC-enabled binaries loaded via DMA are first partially + * authenticated by GuC and then fully authenticated by GSC + */ + if (huc_is_fully_authenticated(huc)) + return 1; /* full auth */ + else if (huc_fw->has_gsc_headers && !intel_huc_is_loaded_by_gsc(huc) && + intel_huc_is_authenticated(huc, INTEL_HUC_AUTH_BY_GUC)) + return 2; /* clear media only */ + else + return 0; } static bool huc_has_delayed_load(struct intel_huc *huc) @@ -524,7 +578,10 @@ void intel_huc_update_auth_status(struct intel_huc *huc) if (!intel_uc_fw_is_loadable(&huc->fw)) 
return; - if (intel_huc_is_authenticated(huc)) + if (!huc->fw.has_gsc_headers) + return; + + if (huc_is_fully_authenticated(huc)) intel_uc_fw_change_status(&huc->fw, INTEL_UC_FIRMWARE_RUNNING); else if (huc_has_delayed_load(huc)) @@ -557,5 +614,5 @@ void intel_huc_load_status(struct intel_huc *huc, struct drm_printer *p) with_intel_runtime_pm(gt->uncore->rpm, wakeref) drm_printf(p, "HuC status: 0x%08x\n", - intel_uncore_read(gt->uncore, huc->status.reg)); + intel_uncore_read(gt->uncore, huc->status[INTEL_HUC_AUTH_BY_GUC].reg)); } diff --git a/drivers/gpu/drm/i915/gt/uc/intel_huc.h b/drivers/gpu/drm/i915/gt/uc/intel_huc.h index 112f0dce4702..3f6aa7c37abc 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_huc.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_huc.h @@ -22,6 +22,12 @@ enum intel_huc_delayed_load_status { INTEL_HUC_DELAYED_LOAD_ERROR, }; +enum intel_huc_authentication_type { + INTEL_HUC_AUTH_BY_GUC = 0, + INTEL_HUC_AUTH_BY_GSC, + INTEL_HUC_AUTH_MAX_MODES +}; + struct intel_huc { /* Generic uC firmware management */ struct intel_uc_fw fw; @@ -31,7 +37,7 @@ struct intel_huc { i915_reg_t reg; u32 mask; u32 value; - } status; + } status[INTEL_HUC_AUTH_MAX_MODES]; struct { struct i915_sw_fence fence; @@ -49,10 +55,12 @@ int intel_huc_init(struct intel_huc *huc); void intel_huc_fini(struct intel_huc *huc); void intel_huc_suspend(struct intel_huc *huc); int intel_huc_auth(struct intel_huc *huc); -int intel_huc_wait_for_auth_complete(struct intel_huc *huc); +int intel_huc_wait_for_auth_complete(struct intel_huc *huc, + enum intel_huc_authentication_type type); +bool intel_huc_is_authenticated(struct intel_huc *huc, + enum intel_huc_authentication_type type); int intel_huc_check_status(struct intel_huc *huc); void intel_huc_update_auth_status(struct intel_huc *huc); -bool intel_huc_is_authenticated(struct intel_huc *huc); void intel_huc_register_gsc_notifier(struct intel_huc *huc, const struct bus_type *bus); void intel_huc_unregister_gsc_notifier(struct intel_huc *huc, const struct bus_type *bus); @@ -81,7 +89,7 @@ static inline bool intel_huc_is_loaded_by_gsc(const struct intel_huc *huc) static inline bool intel_huc_wait_required(struct intel_huc *huc) { return intel_huc_is_used(huc) && intel_huc_is_loaded_by_gsc(huc) && - !intel_huc_is_authenticated(huc); + !intel_huc_is_authenticated(huc, INTEL_HUC_AUTH_BY_GSC); } void intel_huc_load_status(struct intel_huc *huc, struct drm_printer *p); diff --git a/drivers/gpu/drm/i915/gt/uc/intel_huc_fw.c b/drivers/gpu/drm/i915/gt/uc/intel_huc_fw.c index 89a887d33b77..ac2ae5f5011e 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_huc_fw.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_huc_fw.c @@ -161,7 +161,7 @@ int intel_huc_fw_load_and_auth_via_gsc(struct intel_huc *huc) * component gets re-bound and this function called again. If so, just * mark the HuC as loaded. 
*/ - if (intel_huc_is_authenticated(huc)) { + if (intel_huc_is_authenticated(huc, INTEL_HUC_AUTH_BY_GSC)) { intel_uc_fw_change_status(&huc->fw, INTEL_UC_FIRMWARE_RUNNING); return 0; } @@ -174,7 +174,7 @@ int intel_huc_fw_load_and_auth_via_gsc(struct intel_huc *huc) intel_uc_fw_change_status(&huc->fw, INTEL_UC_FIRMWARE_TRANSFERRED); - return intel_huc_wait_for_auth_complete(huc); + return intel_huc_wait_for_auth_complete(huc, INTEL_HUC_AUTH_BY_GSC); } /** diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h index 0523418129c5..8ed7c39c2b30 100644 --- a/drivers/gpu/drm/i915/i915_reg.h +++ b/drivers/gpu/drm/i915/i915_reg.h @@ -941,6 +941,9 @@ #define HECI_H_GS1(base) _MMIO((base) + 0xc4c) #define HECI_H_GS1_ER_PREP REG_BIT(0) +#define HECI_FWSTS5(base) _MMIO((base) + 0xc68) +#define HECI_FWSTS5_HUC_AUTH_DONE (1 << 19) + #define HSW_GTT_CACHE_EN _MMIO(0x4024) #define GTT_CACHE_EN_ALL 0xF0007FFF #define GEN7_WR_WATERMARK _MMIO(0x4028) diff --git a/drivers/gpu/drm/i915/pxp/intel_pxp_gsccs.c b/drivers/gpu/drm/i915/pxp/intel_pxp_gsccs.c index 8dc41de3f6f7..016bd8fad89d 100644 --- a/drivers/gpu/drm/i915/pxp/intel_pxp_gsccs.c +++ b/drivers/gpu/drm/i915/pxp/intel_pxp_gsccs.c @@ -196,7 +196,7 @@ bool intel_pxp_gsccs_is_ready_for_sessions(struct intel_pxp *pxp) * gsc-proxy init flow (the last set of dependencies that * are out of order) will suffice. */ - if (intel_huc_is_authenticated(&pxp->ctrl_gt->uc.huc) && + if (intel_huc_is_authenticated(&pxp->ctrl_gt->uc.huc, INTEL_HUC_AUTH_BY_GSC) && intel_gsc_uc_fw_proxy_init_done(&pxp->ctrl_gt->uc.gsc)) return true; diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h index f31dfacde601..a1848e806059 100644 --- a/include/uapi/drm/i915_drm.h +++ b/include/uapi/drm/i915_drm.h @@ -674,7 +674,8 @@ typedef struct drm_i915_irq_wait { * If the IOCTL is successful, the returned parameter will be set to one of the * following values: * * 0 if HuC firmware load is not complete, - * * 1 if HuC firmware is authenticated and running. 
+ * * 1 if HuC firmware is loaded and fully authenticated, + * * 2 if HuC firmware is loaded and authenticated for clear media only */ #define I915_PARAM_HUC_STATUS 42 From patchwork Wed May 31 23:54:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Daniele Ceraolo Spurio X-Patchwork-Id: 13262862 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 86395C7EE23 for ; Wed, 31 May 2023 23:54:45 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 3BB9C10E1EC; Wed, 31 May 2023 23:54:32 +0000 (UTC) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by gabe.freedesktop.org (Postfix) with ESMTPS id 27FE410E1EB; Wed, 31 May 2023 23:54:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685577269; x=1717113269; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=g9/89UgIYcJuIjpPM9mDVG6lMLNhDfkz3B33RgQ/+VQ=; b=kNtcSSmFbg3tCGvpgvuMx0enzDcDn++4Mu2TcRG5RtZbV4fs/kBrYeC4 h0tVdG7HdOFPCI81tKsqUAT546/qcvMkH9am8cxYi29cuN//tEEA+5Jzg 7RB5yE2Tu4tYoVcTpzLS2xCoqfArJOufR+Et18lvbqNNUczU8ERwHD/S6 wturslQBshtJTQdnwtaKDgbGyd/uInpSMVPLic+95Y+Qr1gwsDJHunCW4 BEEupMdlyFhRimzGIrLWVkn7E1BG6rHKdowBlezC3CxpfyDr4KYqhGkGV cCa/RKBE9yqp9YQj+TTZrR/TT647VQG5umbmwXtrHFphrbCmPudQY7rn5 w==; X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="335747054" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="335747054" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 May 2023 16:54:28 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="707109961" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="707109961" Received: from valcore-skull-1.fm.intel.com ([10.1.27.19]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 May 2023 16:54:28 -0700 From: Daniele Ceraolo Spurio To: intel-gfx@lists.freedesktop.org Subject: [PATCH v5 5/7] drm/i915/mtl/huc: auth HuC via GSC Date: Wed, 31 May 2023 16:54:13 -0700 Message-Id: <20230531235415.1467475-6-daniele.ceraolospurio@intel.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230531235415.1467475-1-daniele.ceraolospurio@intel.com> References: <20230531235415.1467475-1-daniele.ceraolospurio@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Daniele Ceraolo Spurio , Alan Previn , dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" The full authentication via the GSC requires an heci packet submission to the GSC FW via the GSC CS. The GSC has new PXP command for this (literally called NEW_HUC_AUTH). The intel_huc_auth function is also updated to handle both authentication types. 
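For illustration only, the dual-mode flow added here boils down to the condensed sketch below. This is not the literal patch code; the helpers referenced (intel_huc_fw_auth_via_gsccs(), intel_huc_wait_for_auth_complete(), the INTEL_HUC_AUTH_BY_* enum) are the ones introduced or extended by the diff further down.

/*
 * Condensed sketch of the dual-mode HuC authentication described above.
 * Illustrative only -- see the real intel_huc_auth() changes in the diff.
 */
static int huc_auth_sketch(struct intel_huc *huc,
			   enum intel_huc_authentication_type type)
{
	struct intel_guc *guc = &huc_to_gt(huc)->uc.guc;
	int ret;

	if (intel_huc_is_authenticated(huc, type))
		return -EEXIST;

	switch (type) {
	case INTEL_HUC_AUTH_BY_GUC:
		/* legacy path: GuC triggers the RSA check ("clear media" level) */
		ret = intel_guc_auth_huc(guc,
					 intel_guc_ggtt_offset(guc, huc->fw.rsa_data));
		break;
	case INTEL_HUC_AUTH_BY_GSC:
		/* MTL path: NEW_HUC_AUTH heci packet submitted via the GSC CS */
		ret = intel_huc_fw_auth_via_gsccs(huc);
		break;
	default:
		return -EINVAL;
	}

	/* both paths then poll the matching HUC_STATUS register */
	return ret ? ret : intel_huc_wait_for_auth_complete(huc, type);
}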
v2: check that the GuC auth for clear media has completed before proceding with the full auth v3: use a define for the object size (Alan) Signed-off-by: Daniele Ceraolo Spurio Cc: Alan Previn Reviewed-by: Alan Previn --- drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.c | 27 +++++- .../i915/gt/uc/intel_gsc_uc_heci_cmd_submit.c | 2 +- drivers/gpu/drm/i915/gt/uc/intel_huc.c | 53 ++++++++--- drivers/gpu/drm/i915/gt/uc/intel_huc.h | 6 +- drivers/gpu/drm/i915/gt/uc/intel_huc_fw.c | 95 +++++++++++++++++++ drivers/gpu/drm/i915/gt/uc/intel_huc_fw.h | 1 + drivers/gpu/drm/i915/gt/uc/intel_uc.c | 2 +- .../drm/i915/pxp/intel_pxp_cmd_interface_43.h | 17 +++- drivers/gpu/drm/i915/pxp/intel_pxp_huc.c | 2 +- 9 files changed, 183 insertions(+), 22 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.c b/drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.c index b26f493f86fa..c659cc01f32f 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.c @@ -29,13 +29,32 @@ static void gsc_work(struct work_struct *work) if (actions & GSC_ACTION_FW_LOAD) { ret = intel_gsc_uc_fw_upload(gsc); - if (ret == -EEXIST) /* skip proxy if not a new load */ - actions &= ~GSC_ACTION_FW_LOAD; - else if (ret) + if (!ret) + /* setup proxy on a new load */ + actions |= GSC_ACTION_SW_PROXY; + else if (ret != -EEXIST) goto out_put; + + /* + * The HuC auth can be done both before or after the proxy init; + * if done after, a proxy request will be issued and must be + * serviced before the authentication can complete. + * Since this worker also handles proxy requests, we can't + * perform an action that requires the proxy from within it and + * then stall waiting for it, because we'd be blocking the + * service path. Therefore, it is easier for us to load HuC + * first and do proxy later. The GSC will ack the HuC auth and + * then send the HuC proxy request as part of the proxy init + * flow. + * Note that we can only do the GSC auth if the GuC auth was + * successful. 
+ */ + if (intel_uc_uses_huc(>->uc) && + intel_huc_is_authenticated(>->uc.huc, INTEL_HUC_AUTH_BY_GUC)) + intel_huc_auth(>->uc.huc, INTEL_HUC_AUTH_BY_GSC); } - if (actions & (GSC_ACTION_FW_LOAD | GSC_ACTION_SW_PROXY)) { + if (actions & GSC_ACTION_SW_PROXY) { if (!intel_gsc_uc_fw_init_done(gsc)) { gt_err(gt, "Proxy request received with GSC not loaded!\n"); goto out_put; diff --git a/drivers/gpu/drm/i915/gt/uc/intel_gsc_uc_heci_cmd_submit.c b/drivers/gpu/drm/i915/gt/uc/intel_gsc_uc_heci_cmd_submit.c index 579c0f5a1438..0ad090304ca0 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_gsc_uc_heci_cmd_submit.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_gsc_uc_heci_cmd_submit.c @@ -99,7 +99,7 @@ void intel_gsc_uc_heci_cmd_emit_mtl_header(struct intel_gsc_mtl_header *header, u64 host_session_id) { host_session_id &= ~HOST_SESSION_MASK; - if (heci_client_id == HECI_MEADDRESS_PXP) + if (host_session_id && heci_client_id == HECI_MEADDRESS_PXP) host_session_id |= HOST_SESSION_PXP_SINGLE; header->validity_marker = GSC_HECI_VALIDITY_MARKER; diff --git a/drivers/gpu/drm/i915/gt/uc/intel_huc.c b/drivers/gpu/drm/i915/gt/uc/intel_huc.c index 73efdb027082..df8f28ebfc94 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_huc.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_huc.c @@ -11,6 +11,7 @@ #include "intel_huc_print.h" #include "i915_drv.h" #include "i915_reg.h" +#include "pxp/intel_pxp_cmd_interface_43.h" #include #include @@ -371,20 +372,36 @@ static int check_huc_loading_mode(struct intel_huc *huc) int intel_huc_init(struct intel_huc *huc) { + struct intel_gt *gt = huc_to_gt(huc); int err; err = check_huc_loading_mode(huc); if (err) goto out; + if (HAS_ENGINE(gt, GSC0)) { + struct i915_vma *vma; + + vma = intel_guc_allocate_vma(>->uc.guc, PXP43_HUC_AUTH_INOUT_SIZE * 2); + if (IS_ERR(vma)) { + huc_info(huc, "Failed to allocate heci pkt\n"); + goto out; + } + + huc->heci_pkt = vma; + } + err = intel_uc_fw_init(&huc->fw); if (err) - goto out; + goto out_pkt; intel_uc_fw_change_status(&huc->fw, INTEL_UC_FIRMWARE_LOADABLE); return 0; +out_pkt: + if (huc->heci_pkt) + i915_vma_unpin_and_release(&huc->heci_pkt, 0); out: intel_uc_fw_change_status(&huc->fw, INTEL_UC_FIRMWARE_INIT_FAIL); huc_info(huc, "initialization failed %pe\n", ERR_PTR(err)); @@ -399,6 +416,9 @@ void intel_huc_fini(struct intel_huc *huc) */ delayed_huc_load_fini(huc); + if (huc->heci_pkt) + i915_vma_unpin_and_release(&huc->heci_pkt, 0); + if (intel_uc_fw_is_loadable(&huc->fw)) intel_uc_fw_fini(&huc->fw); } @@ -454,6 +474,7 @@ int intel_huc_wait_for_auth_complete(struct intel_huc *huc, /** * intel_huc_auth() - Authenticate HuC uCode * @huc: intel_huc structure + * @type: authentication type (via GuC or via GSC) * * Called after HuC and GuC firmware loading during intel_uc_init_hw(). * @@ -461,7 +482,7 @@ int intel_huc_wait_for_auth_complete(struct intel_huc *huc, * passing the offset of the RSA signature to intel_guc_auth_huc(). It then * waits for up to 50ms for firmware verification ACK. 
*/ -int intel_huc_auth(struct intel_huc *huc) +int intel_huc_auth(struct intel_huc *huc, enum intel_huc_authentication_type type) { struct intel_gt *gt = huc_to_gt(huc); struct intel_guc *guc = >->uc.guc; @@ -470,31 +491,41 @@ int intel_huc_auth(struct intel_huc *huc) if (!intel_uc_fw_is_loaded(&huc->fw)) return -ENOEXEC; - /* GSC will do the auth */ + /* GSC will do the auth with the load */ if (intel_huc_is_loaded_by_gsc(huc)) return -ENODEV; + if (intel_huc_is_authenticated(huc, type)) + return -EEXIST; + ret = i915_inject_probe_error(gt->i915, -ENXIO); if (ret) goto fail; - GEM_BUG_ON(intel_uc_fw_is_running(&huc->fw)); - - ret = intel_guc_auth_huc(guc, intel_guc_ggtt_offset(guc, huc->fw.rsa_data)); - if (ret) { - huc_err(huc, "authentication by GuC failed %pe\n", ERR_PTR(ret)); - goto fail; + switch (type) { + case INTEL_HUC_AUTH_BY_GUC: + ret = intel_guc_auth_huc(guc, intel_guc_ggtt_offset(guc, huc->fw.rsa_data)); + break; + case INTEL_HUC_AUTH_BY_GSC: + ret = intel_huc_fw_auth_via_gsccs(huc); + break; + default: + MISSING_CASE(type); + ret = -EINVAL; } + if (ret) + goto fail; /* Check authentication status, it should be done by now */ - ret = intel_huc_wait_for_auth_complete(huc, INTEL_HUC_AUTH_BY_GUC); + ret = intel_huc_wait_for_auth_complete(huc, type); if (ret) goto fail; return 0; fail: - huc_probe_error(huc, "authentication failed %pe\n", ERR_PTR(ret)); + huc_probe_error(huc, "%s authentication failed %pe\n", + auth_mode_string(huc, type), ERR_PTR(ret)); return ret; } diff --git a/drivers/gpu/drm/i915/gt/uc/intel_huc.h b/drivers/gpu/drm/i915/gt/uc/intel_huc.h index 3f6aa7c37abc..ba5cb08e9e7b 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_huc.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_huc.h @@ -15,6 +15,7 @@ #include struct bus_type; +struct i915_vma; enum intel_huc_delayed_load_status { INTEL_HUC_WAITING_ON_GSC = 0, @@ -46,6 +47,9 @@ struct intel_huc { enum intel_huc_delayed_load_status status; } delayed_load; + /* for load via GSCCS */ + struct i915_vma *heci_pkt; + bool loaded_via_gsc; }; @@ -54,7 +58,7 @@ void intel_huc_init_early(struct intel_huc *huc); int intel_huc_init(struct intel_huc *huc); void intel_huc_fini(struct intel_huc *huc); void intel_huc_suspend(struct intel_huc *huc); -int intel_huc_auth(struct intel_huc *huc); +int intel_huc_auth(struct intel_huc *huc, enum intel_huc_authentication_type type); int intel_huc_wait_for_auth_complete(struct intel_huc *huc, enum intel_huc_authentication_type type); bool intel_huc_is_authenticated(struct intel_huc *huc, diff --git a/drivers/gpu/drm/i915/gt/uc/intel_huc_fw.c b/drivers/gpu/drm/i915/gt/uc/intel_huc_fw.c index ac2ae5f5011e..e608152fecfc 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_huc_fw.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_huc_fw.c @@ -6,11 +6,106 @@ #include "gt/intel_gsc.h" #include "gt/intel_gt.h" #include "intel_gsc_binary_headers.h" +#include "intel_gsc_uc_heci_cmd_submit.h" #include "intel_huc.h" #include "intel_huc_fw.h" #include "intel_huc_print.h" #include "i915_drv.h" #include "pxp/intel_pxp_huc.h" +#include "pxp/intel_pxp_cmd_interface_43.h" + +struct mtl_huc_auth_msg_in { + struct intel_gsc_mtl_header header; + struct pxp43_new_huc_auth_in huc_in; +} __packed; + +struct mtl_huc_auth_msg_out { + struct intel_gsc_mtl_header header; + struct pxp43_huc_auth_out huc_out; +} __packed; + +int intel_huc_fw_auth_via_gsccs(struct intel_huc *huc) +{ + struct intel_gt *gt = huc_to_gt(huc); + struct drm_i915_private *i915 = gt->i915; + struct drm_i915_gem_object *obj; + struct mtl_huc_auth_msg_in *msg_in; + struct 
mtl_huc_auth_msg_out *msg_out; + void *pkt_vaddr; + u64 pkt_offset; + int retry = 5; + int err = 0; + + if (!huc->heci_pkt) + return -ENODEV; + + obj = huc->heci_pkt->obj; + pkt_offset = i915_ggtt_offset(huc->heci_pkt); + + pkt_vaddr = i915_gem_object_pin_map_unlocked(obj, + i915_coherent_map_type(i915, obj, true)); + if (IS_ERR(pkt_vaddr)) + return PTR_ERR(pkt_vaddr); + + msg_in = pkt_vaddr; + msg_out = pkt_vaddr + PXP43_HUC_AUTH_INOUT_SIZE; + + intel_gsc_uc_heci_cmd_emit_mtl_header(&msg_in->header, + HECI_MEADDRESS_PXP, + sizeof(*msg_in), 0); + + msg_in->huc_in.header.api_version = PXP_APIVER(4, 3); + msg_in->huc_in.header.command_id = PXP43_CMDID_NEW_HUC_AUTH; + msg_in->huc_in.header.status = 0; + msg_in->huc_in.header.buffer_len = sizeof(msg_in->huc_in) - + sizeof(msg_in->huc_in.header); + msg_in->huc_in.huc_base_address = huc->fw.vma_res.start; + msg_in->huc_in.huc_size = huc->fw.obj->base.size; + + do { + err = intel_gsc_uc_heci_cmd_submit_packet(>->uc.gsc, + pkt_offset, sizeof(*msg_in), + pkt_offset + PXP43_HUC_AUTH_INOUT_SIZE, + PXP43_HUC_AUTH_INOUT_SIZE); + if (err) { + huc_err(huc, "failed to submit GSC request to auth: %d\n", err); + goto out_unpin; + } + + if (msg_out->header.flags & GSC_OUTFLAG_MSG_PENDING) { + msg_in->header.gsc_message_handle = msg_out->header.gsc_message_handle; + err = -EBUSY; + msleep(50); + } + } while (--retry && err == -EBUSY); + + if (err) + goto out_unpin; + + if (msg_out->header.message_size != sizeof(*msg_out)) { + huc_err(huc, "invalid GSC reply length %u [expected %zu]\n", + msg_out->header.message_size, sizeof(*msg_out)); + err = -EPROTO; + goto out_unpin; + } + + /* + * The GSC will return PXP_STATUS_OP_NOT_PERMITTED if the HuC is already + * loaded. If the same error is ever returned with HuC not loaded we'll + * still catch it when we check the authentication bit later. 
+ */ + if (msg_out->huc_out.header.status != PXP_STATUS_SUCCESS && + msg_out->huc_out.header.status != PXP_STATUS_OP_NOT_PERMITTED) { + huc_err(huc, "auth failed with GSC error = 0x%x\n", + msg_out->huc_out.header.status); + err = -EIO; + goto out_unpin; + } + +out_unpin: + i915_gem_object_unpin_map(obj); + return err; +} static void get_version_from_gsc_manifest(struct intel_uc_fw_ver *ver, const void *data) { diff --git a/drivers/gpu/drm/i915/gt/uc/intel_huc_fw.h b/drivers/gpu/drm/i915/gt/uc/intel_huc_fw.h index 0999ffe6f962..307ab45e6b09 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_huc_fw.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_huc_fw.h @@ -12,6 +12,7 @@ struct intel_uc_fw; #include int intel_huc_fw_load_and_auth_via_gsc(struct intel_huc *huc); +int intel_huc_fw_auth_via_gsccs(struct intel_huc *huc); int intel_huc_fw_upload(struct intel_huc *huc); int intel_huc_fw_get_binary_info(struct intel_uc_fw *huc_fw, const void *data, size_t size); #endif diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c b/drivers/gpu/drm/i915/gt/uc/intel_uc.c index 1e7f5cc9d550..18250fb64bd8 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c @@ -538,7 +538,7 @@ static int __uc_init_hw(struct intel_uc *uc) if (intel_huc_is_loaded_by_gsc(huc)) intel_huc_update_auth_status(huc); else - intel_huc_auth(huc); + intel_huc_auth(huc, INTEL_HUC_AUTH_BY_GUC); if (intel_uc_uses_guc_submission(uc)) { ret = intel_guc_submission_enable(guc); diff --git a/drivers/gpu/drm/i915/pxp/intel_pxp_cmd_interface_43.h b/drivers/gpu/drm/i915/pxp/intel_pxp_cmd_interface_43.h index 09777719cd84..0165d38fbead 100644 --- a/drivers/gpu/drm/i915/pxp/intel_pxp_cmd_interface_43.h +++ b/drivers/gpu/drm/i915/pxp/intel_pxp_cmd_interface_43.h @@ -11,19 +11,30 @@ /* PXP-Cmd-Op definitions */ #define PXP43_CMDID_START_HUC_AUTH 0x0000003A +#define PXP43_CMDID_NEW_HUC_AUTH 0x0000003F /* MTL+ */ #define PXP43_CMDID_INIT_SESSION 0x00000036 /* PXP-Packet sizes for MTL's GSCCS-HECI instruction */ #define PXP43_MAX_HECI_INOUT_SIZE (SZ_32K) -/* PXP-Input-Packet: HUC-Authentication */ +/* PXP-Packet size for MTL's NEW_HUC_AUTH instruction */ +#define PXP43_HUC_AUTH_INOUT_SIZE (SZ_4K) + +/* PXP-Input-Packet: HUC Load and Authentication */ struct pxp43_start_huc_auth_in { struct pxp_cmd_header header; __le64 huc_base_address; } __packed; -/* PXP-Output-Packet: HUC-Authentication */ -struct pxp43_start_huc_auth_out { +/* PXP-Input-Packet: HUC Auth-only */ +struct pxp43_new_huc_auth_in { + struct pxp_cmd_header header; + u64 huc_base_address; + u32 huc_size; +} __packed; + +/* PXP-Output-Packet: HUC Load and Authentication or Auth-only */ +struct pxp43_huc_auth_out { struct pxp_cmd_header header; } __packed; diff --git a/drivers/gpu/drm/i915/pxp/intel_pxp_huc.c b/drivers/gpu/drm/i915/pxp/intel_pxp_huc.c index 23431c36b60b..5eedce916942 100644 --- a/drivers/gpu/drm/i915/pxp/intel_pxp_huc.c +++ b/drivers/gpu/drm/i915/pxp/intel_pxp_huc.c @@ -19,7 +19,7 @@ int intel_pxp_huc_load_and_auth(struct intel_pxp *pxp) struct intel_gt *gt; struct intel_huc *huc; struct pxp43_start_huc_auth_in huc_in = {0}; - struct pxp43_start_huc_auth_out huc_out = {0}; + struct pxp43_huc_auth_out huc_out = {0}; dma_addr_t huc_phys_addr; u8 client_id = 0; u8 fence_id = 0; From patchwork Wed May 31 23:54:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Daniele Ceraolo Spurio X-Patchwork-Id: 13262867 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on 
aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 9112AC7EE23 for ; Wed, 31 May 2023 23:55:12 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id B605810E517; Wed, 31 May 2023 23:55:11 +0000 (UTC) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by gabe.freedesktop.org (Postfix) with ESMTPS id 05FA710E129; Wed, 31 May 2023 23:54:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685577270; x=1717113270; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=+udcl7GkglUOwqBqtPikunHOfK6Y/Tqg0Xs/zqCOrZM=; b=X/JjdQI0rDzUDqc3MCAmZi61pdrn9WKyNYaZkkfqk0IqZMa3ajwUiSjs m+b+sZgdFlUzKu2XQQquQc6lxcb2t1D51LaMRs9g9SpcHfu4ETUeKrk2e HWfu7O2y+CZDaCN18bACW5+3/vFHSSwYI/vmYzgQIEL6x6US63M2myhr5 Z04NKmwySs5Y329bYoBo0Z9T7xZy2zJFkURtqzTu0C0hu11ts5NtKUhVQ Tn8Ebip7E+y4kz9og6MJ7BGNcXlzM2KrATUv64Ufzjz6eb/BLKRnXL5B2 PiHQQgbe7h361DGPZi8WtOPlEf6XsgqAwdYXxxvjPKbwINTG3cj3ayWva Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="335747063" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="335747063" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 May 2023 16:54:29 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="707109973" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="707109973" Received: from valcore-skull-1.fm.intel.com ([10.1.27.19]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 May 2023 16:54:29 -0700 From: Daniele Ceraolo Spurio To: intel-gfx@lists.freedesktop.org Subject: [PATCH v5 6/7] drm/i915/mtl/huc: Use the media gt for the HuC getparam Date: Wed, 31 May 2023 16:54:14 -0700 Message-Id: <20230531235415.1467475-7-daniele.ceraolospurio@intel.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230531235415.1467475-1-daniele.ceraolospurio@intel.com> References: <20230531235415.1467475-1-daniele.ceraolospurio@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Daniele Ceraolo Spurio , John Harrison , dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" On MTL, for obvious reasons, HuC is only available on the media tile. We already disable SW support for HuC on the root gt due to the absence of VCS engines, but we also need to update the getparam to point to the HuC struct in the media GT. 
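From userspace nothing changes with this patch: media drivers keep issuing the same getparam and the kernel now resolves it against the media GT's HuC internally. As a purely illustrative example (assuming libdrm; the helper name is hypothetical), the query and the meaning of the returned values documented earlier in this series look like:

/* Hypothetical userspace check of HuC readiness through libdrm. */
#include <errno.h>
#include <xf86drm.h>
#include <drm/i915_drm.h>

static int query_huc_status(int drm_fd)
{
	int value = 0;
	drm_i915_getparam_t gp = {
		.param = I915_PARAM_HUC_STATUS,
		.value = &value,
	};

	if (drmIoctl(drm_fd, DRM_IOCTL_I915_GETPARAM, &gp))
		return -errno;

	/* 0: load not complete, 1: fully authenticated, 2: clear media only */
	return value;
}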
Signed-off-by: Daniele Ceraolo Spurio Cc: John Harrison Reviewed-by: John Harrison --- drivers/gpu/drm/i915/i915_getparam.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/i915/i915_getparam.c b/drivers/gpu/drm/i915/i915_getparam.c index 6f11d7eaa91a..890f2b382bee 100644 --- a/drivers/gpu/drm/i915/i915_getparam.c +++ b/drivers/gpu/drm/i915/i915_getparam.c @@ -100,7 +100,11 @@ int i915_getparam_ioctl(struct drm_device *dev, void *data, value = sseu->min_eu_in_pool; break; case I915_PARAM_HUC_STATUS: - value = intel_huc_check_status(&to_gt(i915)->uc.huc); + /* On platform with a media GT, the HuC is on that GT */ + if (i915->media_gt) + value = intel_huc_check_status(&i915->media_gt->uc.huc); + else + value = intel_huc_check_status(&to_gt(i915)->uc.huc); if (value < 0) return value; break; From patchwork Wed May 31 23:54:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Daniele Ceraolo Spurio X-Patchwork-Id: 13262864 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id F252AC7EE23 for ; Wed, 31 May 2023 23:54:49 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 520A510E1F2; Wed, 31 May 2023 23:54:33 +0000 (UTC) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by gabe.freedesktop.org (Postfix) with ESMTPS id 16ED110E129; Wed, 31 May 2023 23:54:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685577271; x=1717113271; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Kybx4qovyAnSxeEcQP1DLBQ4hb3Ay17ZaxxbIVJq3Tw=; b=IeoanV+QsXxAg3sRWeRrIq/ZTJNsk/qePBrHk/JlXLr4j2kgqB4LopeU +2QHceeFlV1jjIjX6FpRSbBdQQ+h7ZeuTA/KUWLloIkaq7SZG6LPKVzIm g42IJ+QOhhn0O9/GvN6tZgUDzqcd29ml0wDAsjB9Dhif2hLdV86IkP9Jc K8rtns27DnrBo9ZaH4Y+xkSMwk4snkQ2W2wjg60N3Gnfj4nbbGgIHYgIz J523YE52QCA8Tg3sUd1JhJdWGBeOA5e8F//qNMhYxCiJUB77sxvrmbP10 8b/67FbcQSxbnxnxIspx4snemHi4rTiWJ4W/flZUO/X1HFNYuraxEixK+ Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="335747073" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="335747073" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 May 2023 16:54:30 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="707109982" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="707109982" Received: from valcore-skull-1.fm.intel.com ([10.1.27.19]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 May 2023 16:54:30 -0700 From: Daniele Ceraolo Spurio To: intel-gfx@lists.freedesktop.org Subject: [PATCH v5 7/7] drm/i915/huc: define HuC FW version for MTL Date: Wed, 31 May 2023 16:54:15 -0700 Message-Id: <20230531235415.1467475-8-daniele.ceraolospurio@intel.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230531235415.1467475-1-daniele.ceraolospurio@intel.com> References: <20230531235415.1467475-1-daniele.ceraolospurio@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering 
Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: John Harrison , Daniele Ceraolo Spurio , Alan Previn , dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Follow the same logic as DG2, so just a meu binary with no version number. Signed-off-by: Daniele Ceraolo Spurio Cc: Alan Previn Reviewed-by: John Harrison --- drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c index a1c8a982479f..944725e62414 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c @@ -108,6 +108,7 @@ void intel_uc_fw_change_status(struct intel_uc_fw *uc_fw, fw_def(SKYLAKE, 0, guc_mmp(skl, 70, 1, 1)) #define INTEL_HUC_FIRMWARE_DEFS(fw_def, huc_raw, huc_mmp, huc_gsc) \ + fw_def(METEORLAKE, 0, huc_gsc(mtl)) \ fw_def(DG2, 0, huc_gsc(dg2)) \ fw_def(ALDERLAKE_P, 0, huc_raw(tgl)) \ fw_def(ALDERLAKE_P, 0, huc_mmp(tgl, 7, 9, 3)) \