From patchwork Fri Sep 17 21:38:30 2021
From: Tony Luck
To: Sean Christopherson, Jarkko Sakkinen, Dave Hansen
Cc: Cathy Zhang, linux-sgx@vger.kernel.org, x86@kernel.org,
    linux-kernel@vger.kernel.org, Tony Luck
Subject: [PATCH v5 1/7] x86/sgx: Provide indication of life-cycle of EPC pages
Date: Fri, 17 Sep 2021 14:38:30 -0700
Message-Id: <20210917213836.175138-2-tony.luck@intel.com>

SGX EPC pages go through the following life cycle:

        DIRTY ---> FREE ---> IN-USE --\
                    ^                 |
                    \-----------------/

Recovery action for poison in a DIRTY or FREE page is simple. Just
make sure never to allocate the page. IN-USE pages need some extra
handling.

It would be good to use the sgx_epc_page->owner field as an indicator
of where an EPC page currently is in that cycle (owner != NULL means
the EPC page is IN-USE). But there is one caller, sgx_alloc_va_page(),
that calls with NULL.

Since there are multiple uses of the "owner" field with different types,
change the sgx_epc_page structure to define an anonymous union with each
of the uses explicitly called out.

Start epc_pages out with a non-NULL owner while they are in the DIRTY
state. Fix up the one holdout to provide a non-NULL owner.

Refactor the allocation sequence so that changes to/from the NULL value
happen together with adding/removing the epc_page from a free list,
while the node->lock is held.
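For illustration only (not code from this patch): with the rules above, whether a
page is sitting on a free list can be derived from the union alone, because DIRTY
and IN-USE pages always carry a non-NULL cookie. The helper name below is
hypothetical, a minimal sketch assuming the struct layout introduced by this patch:

/* Illustrative sketch, not part of the patch. */
static bool sgx_epc_page_is_free(struct sgx_epc_page *page)
{
	/*
	 * DIRTY pages get a dummy non-NULL cookie at boot and IN-USE pages
	 * get their owner at allocation time; only FREE pages (on a node's
	 * free_page_list) have a NULL cookie.
	 */
	return page->private == NULL;
}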
Signed-off-by: Tony Luck
---
 arch/x86/kernel/cpu/sgx/encl.c  |  5 +++--
 arch/x86/kernel/cpu/sgx/encl.h  |  2 +-
 arch/x86/kernel/cpu/sgx/ioctl.c |  2 +-
 arch/x86/kernel/cpu/sgx/main.c  | 23 ++++++++++++-----------
 arch/x86/kernel/cpu/sgx/sgx.h   | 12 ++++++++----
 5 files changed, 25 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 001808e3901c..ad8c61933b0a 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -667,6 +667,7 @@ int sgx_encl_test_and_clear_young(struct mm_struct *mm,
 
 /**
  * sgx_alloc_va_page() - Allocate a Version Array (VA) page
+ * @va_page:	struct sgx_va_page connected to this VA page
  *
  * Allocate a free EPC page and convert it to a Version Array (VA) page.
  *
@@ -674,12 +675,12 @@ int sgx_encl_test_and_clear_young(struct mm_struct *mm,
  *   a VA page,
  *   -errno otherwise
  */
-struct sgx_epc_page *sgx_alloc_va_page(void)
+struct sgx_epc_page *sgx_alloc_va_page(struct sgx_va_page *va_page)
 {
 	struct sgx_epc_page *epc_page;
 	int ret;
 
-	epc_page = sgx_alloc_epc_page(NULL, true);
+	epc_page = sgx_alloc_epc_page(va_page, true);
 	if (IS_ERR(epc_page))
 		return ERR_CAST(epc_page);
 
diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index fec43ca65065..3d12dbeae14a 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -111,7 +111,7 @@ void sgx_encl_put_backing(struct sgx_backing *backing, bool do_write);
 int sgx_encl_test_and_clear_young(struct mm_struct *mm,
 				  struct sgx_encl_page *page);
 
-struct sgx_epc_page *sgx_alloc_va_page(void);
+struct sgx_epc_page *sgx_alloc_va_page(struct sgx_va_page *va_page);
 unsigned int sgx_alloc_va_slot(struct sgx_va_page *va_page);
 void sgx_free_va_slot(struct sgx_va_page *va_page, unsigned int offset);
 bool sgx_va_page_full(struct sgx_va_page *va_page);
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 83df20e3e633..655ce0bb069d 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -30,7 +30,7 @@ static struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl)
 	if (!va_page)
 		return ERR_PTR(-ENOMEM);
 
-	va_page->epc_page = sgx_alloc_va_page();
+	va_page->epc_page = sgx_alloc_va_page(va_page);
 	if (IS_ERR(va_page->epc_page)) {
 		err = ERR_CAST(va_page->epc_page);
 		kfree(va_page);
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 63d3de02bbcc..4a5b51d16133 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -457,7 +457,7 @@ static bool __init sgx_page_reclaimer_init(void)
 	return true;
 }
 
-static struct sgx_epc_page *__sgx_alloc_epc_page_from_node(int nid)
+static struct sgx_epc_page *__sgx_alloc_epc_page_from_node(void *private, int nid)
 {
 	struct sgx_numa_node *node = &sgx_numa_nodes[nid];
 	struct sgx_epc_page *page = NULL;
@@ -471,6 +471,7 @@ static struct sgx_epc_page *__sgx_alloc_epc_page_from_node(int nid)
 	page = list_first_entry(&node->free_page_list, struct sgx_epc_page, list);
 	list_del_init(&page->list);
+	page->private = private;
 	sgx_nr_free_pages--;
 
 	spin_unlock(&node->lock);
@@ -480,6 +481,7 @@ static struct sgx_epc_page *__sgx_alloc_epc_page_from_node(int nid)
 
 /**
  * __sgx_alloc_epc_page() - Allocate an EPC page
+ * @owner:	the owner of the EPC page
  *
  * Iterate through NUMA nodes and reserve ia free EPC page to the caller. Start
  * from the NUMA node, where the caller is executing.
@@ -488,14 +490,14 @@ static struct sgx_epc_page *__sgx_alloc_epc_page_from_node(int nid)
  * - an EPC page:	A borrowed EPC pages were available.
  * - NULL:		Out of EPC pages.
  */
-struct sgx_epc_page *__sgx_alloc_epc_page(void)
+struct sgx_epc_page *__sgx_alloc_epc_page(void *private)
 {
 	struct sgx_epc_page *page;
 	int nid_of_current = numa_node_id();
 	int nid = nid_of_current;
 
 	if (node_isset(nid_of_current, sgx_numa_mask)) {
-		page = __sgx_alloc_epc_page_from_node(nid_of_current);
+		page = __sgx_alloc_epc_page_from_node(private, nid_of_current);
 		if (page)
 			return page;
 	}
@@ -506,7 +508,7 @@ struct sgx_epc_page *__sgx_alloc_epc_page(void)
 		if (nid == nid_of_current)
 			break;
 
-		page = __sgx_alloc_epc_page_from_node(nid);
+		page = __sgx_alloc_epc_page_from_node(private, nid);
 		if (page)
 			return page;
 	}
@@ -559,7 +561,7 @@ int sgx_unmark_page_reclaimable(struct sgx_epc_page *page)
 
 /**
  * sgx_alloc_epc_page() - Allocate an EPC page
- * @owner:	the owner of the EPC page
+ * @private:	per-caller private data
  * @reclaim:	reclaim pages if necessary
  *
  * Iterate through EPC sections and borrow a free EPC page to the caller. When a
@@ -574,16 +576,14 @@ int sgx_unmark_page_reclaimable(struct sgx_epc_page *page)
  *   an EPC page,
  *   -errno on error
  */
-struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)
+struct sgx_epc_page *sgx_alloc_epc_page(void *private, bool reclaim)
 {
 	struct sgx_epc_page *page;
 
 	for ( ; ; ) {
-		page = __sgx_alloc_epc_page();
-		if (!IS_ERR(page)) {
-			page->owner = owner;
+		page = __sgx_alloc_epc_page(private);
+		if (!IS_ERR(page))
 			break;
-		}
 
 		if (list_empty(&sgx_active_page_list))
 			return ERR_PTR(-ENOMEM);
@@ -624,6 +624,7 @@ void sgx_free_epc_page(struct sgx_epc_page *page)
 
 	spin_lock(&node->lock);
 
+	page->private = NULL;
 	list_add_tail(&page->list, &node->free_page_list);
 	sgx_nr_free_pages++;
 
@@ -652,7 +653,7 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
 	for (i = 0; i < nr_pages; i++) {
 		section->pages[i].section = index;
 		section->pages[i].flags = 0;
-		section->pages[i].owner = NULL;
+		section->pages[i].private = "dirty";
 		list_add_tail(&section->pages[i].list, &sgx_dirty_page_list);
 	}
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 4628acec0009..8b1be10a46f6 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -28,8 +28,12 @@
 
 struct sgx_epc_page {
 	unsigned int section;
-	unsigned int flags;
-	struct sgx_encl_page *owner;
+	int flags;
+	union {
+		void *private;
+		struct sgx_encl_page *owner;
+		struct sgx_encl_page *vepc;
+	};
 	struct list_head list;
 };
 
@@ -77,12 +81,12 @@ static inline void *sgx_get_epc_virt_addr(struct sgx_epc_page *page)
 	return section->virt_addr + index * PAGE_SIZE;
 }
 
-struct sgx_epc_page *__sgx_alloc_epc_page(void);
+struct sgx_epc_page *__sgx_alloc_epc_page(void *private);
 void sgx_free_epc_page(struct sgx_epc_page *page);
 
 void sgx_mark_page_reclaimable(struct sgx_epc_page *page);
 int sgx_unmark_page_reclaimable(struct sgx_epc_page *page);
-struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim);
+struct sgx_epc_page *sgx_alloc_epc_page(void *private, bool reclaim);
 
 #ifdef CONFIG_X86_SGX_KVM
 int __init sgx_vepc_init(void);

From patchwork Fri Sep 17 21:38:31 2021
From: Tony Luck
To: Sean Christopherson, Jarkko Sakkinen, Dave Hansen
Cc: Cathy Zhang, linux-sgx@vger.kernel.org, x86@kernel.org,
    linux-kernel@vger.kernel.org, Tony Luck
Subject: [PATCH v5 2/7] x86/sgx: Add infrastructure to identify SGX EPC pages
Date: Fri, 17 Sep 2021 14:38:31 -0700
Message-Id: <20210917213836.175138-3-tony.luck@intel.com>

The x86 machine check architecture reports a physical address when there
is a memory error. Handling that error requires a method to determine
whether the physical address reported is in any of the areas reserved
for EPC pages by BIOS.

SGX EPC pages do not have a Linux "struct page" associated with them.

Keep track of the mapping from ranges of EPC pages to the sections
that contain them using an xarray.

Create a function arch_is_platform_page() that simply reports whether an
address is an EPC page, for use elsewhere in the kernel. The ACPI error
injection code needs this function and is typically built as a module,
so export it.

Note that arch_is_platform_page() will be slower than other similar
"what type is this page" functions that can simply check bits in the
"struct page". If there is some future performance-critical user of
this function, it may need to be implemented in a more efficient way.
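For illustration only (not part of this patch), the lookup pattern this enables: one
multi-index xarray entry covers an entire physical range, so a single xa_load()
answers the containment question. Names prefixed with example_ are hypothetical and
error handling is omitted; multi-index stores need CONFIG_XARRAY_MULTI:

static DEFINE_XARRAY(example_ranges);

/* Register one physical range; a single entry spans every address in it. */
static void example_register_range(u64 start, u64 size, void *cookie)
{
	xa_store_range(&example_ranges, start, start + size - 1,
		       cookie, GFP_KERNEL);
}

/* Any address inside a registered range resolves to its cookie. */
static bool example_contains(u64 paddr)
{
	return xa_load(&example_ranges, paddr) != NULL;
}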
Signed-off-by: Tony Luck
---
 arch/x86/kernel/cpu/sgx/main.c | 10 ++++++++++
 arch/x86/kernel/cpu/sgx/sgx.h  |  1 +
 2 files changed, 11 insertions(+)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 4a5b51d16133..10892513212d 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -20,6 +20,7 @@ struct sgx_epc_section sgx_epc_sections[SGX_MAX_EPC_SECTIONS];
 static int sgx_nr_epc_sections;
 static struct task_struct *ksgxd_tsk;
 static DECLARE_WAIT_QUEUE_HEAD(ksgxd_waitq);
+static DEFINE_XARRAY(epc_page_ranges);
 
 /*
  * These variables are part of the state of the reclaimer, and must be accessed
@@ -649,6 +650,9 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
 	}
 
 	section->phys_addr = phys_addr;
+	section->end_phys_addr = phys_addr + size - 1;
+	xa_store_range(&epc_page_ranges, section->phys_addr,
+		       section->end_phys_addr, section, GFP_KERNEL);
 
 	for (i = 0; i < nr_pages; i++) {
 		section->pages[i].section = index;
@@ -660,6 +664,12 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
 	return true;
 }
 
+bool arch_is_platform_page(u64 paddr)
+{
+	return !!xa_load(&epc_page_ranges, paddr);
+}
+EXPORT_SYMBOL_GPL(arch_is_platform_page);
+
 /**
  * A section metric is concatenated in a way that @low bits 12-31 define the
  * bits 12-31 of the metric and @high bits 0-19 define the bits 32-51 of the
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 8b1be10a46f6..6a55b1971956 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -54,6 +54,7 @@ struct sgx_numa_node {
  */
 struct sgx_epc_section {
 	unsigned long phys_addr;
+	unsigned long end_phys_addr;
 	void *virt_addr;
 	struct sgx_epc_page *pages;
 	struct sgx_numa_node *node;

From patchwork Fri Sep 17 21:38:32 2021
From: Tony Luck
To: Sean Christopherson, Jarkko Sakkinen, Dave Hansen
Cc: Cathy Zhang, linux-sgx@vger.kernel.org, x86@kernel.org,
    linux-kernel@vger.kernel.org, Tony Luck
Subject: [PATCH v5 3/7] x86/sgx: Initial poison handling for dirty and free pages
Date: Fri, 17 Sep 2021 14:38:32 -0700
Message-Id: <20210917213836.175138-4-tony.luck@intel.com>

A memory controller patrol scrubber can report poison in a page that
isn't currently being used.

Add a "poison" field in struct sgx_epc_page that can be set to mark a
page as poisoned. Check for it:

 1) When sanitizing dirty pages
 2) When freeing epc pages

Poison is a new field separate from flags to avoid having to make all
updates to flags atomic, or integrate poison state changes into some
other locking scheme to protect flags.

In both cases place the poisoned page on a list of poisoned epc pages
to make sure it will not be reallocated.

Add a debugfs file /sys/kernel/debug/sgx/poison_page_list so that
system administrators can see which pages have been dropped because
of poison.

Signed-off-by: Tony Luck
---
 arch/x86/kernel/cpu/sgx/main.c | 30 +++++++++++++++++++++++++++++-
 arch/x86/kernel/cpu/sgx/sgx.h  |  3 ++-
 2 files changed, 31 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 10892513212d..7a53ff876059 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 /*  Copyright(c) 2016-20 Intel Corporation. */
 
+#include
 #include
 #include
 #include
@@ -43,6 +44,7 @@ static nodemask_t sgx_numa_mask;
 static struct sgx_numa_node *sgx_numa_nodes;
 
 static LIST_HEAD(sgx_dirty_page_list);
+static LIST_HEAD(sgx_poison_page_list);
 
 /*
  * Reset post-kexec EPC pages to the uninitialized state. The pages are removed
@@ -62,6 +64,12 @@ static void __sgx_sanitize_pages(struct list_head *dirty_page_list)
 
 		page = list_first_entry(dirty_page_list, struct sgx_epc_page, list);
 
+		if (page->poison) {
+			list_del(&page->list);
+			list_add(&page->list, &sgx_poison_page_list);
+			continue;
+		}
+
 		ret = __eremove(sgx_get_epc_virt_addr(page));
 		if (!ret) {
 			/*
@@ -626,7 +634,10 @@ void sgx_free_epc_page(struct sgx_epc_page *page)
 	spin_lock(&node->lock);
 
 	page->private = NULL;
-	list_add_tail(&page->list, &node->free_page_list);
+	if (page->poison)
+		list_add(&page->list, &sgx_poison_page_list);
+	else
+		list_add_tail(&page->list, &node->free_page_list);
 	sgx_nr_free_pages++;
 
 	spin_unlock(&node->lock);
@@ -657,6 +668,7 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
 	for (i = 0; i < nr_pages; i++) {
 		section->pages[i].section = index;
 		section->pages[i].flags = 0;
+		section->pages[i].poison = 0;
 		section->pages[i].private = "dirty";
 		list_add_tail(&section->pages[i].list, &sgx_dirty_page_list);
 	}
@@ -801,8 +813,21 @@ int sgx_set_attribute(unsigned long *allowed_attributes,
 }
 EXPORT_SYMBOL_GPL(sgx_set_attribute);
 
+static int poison_list_show(struct seq_file *m, void *private)
+{
+	struct sgx_epc_page *page;
+
+	list_for_each_entry(page, &sgx_poison_page_list, list)
+		seq_printf(m, "0x%lx\n", sgx_get_epc_phys_addr(page));
+
+	return 0;
+}
+
+DEFINE_SHOW_ATTRIBUTE(poison_list);
+
 static int __init sgx_init(void)
 {
+	struct dentry *dir;
 	int ret;
 	int i;
 
@@ -834,6 +859,9 @@ static int __init sgx_init(void)
 	if (sgx_vepc_init() && ret)
 		goto err_provision;
 
+	dir = debugfs_create_dir("sgx", arch_debugfs_dir);
+	debugfs_create_file("poison_page_list", 0400, dir, NULL, &poison_list_fops);
+
 	return 0;
 
 err_provision:
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 6a55b1971956..77f3d98c9fbf 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -28,7 +28,8 @@
 
 struct sgx_epc_page {
 	unsigned int section;
-	int flags;
+	u16 flags;
+	u16 poison;
 	union {
 		void *private;
 		struct sgx_encl_page *owner;

From patchwork Fri Sep 17 21:38:33 2021
From: Tony Luck
To: Sean Christopherson, Jarkko Sakkinen, Dave Hansen
Cc: Cathy Zhang, linux-sgx@vger.kernel.org, x86@kernel.org,
    linux-kernel@vger.kernel.org, Tony Luck
Subject: [PATCH v5 4/7] x86/sgx: Add SGX infrastructure to recover from poison
Date: Fri, 17 Sep 2021 14:38:33 -0700
Message-Id: <20210917213836.175138-5-tony.luck@intel.com>

Provide a recovery function arch_memory_failure(). If the poison was
consumed synchronously then send a SIGBUS. Note that the virtual
address of the access is not included with the SIGBUS as is the case
for poison outside of SGX enclaves. This doesn't matter as addresses
of code/data inside an enclave are of little to no use to code
executing outside the (now dead) enclave.

Poison found in a free page results in the page being moved from the
free list to the poison page list.

Signed-off-by: Tony Luck
---
 arch/x86/kernel/cpu/sgx/main.c | 77 ++++++++++++++++++++++++++++++++++
 1 file changed, 77 insertions(+)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 7a53ff876059..8f23c8489cec 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -682,6 +682,83 @@ bool arch_is_platform_page(u64 paddr)
 }
 EXPORT_SYMBOL_GPL(arch_is_platform_page);
 
+static struct sgx_epc_page *sgx_paddr_to_page(u64 paddr)
+{
+	struct sgx_epc_section *section;
+
+	section = xa_load(&epc_page_ranges, paddr);
+	if (!section)
+		return NULL;
+
+	return &section->pages[PFN_DOWN(paddr - section->phys_addr)];
+}
+
+/*
+ * Called in process context to handle a hardware reported
+ * error in an SGX EPC page.
+ * If the MF_ACTION_REQUIRED bit is set in flags, then the
+ * context is the task that consumed the poison data. Otherwise
+ * this is called from a kernel thread unrelated to the page.
+ */
+int arch_memory_failure(unsigned long pfn, int flags)
+{
+	struct sgx_epc_page *page = sgx_paddr_to_page(pfn << PAGE_SHIFT);
+	struct sgx_epc_section *section;
+	struct sgx_numa_node *node;
+
+	/*
+	 * mm/memory-failure.c calls this routine for all errors
+	 * where there isn't a "struct page" for the address. But that
+	 * includes other address ranges besides SGX.
+	 */
+	if (!page)
+		return -ENXIO;
+
+	/*
+	 * If poison was consumed synchronously. Send a SIGBUS to
+	 * the task. Hardware has already exited the SGX enclave and
+	 * will not allow re-entry to an enclave that has a memory
+	 * error. The signal may help the task understand why the
+	 * enclave is broken.
+	 */
+	if (flags & MF_ACTION_REQUIRED)
+		force_sig(SIGBUS);
+
+	section = &sgx_epc_sections[page->section];
+	node = section->node;
+
+	spin_lock(&node->lock);
+
+	/* Already poisoned? Nothing more to do */
+	if (page->poison)
+		goto out;
+
+	page->poison = 1;
+
+	/*
+	 * If there is no owner, then the page is on a free list.
+	 * Move it to the poison page list.
+	 */
+	if (!page->private) {
+		list_del(&page->list);
+		list_add(&page->list, &sgx_poison_page_list);
+		goto out;
+	}
+
+	/*
+	 * TBD: Add additional plumbing to enable pre-emptive
+	 * action for asynchronous poison notification. Until
+	 * then just hope that the poison:
+	 * a) is not accessed - sgx_free_epc_page() will deal with it
+	 *    when the user gives it back
+	 * b) results in a recoverable machine check rather than
+	 *    a fatal one
+	 */
+out:
+	spin_unlock(&node->lock);
+	return 0;
+}
+
 /**
  * A section metric is concatenated in a way that @low bits 12-31 define the
  * bits 12-31 of the metric and @high bits 0-19 define the bits 32-51 of the

From patchwork Fri Sep 17 21:38:34 2021
From: Tony Luck
To: Sean Christopherson, Jarkko Sakkinen, Dave Hansen
Cc: Cathy Zhang, linux-sgx@vger.kernel.org, x86@kernel.org,
    linux-kernel@vger.kernel.org, Tony Luck
Subject: [PATCH v5 5/7] x86/sgx: Hook arch_memory_failure() into mainline code
Date: Fri, 17 Sep 2021 14:38:34 -0700
Message-Id: <20210917213836.175138-6-tony.luck@intel.com>

Add a call inside memory_failure() to check if the address is an SGX
EPC page and handle it. Note that SGX EPC pages do not have a "struct
page" entry, so the hook goes in at the same point as the device
mapping hook.

Pull the call to acquire the mutex earlier so the SGX errors are also
protected.

Make set_mce_nospec() skip SGX pages when trying to adjust the 1:1 map.
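For illustration only (not part of this patch), a condensed sketch of the opt-in
mechanism being used: an architecture that can handle errors in pages without a
"struct page" supplies its own arch_memory_failure() and defines the macro next to
the prototype; the generic header then skips its fallback stub, which otherwise
answers "not handled here":

/* arch header (sketch): opt in by defining the macro */
int arch_memory_failure(unsigned long pfn, int flags);
#define arch_memory_failure arch_memory_failure

/* generic header (sketch): default stub used when no arch override exists */
#ifndef arch_memory_failure
static inline int arch_memory_failure(unsigned long pfn, int flags)
{
	return -ENXIO;
}
#endif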
Signed-off-by: Tony Luck
---
 arch/x86/include/asm/processor.h  |  8 ++++++++
 arch/x86/include/asm/set_memory.h |  4 ++++
 include/linux/mm.h                | 13 +++++++++++++
 mm/memory-failure.c               | 19 +++++++++++++------
 4 files changed, 38 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 9ad2acaaae9b..4865f2860a4f 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -853,4 +853,12 @@ enum mds_mitigations {
 	MDS_MITIGATION_VMWERV,
 };
 
+#ifdef CONFIG_X86_SGX
+int arch_memory_failure(unsigned long pfn, int flags);
+#define arch_memory_failure arch_memory_failure
+
+bool arch_is_platform_page(u64 paddr);
+#define arch_is_platform_page arch_is_platform_page
+#endif
+
 #endif /* _ASM_X86_PROCESSOR_H */
diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index 43fa081a1adb..ce8dd215f5b3 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -2,6 +2,7 @@
 #ifndef _ASM_X86_SET_MEMORY_H
 #define _ASM_X86_SET_MEMORY_H
 
+#include
 #include
 #include
 
@@ -98,6 +99,9 @@ static inline int set_mce_nospec(unsigned long pfn, bool unmap)
 	unsigned long decoy_addr;
 	int rc;
 
+	/* SGX pages are not in the 1:1 map */
+	if (arch_is_platform_page(pfn << PAGE_SHIFT))
+		return 0;
 	/*
 	 * We would like to just call:
 	 *      set_memory_XX((unsigned long)pfn_to_kaddr(pfn), 1);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 73a52aba448f..3cc63682fe47 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3284,5 +3284,18 @@ static inline int seal_check_future_write(int seals, struct vm_area_struct *vma)
 	return 0;
 }
 
+#ifndef arch_memory_failure
+static inline int arch_memory_failure(unsigned long pfn, int flags)
+{
+	return -ENXIO;
+}
+#endif
+#ifndef arch_is_platform_page
+static inline bool arch_is_platform_page(u64 paddr)
+{
+	return false;
+}
+#endif
+
 #endif /* __KERNEL__ */
 #endif /* _LINUX_MM_H */
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 54879c339024..5693bac9509c 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1632,21 +1632,28 @@ int memory_failure(unsigned long pfn, int flags)
 	if (!sysctl_memory_failure_recovery)
 		panic("Memory failure on page %lx", pfn);
 
+	mutex_lock(&mf_mutex);
+
 	p = pfn_to_online_page(pfn);
 	if (!p) {
+		res = arch_memory_failure(pfn, flags);
+		if (res == 0)
+			goto unlock_mutex;
+
 		if (pfn_valid(pfn)) {
 			pgmap = get_dev_pagemap(pfn, NULL);
-			if (pgmap)
-				return memory_failure_dev_pagemap(pfn, flags,
-								  pgmap);
+			if (pgmap) {
+				res = memory_failure_dev_pagemap(pfn, flags,
+								 pgmap);
+				goto unlock_mutex;
+			}
 		}
 		pr_err("Memory failure: %#lx: memory outside kernel control\n",
 			pfn);
-		return -ENXIO;
+		res = -ENXIO;
+		goto unlock_mutex;
 	}
 
-	mutex_lock(&mf_mutex);
-
 try_again:
 	if (PageHuge(p)) {
 		res = memory_failure_hugetlb(pfn, flags);

From patchwork Fri Sep 17 21:38:35 2021
From: Tony Luck
To: Sean Christopherson, Jarkko Sakkinen, Dave Hansen
Cc: Cathy Zhang, linux-sgx@vger.kernel.org, x86@kernel.org,
    linux-kernel@vger.kernel.org, Tony Luck
Subject: [PATCH v5 6/7] x86/sgx: Add hook to error injection address validation
Date: Fri, 17 Sep 2021 14:38:35 -0700
Message-Id: <20210917213836.175138-7-tony.luck@intel.com>

SGX reserved memory does not appear in the standard address maps.

Add a hook to call into the SGX code to check whether an address is
located in SGX memory.

There are other challenges in injecting errors into SGX. Update the
documentation with a sequence of operations to inject.

Signed-off-by: Tony Luck
---
 .../firmware-guide/acpi/apei/einj.rst | 19 +++++++++++++++++++
 drivers/acpi/apei/einj.c              |  3 ++-
 2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/Documentation/firmware-guide/acpi/apei/einj.rst b/Documentation/firmware-guide/acpi/apei/einj.rst
index c042176e1707..55e2331a6438 100644
--- a/Documentation/firmware-guide/acpi/apei/einj.rst
+++ b/Documentation/firmware-guide/acpi/apei/einj.rst
@@ -181,5 +181,24 @@ You should see something like this in dmesg::
   [22715.834759] EDAC sbridge MC3: PROCESSOR 0:306e7 TIME 1422553404 SOCKET 0 APIC 0
   [22716.616173] EDAC MC3: 1 CE memory read error on CPU_SrcID#0_Channel#0_DIMM#0 (channel:0 slot:0 page:0x12345 offset:0x0 grain:32 syndrome:0x0 -  area:DRAM err_code:0001:0090 socket:0 channel_mask:1 rank:0)
 
+Special notes for injection into SGX enclaves:
+
+There may be a separate BIOS setup option to enable SGX injection.
+
+The injection process consists of setting some special memory controller
+trigger that will inject the error on the next write to the target
+address. But the h/w prevents any software outside of an SGX enclave
+from accessing enclave pages (even BIOS SMM mode).
+
+The following sequence can be used:
+  1) Determine physical address of enclave page
+  2) Use "notrigger=1" mode to inject (this will setup
+     the injection address, but will not actually inject)
+  3) Enter the enclave
+  4) Store data to the virtual address matching physical address from step 1
+  5) Execute CLFLUSH for that virtual address
+  6) Spin delay for 250ms
+  7) Read from the virtual address. This will trigger the error
+
 For more information about EINJ, please refer to ACPI specification
 version 4.0, section 17.5 and ACPI 5.0, section 18.6.
diff --git a/drivers/acpi/apei/einj.c b/drivers/acpi/apei/einj.c
index 2882450c443e..67c335baad52 100644
--- a/drivers/acpi/apei/einj.c
+++ b/drivers/acpi/apei/einj.c
@@ -544,7 +544,8 @@ static int einj_error_inject(u32 type, u32 flags, u64 param1, u64 param2,
 	    ((region_intersects(base_addr, size, IORESOURCE_SYSTEM_RAM, IORES_DESC_NONE)
 				!= REGION_INTERSECTS) &&
 	     (region_intersects(base_addr, size, IORESOURCE_MEM, IORES_DESC_PERSISTENT_MEMORY)
-				!= REGION_INTERSECTS)))
+				!= REGION_INTERSECTS) &&
+	     !arch_is_platform_page(base_addr)))
 		return -EINVAL;
 
 inject:

From patchwork Fri Sep 17 21:38:36 2021
From: Tony Luck
To: Sean Christopherson, Jarkko Sakkinen, Dave Hansen
Cc: Cathy Zhang, linux-sgx@vger.kernel.org, x86@kernel.org,
    linux-kernel@vger.kernel.org, Tony Luck
Subject: [PATCH v5 7/7] x86/sgx: Add check for SGX pages to ghes_do_memory_failure()
Date: Fri, 17 Sep 2021 14:38:36 -0700
Message-Id: <20210917213836.175138-8-tony.luck@intel.com>

SGX EPC pages do not have a "struct page" associated with them, so the
pfn_valid() sanity check fails and results in a warning message to the
console. Add an additional check to skip the warning if the address of
the error is in an SGX EPC page.
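For illustration only (not part of this patch), the effective rule after this
change, written out as a hypothetical helper (the example_ name is an assumption):
an error address is worth passing to memory_failure() if it either has a struct
page or sits in SGX EPC memory.

/* Illustrative sketch, not the actual ghes.c code. */
static bool example_addr_is_handleable(u64 physical_addr)
{
	unsigned long pfn = PHYS_PFN(physical_addr);

	/* Either a normal page-backed address or an SGX EPC address. */
	return pfn_valid(pfn) || arch_is_platform_page(physical_addr);
}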
Signed-off-by: Tony Luck
---
 drivers/acpi/apei/ghes.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index 0c8330ed1ffd..0c5c9acc6254 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -449,7 +449,7 @@ static bool ghes_do_memory_failure(u64 physical_addr, int flags)
 		return false;
 
 	pfn = PHYS_PFN(physical_addr);
-	if (!pfn_valid(pfn)) {
+	if (!pfn_valid(pfn) && !arch_is_platform_page(physical_addr)) {
 		pr_warn_ratelimited(FW_WARN GHES_PFX
 		"Invalid address in generic error data: %#llx\n",
 		physical_addr);