From patchwork Tue Sep 20 06:39:46 2022
X-Patchwork-Submitter: Zhiquan Li <zhiquan1.li@intel.com>
X-Patchwork-Id: 12981429
From: Zhiquan Li <zhiquan1.li@intel.com>
To: linux-sgx@vger.kernel.org, tony.luck@intel.com, jarkko@kernel.org,
    dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de,
    kai.huang@intel.com
Cc: seanjc@google.com, fan.du@intel.com, cathy.zhang@intel.com,
    zhiquan1.li@intel.com
Subject: [PATCH v9 1/3] x86/sgx: Rename the owner field of struct sgx_epc_page as encl_owner
Date: Tue, 20 Sep 2022 14:39:46 +0800
Message-Id: <20220920063948.3556917-2-zhiquan1.li@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220920063948.3556917-1-zhiquan1.li@intel.com>
References: <20220920063948.3556917-1-zhiquan1.li@intel.com>
X-Mailing-List: linux-sgx@vger.kernel.org

In order to send SIGBUS to the userspace hypervisor and allow it to inject
a #MC into the guest, the virtual EPC page's owner will be set to the
userspace virtual address of the EPC page.

To avoid casting, a union will be introduced in the next step to separate
the use of the owner field for SGX driver EPC pages and virtual EPC pages.

To pave the way, rename the owner of SGX driver EPC pages to 'encl_owner'
to be more specific, and update all references. There is no functional
change.

Signed-off-by: Zhiquan Li <zhiquan1.li@intel.com>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Acked-by: Kai Huang <kai.huang@intel.com>
---
No changes since V8.

Changes since V7:
- Enrich the motivation for renaming in the commit message with the
  explanation from Kai.
  Link: https://lore.kernel.org/linux-sgx/YxEyRT2SbfBdYNfm@kernel.org/T/#me02a2ce0f3cc0122e62dac496d89321d1c006807
- Add Acked-by from Jarkko.
- Add Acked-by from Kai Huang.

Changes since V6:
- Revise the commit message as suggested by Jarkko.
  Link: https://lore.kernel.org/linux-sgx/20220826160503.1576966-1-zhiquan1.li@intel.com/T/#mb201506ed06932438c82d48915cd4ceae9745bc2
---
 arch/x86/kernel/cpu/sgx/main.c | 20 ++++++++++----------
 arch/x86/kernel/cpu/sgx/sgx.h  |  2 +-
 2 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 515e2a5f25bb..1315c69a733e 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -102,7 +102,7 @@ static void __sgx_sanitize_pages(struct list_head *dirty_page_list)
 
 static bool sgx_reclaimer_age(struct sgx_epc_page *epc_page)
 {
-	struct sgx_encl_page *page = epc_page->owner;
+	struct sgx_encl_page *page = epc_page->encl_owner;
 	struct sgx_encl *encl = page->encl;
 	struct sgx_encl_mm *encl_mm;
 	bool ret = true;
@@ -134,7 +134,7 @@ static bool sgx_reclaimer_age(struct sgx_epc_page *epc_page)
 
 static void sgx_reclaimer_block(struct sgx_epc_page *epc_page)
 {
-	struct sgx_encl_page *page = epc_page->owner;
+	struct sgx_encl_page *page = epc_page->encl_owner;
 	unsigned long addr = page->desc & PAGE_MASK;
 	struct sgx_encl *encl = page->encl;
 	int ret;
@@ -191,7 +191,7 @@ void sgx_ipi_cb(void *info)
 static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
 			 struct sgx_backing *backing)
 {
-	struct sgx_encl_page *encl_page = epc_page->owner;
+	struct sgx_encl_page *encl_page = epc_page->encl_owner;
 	struct sgx_encl *encl = encl_page->encl;
 	struct sgx_va_page *va_page;
 	unsigned int va_offset;
@@ -244,7 +244,7 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
 static void sgx_reclaimer_write(struct sgx_epc_page *epc_page,
 				struct sgx_backing *backing)
 {
-	struct sgx_encl_page *encl_page = epc_page->owner;
+	struct sgx_encl_page *encl_page = epc_page->encl_owner;
 	struct sgx_encl *encl = encl_page->encl;
 	struct sgx_backing secs_backing;
 	int ret;
@@ -306,7 +306,7 @@ static void sgx_reclaim_pages(void)
 		epc_page = list_first_entry(&sgx_active_page_list,
 					    struct sgx_epc_page, list);
 		list_del_init(&epc_page->list);
-		encl_page = epc_page->owner;
+		encl_page = epc_page->encl_owner;
 
 		if (kref_get_unless_zero(&encl_page->encl->refcount) != 0)
 			chunk[cnt++] = epc_page;
@@ -320,7 +320,7 @@ static void sgx_reclaim_pages(void)
 
 	for (i = 0; i < cnt; i++) {
 		epc_page = chunk[i];
-		encl_page = epc_page->owner;
+		encl_page = epc_page->encl_owner;
 
 		if (!sgx_reclaimer_age(epc_page))
 			goto skip;
@@ -359,7 +359,7 @@ static void sgx_reclaim_pages(void)
 		if (!epc_page)
 			continue;
 
-		encl_page = epc_page->owner;
+		encl_page = epc_page->encl_owner;
 		sgx_reclaimer_write(epc_page, &backing[i]);
 
 		kref_put(&encl_page->encl->refcount, sgx_encl_release);
@@ -560,7 +560,7 @@ struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)
 	for ( ; ; ) {
 		page = __sgx_alloc_epc_page();
 		if (!IS_ERR(page)) {
-			page->owner = owner;
+			page->encl_owner = owner;
 			break;
 		}
@@ -603,7 +603,7 @@ void sgx_free_epc_page(struct sgx_epc_page *page)
 
 	spin_lock(&node->lock);
 
-	page->owner = NULL;
+	page->encl_owner = NULL;
 	if (page->poison)
 		list_add(&page->list, &node->sgx_poison_page_list);
 	else
@@ -638,7 +638,7 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
 	for (i = 0; i < nr_pages; i++) {
 		section->pages[i].section = index;
 		section->pages[i].flags = 0;
-		section->pages[i].owner = NULL;
+		section->pages[i].encl_owner = NULL;
 		section->pages[i].poison = 0;
 		list_add_tail(&section->pages[i].list, &sgx_dirty_page_list);
 	}
 
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 0f2020653fba..4d88abccd12e 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -33,7 +33,7 @@ struct sgx_epc_page {
 	unsigned int section;
 	u16 flags;
 	u16 poison;
-	struct sgx_encl_page *owner;
+	struct sgx_encl_page *encl_owner;
 	struct list_head list;
 };
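A note on the casting that this series is trying to avoid: with a single typed 'owner' pointer, virtualization code would have to smuggle a plain virtual address through a 'struct sgx_encl_page *' field. The standalone, illustrative-only C sketch below (simplified stand-ins, not kernel code) contrasts that with the anonymous-union layout introduced in the next patch.

/*
 * Illustrative-only sketch, not kernel code: contrast a single typed
 * pointer reused for two roles (which forces casts) with an anonymous
 * union. The struct names are simplified stand-ins for struct sgx_epc_page
 * and struct sgx_encl_page.
 */
#include <stdio.h>

struct encl_page { int id; };		/* stand-in for struct sgx_encl_page */

/* One typed pointer, two roles: storing a guest vaddr needs a lying cast. */
struct epc_page_cast {
	struct encl_page *owner;
};

/* Union layout: host driver uses encl_owner, virtualization uses vepc_vaddr. */
struct epc_page_union {
	union {
		struct encl_page *encl_owner;
		void *vepc_vaddr;
	};
};

int main(void)
{
	struct encl_page ep = { .id = 42 };
	unsigned long guest_vaddr = 0x7f0000001000UL;	/* made-up example address */

	struct epc_page_cast a = { .owner = (struct encl_page *)guest_vaddr };
	struct epc_page_union b = { .encl_owner = &ep };
	struct epc_page_union c = { .vepc_vaddr = (void *)guest_vaddr };

	printf("cast layout:   vaddr = 0x%lx\n", (unsigned long)a.owner);
	printf("union, host:   enclave page id = %d\n", b.encl_owner->id);
	printf("union, guest:  vaddr = %p\n", c.vepc_vaddr);
	return 0;
}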
From patchwork Tue Sep 20 06:39:47 2022
X-Patchwork-Submitter: Zhiquan Li <zhiquan1.li@intel.com>
X-Patchwork-Id: 12981430
From: Zhiquan Li <zhiquan1.li@intel.com>
To: linux-sgx@vger.kernel.org, tony.luck@intel.com, jarkko@kernel.org,
    dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de,
    kai.huang@intel.com
Cc: seanjc@google.com, fan.du@intel.com, cathy.zhang@intel.com,
    zhiquan1.li@intel.com
Subject: [PATCH v9 2/3] x86/sgx: Introduce union with vepc_vaddr field for virtualization case
Date: Tue, 20 Sep 2022 14:39:47 +0800
Message-Id: <20220920063948.3556917-3-zhiquan1.li@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220920063948.3556917-1-zhiquan1.li@intel.com>
References: <20220920063948.3556917-1-zhiquan1.li@intel.com>
X-Mailing-List: linux-sgx@vger.kernel.org

When a page triggers a machine check, only its PFN is reported. But in
order for the userspace hypervisor to inject a #MC into the guest, the
virtual address of the page is required.

The 'encl_owner' field is unused in the virtualization case, so repurpose
it as 'vepc_vaddr' - the virtual address of the virtual EPC page - so that
arch_memory_failure() can easily retrieve it.

Introduce a union rather than adding a new dedicated structure to track
the virtual address of the virtual EPC page; this also avoids playing
casting games when the field is used. Add a new EPC page flag -
SGX_EPC_PAGE_KVM_GUEST - to indicate how the field is to be interpreted.

Co-developed-by: Cathy Zhang <cathy.zhang@intel.com>
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
Signed-off-by: Zhiquan Li <zhiquan1.li@intel.com>
Acked-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
---
Changes since V8:
- Remove excess Acked-by.

Changes since V7:
- Add Acked-by from Jarkko.

No changes since V6.

Changes since V5:
- To prevent casting the 'encl_owner' field, introduce a union with
  another field - 'vepc_vaddr', as suggested by Dave Hansen.
- Add Reviewed-by from Jarkko.
  Link: https://lore.kernel.org/linux-sgx/Yrf27fugD7lkyaek@kernel.org/T/#m379d00fc7f1d43726a42b3884637532061a8c0d1

Changes since V4:
- Add Co-developed-by and Signed-off-by from Cathy Zhang, as she had fully
  discussed the flag name with Jarkko.
  Link: https://lore.kernel.org/all/df92395ade424401ac3c6322de568720@intel.com/
- Add Acked-by from Kai Huang.
  Link: https://lore.kernel.org/linux-sgx/0676cd4e-d94b-e904-81ae-ca1c05d37070@intel.com/T/#mccfb11df30698dbd060f2b6f06383cda7f154ef3

Changes since V3:
- Take the definition of the EPC page flag SGX_EPC_PAGE_KVM_GUEST from
  Cathy Zhang's third patch of the SGX rebootless recovery patch set but
  discard the irrelevant portion, since that set might need some time to
  re-forge and these are two different features.
  Link: https://lore.kernel.org/linux-sgx/41704e5d4c03b49fcda12e695595211d950cfb08.camel@kernel.org/T/#m9782d23496cacecb7da07a67daa79f4b322ae170

Changes since V2:
- Remove struct sgx_vepc_page and relevant code.
- Rework the patch as suggested by Jarkko.
- Remove the new EPC page flag SGX_EPC_PAGE_IS_VEPC definition as it
  duplicates SGX_EPC_PAGE_KVM_GUEST.
  Link: https://lore.kernel.org/linux-sgx/eb95b32ecf3d44a695610cf7f2816785@intel.com/T/#u

Changes since V1:
- Add documentation suggested by Jarkko.
---
 arch/x86/kernel/cpu/sgx/main.c | 4 ++++
 arch/x86/kernel/cpu/sgx/sgx.h  | 8 +++++++-
 arch/x86/kernel/cpu/sgx/virt.c | 4 +++-
 3 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 1315c69a733e..b319bedcaf1e 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -549,6 +549,10 @@ int sgx_unmark_page_reclaimable(struct sgx_epc_page *page)
  * Finally, wake up ksgxd when the number of pages goes below the watermark
  * before returning back to the caller.
  *
+ * When an EPC page is assigned to a KVM guest, the 'encl_owner' field is
+ * repurposed as the virtual address of the virtual EPC page, since it is
+ * unused in that scenario; the 'owner' argument is stored in 'vepc_vaddr'.
+ *
  * Return:
  *   an EPC page,
  *   -errno on error

diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 4d88abccd12e..d16a8baa28d4 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -28,12 +28,18 @@
 /* Pages on free list */
 #define SGX_EPC_PAGE_IS_FREE		BIT(1)
+/* Pages allocated for KVM guest */
+#define SGX_EPC_PAGE_KVM_GUEST		BIT(2)
 
 struct sgx_epc_page {
 	unsigned int section;
 	u16 flags;
 	u16 poison;
-	struct sgx_encl_page *encl_owner;
+	union {
+		struct sgx_encl_page *encl_owner;
+		/* Use when SGX_EPC_PAGE_KVM_GUEST set in ->flags: */
+		void __user *vepc_vaddr;
+	};
 	struct list_head list;
 };

diff --git a/arch/x86/kernel/cpu/sgx/virt.c b/arch/x86/kernel/cpu/sgx/virt.c
index 6a77a14eee38..776ae5c1c032 100644
--- a/arch/x86/kernel/cpu/sgx/virt.c
+++ b/arch/x86/kernel/cpu/sgx/virt.c
@@ -46,10 +46,12 @@ static int __sgx_vepc_fault(struct sgx_vepc *vepc,
 	if (epc_page)
 		return 0;
 
-	epc_page = sgx_alloc_epc_page(vepc, false);
+	epc_page = sgx_alloc_epc_page((void *)addr, false);
 	if (IS_ERR(epc_page))
 		return PTR_ERR(epc_page);
 
+	epc_page->flags |= SGX_EPC_PAGE_KVM_GUEST;
+
 	ret = xa_err(xa_store(&vepc->page_array, index, epc_page, GFP_KERNEL));
 	if (ret)
 		goto err_free;
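For context on where 'vepc_vaddr' comes from: a VMM obtains virtual EPC by mmap()ing /dev/sgx_vepc, and the host virtual address that faults in each EPC page is what __sgx_vepc_fault() now passes to sgx_alloc_epc_page() as the owner. A rough userspace sketch follows; the open/mmap flags are an assumption, not taken from these patches or from a specific VMM.

/*
 * Rough userspace sketch (assumed flags, minimal error handling): map
 * virtual EPC through /dev/sgx_vepc. The host virtual addresses inside
 * this mapping are what the kernel records per page in 'vepc_vaddr', and
 * what a later SIGBUS for a poisoned page will point at.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t size = 64 * 4096;	/* arbitrary example size */
	int fd = open("/dev/sgx_vepc", O_RDWR);

	if (fd < 0) {
		perror("open /dev/sgx_vepc");
		return 1;
	}

	/* The driver requires a shared mapping; EPC pages fault in lazily. */
	void *vepc = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (vepc == MAP_FAILED) {
		perror("mmap");
		close(fd);
		return 1;
	}

	printf("virtual EPC mapped at %p .. %p\n", vepc, (char *)vepc + size);

	munmap(vepc, size);
	close(fd);
	return 0;
}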
From patchwork Tue Sep 20 06:39:48 2022
X-Patchwork-Submitter: Zhiquan Li <zhiquan1.li@intel.com>
X-Patchwork-Id: 12981431
From: Zhiquan Li <zhiquan1.li@intel.com>
To: linux-sgx@vger.kernel.org, tony.luck@intel.com, jarkko@kernel.org,
    dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de,
    kai.huang@intel.com
Cc: seanjc@google.com, fan.du@intel.com, cathy.zhang@intel.com,
    zhiquan1.li@intel.com
Subject: [PATCH v9 3/3] x86/sgx: Fine grained SGX MCA behavior for virtualization
Date: Tue, 20 Sep 2022 14:39:48 +0800
Message-Id: <20220920063948.3556917-4-zhiquan1.li@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220920063948.3556917-1-zhiquan1.li@intel.com>
References: <20220920063948.3556917-1-zhiquan1.li@intel.com>
X-Mailing-List: linux-sgx@vger.kernel.org

Today, if a guest accesses an SGX EPC page that has a memory failure, the
kernel kills the entire guest. This blast radius is too large. It would be
ideal to kill only the SGX application inside the guest.

To fix this, send a SIGBUS to host userspace (like QEMU), which can follow
up by injecting a #MC into the guest.

The SGX virtual EPC driver doesn't explicitly prevent a virtual EPC
instance from being shared by multiple VMs via fork(). However, KVM
doesn't support running a VM across multiple mm structures, and the de
facto userspace hypervisor (QEMU) doesn't use fork() to create a new VM,
so in practice this should not happen.

Signed-off-by: Zhiquan Li <zhiquan1.li@intel.com>
Acked-by: Kai Huang <kai.huang@intel.com>
Link: https://lore.kernel.org/linux-sgx/443cb425-009c-2784-56f4-5e707122de76@intel.com/T/#m1d1f4098f4fad78034e8706a60e4d79c119db407
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
---
Changes since V8:
- Remove excess Acked-by.

Changes since V7:
- Add Acked-by from Jarkko.

Changes since V6:
- Fix a build warning due to the type changes.

Changes since V5:
- Use the 'vepc_vaddr' field instead of casting the 'owner' field.
- Clean up the commit message as suggested by Dave.
  Link: https://lore.kernel.org/linux-sgx/Yrf27fugD7lkyaek@kernel.org/T/#m2ff4778948cdc9ee65f09672f1d02f8dc467247b
- Add Reviewed-by from Jarkko.

Changes since V4:
- Switch the order of the two variables so that all variables are in
  reverse Christmas tree style.
- Do not initialize "ret" because it will be overridden by the return
  value of force_sig_mceerr() unconditionally.

Changes since V2:
- Retrieve the virtual address from the "owner" field of struct
  sgx_epc_page, instead of from struct sgx_vepc_page.
- Replace the EPC page flag SGX_EPC_PAGE_IS_VEPC with
  SGX_EPC_PAGE_KVM_GUEST as they duplicated each other.

Changes since V1:
- Add Acked-by from Kai Huang.
- Add Kai's excellent explanation regarding why we need not consider the
  case where one virtual EPC instance is shared by two guests.
---
 arch/x86/kernel/cpu/sgx/main.c | 24 ++++++++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index b319bedcaf1e..160c8dbee0ab 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -679,6 +679,8 @@ int arch_memory_failure(unsigned long pfn, int flags)
 	struct sgx_epc_page *page = sgx_paddr_to_page(pfn << PAGE_SHIFT);
 	struct sgx_epc_section *section;
 	struct sgx_numa_node *node;
+	void __user *vaddr;
+	int ret;
 
 	/*
 	 * mm/memory-failure.c calls this routine for all errors
@@ -695,8 +697,26 @@ int arch_memory_failure(unsigned long pfn, int flags)
 	 * error. The signal may help the task understand why the
 	 * enclave is broken.
 	 */
-	if (flags & MF_ACTION_REQUIRED)
-		force_sig(SIGBUS);
+	if (flags & MF_ACTION_REQUIRED) {
+		/*
+		 * Provide extra info to the task so that it can make further
+		 * decisions rather than simply being killed. This is quite
+		 * useful for the virtualization case.
+		 */
+		if (page->flags & SGX_EPC_PAGE_KVM_GUEST) {
+			/*
+			 * The 'encl_owner' field is repurposed: when the EPC
+			 * page was allocated, it was set to the virtual address
+			 * of the virtual EPC page.
+			 */
+			vaddr = (void *)((unsigned long)page->vepc_vaddr & PAGE_MASK);
+			ret = force_sig_mceerr(BUS_MCEERR_AR, vaddr, PAGE_SHIFT);
+			if (ret < 0)
+				pr_err("Memory failure: Error sending signal to %s:%d: %d\n",
+					current->comm, current->pid, ret);
+		} else
+			force_sig(SIGBUS);
+	}
 
 	section = &sgx_epc_sections[page->section];
 	node = section->node;
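On the receiving end of the new force_sig_mceerr() call: host userspace sees a SIGBUS whose siginfo carries BUS_MCEERR_AR and the page-aligned 'vepc_vaddr' in si_addr. The sketch below shows one way a VMM-like process could pick that up; it only prints the address where a real VMM such as QEMU would translate it into a guest address and inject a #MC, and the handler setup is an assumption, not part of the patches.

/*
 * Hedged sketch of a host-userspace SIGBUS handler. For the action-required
 * machine check forwarded by arch_memory_failure(), si_code is BUS_MCEERR_AR
 * and si_addr is the page-aligned host virtual address of the poisoned
 * virtual EPC page. A real VMM would translate that address and inject a
 * #MC into the guest instead of printing and exiting.
 */
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void sigbus_handler(int sig, siginfo_t *info, void *ucontext)
{
	(void)sig;
	(void)ucontext;

	if (info->si_code == BUS_MCEERR_AR) {
		/* Not async-signal-safe, but fine for a demo. */
		fprintf(stderr, "hwpoison on vEPC hva %p - would inject #MC here\n",
			info->si_addr);
		_exit(EXIT_FAILURE);
	}

	/* Anything else: restore the default action and die normally. */
	signal(SIGBUS, SIG_DFL);
	raise(SIGBUS);
}

int main(void)
{
	struct sigaction sa = { 0 };

	sa.sa_sigaction = sigbus_handler;
	sa.sa_flags = SA_SIGINFO;
	sigemptyset(&sa.sa_mask);

	if (sigaction(SIGBUS, &sa, NULL)) {
		perror("sigaction");
		return 1;
	}

	/*
	 * ... run the guest. A machine check on a guest-owned EPC page now
	 * arrives here as SIGBUS/BUS_MCEERR_AR instead of unconditionally
	 * killing the whole VMM process.
	 */
	pause();
	return 0;
}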