From patchwork Thu Apr 21 11:03:18 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Zhang, Cathy" X-Patchwork-Id: 12821470 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 61937C433F5 for ; Thu, 21 Apr 2022 11:04:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237007AbiDULHR (ORCPT ); Thu, 21 Apr 2022 07:07:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44632 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244846AbiDULHP (ORCPT ); Thu, 21 Apr 2022 07:07:15 -0400 Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DCC522E69B for ; Thu, 21 Apr 2022 04:04:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1650539064; x=1682075064; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=jiVCWZgSM1dkoNH//ZvVQSMpC4RxptWTF3CpbPCxpOU=; b=aNMXKGLlrx+wxyTR4FTcbzYQsVnucpZyWcmdVaL8Lr+UDkzJJClY3DS7 EWzFSednHdVzRHoYIuVxtHqyMOaIQlITWXPWn+vq5Kt0RZfGce/ltkcCQ pFZwjbIz5W/dLeB65unsDLpbwKe1gQYNtvNsuvLsoqiSWphVWMyPALp/F LRvx8Gv2hyXZbaLQ6SB+P7y+chQcQAOeV5q5fLUVIBXhmt9N+JCMyEIkC 7UAtGvMFnlAGAobLxjysjbAZnYGTY10pdYA02n7iaUKO6IQLtZA3mg1gy Yyoh4TBVr8/LXJGF74+o6dtNy0FfkRoOzCcbKfX9hdOwW4Mmw494/sEoL w==; X-IronPort-AV: E=McAfee;i="6400,9594,10323"; a="244893336" X-IronPort-AV: E=Sophos;i="5.90,278,1643702400"; d="scan'208";a="244893336" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Apr 2022 04:04:21 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,278,1643702400"; d="scan'208";a="703039108" Received: from cathy-vostro-3670.bj.intel.com ([10.238.156.128]) by fmsmga001.fm.intel.com with ESMTP; 21 Apr 2022 04:04:18 -0700 From: Cathy Zhang To: linux-sgx@vger.kernel.org, x86@kernel.org Cc: jarkko@kernel.org, reinette.chatre@intel.com, dave.hansen@intel.com, ashok.raj@intel.com, cathy.zhang@intel.com, chao.p.peng@intel.com, yang.zhong@intel.com Subject: [PATCH v4 1/9] x86/sgx: Introduce mechanism to prevent new initializations of EPC pages Date: Thu, 21 Apr 2022 19:03:18 +0800 Message-Id: <20220421110326.856-2-cathy.zhang@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220421110326.856-1-cathy.zhang@intel.com> References: <20220421110326.856-1-cathy.zhang@intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-sgx@vger.kernel.org == Background == EUPDATESVN is a new SGX instruction which allows enclave attestation to include information about updated microcode without a reboot. The SGX hardware maintains metadata for each enclave page to help enforce its security guarantees. This includes things like a record of the enclave to which the page belongs and the type of the page: SGX metadata like "VA" or "SECS" pages, or regular enclave pages like those that store user data. Before an EUPDATESVN operation can be successful, all SGX memory (aka. EPC) must be marked as "unused" in the SGX hardware metadata (aka, EPCM). The SGX microcode now maintains a reference count of pages which are unused to aid in determining when all pages reach the "unused" state. Both bare-metal and KVM guest EPC must be made unused. 
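For orientation, the sequence this series builds toward can be condensed into the sketch below. It is only a sketch: the function names come from this patch and from later patches in the series (patches 5, 7 and 8), and error handling is trimmed.

	/*
	 * Condensed sketch of the flow implemented across the series:
	 * freeze new EPC page initializations, EREMOVE every EPC page so
	 * that all of EPC is "unused", then issue ENCLS[EUPDATESVN].
	 */
	static void sgx_update_cpusvn_intel(void)
	{
		sgx_lock_epc();

		if (!sgx_zap_pages())
			sgx_updatesvn();

		sgx_unlock_epc();
	}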
To increase the chance of a successful EUPDATESVN, the kernel prevents existing enclaves from creating new, valid pages and prevents new enclave creation (creating an enclave involves initializing a "SECS" page). The entire EUPDATESVN process is very slow since it potentially affects gigabytes of enclave memory; it can take seconds or minutes to complete. Userspace may encounter -EBUSY errors during the update and is expected to retry. == Patch contents == Introduce a mechanism to prevent new initializations of EPC pages. Use a flag to indicate when SGX EPC pages are "locked", meaning no new EPC pages may be allocated for use. Check it in all paths that can initialize an EPC page. Use SRCU to ensure that the flag is visible across the system before proceeding with an update. Add checks to all sites that call SGX instructions that can transition pages from unused to initialized to ensure that the SRCU lock is held. Signed-off-by: Cathy Zhang --- Changes since v3: - Rename label "out" to "err" in sgx_ioc_enclave_create, update error branch accordingly. (Jarkko Sakkinen) Changes since v2: - Move out the SGX2 related change to remove the dependency. (Jarkko Sakkinen, Reinette Chatre). --- arch/x86/kernel/cpu/sgx/encls.h | 8 ++++++ arch/x86/kernel/cpu/sgx/sgx.h | 3 +++ arch/x86/kernel/cpu/sgx/encl.c | 19 +++++++++++++++ arch/x86/kernel/cpu/sgx/ioctl.c | 43 +++++++++++++++++++++++++++------ arch/x86/kernel/cpu/sgx/main.c | 37 ++++++++++++++++++++++++++++ arch/x86/kernel/cpu/sgx/virt.c | 20 +++++++++++++++ 6 files changed, 123 insertions(+), 7 deletions(-) diff --git a/arch/x86/kernel/cpu/sgx/encls.h b/arch/x86/kernel/cpu/sgx/encls.h index fa04a73daf9c..60321c5f5718 100644 --- a/arch/x86/kernel/cpu/sgx/encls.h +++ b/arch/x86/kernel/cpu/sgx/encls.h @@ -138,6 +138,8 @@ static inline bool encls_failed(int ret) static inline int __ecreate(struct sgx_pageinfo *pginfo, void *secs) { + lockdep_assert_held(&sgx_lock_epc_srcu); + return __encls_2(ECREATE, pginfo, secs); } @@ -148,6 +150,8 @@ static inline int __eextend(void *secs, void *addr) static inline int __eadd(struct sgx_pageinfo *pginfo, void *addr) { + lockdep_assert_held(&sgx_lock_epc_srcu); + return __encls_2(EADD, pginfo, addr); } @@ -179,6 +183,8 @@ static inline int __etrack(void *addr) static inline int __eldu(struct sgx_pageinfo *pginfo, void *addr, void *va) { + lockdep_assert_held(&sgx_lock_epc_srcu); + return __encls_ret_3(ELDU, pginfo, addr, va); } @@ -191,6 +197,8 @@ static inline int __epa(void *addr) { unsigned long rbx = SGX_PAGE_TYPE_VA; + lockdep_assert_held(&sgx_lock_epc_srcu); + return __encls_2(EPA, rbx, addr); } diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h index b30cee4de903..d7a1490d90bb 100644 --- a/arch/x86/kernel/cpu/sgx/sgx.h +++ b/arch/x86/kernel/cpu/sgx/sgx.h @@ -103,4 +103,7 @@ static inline int __init sgx_vepc_init(void) void sgx_update_lepubkeyhash(u64 *lepubkeyhash); +extern struct srcu_struct sgx_lock_epc_srcu; +bool sgx_epc_is_locked(void); + #endif /* _X86_SGX_H */ diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c index 02ff9ac83985..68c8d65a8dee 100644 --- a/arch/x86/kernel/cpu/sgx/encl.c +++ b/arch/x86/kernel/cpu/sgx/encl.c @@ -142,6 +142,7 @@ static vm_fault_t sgx_vma_fault(struct vm_fault *vmf) unsigned long phys_addr; struct sgx_encl *encl; vm_fault_t ret; + int srcu_idx; encl = vma->vm_private_data; @@ -153,11 +154,18 @@ static vm_fault_t sgx_vma_fault(struct vm_fault *vmf) if (unlikely(!encl)) return VM_FAULT_SIGBUS; + srcu_idx
= srcu_read_lock(&sgx_lock_epc_srcu); + if (sgx_epc_is_locked()) { + srcu_read_unlock(&sgx_lock_epc_srcu, srcu_idx); + return VM_FAULT_SIGBUS; + } + mutex_lock(&encl->lock); entry = sgx_encl_load_page(encl, addr, vma->vm_flags); if (IS_ERR(entry)) { mutex_unlock(&encl->lock); + srcu_read_unlock(&sgx_lock_epc_srcu, srcu_idx); if (PTR_ERR(entry) == -EBUSY) return VM_FAULT_NOPAGE; @@ -170,12 +178,14 @@ static vm_fault_t sgx_vma_fault(struct vm_fault *vmf) ret = vmf_insert_pfn(vma, addr, PFN_DOWN(phys_addr)); if (ret != VM_FAULT_NOPAGE) { mutex_unlock(&encl->lock); + srcu_read_unlock(&sgx_lock_epc_srcu, srcu_idx); return VM_FAULT_SIGBUS; } sgx_encl_test_and_clear_young(vma->vm_mm, entry); mutex_unlock(&encl->lock); + srcu_read_unlock(&sgx_lock_epc_srcu, srcu_idx); return VM_FAULT_NOPAGE; } @@ -323,6 +333,7 @@ static int sgx_vma_access(struct vm_area_struct *vma, unsigned long addr, struct sgx_encl_page *entry = NULL; char data[sizeof(unsigned long)]; unsigned long align; + int srcu_idx; int offset; int cnt; int ret = 0; @@ -339,8 +350,15 @@ static int sgx_vma_access(struct vm_area_struct *vma, unsigned long addr, return -EFAULT; for (i = 0; i < len; i += cnt) { + srcu_idx = srcu_read_lock(&sgx_lock_epc_srcu); + if (sgx_epc_is_locked()) { + ret = -EBUSY; + goto out; + } + entry = sgx_encl_reserve_page(encl, (addr + i) & PAGE_MASK, vma->vm_flags); + if (IS_ERR(entry)) { ret = PTR_ERR(entry); break; @@ -366,6 +384,7 @@ static int sgx_vma_access(struct vm_area_struct *vma, unsigned long addr, out: mutex_unlock(&encl->lock); + srcu_read_unlock(&sgx_lock_epc_srcu, srcu_idx); if (ret) break; diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c index b3c2e8d58142..a4df72f715d7 100644 --- a/arch/x86/kernel/cpu/sgx/ioctl.c +++ b/arch/x86/kernel/cpu/sgx/ioctl.c @@ -147,24 +147,42 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs) static long sgx_ioc_enclave_create(struct sgx_encl *encl, void __user *arg) { struct sgx_enclave_create create_arg; + int srcu_idx; void *secs; int ret; - if (test_bit(SGX_ENCL_CREATED, &encl->flags)) - return -EINVAL; + if (test_bit(SGX_ENCL_CREATED, &encl->flags)) { + ret = -EINVAL; + goto err; + } - if (copy_from_user(&create_arg, arg, sizeof(create_arg))) - return -EFAULT; + if (copy_from_user(&create_arg, arg, sizeof(create_arg))) { + ret = -EFAULT; + goto err; + } secs = kmalloc(PAGE_SIZE, GFP_KERNEL); - if (!secs) - return -ENOMEM; + if (!secs) { + ret = -ENOMEM; + goto err; + } if (copy_from_user(secs, (void __user *)create_arg.src, PAGE_SIZE)) ret = -EFAULT; - else + else { + srcu_idx = srcu_read_lock(&sgx_lock_epc_srcu); + if (sgx_epc_is_locked()) { + srcu_read_unlock(&sgx_lock_epc_srcu, srcu_idx); + ret = -EBUSY; + goto err; + } + ret = sgx_encl_create(encl, secs); + srcu_read_unlock(&sgx_lock_epc_srcu, srcu_idx); + } + +err: kfree(secs); return ret; } @@ -418,6 +436,7 @@ static long sgx_ioc_enclave_add_pages(struct sgx_encl *encl, void __user *arg) struct sgx_enclave_add_pages add_arg; struct sgx_secinfo secinfo; unsigned long c; + int srcu_idx; int ret; if (!test_bit(SGX_ENCL_CREATED, &encl->flags) || @@ -455,8 +474,18 @@ static long sgx_ioc_enclave_add_pages(struct sgx_encl *encl, void __user *arg) if (need_resched()) cond_resched(); + srcu_idx = srcu_read_lock(&sgx_lock_epc_srcu); + if (sgx_epc_is_locked()) { + ret = -EBUSY; + srcu_read_unlock(&sgx_lock_epc_srcu, srcu_idx); + break; + } + ret = sgx_encl_add_page(encl, add_arg.src + c, add_arg.offset + c, &secinfo, add_arg.flags); + + 
srcu_read_unlock(&sgx_lock_epc_srcu, srcu_idx); + if (ret) break; } diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c index b3aba1e1274c..10360f06c0df 100644 --- a/arch/x86/kernel/cpu/sgx/main.c +++ b/arch/x86/kernel/cpu/sgx/main.c @@ -23,6 +23,17 @@ static int sgx_nr_epc_sections; static struct task_struct *ksgxd_tsk; static DECLARE_WAIT_QUEUE_HEAD(ksgxd_waitq); static DEFINE_XARRAY(sgx_epc_address_space); +/* + * The flag sgx_epc_locked prevents any new SGX flows that + * may attempt to allocate a new EPC page. + */ +static bool __rcu sgx_epc_locked; +/* + * By synchronizing around sgx_epc_locked SRCU ensures that any executing + * SGX flows have completed before proceeding with an SVN update. New SGX flows + * will be prevented from starting during an SVN update. + */ +DEFINE_SRCU(sgx_lock_epc_srcu); /* * These variables are part of the state of the reclaimer, and must be accessed @@ -407,6 +418,8 @@ static bool sgx_should_reclaim(unsigned long watermark) static int ksgxd(void *p) { + int srcu_idx; + set_freezable(); /* @@ -427,9 +440,15 @@ static int ksgxd(void *p) kthread_should_stop() || sgx_should_reclaim(SGX_NR_HIGH_PAGES)); + srcu_idx = srcu_read_lock(&sgx_lock_epc_srcu); + if (sgx_epc_is_locked()) + goto maybe_resched; + if (sgx_should_reclaim(SGX_NR_HIGH_PAGES)) sgx_reclaim_pages(); +maybe_resched: + srcu_read_unlock(&sgx_lock_epc_srcu, srcu_idx); cond_resched(); } @@ -972,3 +991,21 @@ static int __init sgx_init(void) } device_initcall(sgx_init); + +static void sgx_lock_epc(void) +{ + sgx_epc_locked = true; + synchronize_srcu(&sgx_lock_epc_srcu); +} + +static void sgx_unlock_epc(void) +{ + sgx_epc_locked = false; + synchronize_srcu(&sgx_lock_epc_srcu); +} + +bool sgx_epc_is_locked(void) +{ + lockdep_assert_held(&sgx_lock_epc_srcu); + return sgx_epc_locked; +} diff --git a/arch/x86/kernel/cpu/sgx/virt.c b/arch/x86/kernel/cpu/sgx/virt.c index 6a77a14eee38..e953816d7c8b 100644 --- a/arch/x86/kernel/cpu/sgx/virt.c +++ b/arch/x86/kernel/cpu/sgx/virt.c @@ -75,10 +75,21 @@ static vm_fault_t sgx_vepc_fault(struct vm_fault *vmf) { struct vm_area_struct *vma = vmf->vma; struct sgx_vepc *vepc = vma->vm_private_data; + int srcu_idx; int ret; mutex_lock(&vepc->lock); + srcu_idx = srcu_read_lock(&sgx_lock_epc_srcu); + + if (sgx_epc_is_locked()) { + ret = -EBUSY; + goto out_unlock; + } + ret = __sgx_vepc_fault(vepc, vma, vmf->address); + +out_unlock: + srcu_read_unlock(&sgx_lock_epc_srcu, srcu_idx); mutex_unlock(&vepc->lock); if (!ret) @@ -331,6 +342,7 @@ int __init sgx_vepc_init(void) int sgx_virt_ecreate(struct sgx_pageinfo *pageinfo, void __user *secs, int *trapnr) { + int srcu_idx; int ret; /* @@ -347,6 +359,12 @@ int sgx_virt_ecreate(struct sgx_pageinfo *pageinfo, void __user *secs, if (WARN_ON_ONCE(!access_ok(secs, PAGE_SIZE))) return -EINVAL; + srcu_idx = srcu_read_lock(&sgx_lock_epc_srcu); + if (sgx_epc_is_locked()) { + srcu_read_unlock(&sgx_lock_epc_srcu, srcu_idx); + return -EBUSY; + } + __uaccess_begin(); ret = __ecreate(pageinfo, (void *)secs); __uaccess_end(); @@ -356,6 +374,8 @@ int sgx_virt_ecreate(struct sgx_pageinfo *pageinfo, void __user *secs, return -EFAULT; } + srcu_read_unlock(&sgx_lock_epc_srcu, srcu_idx); + /* ECREATE doesn't return an error code, it faults or succeeds. 
*/ WARN_ON_ONCE(ret); return 0; From patchwork Thu Apr 21 11:03:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Zhang, Cathy" X-Patchwork-Id: 12821472 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 76788C433F5 for ; Thu, 21 Apr 2022 11:05:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239906AbiDULIe (ORCPT ); Thu, 21 Apr 2022 07:08:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45514 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1388734AbiDULHg (ORCPT ); Thu, 21 Apr 2022 07:07:36 -0400 Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A561E2F03B for ; Thu, 21 Apr 2022 04:04:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1650539076; x=1682075076; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=lzF76dr5OyhgznlKi7kBwEA0d9lnZNetuc+LqiEoC00=; b=Y7TmFABYrHbf7K7vkjBPbSl7/d8gGI/h0w4eBgZ3e0oobgeqv7uvtd37 unPvSYDcbjnhbWLvpIlWVZjQKSJwzfmwBCcshQzKGWYw9wngEs9CF5YR/ nBrUpvwdrLbBlTAtkAsWmf6dGinOzCAKaDlJNVd1JNvaG/ObNekRHg8Rn iWFHBWeEQIcUIDq9pz4Qn4aswDTKmQo44G8K15fMkD3QfRtRGiPbkJOze /Hq3RKeghGLXNTV3LT9qu9fUSouvYDvJT4+KUDx2U8/rXL1zCsCT9ldiJ QnFnROKv5fZQjAulkmmlXUglZF7bfdaRHsXLJttm+7iv6Ihh6eIBQ9cA9 A==; X-IronPort-AV: E=McAfee;i="6400,9594,10323"; a="244893370" X-IronPort-AV: E=Sophos;i="5.90,278,1643702400"; d="scan'208";a="244893370" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Apr 2022 04:04:24 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,278,1643702400"; d="scan'208";a="703039124" Received: from cathy-vostro-3670.bj.intel.com ([10.238.156.128]) by fmsmga001.fm.intel.com with ESMTP; 21 Apr 2022 04:04:21 -0700 From: Cathy Zhang To: linux-sgx@vger.kernel.org, x86@kernel.org Cc: jarkko@kernel.org, reinette.chatre@intel.com, dave.hansen@intel.com, ashok.raj@intel.com, cathy.zhang@intel.com, chao.p.peng@intel.com, yang.zhong@intel.com Subject: [PATCH v4 2/9] x86/sgx: Save enclave pointer for VA page Date: Thu, 21 Apr 2022 19:03:19 +0800 Message-Id: <20220421110326.856-3-cathy.zhang@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220421110326.856-1-cathy.zhang@intel.com> References: <20220421110326.856-1-cathy.zhang@intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-sgx@vger.kernel.org Tearing down all enclaves is required by SGX SVN update, which involves running the ENCLS[EREMOVE] instruction on every EPC page. This (tearing down all enclaves) should be coordinated with any enclaves that may be in the process of exiting and thus already be running ENCLS[EREMOVE] as part of enclave release. In support of this coordination, it is required to know which enclave owns each in-use EPC page. It is already possible to locate the owning enclave of SECS and regular pages but not for VA pages. Make the following changes so that VA pages' owning enclaves can be located: 1) Make epc->owner type-agnostic by changing its type to 'void *'. So, besides "struct sgx_encl_page", it can have other types, like "struct sgx_va_page". 2) Save the enclave pointer for each VA page to support locating its owning enclave.
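As a rough illustration (this snippet is not part of this patch), once the owner field is type-agnostic, a consumer such as the zapping code added later in this series can resolve the owning enclave by checking the page-type flag introduced in the next patch:

	/* Sketch: resolve the owning enclave from a type-agnostic owner. */
	struct sgx_encl *encl;

	if (epc_page->flags & SGX_EPC_PAGE_VA) {
		struct sgx_va_page *va_page = epc_page->owner;

		encl = va_page->encl;
	} else {
		struct sgx_encl_page *encl_page = epc_page->owner;

		encl = encl_page->encl;
	}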
Note: to track 2T EPC memory, this scheme of tracking will use additional 8M memory. Signed-off-by: Cathy Zhang --- Changes since v3: - Squash patch "x86/sgx: Provide VA page non-NULL owner" and "x86/sgx: Save enclave pointer for VA page". Update commit log. (Suggested by Jarkko Sakkinen) --- arch/x86/kernel/cpu/sgx/encl.h | 4 ++-- arch/x86/kernel/cpu/sgx/sgx.h | 2 +- arch/x86/kernel/cpu/sgx/encl.c | 5 +++-- arch/x86/kernel/cpu/sgx/ioctl.c | 3 ++- 4 files changed, 8 insertions(+), 6 deletions(-) diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h index 7cdc351bc273..59fbd4ed5c64 100644 --- a/arch/x86/kernel/cpu/sgx/encl.h +++ b/arch/x86/kernel/cpu/sgx/encl.h @@ -76,6 +76,7 @@ struct sgx_va_page { struct sgx_epc_page *epc_page; DECLARE_BITMAP(slots, SGX_VA_SLOT_COUNT); struct list_head list; + struct sgx_encl *encl; }; struct sgx_backing { @@ -112,8 +113,7 @@ int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index, void sgx_encl_put_backing(struct sgx_backing *backing, bool do_write); int sgx_encl_test_and_clear_young(struct mm_struct *mm, struct sgx_encl_page *page); - -struct sgx_epc_page *sgx_alloc_va_page(void); +struct sgx_epc_page *sgx_alloc_va_page(struct sgx_va_page *va_page); unsigned int sgx_alloc_va_slot(struct sgx_va_page *va_page); void sgx_free_va_slot(struct sgx_va_page *va_page, unsigned int offset); bool sgx_va_page_full(struct sgx_va_page *va_page); diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h index d7a1490d90bb..f8ed9deac18b 100644 --- a/arch/x86/kernel/cpu/sgx/sgx.h +++ b/arch/x86/kernel/cpu/sgx/sgx.h @@ -33,7 +33,7 @@ struct sgx_epc_page { unsigned int section; u16 flags; u16 poison; - struct sgx_encl_page *owner; + void *owner; struct list_head list; }; diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c index 68c8d65a8dee..c0725111cc25 100644 --- a/arch/x86/kernel/cpu/sgx/encl.c +++ b/arch/x86/kernel/cpu/sgx/encl.c @@ -753,6 +753,7 @@ int sgx_encl_test_and_clear_young(struct mm_struct *mm, /** * sgx_alloc_va_page() - Allocate a Version Array (VA) page + * @va_page: struct sgx_va_page connected to this VA page * * Allocate a free EPC page and convert it to a Version Array (VA) page. 
* @@ -760,12 +761,12 @@ int sgx_encl_test_and_clear_young(struct mm_struct *mm, * a VA page, * -errno otherwise */ -struct sgx_epc_page *sgx_alloc_va_page(void) +struct sgx_epc_page *sgx_alloc_va_page(struct sgx_va_page *va_page) { struct sgx_epc_page *epc_page; int ret; - epc_page = sgx_alloc_epc_page(NULL, true); + epc_page = sgx_alloc_epc_page(va_page, true); if (IS_ERR(epc_page)) return ERR_CAST(epc_page); diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c index a4df72f715d7..b77343eb2d49 100644 --- a/arch/x86/kernel/cpu/sgx/ioctl.c +++ b/arch/x86/kernel/cpu/sgx/ioctl.c @@ -30,7 +30,8 @@ static struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl) if (!va_page) return ERR_PTR(-ENOMEM); - va_page->epc_page = sgx_alloc_va_page(); + va_page->encl = encl; + va_page->epc_page = sgx_alloc_va_page(va_page); if (IS_ERR(va_page->epc_page)) { err = ERR_CAST(va_page->epc_page); kfree(va_page); From patchwork Thu Apr 21 11:03:20 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Zhang, Cathy" X-Patchwork-Id: 12821477 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D0A5CC433EF for ; Thu, 21 Apr 2022 11:05:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1388694AbiDULIh (ORCPT ); Thu, 21 Apr 2022 07:08:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45706 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1388837AbiDULHy (ORCPT ); Thu, 21 Apr 2022 07:07:54 -0400 Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 737812E6B9 for ; Thu, 21 Apr 2022 04:04:48 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1650539088; x=1682075088; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=ZtEw984FSyt7Mg/5qmNCm9wpwSTesJEyrJihUI/RwSo=; b=CvZZsUXdlHAKkCNkLeuKQJCvBOs8cof11lU1m4NCjW0RYACsVWV9uCFn dzDNJCP3SlzW/XVJd3wNpGTPfLywheXf/bkdPjAi6ZMzRRhJSW/0cv9zu h98RPo5mEN5dSam7f91MAjt27uwktSCMCROnWNqTgGZaXz5vQCab00xOk 4OwnmEsSFrtyUvGP57KBZfSSkQXWYDASXw+xQJnKR/qJWbART9T4AQrRJ ZWS0pFe6y54zmsD6dKg+TxFjdojVbnuPabUtVWdoQmrdIBRUwSu0h5UuG qNUmJ0476pHNF8tKuQ6WfaZmjlKCX3X/XHHQ6ProqIAY5WNo2sGd9j20T A==; X-IronPort-AV: E=McAfee;i="6400,9594,10323"; a="244893389" X-IronPort-AV: E=Sophos;i="5.90,278,1643702400"; d="scan'208";a="244893389" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Apr 2022 04:04:27 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,278,1643702400"; d="scan'208";a="703039133" Received: from cathy-vostro-3670.bj.intel.com ([10.238.156.128]) by fmsmga001.fm.intel.com with ESMTP; 21 Apr 2022 04:04:24 -0700 From: Cathy Zhang To: linux-sgx@vger.kernel.org, x86@kernel.org Cc: jarkko@kernel.org, reinette.chatre@intel.com, dave.hansen@intel.com, ashok.raj@intel.com, cathy.zhang@intel.com, chao.p.peng@intel.com, yang.zhong@intel.com Subject: [PATCH v4 3/9] x86/sgx: Keep record for SGX VA and Guest page type Date: Thu, 21 Apr 2022 19:03:20 +0800 Message-Id: <20220421110326.856-4-cathy.zhang@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220421110326.856-1-cathy.zhang@intel.com> References: 
<20220421110326.856-1-cathy.zhang@intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-sgx@vger.kernel.org Regular enclave EPC pages have sgx_encl_page as their owner, but SGX VA page and KVM guest EPC page are maintained by different owner structures. SGX CPUSVN update requires to know the EPC page owner's status and then decide how to handle the page. Keep a record of page type for SGX VA and KVM guest page while the other EPC pages already have their type tracked, so that CPUSVN update can get EPC page's owner by type and handle it then. Signed-off-by: Cathy Zhang --- Changes since v3: - Rename SGX_EPC_PAGE_GUEST as SGX_EPC_PAGE_KVM_GUEST. (Suggested by Jarkko, Sakkinen) --- arch/x86/kernel/cpu/sgx/sgx.h | 4 ++++ arch/x86/kernel/cpu/sgx/encl.c | 2 ++ arch/x86/kernel/cpu/sgx/virt.c | 2 ++ 3 files changed, 8 insertions(+) diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h index f8ed9deac18b..4ad0e5396eef 100644 --- a/arch/x86/kernel/cpu/sgx/sgx.h +++ b/arch/x86/kernel/cpu/sgx/sgx.h @@ -28,6 +28,10 @@ /* Pages on free list */ #define SGX_EPC_PAGE_IS_FREE BIT(1) +/* VA page */ +#define SGX_EPC_PAGE_VA BIT(2) +/* Pages allocated for KVM guest */ +#define SGX_EPC_PAGE_KVM_GUEST BIT(3) struct sgx_epc_page { unsigned int section; diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c index c0725111cc25..a01a72637e2e 100644 --- a/arch/x86/kernel/cpu/sgx/encl.c +++ b/arch/x86/kernel/cpu/sgx/encl.c @@ -777,6 +777,8 @@ struct sgx_epc_page *sgx_alloc_va_page(struct sgx_va_page *va_page) return ERR_PTR(-EFAULT); } + epc_page->flags |= SGX_EPC_PAGE_VA; + return epc_page; } diff --git a/arch/x86/kernel/cpu/sgx/virt.c b/arch/x86/kernel/cpu/sgx/virt.c index e953816d7c8b..104487b72fb8 100644 --- a/arch/x86/kernel/cpu/sgx/virt.c +++ b/arch/x86/kernel/cpu/sgx/virt.c @@ -50,6 +50,8 @@ static int __sgx_vepc_fault(struct sgx_vepc *vepc, if (IS_ERR(epc_page)) return PTR_ERR(epc_page); + epc_page->flags |= SGX_EPC_PAGE_KVM_GUEST; + ret = xa_err(xa_store(&vepc->page_array, index, epc_page, GFP_KERNEL)); if (ret) goto err_free; From patchwork Thu Apr 21 11:03:21 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Zhang, Cathy" X-Patchwork-Id: 12821474 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1CC91C4332F for ; Thu, 21 Apr 2022 11:05:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244846AbiDULIe (ORCPT ); Thu, 21 Apr 2022 07:08:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46126 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1388859AbiDULHz (ORCPT ); Thu, 21 Apr 2022 07:07:55 -0400 Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 889A6FFD for ; Thu, 21 Apr 2022 04:04:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1650539094; x=1682075094; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=j4UJUF70NSk0RzWaLoZTcW+T+o/dWb4AKAXSlaYPkuQ=; b=lsUg5cC+WfNCfslyGJAismCdT7yu6/isx24TPxxeqnjaZNTeeceQlKFq SbHhKISt6LeChayQQonkx7W/2IhRzy7PXrxEubq7yMP2gt2D09Vd+RudL OzQBf2ewHkfATGunABoKTtH1uhq830QsTmZjdrLAOPZROLE6Jx0AQQU6g 
Qs/XYbTU5Jw0WrjgtplNTHYaNRmj+OGp8ApvlYzQ3NhUNTYsQlFbwQIYl mJY45D3fLA2i3+6XbJ0nSLCcAaxaaeBXspFMLdq6qPH1zngztZLhfD4cG NLq/5XyUXS4sB7QVLYlBG8/dWrdql2SZrI4fVKDm/H/Pcm1oO7VDpzcPb w==; X-IronPort-AV: E=McAfee;i="6400,9594,10323"; a="244893400" X-IronPort-AV: E=Sophos;i="5.90,278,1643702400"; d="scan'208";a="244893400" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Apr 2022 04:04:30 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,278,1643702400"; d="scan'208";a="703039147" Received: from cathy-vostro-3670.bj.intel.com ([10.238.156.128]) by fmsmga001.fm.intel.com with ESMTP; 21 Apr 2022 04:04:27 -0700 From: Cathy Zhang To: linux-sgx@vger.kernel.org, x86@kernel.org Cc: jarkko@kernel.org, reinette.chatre@intel.com, dave.hansen@intel.com, ashok.raj@intel.com, cathy.zhang@intel.com, chao.p.peng@intel.com, yang.zhong@intel.com Subject: [PATCH v4 4/9] x86/sgx: Save the size of each EPC section Date: Thu, 21 Apr 2022 19:03:21 +0800 Message-Id: <20220421110326.856-5-cathy.zhang@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220421110326.856-1-cathy.zhang@intel.com> References: <20220421110326.856-1-cathy.zhang@intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-sgx@vger.kernel.org SGX CPUSVN update process should check all EPC pages to ensure they are marked as unused. For EPC pages are stored in EPC sections, it's required to save the size of each section, as the indicator for the end of each section's traversing to unuse EPC pages. Signed-off-by: Cathy Zhang --- Changes since v3: - Update commit log to explain clearly why record the size. (Suggested by Jarkko Sakkinen) --- arch/x86/kernel/cpu/sgx/sgx.h | 1 + arch/x86/kernel/cpu/sgx/main.c | 1 + 2 files changed, 2 insertions(+) diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h index 4ad0e5396eef..775477e0b8af 100644 --- a/arch/x86/kernel/cpu/sgx/sgx.h +++ b/arch/x86/kernel/cpu/sgx/sgx.h @@ -63,6 +63,7 @@ struct sgx_epc_section { void *virt_addr; struct sgx_epc_page *pages; struct sgx_numa_node *node; + u64 size; }; extern struct sgx_epc_section sgx_epc_sections[SGX_MAX_EPC_SECTIONS]; diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c index 10360f06c0df..031c1402cd7e 100644 --- a/arch/x86/kernel/cpu/sgx/main.c +++ b/arch/x86/kernel/cpu/sgx/main.c @@ -665,6 +665,7 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size, } section->phys_addr = phys_addr; + section->size = size; xa_store_range(&sgx_epc_address_space, section->phys_addr, phys_addr + size - 1, section, GFP_KERNEL); From patchwork Thu Apr 21 11:03:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Zhang, Cathy" X-Patchwork-Id: 12821476 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1BC55C433FE for ; Thu, 21 Apr 2022 11:05:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352928AbiDULIg (ORCPT ); Thu, 21 Apr 2022 07:08:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46200 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1388866AbiDULH5 (ORCPT ); Thu, 21 Apr 2022 07:07:57 -0400 Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) 
with ESMTPS id CF823219C for ; Thu, 21 Apr 2022 04:04:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1650539094; x=1682075094; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=QsRhyDYNjLunq5ptmiVmqJLwK8MTtIYTWmSHvCSsf/c=; b=ADLr1LxjmD73Td9zJGNMmTHG21neDKqJxmzcjIRS2tbu2jdwqWeJUij/ eFufPhnoA7C8Jl/ogiAE6bHT1+wYrWuTcj3EajJ27TbQPeo6lhqtUq5o0 vTzi/J+VIu2MbT03DOSZQ9dNeyBtIaCLfu0rR64R7SVr+hJ2kl/NikIUF Jz3wXMjHueqrZBlBh3AysY94/DFX24cQ068gZBOfHwY5FprCxQbtkzzpD Pvv9mbY9tQekcpoyi1gmeI3Iv3+jbiRZlYLUEXDpa1mddCH4GZm0sbbEx UEmiNnnWpL7jAorog8Xil/TYH17e4rMJOpM/N3c/1THcwcmztSZ1aEvQ+ A==; X-IronPort-AV: E=McAfee;i="6400,9594,10323"; a="244893424" X-IronPort-AV: E=Sophos;i="5.90,278,1643702400"; d="scan'208";a="244893424" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Apr 2022 04:04:33 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,278,1643702400"; d="scan'208";a="703039174" Received: from cathy-vostro-3670.bj.intel.com ([10.238.156.128]) by fmsmga001.fm.intel.com with ESMTP; 21 Apr 2022 04:04:30 -0700 From: Cathy Zhang To: linux-sgx@vger.kernel.org, x86@kernel.org Cc: jarkko@kernel.org, reinette.chatre@intel.com, dave.hansen@intel.com, ashok.raj@intel.com, cathy.zhang@intel.com, chao.p.peng@intel.com, yang.zhong@intel.com Subject: [PATCH v4 5/9] x86/sgx: Forced EPC page zapping for EUPDATESVN Date: Thu, 21 Apr 2022 19:03:22 +0800 Message-Id: <20220421110326.856-6-cathy.zhang@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220421110326.856-1-cathy.zhang@intel.com> References: <20220421110326.856-1-cathy.zhang@intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-sgx@vger.kernel.org Before an EUPDATESVN instruction can be successful, all enclave pages (EPC) must be marked as unused in the SGX hardware metadata (EPCM). A page becomes unused when an issued EREMOVE instruction succeeds. To prepare for EUPDATESVN, loop over all SGX pages and attempt to EREMOVE them. This is fatal to running enclaves and destroys all enclave state and memory contents. This destruction is by design and mitigates any compromise of enclaves or the SGX hardware itself which occurred before the microcode update. An EREMOVE operation on a page may fail for a few reasons. Each has its own mitigations. First, EREMOVE will fail if an enclave that uses the page is executing. Send an IPI to all CPUs that might be running the enclave to force it out of the enclave long enough to EREMOVE the page. Other CPUs might enter the enclave in the meantime, so this is not a rock-solid guarantee. Second, EREMOVE can fail on special SGX metadata pages, such as SECS. EREMOVE will work on them only after the normal SGX pages that depend on them have been EREMOVE'd. Defer handling those pages and repeat EREMOVE after the dependency has been addressed. Signed-off-by: Cathy Zhang --- Changes since v3: - Rename SGX_EPC_PAGE_GUEST as SGX_EPC_PAGE_KVM_GUEST (Suggested by Jarkko Sakkinen) - Remove "VA" from sentence "Second, EREMOVE can fail on special SGX metadata...", for except concurrency rules, only SECS pages might be EREMOVEd failed and will be tracked for a deferred handling. 
(Suggested by Jarkko Sakkinen) --- arch/x86/kernel/cpu/sgx/sgx.h | 13 ++ arch/x86/kernel/cpu/sgx/encl.c | 14 +- arch/x86/kernel/cpu/sgx/main.c | 347 +++++++++++++++++++++++++++++++++ 3 files changed, 373 insertions(+), 1 deletion(-) diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h index 775477e0b8af..d90532957181 100644 --- a/arch/x86/kernel/cpu/sgx/sgx.h +++ b/arch/x86/kernel/cpu/sgx/sgx.h @@ -32,6 +32,17 @@ #define SGX_EPC_PAGE_VA BIT(2) /* Pages allocated for KVM guest */ #define SGX_EPC_PAGE_KVM_GUEST BIT(3) +/* + * Pages, failed to be zapped (EREMOVED) + * by SGX CPUSVN update process. + */ +#define SGX_EPC_PAGE_ZAP_TRACKED BIT(4) +/* + * Pages, the associated enclave is being + * released while SGX CPUSVN update is + * running. + */ +#define SGX_EPC_PAGE_IN_RELEASE BIT(5) struct sgx_epc_page { unsigned int section; @@ -110,5 +121,7 @@ void sgx_update_lepubkeyhash(u64 *lepubkeyhash); extern struct srcu_struct sgx_lock_epc_srcu; bool sgx_epc_is_locked(void); +void sgx_zap_wakeup(void); +void sgx_zap_abort(void); #endif /* _X86_SGX_H */ diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c index a01a72637e2e..be177a5e3292 100644 --- a/arch/x86/kernel/cpu/sgx/encl.c +++ b/arch/x86/kernel/cpu/sgx/encl.c @@ -457,6 +457,12 @@ void sgx_encl_release(struct kref *ref) WARN_ON_ONCE(encl->secs_child_cnt); WARN_ON_ONCE(encl->secs.epc_page); + /* + * EPC pages were freed and EREMOVE was executed. Wake + * up any zappers which were waiting for this. + */ + sgx_zap_wakeup(); + kfree(encl); } @@ -840,8 +846,14 @@ void sgx_encl_free_epc_page(struct sgx_epc_page *page) WARN_ON_ONCE(page->flags & SGX_EPC_PAGE_RECLAIMER_TRACKED); ret = __eremove(sgx_get_epc_virt_addr(page)); - if (WARN_ONCE(ret, EREMOVE_ERROR_MESSAGE, ret, ret)) + if (WARN_ONCE(ret, EREMOVE_ERROR_MESSAGE, ret, ret)) { + /* + * The EREMOVE failed. If a CPUSVN is in progress, + * it is now expected to fail. Notify it. + */ + sgx_zap_abort(); return; + } sgx_free_epc_page(page); } diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c index 031c1402cd7e..72317866ddaa 100644 --- a/arch/x86/kernel/cpu/sgx/main.c +++ b/arch/x86/kernel/cpu/sgx/main.c @@ -34,6 +34,16 @@ static bool __rcu sgx_epc_locked; * will be prevented from starting during an SVN update. */ DEFINE_SRCU(sgx_lock_epc_srcu); +static DECLARE_WAIT_QUEUE_HEAD(sgx_zap_waitq); + +/* The flag means to abort the SGX CPUSVN update process */ +static bool sgx_zap_abort_wait; +/* + * Track the number of SECS and VA pages associated with enclaves + * in releasing. SGX CPUSVN update will wait for them EREMOVEd by + * enclave exiting process. + */ +static atomic_t zap_waiting_count; /* * These variables are part of the state of the reclaimer, and must be accessed @@ -636,6 +646,24 @@ void sgx_free_epc_page(struct sgx_epc_page *page) spin_lock(&node->lock); + /* + * The page is EREMOVEd, stop tracking it + * as a deferred target for CPUSVN update + * process. + */ + if ((page->flags & SGX_EPC_PAGE_ZAP_TRACKED) && + (!list_empty(&page->list))) + list_del(&page->list); + + /* + * The page is EREMOVEd, decrease + * "zap_waiting_count" to stop counting it + * as a waiting target for CPUSVN update + * process. 
+ */ + if (page->flags & SGX_EPC_PAGE_IN_RELEASE) + atomic_dec_if_positive(&zap_waiting_count); + page->owner = NULL; if (page->poison) list_add(&page->list, &node->sgx_poison_page_list); @@ -1010,3 +1038,322 @@ bool sgx_epc_is_locked(void) lockdep_assert_held(&sgx_lock_epc_srcu); return sgx_epc_locked; } + +/** + * sgx_zap_encl_page - unuse one EPC page + * @section: EPC section + * @epc_page: EPC page + * @secs_pages_list: list to track SECS pages that failed to be EREMOVEd + * + * Zap an EPC page if it's used by an enclave. + * + * Returns: + * 0: EPC page is unused or EREMOVE succeeds. + * -EBUSY: EREMOVE failed because other threads are executing + * in the enclave. + * -EIO: Other EREMOVE failures, like EPC leaks. + */ +static int sgx_zap_encl_page(struct sgx_epc_section *section, + struct sgx_epc_page *epc_page, + struct list_head *secs_pages_list) +{ + struct sgx_encl *encl; + struct sgx_encl_page *encl_page; + struct sgx_va_page *va_page; + int retry_count = 10; + int ret = 0; + + /* + * Hold the per-section lock to ensure the "owner" + * field is not cleared while it is being checked. + */ + spin_lock(&section->node->lock); + + /* + * A NULL "owner" field means the page is unused. + */ + if (!epc_page->owner) { + spin_unlock(&section->node->lock); + return 0; + } + + if (epc_page->flags & SGX_EPC_PAGE_VA) { + va_page = epc_page->owner; + encl = va_page->encl; + } else { + encl_page = epc_page->owner; + encl = encl_page->encl; + } + + if (!encl) { + spin_unlock(&section->node->lock); + /* + * The page has an owner but no enclave is + * associated with it. This might be caused by + * an EPC leak in the enclave's release path. + */ + ret = __eremove(sgx_get_epc_virt_addr(epc_page)); + if (WARN_ONCE(ret, EREMOVE_ERROR_MESSAGE, ret, ret)) + ret = -EIO; + else + sgx_free_epc_page(epc_page); + return ret; + } + + /* + * Wait for any enclave already being released to complete + * but prevent any additional enclave from starting release + * while we operate on it. + */ + if (!kref_get_unless_zero(&encl->refcount)) { + + /* + * The enclave is exiting. The EUPDATESVN + * procedure needs to wait for the EREMOVE + * operation which happens as a part of + * the enclave exit operation. Use + * "zap_waiting_count" to indicate to the + * EUPDATESVN code when it needs to wait. + */ + if (((epc_page->flags & SGX_EPC_PAGE_VA) || + (encl_page->type == SGX_PAGE_TYPE_SECS)) && + !(epc_page->flags & SGX_EPC_PAGE_IN_RELEASE)) { + atomic_inc(&zap_waiting_count); + epc_page->flags |= SGX_EPC_PAGE_IN_RELEASE; + } + + spin_unlock(&section->node->lock); + return 0; + } + + spin_unlock(&section->node->lock); + + /* + * This EREMOVE has two main purposes: + * 1. Getting EPC pages into the "unused" state. + * Every EPC page must be unused before an + * EUPDATESVN can succeed. + * 2. Forcing enclaves to exit more frequently. + * EREMOVE will not succeed while any thread is + * running in the enclave. Every successful + * EREMOVE increases the chance that an enclave + * will trip over this page, fault, and exit. + * This, in turn, increases the likelihood of + * success for every future EREMOVE attempt. + */ + ret = __eremove(sgx_get_epc_virt_addr(epc_page)); + + if (!ret) { + /* + * The SECS page was EREMOVEd successfully this time. + * Remove it from the list to stop tracking it. + */ + if ((epc_page->flags & SGX_EPC_PAGE_ZAP_TRACKED) && + !list_empty(&epc_page->list)) { + list_del_init(&epc_page->list); + epc_page->flags &= ~SGX_EPC_PAGE_ZAP_TRACKED; + } + goto out; + } + + if (ret == SGX_CHILD_PRESENT) { + /* + * The SECS page failed to be EREMOVEd because + * it still has children. Add it to "secs_pages_list" + * for deferred handling. + */ + if (!(epc_page->flags & SGX_EPC_PAGE_ZAP_TRACKED) && + secs_pages_list) { + epc_page->flags |= SGX_EPC_PAGE_ZAP_TRACKED; + list_add_tail(&epc_page->list, secs_pages_list); + } + ret = 0; + goto out; + } + + if (ret) { + /* + * EREMOVE will fail on a page if the owning + * enclave is executing. An IPI will cause the + * enclave to exit, providing an opportunity to + * EREMOVE the page, but it does not guarantee + * the page will be EREMOVEd successfully. Retry + * several times; if it keeps failing, return + * -EBUSY to notify userspace to retry. + */ + do { + on_each_cpu_mask(sgx_encl_cpumask(encl), sgx_ipi_cb, NULL, true); + ret = __eremove(sgx_get_epc_virt_addr(epc_page)); + if (!ret) + break; + retry_count--; + } while (retry_count); + + if (ret) + ret = -EBUSY; + } + +out: + kref_put(&encl->refcount, sgx_encl_release); + return ret; +} + +/** + * sgx_zap_section_pages - unuse one EPC section's pages + * @section: EPC section + * @secs_pages_list: list to track SECS pages that failed to be EREMOVEd + * + * Iterate through the pages in one EPC section and unuse the pages + * initialized for enclaves on bare metal. + * + * TODO: EPC pages for KVM guests will be handled in the future. + * + * Returns: + * 0: All the section's EPC pages are unused. + * -EBUSY: EREMOVE failed because other threads are executing + * in the enclave. + * -EIO: Other EREMOVE failures, like EPC leaks. + */ +static int sgx_zap_section_pages(struct sgx_epc_section *section, + struct list_head *secs_pages_list) +{ + struct sgx_epc_page *epc_page; + int i, ret = 0; + unsigned long nr_pages = section->size >> PAGE_SHIFT; + + for (i = 0; i < nr_pages; i++) { + epc_page = &section->pages[i]; + + /* + * The EPC page has a NULL owner, indicating + * it's unused. No action required for + * this case. + * + * No new owner can be assigned when SGX + * is "frozen". + */ + if (!epc_page->owner) + continue; + + /* + * Try to "unuse" all SGX memory used by enclaves + * on bare metal. + * + * Failures might be caused by the following reasons: + * 1. EREMOVE failure due to other threads executing + * in the enclave. Return -EBUSY to notify userspace + * for a later retry. + * 2. Other EREMOVE failures. For example, a bug in + * SGX memory management like a leak that lost + * track of an SGX EPC page. Upon these failures, + * do not even attempt EUPDATESVN. + */ + if (!(epc_page->flags & SGX_EPC_PAGE_KVM_GUEST)) { + ret = sgx_zap_encl_page(section, epc_page, secs_pages_list); + if (ret) + return ret; + } + } + + return ret; +} + +/** + * sgx_zap_pages - unuse all EPC sections' pages + * + * Context: This function is called with the microcode_mutex lock + * held by the caller, which ensures that the update + * process will not run concurrently. + * + * Returns: + * 0: All enclaves have been torn down and + * all EPC pages are unused. + * -ERESTARTSYS: Interrupted by a signal. + * -EBUSY: EREMOVE failed because other threads are executing + * in the enclave. + * -EIO: Other EREMOVE failures, like EPC leaks.
+ */ +static int sgx_zap_pages(void) +{ + struct sgx_epc_page *epc_page, *tmp; + struct sgx_epc_section *section; + int i, ret = 0; + + LIST_HEAD(secs_pages_list); + + for (i = 0; i < ARRAY_SIZE(sgx_epc_sections); i++) { + section = &sgx_epc_sections[i]; + if (!section->pages) + break; + /* + * Go through the section's pages and try to EREMOVE + * each one, except the ones associated with enclaves + * in releasing. + */ + ret = sgx_zap_section_pages(section, &secs_pages_list); + if (WARN_ON_ONCE(ret)) + goto out; + } + + /* + * The SECS page should have no associations now, try + * EREMOVE them again. + */ + list_for_each_entry_safe(epc_page, tmp, &secs_pages_list, list) { + section = &sgx_epc_sections[epc_page->section]; + ret = sgx_zap_encl_page(section, epc_page, NULL); + if (ret) + goto out; + } + + /* + * There might be pages in the process of being freed + * by exiting enclaves. Wait for the exiting process + * to succeed or fail. + */ + ret = wait_event_interruptible(sgx_zap_waitq, + (!atomic_read(&zap_waiting_count) || + sgx_zap_abort_wait)); + if (ret == -ERESTARTSYS) { + pr_err("CPUSVN update is not finished yet, but killed by userspace\n"); + goto out; + } + + if (sgx_zap_abort_wait) { + ret = -EIO; + pr_err("exit-side EREMOVE failure. Aborting CPUSVN update\n"); + goto out; + } + +out: + return ret; +} + +/** + * sgx_zap_wakeup - wake up CPUSVN update process + * + * Whenever enclave is freed, this function will + * be called to check if all EPC pages are unused. + * Wake up the CPUSVN update process if it's true. + */ +void sgx_zap_wakeup(void) +{ + if (wq_has_sleeper(&sgx_zap_waitq) && + !atomic_read(&zap_waiting_count)) + wake_up(&sgx_zap_waitq); +} + +/** + * sgx_zap_abort - abort SGX CPUSVN update process + * + * When EPC leaks happen in enclave release process, + * it will set flag sgx_zap_abort_wait as true to + * abort the CPUSVN update process. 
+ */ +void sgx_zap_abort(void) +{ + sgx_zap_abort_wait = true; + wake_up(&sgx_zap_waitq); +} From patchwork Thu Apr 21 11:03:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Zhang, Cathy" X-Patchwork-Id: 12821475 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 89117C433EF for ; Thu, 21 Apr 2022 11:05:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352661AbiDULIf (ORCPT ); Thu, 21 Apr 2022 07:08:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46546 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1388932AbiDULIE (ORCPT ); Thu, 21 Apr 2022 07:08:04 -0400 Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7459518E0F for ; Thu, 21 Apr 2022 04:05:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1650539107; x=1682075107; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=+3y6DxvsctMxXZh/+jol0SmDk7ld6S7QJDcag3c4+r0=; b=VWeDgCxkOmk/rBd2IyUHgTpUDTZJNG4o1rU6ix2LPGi6fLNyeTTG9VgI ZSmtPBMe9ty1HoTdKoknn4F/PWssNPc2B4eTaWpzd2oXqu42XwxAehsmc qoo2DBhuvbAmeJK88ijAq0n1x90I2Xcwt1t7M1wVcB1vZJB5auqFdo5hO Dlrq5JpDDHefOgQ41/vdQ4M7PW96+Tdw7uTLhvcIzKYGO52RNslpBypft sof2hiNLSnd1sIignIdHzwtiQlE9jAsgbBI1gb0uNWGfw2Af198SJwdWx HxYmhd9GMBVQyLp/A+QJ6uzHvj4F4KKIngEvViCIWRacWtDEy8hnw0nvL w==; X-IronPort-AV: E=McAfee;i="6400,9594,10323"; a="244893431" X-IronPort-AV: E=Sophos;i="5.90,278,1643702400"; d="scan'208";a="244893431" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Apr 2022 04:04:35 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,278,1643702400"; d="scan'208";a="703039184" Received: from cathy-vostro-3670.bj.intel.com ([10.238.156.128]) by fmsmga001.fm.intel.com with ESMTP; 21 Apr 2022 04:04:33 -0700 From: Cathy Zhang To: linux-sgx@vger.kernel.org, x86@kernel.org Cc: jarkko@kernel.org, reinette.chatre@intel.com, dave.hansen@intel.com, ashok.raj@intel.com, cathy.zhang@intel.com, chao.p.peng@intel.com, yang.zhong@intel.com Subject: [PATCH v4 6/9] x86/sgx: Define error codes for ENCLS[EUPDATESVN] Date: Thu, 21 Apr 2022 19:03:23 +0800 Message-Id: <20220421110326.856-7-cathy.zhang@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220421110326.856-1-cathy.zhang@intel.com> References: <20220421110326.856-1-cathy.zhang@intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-sgx@vger.kernel.org Add error codes for ENCLS[EUPDATESVN], then SGX CPUSVN update process can know the execution state of EUPDATESVN and notify userspace. Signed-off-by: Cathy Zhang --- arch/x86/include/asm/sgx.h | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/arch/x86/include/asm/sgx.h b/arch/x86/include/asm/sgx.h index 03ec7e32659c..4a8ca7281229 100644 --- a/arch/x86/include/asm/sgx.h +++ b/arch/x86/include/asm/sgx.h @@ -65,17 +65,27 @@ enum sgx_encls_function { /** * enum sgx_return_code - The return code type for ENCLS, ENCLU and ENCLV + * %SGX_EPC_PAGE_CONFLICT Page is being written by other ENCLS function. * %SGX_NOT_TRACKED: Previous ETRACK's shootdown sequence has not * been completed yet. 
* %SGX_CHILD_PRESENT SECS has child pages present in the EPC. * %SGX_INVALID_EINITTOKEN: EINITTOKEN is invalid and enclave signer's * public key does not match IA32_SGXLEPUBKEYHASH. + * %SGX_INSUFFICIENT_ENTROPY: Insufficient entropy in RNG. + * %SGX_EPC_NOT_READY: EPC is not ready for SVN update. + * %SGX_NO_UPDATE: EUPDATESVN was successful, but CPUSVN was not + * updated because current SVN was not newer than + * CPUSVN. * %SGX_UNMASKED_EVENT: An unmasked event, e.g. INTR, was received */ enum sgx_return_code { + SGX_EPC_PAGE_CONFLICT = 7, SGX_NOT_TRACKED = 11, SGX_CHILD_PRESENT = 13, SGX_INVALID_EINITTOKEN = 16, + SGX_INSUFFICIENT_ENTROPY = 29, + SGX_EPC_NOT_READY = 30, + SGX_NO_UPDATE = 31, SGX_UNMASKED_EVENT = 128, }; From patchwork Thu Apr 21 11:03:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Zhang, Cathy" X-Patchwork-Id: 12821473 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5957BC433FE for ; Thu, 21 Apr 2022 11:05:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S245741AbiDULIf (ORCPT ); Thu, 21 Apr 2022 07:08:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45642 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1388757AbiDULHk (ORCPT ); Thu, 21 Apr 2022 07:07:40 -0400 Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EC2CF2F3AE for ; Thu, 21 Apr 2022 04:04:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1650539079; x=1682075079; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=rgvfROZEViuFt6jnJPQBGeEBIuSm+JPCSlpAf02Cxxg=; b=Cvht/ZOzXQsdCxl/Hj+z9iisTKwvPTO4Mgj4vh5Wjx+u8I+y/VQBDns1 cz51PwnDwIm+Z3vZ1bnJIeIO0kDmBBEHJAKTq3bW4aB/N30FZVhlOSCla AqCZ312R/W+NV3ZI6tdvScQCoZrvS8uV1OLmZYnCnLroPiGDyru9lbAk0 q1CUlz15CB1FazG3yJ71mAggRH5EED5Wk3a34DrhiOW0BSkz9iMsg+cp0 NK4S565TrB5BXnDbQ8h796zCuzPndOG05yVQxg799QxZ4W/Z+WnJLiiQw xJOiu2QvscoyuKAv9Zl2ZhxF98yYXqxeGTUdShxaFpu3ESwd3en/8O34+ Q==; X-IronPort-AV: E=McAfee;i="6400,9594,10323"; a="251631197" X-IronPort-AV: E=Sophos;i="5.90,278,1643702400"; d="scan'208";a="251631197" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Apr 2022 04:04:39 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,278,1643702400"; d="scan'208";a="703039229" Received: from cathy-vostro-3670.bj.intel.com ([10.238.156.128]) by fmsmga001.fm.intel.com with ESMTP; 21 Apr 2022 04:04:36 -0700 From: Cathy Zhang To: linux-sgx@vger.kernel.org, x86@kernel.org Cc: jarkko@kernel.org, reinette.chatre@intel.com, dave.hansen@intel.com, ashok.raj@intel.com, cathy.zhang@intel.com, chao.p.peng@intel.com, yang.zhong@intel.com Subject: [PATCH v4 7/9] x86/sgx: Implement ENCLS[EUPDATESVN] Date: Thu, 21 Apr 2022 19:03:24 +0800 Message-Id: <20220421110326.856-8-cathy.zhang@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220421110326.856-1-cathy.zhang@intel.com> References: <20220421110326.856-1-cathy.zhang@intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-sgx@vger.kernel.org The SGX attestation architecture assumes a compromise of all running enclaves and cryptographic assets (like internal SGX 
encryption keys) whenever a microcode update affects SGX. To mitigate the impact of this presumed compromise, a new supervisor SGX instruction: ENCLS[EUPDATESVN], is introduced to update SGX microcode version and generate new cryptographic assets in runtime after SGX microcode update. EUPDATESVN requires that SGX memory to be marked as "unused" before it will succeed. This ensures that no compromised enclave can survive the process and provides an opportunity to generate new cryptographic assets. Signed-off-by: Cathy Zhang --- Changes since v1: - Print message for each return code to notify userspace the ENCLS[EUPDATESVN] execution status. --- arch/x86/include/asm/sgx.h | 33 +++++++++++---------- arch/x86/kernel/cpu/sgx/encls.h | 6 ++++ arch/x86/kernel/cpu/sgx/main.c | 52 +++++++++++++++++++++++++++++++++ 3 files changed, 76 insertions(+), 15 deletions(-) diff --git a/arch/x86/include/asm/sgx.h b/arch/x86/include/asm/sgx.h index 4a8ca7281229..74bcb6841a4b 100644 --- a/arch/x86/include/asm/sgx.h +++ b/arch/x86/include/asm/sgx.h @@ -26,23 +26,26 @@ #define SGX_CPUID_EPC_SECTION 0x1 /* The bitmask for the EPC section type. */ #define SGX_CPUID_EPC_MASK GENMASK(3, 0) +/* EUPDATESVN presence indication */ +#define SGX_CPUID_EUPDATESVN BIT(10) enum sgx_encls_function { - ECREATE = 0x00, - EADD = 0x01, - EINIT = 0x02, - EREMOVE = 0x03, - EDGBRD = 0x04, - EDGBWR = 0x05, - EEXTEND = 0x06, - ELDU = 0x08, - EBLOCK = 0x09, - EPA = 0x0A, - EWB = 0x0B, - ETRACK = 0x0C, - EAUG = 0x0D, - EMODPR = 0x0E, - EMODT = 0x0F, + ECREATE = 0x00, + EADD = 0x01, + EINIT = 0x02, + EREMOVE = 0x03, + EDGBRD = 0x04, + EDGBWR = 0x05, + EEXTEND = 0x06, + ELDU = 0x08, + EBLOCK = 0x09, + EPA = 0x0A, + EWB = 0x0B, + ETRACK = 0x0C, + EAUG = 0x0D, + EMODPR = 0x0E, + EMODT = 0x0F, + EUPDATESVN = 0x18, }; /** diff --git a/arch/x86/kernel/cpu/sgx/encls.h b/arch/x86/kernel/cpu/sgx/encls.h index 60321c5f5718..8455f385e817 100644 --- a/arch/x86/kernel/cpu/sgx/encls.h +++ b/arch/x86/kernel/cpu/sgx/encls.h @@ -208,4 +208,10 @@ static inline int __ewb(struct sgx_pageinfo *pginfo, void *addr, return __encls_ret_3(EWB, pginfo, addr, va); } +/* Update CPUSVN at runtime. */ +static inline int __eupdatesvn(void) +{ + return __encls_ret_1(EUPDATESVN, ""); +} + #endif /* _X86_ENCLS_H */ diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c index 72317866ddaa..fbb093c9fe1a 100644 --- a/arch/x86/kernel/cpu/sgx/main.c +++ b/arch/x86/kernel/cpu/sgx/main.c @@ -1357,3 +1357,55 @@ void sgx_zap_abort(void) sgx_zap_abort_wait = true; wake_up(&sgx_zap_waitq); } + +/** + * sgx_updatesvn() - Issue ENCLS[EUPDATESVN] + * If EPC is ready, this instruction will update CPUSVN to the currently + * loaded microcode update SVN and generate new cryptographic assets. + * + * Return: + * 0: CPUSVN is update successfully. + * %SGX_LOCKFAIL: An instruction concurrency rule was violated. + * %SGX_INSUFFICIENT_ENTROPY: Insufficient entropy in RNG. + * %SGX_EPC_NOT_READY: EPC is not ready for SVN update. + * %SGX_NO_UPDATE: EUPDATESVN was successful, but CPUSVN was not + * updated because current SVN was not newer than + * CPUSVN. 
+ */ +static int sgx_updatesvn(void) +{ + int ret; + int retry = 10; + + do { + ret = __eupdatesvn(); + if (ret != SGX_INSUFFICIENT_ENTROPY) + break; + + } while (--retry); + + switch (ret) { + case 0: + pr_info("EUPDATESVN was successful!\n"); + break; + case SGX_NO_UPDATE: + pr_info("EUPDATESVN was successful, but CPUSVN was not updated, " + "because current SVN was not newer than CPUSVN.\n"); + break; + case SGX_EPC_NOT_READY: + pr_info("EPC is not ready for SVN update."); + break; + case SGX_INSUFFICIENT_ENTROPY: + pr_info("CPUSVN update is failed due to Insufficient entropy in RNG, " + "please try it later.\n"); + break; + case SGX_EPC_PAGE_CONFLICT: + pr_info("CPUSVN update is failed due to concurrency violation, please " + "stop running any other ENCLS leaf and try it later.\n"); + break; + default: + break; + } + + return ret; +} From patchwork Thu Apr 21 11:03:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Zhang, Cathy" X-Patchwork-Id: 12821478 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 43B59C4332F for ; Thu, 21 Apr 2022 11:05:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1388699AbiDULIi (ORCPT ); Thu, 21 Apr 2022 07:08:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46178 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1388807AbiDULHw (ORCPT ); Thu, 21 Apr 2022 07:07:52 -0400 Received: from mga06.intel.com (mga06b.intel.com [134.134.136.31]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 885592F004 for ; Thu, 21 Apr 2022 04:04:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1650539084; x=1682075084; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=5UZQOP41k04KXqjGY+XF9VRmCK0/zWpd1s1rgX0eZ50=; b=e6boCFVoMwhVf2MpkxGmm0xYCDeaeVl/YZE2ftf66HqfX1QxCeCq5VLM irK9+e+iPyMNYy1DMFHYJc5oB/s9BjXMUOkv4LPAfeSpaLE/R9BZ71/E6 N+npUn64OiT9w4Dnz96z5SkIKusT9cP2oReQtNJifa+nyXI/WDWN+Lm4U MOedXC4aGJNMjjOGykmn7CqL0fiTjolMuP0GZXMBfnD+i6rfoWZ1EkQJ1 hA4CdLYHdngIGd3GxJh9NICiTRqbUdtzqKyIkICu1VekXfav0GpvjCcaB ArtzJ1jm4Bv5Kf17W+gBUFxyMDonxu3u1UZRSVqrhd+JsyU+AiFsOZ+e1 g==; X-IronPort-AV: E=McAfee;i="6400,9594,10323"; a="324759113" X-IronPort-AV: E=Sophos;i="5.90,278,1643702400"; d="scan'208";a="324759113" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Apr 2022 04:04:43 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,278,1643702400"; d="scan'208";a="703039248" Received: from cathy-vostro-3670.bj.intel.com ([10.238.156.128]) by fmsmga001.fm.intel.com with ESMTP; 21 Apr 2022 04:04:39 -0700 From: Cathy Zhang To: linux-sgx@vger.kernel.org, x86@kernel.org Cc: jarkko@kernel.org, reinette.chatre@intel.com, dave.hansen@intel.com, ashok.raj@intel.com, cathy.zhang@intel.com, chao.p.peng@intel.com, yang.zhong@intel.com Subject: [PATCH v4 8/9] x86/cpu: Call ENCLS[EUPDATESVN] procedure in microcode update Date: Thu, 21 Apr 2022 19:03:25 +0800 Message-Id: <20220421110326.856-9-cathy.zhang@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220421110326.856-1-cathy.zhang@intel.com> References: <20220421110326.856-1-cathy.zhang@intel.com> Precedence: bulk List-ID: X-Mailing-List: 
From patchwork Thu Apr 21 11:03:25 2022
X-Patchwork-Submitter: "Zhang, Cathy"
X-Patchwork-Id: 12821478
From: Cathy Zhang
To: linux-sgx@vger.kernel.org, x86@kernel.org
Cc: jarkko@kernel.org, reinette.chatre@intel.com, dave.hansen@intel.com,
    ashok.raj@intel.com, cathy.zhang@intel.com, chao.p.peng@intel.com,
    yang.zhong@intel.com
Subject: [PATCH v4 8/9] x86/cpu: Call ENCLS[EUPDATESVN] procedure in microcode update
Date: Thu, 21 Apr 2022 19:03:25 +0800
Message-Id: <20220421110326.856-9-cathy.zhang@intel.com>
In-Reply-To: <20220421110326.856-1-cathy.zhang@intel.com>
References: <20220421110326.856-1-cathy.zhang@intel.com>
X-Mailing-List: linux-sgx@vger.kernel.org

EUPDATESVN is the SGX instruction which allows enclave attestation to include
information about updated microcode without a reboot.

Microcode updates which affect SGX require two phases:
1. Do the main microcode update.
2. Make the new CPUSVN available for enclave attestation via EUPDATESVN.

Before an EUPDATESVN can succeed, all enclave pages (EPC) must be marked as
unused in the SGX metadata (EPCM). This operation destroys all preexisting SGX
enclave data and metadata. This is by design and mitigates the impact of
vulnerabilities that may have compromised enclaves or the SGX hardware itself
prior to the update.

Signed-off-by: Cathy Zhang
---
Changes since v3:
- Rename update_cpusvn_intel() as sgx_update_cpusvn_intel(). (Dave Hansen)
- Refine the comment where sgx_update_cpusvn_intel() is called by
  microcode_check(). (Borislav Petkov, Dave Hansen)
- Define both the 'static inline' stub *and* the declaration for
  sgx_update_cpusvn_intel() in sgx.h. (Dave Hansen)

Changes since v1:
- Remove the sysfs file svnupdate. (Thomas Gleixner, Dave Hansen)
- Let the late microcode load path call the ENCLS[EUPDATESVN] procedure
  directly. (Borislav Petkov)
- Redefine update_cpusvn_intel() to return void instead of int.
---
 arch/x86/include/asm/microcode.h |  1 +
 arch/x86/include/asm/sgx.h       |  6 ++++++
 arch/x86/kernel/cpu/common.c     | 10 ++++++++++
 arch/x86/kernel/cpu/sgx/main.c   | 12 ++++++++++++
 4 files changed, 29 insertions(+)

diff --git a/arch/x86/include/asm/microcode.h b/arch/x86/include/asm/microcode.h
index d6bfdfb0f0af..ec12392af371 100644
--- a/arch/x86/include/asm/microcode.h
+++ b/arch/x86/include/asm/microcode.h
@@ -3,6 +3,7 @@
 #define _ASM_X86_MICROCODE_H
 
 #include
+#include
 #include
 #include
 
diff --git a/arch/x86/include/asm/sgx.h b/arch/x86/include/asm/sgx.h
index 74bcb6841a4b..1321670a6338 100644
--- a/arch/x86/include/asm/sgx.h
+++ b/arch/x86/include/asm/sgx.h
@@ -409,4 +409,10 @@ int sgx_virt_einit(void __user *sigstruct, void __user *token,
 int sgx_set_attribute(unsigned long *allowed_attributes,
 		      unsigned int attribute_fd);
 
+#ifdef CONFIG_X86_SGX
+extern void sgx_update_cpusvn_intel(void);
+#else
+static inline void sgx_update_cpusvn_intel(void) {}
+#endif
+
 #endif /* _ASM_X86_SGX_H */
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 7b8382c11788..41bed20b586d 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -59,6 +59,7 @@
 #include
 #include
 #include
+#include
 
 #include "cpu.h"
 
@@ -2086,6 +2087,15 @@ void microcode_check(void)
 
 	perf_check_microcode();
 
+	/*
+	 * SGX attestation incorporates the microcode versions of all processors
+	 * on the system and is affected by microcode updates. So, update the SGX
+	 * attestation metric (called CPUSVN) to ensure enclaves attest to the
+	 * new version after a microcode update.
+	 */
+	if (IS_ENABLED(CONFIG_X86_SGX) && (cpuid_eax(SGX_CPUID) & SGX_CPUID_EUPDATESVN))
+		sgx_update_cpusvn_intel();
+
 	/* Reload CPUID max function as it might've changed. */
 	info.cpuid_level = cpuid_eax(0);
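
The hunk above hooks microcode_check(), which runs at the end of a late
microcode load. For illustration only (not part of the patch), the sketch
below drives that path from userspace; it is the programmatic equivalent of
"echo 1 > /sys/devices/system/cpu/microcode/reload", which is handled by
reload_store() in arch/x86/kernel/cpu/microcode/core.c and, on success, ends
in microcode_check() and hence in sgx_update_cpusvn_intel().

/* Illustration only: trigger a late microcode load from userspace. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/sys/devices/system/cpu/microcode/reload", O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Errors from the reload path are reported on this write. */
	if (write(fd, "1", 1) != 1)
		perror("write");

	close(fd);
	return 0;
}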
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index fbb093c9fe1a..20be96a79cc1 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -1409,3 +1409,15 @@ static int sgx_updatesvn(void)
 
 	return ret;
 }
+
+void sgx_update_cpusvn_intel(void)
+{
+	sgx_lock_epc();
+	if (sgx_zap_pages())
+		goto out;
+
+	sgx_updatesvn();
+
+out:
+	sgx_unlock_epc();
+}
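
The v1 changelog above notes that this function was deliberately changed to
return void: when the EPC cannot be emptied, the CPUSVN update is simply
skipped and the system keeps running with the old CPUSVN. A hypothetical
variant that logs the skip is sketched below; it assumes sgx_zap_pages(),
added earlier in this series, returns a nonzero value on failure, and it is
not part of the patch.

/* Hypothetical variant: report why the CPUSVN update was skipped. */
void sgx_update_cpusvn_intel(void)
{
	int ret;

	sgx_lock_epc();

	ret = sgx_zap_pages();
	if (ret) {
		pr_info("EPC not empty (%d), skipping CPUSVN update\n", ret);
		goto out;
	}

	sgx_updatesvn();
out:
	sgx_unlock_epc();
}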
From patchwork Thu Apr 21 11:03:26 2022
X-Patchwork-Submitter: "Zhang, Cathy"
X-Patchwork-Id: 12821471
From: Cathy Zhang
To: linux-sgx@vger.kernel.org, x86@kernel.org
Cc: jarkko@kernel.org, reinette.chatre@intel.com, dave.hansen@intel.com,
    ashok.raj@intel.com, cathy.zhang@intel.com, chao.p.peng@intel.com,
    yang.zhong@intel.com
Subject: [PATCH v4 9/9] x86/sgx: Call ENCLS[EUPDATESVN] during SGX initialization
Date: Thu, 21 Apr 2022 19:03:26 +0800
Message-Id: <20220421110326.856-10-cathy.zhang@intel.com>
In-Reply-To: <20220421110326.856-1-cathy.zhang@intel.com>
References: <20220421110326.856-1-cathy.zhang@intel.com>
X-Mailing-List: linux-sgx@vger.kernel.org

A snapshot of the processor microcode SVN is taken each boot cycle when Intel
SGX is first used. This results in microcode updates being loadable at any
time to fix microcode issues.

However, when the system boots through kexec() for error recovery, no hardware
reset happens. SGX leaf execution during such a boot is not treated as the
first use, so no new snapshot of the SVN is taken. It is therefore necessary
to call ENCLS[EUPDATESVN] to update the SVN automatically, rather than waiting
for an admin who may not even be aware that it is needed.

Calling ENCLS[EUPDATESVN] after sanitizing pages increases the chance of
success, since it requires that the EPC be empty.

Signed-off-by: Cathy Zhang
---
Changes since v3:
- Rename as sgx_update_cpusvn_intel().

Changes since v1:
- Update accordingly for update_cpusvn_intel() returning *void*.
---
 arch/x86/kernel/cpu/sgx/main.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 20be96a79cc1..caf4f0db47f9 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -426,6 +426,7 @@ static bool sgx_should_reclaim(unsigned long watermark)
 		!list_empty(&sgx_active_page_list);
 }
 
+void sgx_update_cpusvn_intel(void);
 static int ksgxd(void *p)
 {
 	int srcu_idx;
@@ -440,7 +441,14 @@ static int ksgxd(void *p)
 	__sgx_sanitize_pages(&sgx_dirty_page_list);
 
 	/* sanity check: */
-	WARN_ON(!list_empty(&sgx_dirty_page_list));
+	if (!WARN_ON(!list_empty(&sgx_dirty_page_list))) {
+		/*
+		 * Do the SVN update for kexec(). It should complete without
+		 * error, since all EPC pages are unused at this point.
+		 */
+		if (cpuid_eax(SGX_CPUID) & SGX_CPUID_EUPDATESVN)
+			sgx_update_cpusvn_intel();
+	}
 
 	while (!kthread_should_stop()) {
 		if (try_to_freeze())