From patchwork Wed Oct 16 18:37:34 2019
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11193931
From: Sean Christopherson
To: Jarkko Sakkinen
Cc: linux-sgx@vger.kernel.org
Subject: [PATCH for_v23 v3 01/12] x86/sgx: Pass EADD the kernel's virtual address for the source page
Date: Wed, 16 Oct 2019 11:37:34 -0700
Message-Id: <20191016183745.8226-2-sean.j.christopherson@intel.com>
In-Reply-To: <20191016183745.8226-1-sean.j.christopherson@intel.com>

Use the kernel's virtual address to reference the source page when
EADDing a page to the enclave. gup() "does not guarantee that the page
exists in the user mappings", i.e. EADD can still fault and deadlock due
to mmap_sem contention if it consumes the userspace address.

Signed-off-by: Sean Christopherson
---
 arch/x86/kernel/cpu/sgx/ioctl.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index bb8fcc4f91e3..2dd0eceee111 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -328,16 +328,14 @@ static int __sgx_encl_add_page(struct sgx_encl *encl,
 	if (ret < 1)
 		return ret;
 
-	__uaccess_begin();
-
 	pginfo.secs = (unsigned long)sgx_epc_addr(encl->secs.epc_page);
 	pginfo.addr = SGX_ENCL_PAGE_ADDR(encl_page);
 	pginfo.metadata = (unsigned long)secinfo;
-	pginfo.contents = (unsigned long)src;
+	pginfo.contents = (unsigned long)kmap_atomic(src_page);
 
 	ret = __eadd(&pginfo, sgx_epc_addr(epc_page));
 
-	__uaccess_end();
+	kunmap_atomic((void *)pginfo.contents);
 
 	put_page(src_page);
 
 	return ret ?
		-EFAULT : 0;

From patchwork Wed Oct 16 18:37:35 2019
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11193911
From: Sean Christopherson
To: Jarkko Sakkinen
Cc: linux-sgx@vger.kernel.org
Subject: [PATCH for_v23 v3 02/12] x86/sgx: Check the validity of the source page address for EADD
Date: Wed, 16 Oct 2019 11:37:35 -0700
Message-Id: <20191016183745.8226-3-sean.j.christopherson@intel.com>
In-Reply-To: <20191016183745.8226-1-sean.j.christopherson@intel.com>

Add an explicit access_ok() check on EADD's source page to
avoid passing garbage to gup().

Signed-off-by: Sean Christopherson
---
 arch/x86/kernel/cpu/sgx/ioctl.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 2dd0eceee111..7d1b449bf771 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -498,6 +498,9 @@ static long sgx_ioc_enclave_add_page(struct sgx_encl *encl, void __user *arg)
 	    !IS_ALIGNED(addp.src, PAGE_SIZE))
 		return -EINVAL;
 
+	if (!(access_ok(addp.src, PAGE_SIZE)))
+		return -EFAULT;
+
 	if (addp.addr < encl->base || addp.addr - encl->base >= encl->size)
 		return -EINVAL;

From patchwork Wed Oct 16 18:37:36 2019
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11193913
From: Sean Christopherson
To: Jarkko Sakkinen
Cc: linux-sgx@vger.kernel.org
Subject: [PATCH for_v23 v3 03/12] x86/sgx: Fix EEXTEND error handling
Date: Wed, 16 Oct 2019 11:37:36 -0700
Message-Id: <20191016183745.8226-4-sean.j.christopherson@intel.com>
In-Reply-To: <20191016183745.8226-1-sean.j.christopherson@intel.com>

Rework EEXTEND error handling to fix issues related to destroying the
enclave in response to an EEXTEND failure. At the time of EEXTEND, the
page is already visible in the sense that it has been added to the radix
tree, and will therefore be processed by sgx_encl_destroy(). This means
the "add" must be fully completed prior to invoking sgx_encl_destroy()
in order to avoid consuming half-baked state.

Move sgx_encl_destroy() to the call site of __sgx_encl_extend() so that
it is somewhat more obvious why the add needs to complete before doing
EEXTEND.
Signed-off-by: Sean Christopherson
---
 arch/x86/kernel/cpu/sgx/ioctl.c | 31 ++++++++++++++++---------------
 1 file changed, 16 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 7d1b449bf771..4169ff3c81d8 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -351,18 +351,14 @@ static int __sgx_encl_extend(struct sgx_encl *encl,
 	for_each_set_bit(i, &mrmask, 16) {
 		ret = __eextend(sgx_epc_addr(encl->secs.epc_page),
 				sgx_epc_addr(epc_page) + (i * 0x100));
-		if (ret)
-			goto err_out;
+		if (ret) {
+			if (encls_failed(ret))
+				ENCLS_WARN(ret, "EEXTEND");
+			return -EFAULT;
+		}
 	}
 
 	return 0;
-
-err_out:
-	if (encls_failed(ret))
-		ENCLS_WARN(ret, "EEXTEND");
-
-	sgx_encl_destroy(encl);
-	return -EFAULT;
 }
 
 static int sgx_encl_add_page(struct sgx_encl *encl,
@@ -421,19 +417,24 @@ static int sgx_encl_add_page(struct sgx_encl *encl,
 	if (ret)
 		goto err_out;
 
-	ret = __sgx_encl_extend(encl, epc_page, addp->mrmask);
-	if (ret)
-		goto err_out;
-
+	/*
+	 * Complete the "add" before doing the "extend" so that the "add"
+	 * isn't in a half-baked state in the extremely unlikely scenario
+	 * the enclave will be destroyed in response to EEXTEND failure.
+	 */
 	encl_page->encl = encl;
 	encl_page->epc_page = epc_page;
 
 	encl->secs_child_cnt++;
-	sgx_mark_page_reclaimable(encl_page->epc_page);
+	ret = __sgx_encl_extend(encl, epc_page, addp->mrmask);
+	if (ret)
+		sgx_encl_destroy(encl);
+	else
+		sgx_mark_page_reclaimable(encl_page->epc_page);
 
 	mutex_unlock(&encl->lock);
 	up_read(&current->mm->mmap_sem);
 
-	return 0;
+	return ret;
 
 err_out:
 	radix_tree_delete(&encl_page->encl->page_tree,

From patchwork Wed Oct 16 18:37:37 2019
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11193917
From: Sean Christopherson
To: Jarkko Sakkinen
Cc: linux-sgx@vger.kernel.org
Subject: [PATCH for_v23 v3 04/12] x86/sgx: Drop mmap_sem before EEXTENDing an enclave page
Date: Wed, 16 Oct 2019 11:37:37 -0700
Message-Id: <20191016183745.8226-5-sean.j.christopherson@intel.com>
In-Reply-To: <20191016183745.8226-1-sean.j.christopherson@intel.com>

Drop mmap_sem, which needs to be held for read across EADD, prior to
doing EEXTEND on the newly added page, to avoid holding mmap_sem for an
extended duration. EEXTEND doesn't access user pages, and holding
encl->lock without mmap_sem is perfectly ok. EEXTEND is also a _slow_
operation, to the point where it operates on 256-byte chunks instead of
4k pages in order to maintain a reasonable latency for a single
instruction.

Signed-off-by: Sean Christopherson
---
 arch/x86/kernel/cpu/sgx/ioctl.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 4169ff3c81d8..7be3fdc846d7 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -409,11 +409,15 @@ static int sgx_encl_add_page(struct sgx_encl *encl,
 	 */
 	ret = radix_tree_insert(&encl->page_tree, PFN_DOWN(encl_page->desc),
 				encl_page);
-	if (ret)
+	if (ret) {
+		up_read(&current->mm->mmap_sem);
 		goto err_out_unlock;
+	}
 
 	ret = __sgx_encl_add_page(encl, encl_page, epc_page, secinfo,
 				  addp->src);
+	up_read(&current->mm->mmap_sem);
+
 	if (ret)
 		goto err_out;
 
@@ -433,7 +437,6 @@ static int sgx_encl_add_page(struct sgx_encl *encl,
 	sgx_mark_page_reclaimable(encl_page->epc_page);
 
 	mutex_unlock(&encl->lock);
-	up_read(&current->mm->mmap_sem);
 
 	return ret;
 
 err_out:
@@ -443,7 +446,6 @@ static int sgx_encl_add_page(struct sgx_encl *encl,
 err_out_unlock:
 	sgx_encl_shrink(encl, va_page);
 	mutex_unlock(&encl->lock);
-	up_read(&current->mm->mmap_sem);
 
 err_out_free:
 	sgx_free_page(epc_page);

From patchwork Wed Oct 16 18:37:38 2019
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11193925
From: Sean Christopherson
To: Jarkko Sakkinen
Cc: linux-sgx@vger.kernel.org
Subject: [PATCH for_v23 v3 05/12] x86/sgx: Remove redundant message from WARN on non-empty mm_list
Date: Wed, 16 Oct 2019 11:37:38 -0700
Message-Id: <20191016183745.8226-6-sean.j.christopherson@intel.com>
In-Reply-To: <20191016183745.8226-1-sean.j.christopherson@intel.com>

Use WARN_ON_ONCE() instead of WARN_ONCE() when detecting a non-empty
mm_list during enclave release; the "mm_list non-empty" message doesn't
provide any additional or helpful information.
Suggested-by: Jarkko Sakkinen
Signed-off-by: Sean Christopherson
---
 arch/x86/kernel/cpu/sgx/encl.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 28e29deaad8f..ae81cd7cd8a8 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -514,7 +514,7 @@ void sgx_encl_release(struct kref *ref)
 	if (encl->backing)
 		fput(encl->backing);
 
-	WARN_ONCE(!list_empty(&encl->mm_list), "mm_list non-empty");
+	WARN_ON_ONCE(!list_empty(&encl->mm_list));
 
 	/* Detect EPC page leak's. */
 	WARN_ON_ONCE(encl->secs_child_cnt);

From patchwork Wed Oct 16 18:37:39 2019
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11193919
From: Sean Christopherson
To: Jarkko Sakkinen
Cc: linux-sgx@vger.kernel.org
Subject: [PATCH for_v23 v3 06/12] x86/sgx: Fix a memory leak in sgx_encl_destroy()
Date: Wed, 16 Oct 2019 11:37:39 -0700
Message-Id: <20191016183745.8226-7-sean.j.christopherson@intel.com>
In-Reply-To: <20191016183745.8226-1-sean.j.christopherson@intel.com>

Delete an enclave page's entry in the radix tree regardless of whether
or not it has an associated EPC page, and free the page itself when it
is deleted from the radix tree. Don't free/delete anything if the page
is held by the reclaimer, as the reclaimer needs the page itself and the
driver needs the radix entry to re-process the entry during
sgx_encl_release().

Signed-off-by: Sean Christopherson
---
 arch/x86/kernel/cpu/sgx/encl.c | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index ae81cd7cd8a8..6e60520a939c 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -469,14 +469,19 @@ void sgx_encl_destroy(struct sgx_encl *encl)
 	radix_tree_for_each_slot(slot, &encl->page_tree, &iter, 0) {
 		entry = *slot;
 		if (entry->epc_page) {
-			if (!sgx_free_page(entry->epc_page)) {
-				encl->secs_child_cnt--;
-				entry->epc_page = NULL;
-			}
-
-			radix_tree_delete(&entry->encl->page_tree,
-					  PFN_DOWN(entry->desc));
+			/*
+			 * The page and its radix tree entry cannot be freed
+			 * if the page is being held by the reclaimer.
+			 */
+			if (sgx_free_page(entry->epc_page))
+				continue;
+
+			encl->secs_child_cnt--;
+			entry->epc_page = NULL;
 		}
+
+		radix_tree_delete(&entry->encl->page_tree,
+				  PFN_DOWN(entry->desc));
+		kfree(entry);
 	}
 
 	if (!encl->secs_child_cnt && encl->secs.epc_page) {

From patchwork Wed Oct 16 18:37:40 2019
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11193929
From: Sean Christopherson
To: Jarkko Sakkinen
Cc: linux-sgx@vger.kernel.org
Subject: [PATCH for_v23 v3 07/12] x86/sgx: WARN on any non-zero return from __eremove()
Date: Wed, 16 Oct 2019 11:37:40 -0700
Message-Id: <20191016183745.8226-8-sean.j.christopherson@intel.com>
In-Reply-To: <20191016183745.8226-1-sean.j.christopherson@intel.com>

WARN on any non-zero return from __eremove() to make it clear that any
kind of failure is unexpected. Technically, warning on negative values
is unnecessary as the ENCLS helpers return SGX error codes, which are
currently all positive. But the more precise check might be
misinterpreted as implying that negative values are expected/ok.

Note, prior to a recent change, warning only on positive values was
necessary to avoid a redundant double-WARN, as the WARN resided outside
of what is now sgx_free_page() and so could consume -EBUSY.

Signed-off-by: Sean Christopherson
---
 arch/x86/kernel/cpu/sgx/main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index b66d5191cbaf..15965fd1f4a2 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -137,7 +137,7 @@ int sgx_free_page(struct sgx_epc_page *page)
 	spin_unlock(&sgx_active_page_list_lock);
 
 	ret = __eremove(sgx_epc_addr(page));
-	WARN(ret > 0, "EREMOVE returned %d (0x%x)", ret, ret);
+	WARN(ret, "EREMOVE returned %d (0x%x)", ret, ret);
 
 	spin_lock(&section->lock);
 	list_add_tail(&page->list, &section->page_list);

From patchwork Wed Oct 16 18:37:41 2019
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11193915
From: Sean Christopherson
To: Jarkko Sakkinen
Cc: linux-sgx@vger.kernel.org
Subject: [PATCH for_v23 v3 08/12] x86/sgx: WARN only once if EREMOVE fails
Date: Wed, 16 Oct 2019 11:37:41 -0700
Message-Id: <20191016183745.8226-9-sean.j.christopherson@intel.com>
In-Reply-To: <20191016183745.8226-1-sean.j.christopherson@intel.com>

WARN only once if EREMOVE fails to avoid spamming the kernel log if a
catastrophic failure occurs. Warning on every failure is helpful during
development, but is a bad idea for production code, as EREMOVE rarely
fails just once...
Signed-off-by: Sean Christopherson
---
 arch/x86/kernel/cpu/sgx/main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 15965fd1f4a2..718fd5590608 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -137,7 +137,7 @@ int sgx_free_page(struct sgx_epc_page *page)
 	spin_unlock(&sgx_active_page_list_lock);
 
 	ret = __eremove(sgx_epc_addr(page));
-	WARN(ret, "EREMOVE returned %d (0x%x)", ret, ret);
+	WARN_ONCE(ret, "EREMOVE returned %d (0x%x)", ret, ret);
 
 	spin_lock(&section->lock);
 	list_add_tail(&page->list, &section->page_list);

From patchwork Wed Oct 16 18:37:42 2019
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11193921
From: Sean Christopherson
To: Jarkko Sakkinen
Cc: linux-sgx@vger.kernel.org
Subject: [PATCH for_v23 v3 09/12] x86/sgx: Split second half of sgx_free_page() to a separate helper
Date: Wed, 16 Oct 2019 11:37:42 -0700
Message-Id: <20191016183745.8226-10-sean.j.christopherson@intel.com>
In-Reply-To: <20191016183745.8226-1-sean.j.christopherson@intel.com>

Move the post-reclaim half of sgx_free_page() to a standalone helper so
that it can be used in flows where the page is known to be
non-reclaimable.

Signed-off-by: Sean Christopherson
---
 arch/x86/kernel/cpu/sgx/main.c | 44 ++++++++++++++++++++++++++--------
 arch/x86/kernel/cpu/sgx/sgx.h  |  1 +
 2 files changed, 35 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 718fd5590608..083d9a589882 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -103,6 +103,39 @@ struct sgx_epc_page *sgx_alloc_page(void *owner, bool reclaim)
 	return entry;
 }
 
+/**
+ * __sgx_free_page() - Free an EPC page
+ * @page:	pointer to a previously allocated EPC page
+ *
+ * EREMOVE an EPC page and insert it back to the list of free pages. The page
+ * must not be reclaimable.
+ */
+void __sgx_free_page(struct sgx_epc_page *page)
+{
+	struct sgx_epc_section *section;
+	int ret;
+
+	/*
+	 * Don't take sgx_active_page_list_lock when asserting the page isn't
+	 * reclaimable, missing a WARN in the very rare case is preferable to
+	 * unnecessarily taking a global lock in the common case.
+	 */
+	WARN_ON_ONCE(page->desc & SGX_EPC_PAGE_RECLAIMABLE);
+
+	ret = __eremove(sgx_epc_addr(page));
+	if (WARN_ONCE(ret, "EREMOVE returned %d (0x%x)", ret, ret))
+		return;
+
+	section = sgx_epc_section(page);
+
+	spin_lock(&section->lock);
+	list_add_tail(&page->list, &section->page_list);
+	sgx_nr_free_pages++;
+	spin_unlock(&section->lock);
+
+}
+
 /**
  * sgx_free_page() - Free an EPC page
  * @page:	pointer to a previously allocated EPC page
@@ -116,9 +149,6 @@ struct sgx_epc_page *sgx_alloc_page(void *owner, bool reclaim)
  */
 int sgx_free_page(struct sgx_epc_page *page)
 {
-	struct sgx_epc_section *section = sgx_epc_section(page);
-	int ret;
-
 	/*
 	 * Remove the page from the active list if necessary. If the page
 	 * is actively being reclaimed, i.e. RECLAIMABLE is set but the
@@ -136,13 +166,7 @@ int sgx_free_page(struct sgx_epc_page *page)
 	}
 	spin_unlock(&sgx_active_page_list_lock);
 
-	ret = __eremove(sgx_epc_addr(page));
-	WARN_ONCE(ret, "EREMOVE returned %d (0x%x)", ret, ret);
-
-	spin_lock(&section->lock);
-	list_add_tail(&page->list, &section->page_list);
-	sgx_nr_free_pages++;
-	spin_unlock(&section->lock);
+	__sgx_free_page(page);
 
 	return 0;
 }

diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 160a3c996ef6..87e375e8c25e 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -85,6 +85,7 @@ void sgx_reclaim_pages(void);
 
 struct sgx_epc_page *sgx_try_alloc_page(void);
 struct sgx_epc_page *sgx_alloc_page(void *owner, bool reclaim);
+void __sgx_free_page(struct sgx_epc_page *page);
 int sgx_free_page(struct sgx_epc_page *page);
 
 #endif /* _X86_SGX_H */

From patchwork Wed Oct 16 18:37:43 2019
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11193927
From: Sean Christopherson
To: Jarkko Sakkinen
Cc: linux-sgx@vger.kernel.org
Subject: [PATCH for_v23 v3 10/12] x86/sgx: Use the post-reclaim variant of __sgx_free_page()
Date: Wed, 16 Oct 2019 11:37:43 -0700
Message-Id: <20191016183745.8226-11-sean.j.christopherson@intel.com>
In-Reply-To: <20191016183745.8226-1-sean.j.christopherson@intel.com>

Use __sgx_free_page() in all locations where the EPC page is supposed
to be unreclaimable so that a WARN fires if the page is unexpectedly
marked for reclaim.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kernel/cpu/sgx/encl.c    | 8 ++++----
 arch/x86/kernel/cpu/sgx/ioctl.c   | 6 +++---
 arch/x86/kernel/cpu/sgx/reclaim.c | 4 ++--
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 6e60520a939c..19af43826f9b 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -73,7 +73,7 @@ static struct sgx_epc_page *sgx_encl_eldu(struct sgx_encl_page *encl_page,

 	ret = __sgx_encl_eldu(encl_page, epc_page, secs_page);
 	if (ret) {
-		sgx_free_page(epc_page);
+		__sgx_free_page(epc_page);
 		return ERR_PTR(ret);
 	}

@@ -485,7 +485,7 @@ void sgx_encl_destroy(struct sgx_encl *encl)
 	}

 	if (!encl->secs_child_cnt && encl->secs.epc_page) {
-		sgx_free_page(encl->secs.epc_page);
+		__sgx_free_page(encl->secs.epc_page);
 		encl->secs.epc_page = NULL;
 	}

@@ -498,7 +498,7 @@ void sgx_encl_destroy(struct sgx_encl *encl)
 		va_page = list_first_entry(&encl->va_pages, struct sgx_va_page,
					   list);
 		list_del(&va_page->list);
-		sgx_free_page(va_page->epc_page);
+		__sgx_free_page(va_page->epc_page);
 		kfree(va_page);
 	}
 }

@@ -696,7 +696,7 @@ struct sgx_epc_page *sgx_alloc_va_page(void)
 	ret = __epa(sgx_epc_addr(epc_page));
 	if (ret) {
 		WARN_ONCE(1, "EPA returned %d (0x%x)", ret, ret);
-		sgx_free_page(epc_page);
+		__sgx_free_page(epc_page);
 		return ERR_PTR(-EFAULT);
 	}

diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 7be3fdc846d7..07b3a9a1cda6 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -47,7 +47,7 @@ static void sgx_encl_shrink(struct sgx_encl *encl, struct sgx_va_page *va_page)
 	encl->page_cnt--;

 	if (va_page) {
-		sgx_free_page(va_page->epc_page);
+		__sgx_free_page(va_page->epc_page);
 		list_del(&va_page->list);
 		kfree(va_page);
 	}
@@ -220,7 +220,7 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
 	return 0;

err_out:
-	sgx_free_page(encl->secs.epc_page);
+	__sgx_free_page(encl->secs.epc_page);
 	encl->secs.epc_page = NULL;

err_out_backing:
@@ -448,7 +448,7 @@ static int sgx_encl_add_page(struct sgx_encl *encl,
 	mutex_unlock(&encl->lock);

err_out_free:
-	sgx_free_page(epc_page);
+	__sgx_free_page(epc_page);
 	kfree(encl_page);

 	return ret;
diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
index 8143c9a20894..3f183dd0e653 100644
--- a/arch/x86/kernel/cpu/sgx/reclaim.c
+++ b/arch/x86/kernel/cpu/sgx/reclaim.c
@@ -323,7 +323,7 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page,

 	if (!encl->secs_child_cnt) {
 		if (atomic_read(&encl->flags) & SGX_ENCL_DEAD) {
-			sgx_free_page(encl->secs.epc_page);
+			__sgx_free_page(encl->secs.epc_page);
 			encl->secs.epc_page = NULL;
 		} else if (atomic_read(&encl->flags) & SGX_ENCL_INITIALIZED) {
 			ret = sgx_encl_get_backing(encl, PFN_DOWN(encl->size),
@@ -331,7 +331,7 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page,
 			if (!ret) {
 				sgx_encl_ewb(encl->secs.epc_page, &secs_backing);
-				sgx_free_page(encl->secs.epc_page);
+				__sgx_free_page(encl->secs.epc_page);
 				encl->secs.epc_page = NULL;
 			}

From patchwork Wed Oct 16 18:37:44 2019
From: Sean Christopherson <sean.j.christopherson@intel.com>
To: Jarkko Sakkinen
Cc: linux-sgx@vger.kernel.org
Subject: [PATCH for_v23 v3 11/12] x86/sgx: Don't update free page count if EPC section allocation fails
Date: Wed, 16 Oct 2019 11:37:44 -0700
Message-Id: <20191016183745.8226-12-sean.j.christopherson@intel.com>
In-Reply-To: <20191016183745.8226-1-sean.j.christopherson@intel.com>
References: <20191016183745.8226-1-sean.j.christopherson@intel.com>

Update the number of free pages only after an EPC section is fully
initialized, else the free page count will be left in a bogus state
if allocation fails.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kernel/cpu/sgx/main.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 083d9a589882..6311aef10ec4 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -216,9 +216,10 @@ static bool __init sgx_alloc_epc_section(u64 addr, u64 size,
 		page->desc = (addr + (i << PAGE_SHIFT)) | index;
 		list_add_tail(&page->list, &section->unsanitized_page_list);
-		sgx_nr_free_pages++;
 	}

+	sgx_nr_free_pages += nr_pages;
+
 	return true;

err_out:

From patchwork Wed Oct 16 18:37:45 2019
From: Sean Christopherson <sean.j.christopherson@intel.com>
To: Jarkko Sakkinen
Cc: linux-sgx@vger.kernel.org
Subject: [PATCH for_v23 v3 12/12] x86/sgx: Reinstate per EPC section free page counts
Date: Wed, 16 Oct 2019 11:37:45 -0700
Message-Id: <20191016183745.8226-13-sean.j.christopherson@intel.com>
In-Reply-To: <20191016183745.8226-1-sean.j.christopherson@intel.com>
References: <20191016183745.8226-1-sean.j.christopherson@intel.com>

Track the free page count on a per EPC section basis so that the value
is properly protected by the section's spinlock.

As was pointed out when the change was proposed[*], using a global
non-atomic counter to track the number of free EPC pages is not safe.
The ordering of non-atomic reads and writes is not guaranteed, i.e.
concurrent RMW operations can write stale data. This causes a variety
of bad behavior, e.g. livelocks because the free page count wraps and
causes the swap thread to stop reclaiming.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kernel/cpu/sgx/main.c    | 11 +++++------
 arch/x86/kernel/cpu/sgx/reclaim.c |  4 ++--
 arch/x86/kernel/cpu/sgx/sgx.h     | 18 +++++++++++++++++-
 3 files changed, 24 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 6311aef10ec4..efbb52e4ecad 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -13,18 +13,17 @@
 struct sgx_epc_section sgx_epc_sections[SGX_MAX_EPC_SECTIONS];
 int sgx_nr_epc_sections;
-unsigned long sgx_nr_free_pages;

 static struct sgx_epc_page *__sgx_try_alloc_page(struct sgx_epc_section *section)
 {
 	struct sgx_epc_page *page;

-	if (list_empty(&section->page_list))
+	if (!section->free_cnt)
 		return NULL;

 	page = list_first_entry(&section->page_list, struct sgx_epc_page, list);
 	list_del_init(&page->list);
-	sgx_nr_free_pages--;
+	section->free_cnt--;

 	return page;
 }
@@ -97,7 +96,7 @@ struct sgx_epc_page *sgx_alloc_page(void *owner, bool reclaim)
 		schedule();
 	}

-	if (sgx_nr_free_pages < SGX_NR_LOW_PAGES)
+	if (!sgx_at_least_N_free_pages(SGX_NR_LOW_PAGES))
 		wake_up(&ksgxswapd_waitq);

 	return entry;
@@ -131,7 +130,7 @@ void __sgx_free_page(struct sgx_epc_page *page)

 	spin_lock(&section->lock);
 	list_add_tail(&page->list, &section->page_list);
-	sgx_nr_free_pages++;
+	section->free_cnt++;
 	spin_unlock(&section->lock);
 }
@@ -218,7 +217,7 @@ static bool __init sgx_alloc_epc_section(u64 addr, u64 size,
 		list_add_tail(&page->list, &section->unsanitized_page_list);
 	}

-	sgx_nr_free_pages += nr_pages;
+	section->free_cnt = nr_pages;

 	return true;

diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
index 3f183dd0e653..8619141f4bed 100644
--- a/arch/x86/kernel/cpu/sgx/reclaim.c
+++ b/arch/x86/kernel/cpu/sgx/reclaim.c
@@ -68,7 +68,7 @@ static void sgx_sanitize_section(struct sgx_epc_section *section)

 static inline bool sgx_should_reclaim(void)
 {
-	return sgx_nr_free_pages < SGX_NR_HIGH_PAGES &&
+	return !sgx_at_least_N_free_pages(SGX_NR_HIGH_PAGES) &&
	       !list_empty(&sgx_active_page_list);
 }
@@ -430,7 +430,7 @@ void sgx_reclaim_pages(void)
 		section = sgx_epc_section(epc_page);
 		spin_lock(&section->lock);
 		list_add_tail(&epc_page->list, &section->page_list);
-		sgx_nr_free_pages++;
+		section->free_cnt++;
 		spin_unlock(&section->lock);
 	}
 }
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 87e375e8c25e..c7f0277299f6 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -30,6 +30,7 @@ struct sgx_epc_page {
 struct sgx_epc_section {
 	unsigned long pa;
 	void *va;
+	unsigned long free_cnt;
 	struct list_head page_list;
 	struct list_head unsanitized_page_list;
 	spinlock_t lock;
@@ -73,12 +74,27 @@ static inline void *sgx_epc_addr(struct sgx_epc_page *page)
 #define SGX_NR_HIGH_PAGES	64

 extern int sgx_nr_epc_sections;
-extern unsigned long sgx_nr_free_pages;
 extern struct task_struct *ksgxswapd_tsk;
 extern struct wait_queue_head(ksgxswapd_waitq);
 extern struct list_head sgx_active_page_list;
 extern spinlock_t sgx_active_page_list_lock;

+static inline bool sgx_at_least_N_free_pages(unsigned long threshold)
+{
+	struct sgx_epc_section *section;
+	unsigned long free_cnt = 0;
+	int i;
+
+	for (i = 0; i < sgx_nr_epc_sections; i++) {
+		section = &sgx_epc_sections[i];
+		free_cnt += section->free_cnt;
+		if (free_cnt >= threshold)
+			return true;
+	}
+
+	return false;
+}
+
 bool __init sgx_page_reclaimer_init(void);
 void sgx_mark_page_reclaimable(struct sgx_epc_page *page);
 void sgx_reclaim_pages(void);