From patchwork Tue Aug 13 01:12:45 2019
From: Sean Christopherson
To: Jarkko Sakkinen
Cc: linux-sgx@vger.kernel.org, Andy Lutomirski
Subject: [PATCH for_v22 v2 1/8] selftests/x86/sgx: Align enclave binary on 4k boundary
Date: Mon, 12 Aug 2019 18:12:45 -0700
Message-Id: <20190813011252.4121-2-sean.j.christopherson@intel.com>

Align the enclave's binary blob to 4096 bytes so that a pointer to the
blob satisfies hardware's requirement that the source data for EADD be
page aligned.  An upcoming kernel change will extend the alignment
requirement to userspace so that the kernel can avoid copying the
source into an internal buffer, i.e. pass the userspace pointer
directly to EADD.
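Since the blob's address will eventually be handed to EADD as-is, the
linkage is easy to sanity-check from the selftest side.  A minimal
sketch (not part of the selftest; it assumes only the encl_bin symbol
that encl_piggy.S already exports):

#include <assert.h>
#include <stdint.h>

extern const uint8_t encl_bin[];	/* emitted by encl_piggy.S */

static void check_encl_bin_alignment(void)
{
	/* EADD requires the source data to start on a 4k boundary. */
	assert(((uintptr_t)encl_bin & 4095) == 0);
}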
Signed-off-by: Sean Christopherson
---
 tools/testing/selftests/x86/sgx/encl_piggy.S | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tools/testing/selftests/x86/sgx/encl_piggy.S b/tools/testing/selftests/x86/sgx/encl_piggy.S
index 542001658afb..a7f6447abbba 100644
--- a/tools/testing/selftests/x86/sgx/encl_piggy.S
+++ b/tools/testing/selftests/x86/sgx/encl_piggy.S
@@ -4,6 +4,7 @@
  */

 	.section ".rodata", "a"
+	.balign	4096

 encl_bin:
 	.globl encl_bin

From patchwork Tue Aug 13 01:12:46 2019
From: Sean Christopherson
To: Jarkko Sakkinen
Cc: linux-sgx@vger.kernel.org, Andy Lutomirski
Subject: [PATCH for_v22 v2 2/8] x86/sgx: Require EADD source to be page aligned
Date: Mon, 12 Aug 2019 18:12:46 -0700
Message-Id: <20190813011252.4121-3-sean.j.christopherson@intel.com>

Reject the EADD ioctl() if the source address provided by userspace is
not page aligned.  Page alignment is required by hardware, but this is
not enforced on userspace as the kernel first copies the source page to
an internal (page aligned) buffer.  Require the userspace address to be
page aligned in preparation for reworking EADD to directly consume the
userspace address.
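In practice this means the buffer a userspace runtime hands in as the
EADD source must itself start on a 4k boundary.  A sketch of one way to
stage such a buffer (illustrative only, not part of this series; mmap()
returns page-aligned memory, and aligned_alloc(4096, ...) would work
equally well):

#include <stddef.h>
#include <string.h>
#include <sys/mman.h>

/*
 * Copy up to one page of enclave data into a freshly mmap()'d buffer so
 * that the resulting address satisfies the new alignment check.
 */
static void *stage_eadd_source(const void *data, size_t len)
{
	void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return NULL;

	memcpy(buf, data, len < 4096 ? len : 4096);
	return buf;
}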
Signed-off-by: Sean Christopherson
---
 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
index 9b784a061a47..bc65249ed5df 100644
--- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
@@ -624,7 +624,8 @@ static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg)
 	if (copy_from_user(&addp, arg, sizeof(addp)))
 		return -EFAULT;

-	if (!IS_ALIGNED(addp.addr, PAGE_SIZE))
+	if (!IS_ALIGNED(addp.addr, PAGE_SIZE) ||
+	    !IS_ALIGNED(addp.src, PAGE_SIZE))
 		return -EINVAL;

 	if (addp.addr < encl->base || addp.addr - encl->base >= encl->size)

From patchwork Tue Aug 13 01:12:47 2019
From: Sean Christopherson
To: Jarkko Sakkinen
Cc: linux-sgx@vger.kernel.org, Andy Lutomirski
Subject: [PATCH for_v22 v2 3/8] x86/sgx: Validate generic SECINFO immediately after copying from user
Date: Mon, 12 Aug 2019 18:12:47 -0700
Message-Id: <20190813011252.4121-4-sean.j.christopherson@intel.com>

When adding pages to the enclave, verify the SECINFO flags provided by
userspace are valid prior to consuming the protection bits, and to
avoid allocating a page when SECINFO is invalid.
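sgx_validate_secinfo() itself is outside this diff.  As a rough
illustration of the kind of checks that now run before the page
allocation (the SGX_SECINFO_* flag and page-type names are taken from
the driver's UAPI header; the exact rules below are an approximation,
not the driver's implementation):

/* Illustrative approximation of a SECINFO sanity check. */
static int example_validate_secinfo(const struct sgx_secinfo *secinfo)
{
	u64 page_type = secinfo->flags & SGX_SECINFO_PAGE_TYPE_MASK;

	/* A page that is writable but not readable is invalid. */
	if ((secinfo->flags & SGX_SECINFO_W) &&
	    !(secinfo->flags & SGX_SECINFO_R))
		return -EINVAL;

	/* Only regular and TCS pages can be added via EADD here. */
	if (page_type != SGX_SECINFO_REG && page_type != SGX_SECINFO_TCS)
		return -EINVAL;

	return 0;
}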
Signed-off-by: Sean Christopherson --- arch/x86/kernel/cpu/sgx/driver/ioctl.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c index bc65249ed5df..5831f51d64cd 100644 --- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c +++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c @@ -519,8 +519,6 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr, struct sgx_va_page *va_page; int ret; - if (sgx_validate_secinfo(secinfo)) - return -EINVAL; if (page_type == SGX_SECINFO_TCS) { ret = sgx_validate_tcs(encl, data); if (ret) @@ -635,6 +633,9 @@ static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg) sizeof(secinfo))) return -EFAULT; + if (sgx_validate_secinfo(&secinfo)) + return -EINVAL; + data_page = alloc_page(GFP_HIGHUSER); if (!data_page) return -ENOMEM; From patchwork Tue Aug 13 01:12:48 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11091045 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 56276184E for ; Tue, 13 Aug 2019 01:12:56 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 48EAA285E0 for ; Tue, 13 Aug 2019 01:12:56 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 31C35285FB; Tue, 13 Aug 2019 01:12:56 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9FA97285EA for ; Tue, 13 Aug 2019 01:12:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726579AbfHMBMz (ORCPT ); Mon, 12 Aug 2019 21:12:55 -0400 Received: from mga03.intel.com ([134.134.136.65]:29948 "EHLO mga03.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726479AbfHMBMz (ORCPT ); Mon, 12 Aug 2019 21:12:55 -0400 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga103.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 12 Aug 2019 18:12:54 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,379,1559545200"; d="scan'208";a="176062481" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by fmsmga008.fm.intel.com with ESMTP; 12 Aug 2019 18:12:53 -0700 From: Sean Christopherson To: Jarkko Sakkinen Cc: linux-sgx@vger.kernel.org, Andy Lutomirski Subject: [PATCH for_v22 v2 4/8] x86/sgx: Set SGX_ENCL_PAGE_TCS when allocating encl_page Date: Mon, 12 Aug 2019 18:12:48 -0700 Message-Id: <20190813011252.4121-5-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.22.0 In-Reply-To: <20190813011252.4121-1-sean.j.christopherson@intel.com> References: <20190813011252.4121-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-sgx-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-sgx@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Set SGX_ENCL_PAGE_TCS when encl_page->desc is initialized in sgx_encl_page_alloc() to improve readability, and so that the code isn't affected when the 
bulk of __sgx_encl_add_page() is rewritten to remove the EADD worker in a future patch. Signed-off-by: Sean Christopherson --- arch/x86/kernel/cpu/sgx/driver/ioctl.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c index 5831f51d64cd..2b3b86412131 100644 --- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c +++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c @@ -247,7 +247,8 @@ static int sgx_validate_secs(const struct sgx_secs *secs, static struct sgx_encl_page *sgx_encl_page_alloc(struct sgx_encl *encl, unsigned long addr, - unsigned long prot) + unsigned long prot, + u64 page_type) { struct sgx_encl_page *encl_page; int ret; @@ -258,6 +259,8 @@ static struct sgx_encl_page *sgx_encl_page_alloc(struct sgx_encl *encl, if (!encl_page) return ERR_PTR(-ENOMEM); encl_page->desc = addr; + if (page_type == SGX_SECINFO_TCS) + encl_page->desc |= SGX_ENCL_PAGE_TCS; encl_page->encl = encl; encl_page->vm_prot_bits = calc_vm_prot_bits(prot, 0); ret = radix_tree_insert(&encl->page_tree, PFN_DOWN(encl_page->desc), @@ -476,7 +479,6 @@ static int __sgx_encl_add_page(struct sgx_encl *encl, unsigned int mrmask) { unsigned long page_index = sgx_encl_get_index(encl_page); - u64 page_type = secinfo->flags & SGX_SECINFO_PAGE_TYPE_MASK; struct sgx_add_page_req *req = NULL; struct page *backing; void *backing_ptr; @@ -495,8 +497,7 @@ static int __sgx_encl_add_page(struct sgx_encl *encl, backing_ptr = kmap(backing); memcpy(backing_ptr, data, PAGE_SIZE); kunmap(backing); - if (page_type == SGX_SECINFO_TCS) - encl_page->desc |= SGX_ENCL_PAGE_TCS; + memcpy(&req->secinfo, secinfo, sizeof(*secinfo)); req->encl = encl; req->encl_page = encl_page; @@ -533,7 +534,7 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr, goto err_out_unlock; } - encl_page = sgx_encl_page_alloc(encl, addr, prot); + encl_page = sgx_encl_page_alloc(encl, addr, prot, page_type); if (IS_ERR(encl_page)) { ret = PTR_ERR(encl_page); goto err_out_shrink; From patchwork Tue Aug 13 01:12:49 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11091051 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id F2C7318B7 for ; Tue, 13 Aug 2019 01:12:56 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E582D285E2 for ; Tue, 13 Aug 2019 01:12:56 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id D60D2285E0; Tue, 13 Aug 2019 01:12:56 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 04016285E2 for ; Tue, 13 Aug 2019 01:12:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726505AbfHMBMy (ORCPT ); Mon, 12 Aug 2019 21:12:54 -0400 Received: from mga12.intel.com ([192.55.52.136]:4461 "EHLO mga12.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726296AbfHMBMy (ORCPT ); Mon, 12 Aug 2019 21:12:54 -0400 X-Amp-Result: SKIPPED(no attachment in message) 
From: Sean Christopherson
To: Jarkko Sakkinen
Cc: linux-sgx@vger.kernel.org, Andy Lutomirski
Subject: [PATCH for_v22 v2 5/8] x86/sgx: Move encl_page insertion into tree out of alloc flow
Date: Mon, 12 Aug 2019 18:12:49 -0700
Message-Id: <20190813011252.4121-6-sean.j.christopherson@intel.com>

Move insertion into the page tree out of sgx_encl_page_alloc() so that
the function can be moved out from under encl->lock.  This is a
preparatory step for removing the add page worker, as the encl_page is
needed to allocate its EPC page, and EPC page allocation must be done
without holding encl->lock so that it can trigger reclaim if necessary.

Note, radix_tree_insert() returns -EEXIST if the slot is already in
use, i.e. there's no need to pre-check via radix_tree_lookup().

Signed-off-by: Sean Christopherson
---
 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
index 2b3b86412131..55e0fe261b8c 100644
--- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
@@ -251,10 +251,7 @@ static struct sgx_encl_page *sgx_encl_page_alloc(struct sgx_encl *encl,
 					     u64 page_type)
 {
 	struct sgx_encl_page *encl_page;
-	int ret;

-	if (radix_tree_lookup(&encl->page_tree, PFN_DOWN(addr)))
-		return ERR_PTR(-EEXIST);
 	encl_page = kzalloc(sizeof(*encl_page), GFP_KERNEL);
 	if (!encl_page)
 		return ERR_PTR(-ENOMEM);
@@ -263,12 +260,7 @@ static struct sgx_encl_page *sgx_encl_page_alloc(struct sgx_encl *encl,
 		encl_page->desc |= SGX_ENCL_PAGE_TCS;
 	encl_page->encl = encl;
 	encl_page->vm_prot_bits = calc_vm_prot_bits(prot, 0);
-	ret = radix_tree_insert(&encl->page_tree, PFN_DOWN(encl_page->desc),
-				encl_page);
-	if (ret) {
-		kfree(encl_page);
-		return ERR_PTR(ret);
-	}
+
 	return encl_page;
 }
@@ -540,6 +532,11 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 		goto err_out_shrink;
 	}

+	ret = radix_tree_insert(&encl->page_tree, PFN_DOWN(encl_page->desc),
+				encl_page);
+	if (ret)
+		goto err_out_free;
+
 	ret = __sgx_encl_add_page(encl, encl_page, data, secinfo, mrmask);
 	if (ret)
 		goto err_out;
@@ -550,6 +547,7 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 err_out:
 	radix_tree_delete(&encl_page->encl->page_tree,
 			  PFN_DOWN(encl_page->desc));
+err_out_free:
 	kfree(encl_page);

 err_out_shrink:

From patchwork Tue Aug 13 01:12:50 2019
From: Sean Christopherson
To: Jarkko Sakkinen
Cc: linux-sgx@vger.kernel.org, Andy Lutomirski
Subject: [PATCH for_v22 v2 6/8] x86/sgx: Allocate encl_page prior to taking encl->lock
Date: Mon, 12 Aug 2019 18:12:50 -0700
Message-Id: <20190813011252.4121-7-sean.j.christopherson@intel.com>

Refactor sgx_encl_add_page() to allocate the encl_page prior to taking
encl->lock so that the encl_page can be used to allocate its associated
EPC page without having to drop and retake encl->lock.  Removal of the
add page worker will move EPC page allocation to sgx_encl_add_page().
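The ordering matters for the reason spelled out in patch 5/8: EPC
allocation may need to trigger reclaim, which cannot run with
encl->lock held.  The general shape of the resulting "allocate outside
the lock, publish under it, unwind on failure" pattern looks roughly
like this (names are illustrative, not the driver's):

/* Illustrative only: allocate before locking, publish under the lock. */
struct entry *add_entry(struct table *tbl, unsigned long index)
{
	struct entry *e;
	int ret;

	e = kzalloc(sizeof(*e), GFP_KERNEL);	/* may sleep, may reclaim */
	if (!e)
		return ERR_PTR(-ENOMEM);
	e->index = index;

	mutex_lock(&tbl->lock);
	ret = radix_tree_insert(&tbl->tree, index, e);	/* -EEXIST if taken */
	mutex_unlock(&tbl->lock);

	if (ret) {
		kfree(e);
		return ERR_PTR(ret);
	}
	return e;
}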
Signed-off-by: Sean Christopherson --- arch/x86/kernel/cpu/sgx/driver/ioctl.c | 29 ++++++++++++-------------- 1 file changed, 13 insertions(+), 16 deletions(-) diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c index 55e0fe261b8c..49407ccb26c8 100644 --- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c +++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c @@ -518,24 +518,22 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr, return ret; } - mutex_lock(&encl->lock); - - va_page = sgx_encl_grow(encl, SGX_ENCL_INITIALIZED | SGX_ENCL_DEAD); - if (IS_ERR(va_page)) { - ret = PTR_ERR(va_page); - goto err_out_unlock; - } - encl_page = sgx_encl_page_alloc(encl, addr, prot, page_type); - if (IS_ERR(encl_page)) { - ret = PTR_ERR(encl_page); - goto err_out_shrink; + if (IS_ERR(encl_page)) + return PTR_ERR(encl_page); + + mutex_lock(&encl->lock); + + va_page = sgx_encl_grow(encl, SGX_ENCL_INITIALIZED | SGX_ENCL_DEAD); + if (IS_ERR(va_page)) { + ret = PTR_ERR(va_page); + goto err_out_free; } ret = radix_tree_insert(&encl->page_tree, PFN_DOWN(encl_page->desc), encl_page); if (ret) - goto err_out_free; + goto err_out_shrink; ret = __sgx_encl_add_page(encl, encl_page, data, secinfo, mrmask); if (ret) @@ -547,13 +545,12 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr, err_out: radix_tree_delete(&encl_page->encl->page_tree, PFN_DOWN(encl_page->desc)); +err_out_shrink: + sgx_encl_shrink(encl, va_page); + err_out_free: kfree(encl_page); -err_out_shrink: - sgx_encl_shrink(encl, va_page); - -err_out_unlock: mutex_unlock(&encl->lock); return ret; } From patchwork Tue Aug 13 01:12:51 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11091049 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E06731864 for ; Tue, 13 Aug 2019 01:12:56 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id D0A89285EA for ; Tue, 13 Aug 2019 01:12:56 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id C4B71285FB; Tue, 13 Aug 2019 01:12:56 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E79F3285EE for ; Tue, 13 Aug 2019 01:12:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726592AbfHMBMz (ORCPT ); Mon, 12 Aug 2019 21:12:55 -0400 Received: from mga03.intel.com ([134.134.136.65]:29948 "EHLO mga03.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726543AbfHMBMz (ORCPT ); Mon, 12 Aug 2019 21:12:55 -0400 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga103.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 12 Aug 2019 18:12:54 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,379,1559545200"; d="scan'208";a="176062492" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by fmsmga008.fm.intel.com with ESMTP; 12 Aug 2019 18:12:54 -0700 From: Sean Christopherson To: 
Jarkko Sakkinen Cc: linux-sgx@vger.kernel.org, Andy Lutomirski Subject: [PATCH for_v22 v2 7/8] x86/sgx: Remove the EADD page worker Date: Mon, 12 Aug 2019 18:12:51 -0700 Message-Id: <20190813011252.4121-8-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.22.0 In-Reply-To: <20190813011252.4121-1-sean.j.christopherson@intel.com> References: <20190813011252.4121-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-sgx-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-sgx@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Remove the work queue approach to adding pages to an enclave. There are several benefits to fully adding pages within the context of the ioctl() call: - Simplifies the code base - Provides userspace with accurate error information, e.g. the ioctl() now fails if EPC allocation fails - Paves the way for passing the user's source page directly to EADD to eliminate the overhead of allocating a kernel page and copying the user data into said kernel page. The downside to removing the worker is that applications with their own scheduler, e.g. Go's M:N schedule, can see a significant reduction in throughput (10x or more) when building multiple enclaves in parallel, e.g. in the Go case, spinning up several goroutines with each goroutine building a different enclave. Suggested-by: Andy Lutomirski Signed-off-by: Sean Christopherson --- arch/x86/kernel/cpu/sgx/driver/ioctl.c | 191 ++++++------------------- arch/x86/kernel/cpu/sgx/driver/main.c | 4 - arch/x86/kernel/cpu/sgx/encl.h | 2 - 3 files changed, 40 insertions(+), 157 deletions(-) diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c index 49407ccb26c8..840376cf352f 100644 --- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c +++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c @@ -14,14 +14,6 @@ #include #include "driver.h" -struct sgx_add_page_req { - struct sgx_encl *encl; - struct sgx_encl_page *encl_page; - struct sgx_secinfo secinfo; - unsigned long mrmask; - struct list_head list; -}; - static struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl, unsigned int disallowed_flags) { @@ -77,115 +69,6 @@ static void sgx_encl_shrink(struct sgx_encl *encl, struct sgx_va_page *va_page) } } -static bool sgx_process_add_page_req(struct sgx_add_page_req *req, - struct sgx_epc_page *epc_page) -{ - struct sgx_encl_page *encl_page = req->encl_page; - struct sgx_encl *encl = req->encl; - unsigned long page_index = sgx_encl_get_index(encl_page); - struct sgx_secinfo secinfo; - struct sgx_pageinfo pginfo; - struct page *backing; - unsigned long addr; - int ret; - int i; - - if (encl->flags & SGX_ENCL_DEAD) - return false; - - addr = SGX_ENCL_PAGE_ADDR(encl_page); - - backing = sgx_encl_get_backing_page(encl, page_index); - if (IS_ERR(backing)) - return false; - - /* - * The SECINFO field must be 64-byte aligned, copy it to a local - * variable that is guaranteed to be aligned as req->secinfo may - * or may not be 64-byte aligned, e.g. req may have been allocated - * via kzalloc which is not aware of __aligned attributes. 
- */ - memcpy(&secinfo, &req->secinfo, sizeof(secinfo)); - - pginfo.secs = (unsigned long)sgx_epc_addr(encl->secs.epc_page); - pginfo.addr = addr; - pginfo.metadata = (unsigned long)&secinfo; - pginfo.contents = (unsigned long)kmap_atomic(backing); - ret = __eadd(&pginfo, sgx_epc_addr(epc_page)); - kunmap_atomic((void *)(unsigned long)pginfo.contents); - - put_page(backing); - - if (ret) { - if (encls_failed(ret)) - ENCLS_WARN(ret, "EADD"); - return false; - } - - for_each_set_bit(i, &req->mrmask, 16) { - ret = __eextend(sgx_epc_addr(encl->secs.epc_page), - sgx_epc_addr(epc_page) + (i * 0x100)); - if (ret) { - if (encls_failed(ret)) - ENCLS_WARN(ret, "EEXTEND"); - return false; - } - } - - encl_page->encl = encl; - encl_page->epc_page = epc_page; - encl->secs_child_cnt++; - sgx_mark_page_reclaimable(encl_page->epc_page); - - return true; -} - -static void sgx_add_page_worker(struct work_struct *work) -{ - struct sgx_add_page_req *req; - bool skip_rest = false; - bool is_empty = false; - struct sgx_encl *encl; - struct sgx_epc_page *epc_page; - - encl = container_of(work, struct sgx_encl, work); - - do { - schedule(); - - mutex_lock(&encl->lock); - if (encl->flags & SGX_ENCL_DEAD) - skip_rest = true; - - req = list_first_entry(&encl->add_page_reqs, - struct sgx_add_page_req, list); - list_del(&req->list); - is_empty = list_empty(&encl->add_page_reqs); - mutex_unlock(&encl->lock); - - if (skip_rest) - goto next; - - epc_page = sgx_alloc_page(req->encl_page, true); - - mutex_lock(&encl->lock); - - if (IS_ERR(epc_page)) { - sgx_encl_destroy(encl); - skip_rest = true; - } else if (!sgx_process_add_page_req(req, epc_page)) { - sgx_free_page(epc_page); - sgx_encl_destroy(encl); - skip_rest = true; - } - - mutex_unlock(&encl->lock); - -next: - kfree(req); - } while (!is_empty); -} - static u32 sgx_calc_ssaframesize(u32 miscselect, u64 xfrm) { u32 size_max = PAGE_SIZE; @@ -299,8 +182,6 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs) encl->backing = backing; - INIT_WORK(&encl->work, sgx_add_page_worker); - secs_epc = sgx_alloc_page(&encl->secs, true); if (IS_ERR(secs_epc)) { ret = PTR_ERR(secs_epc); @@ -466,40 +347,42 @@ static int sgx_validate_tcs(struct sgx_encl *encl, struct sgx_tcs *tcs) static int __sgx_encl_add_page(struct sgx_encl *encl, struct sgx_encl_page *encl_page, + struct sgx_epc_page *epc_page, void *data, struct sgx_secinfo *secinfo, - unsigned int mrmask) + unsigned long mrmask) { - unsigned long page_index = sgx_encl_get_index(encl_page); - struct sgx_add_page_req *req = NULL; - struct page *backing; - void *backing_ptr; - int empty; + struct sgx_pageinfo pginfo; + int ret; + int i; - req = kzalloc(sizeof(*req), GFP_KERNEL); - if (!req) - return -ENOMEM; + pginfo.secs = (unsigned long)sgx_epc_addr(encl->secs.epc_page); + pginfo.addr = SGX_ENCL_PAGE_ADDR(encl_page); + pginfo.metadata = (unsigned long)secinfo; + pginfo.contents = (unsigned long)data; - backing = sgx_encl_get_backing_page(encl, page_index); - if (IS_ERR(backing)) { - kfree(req); - return PTR_ERR(backing); + ret = __eadd(&pginfo, sgx_epc_addr(epc_page)); + if (ret) { + if (encls_failed(ret)) + ENCLS_WARN(ret, "EADD"); + return -EFAULT; } - backing_ptr = kmap(backing); - memcpy(backing_ptr, data, PAGE_SIZE); - kunmap(backing); + for_each_set_bit(i, &mrmask, 16) { + ret = __eextend(sgx_epc_addr(encl->secs.epc_page), + sgx_epc_addr(epc_page) + (i * 0x100)); + if (ret) { + if (encls_failed(ret)) + ENCLS_WARN(ret, "EEXTEND"); + return -EFAULT; + } + } + + encl_page->encl = encl; + 
encl_page->epc_page = epc_page; + encl->secs_child_cnt++; + sgx_mark_page_reclaimable(encl_page->epc_page); - memcpy(&req->secinfo, secinfo, sizeof(*secinfo)); - req->encl = encl; - req->encl_page = encl_page; - req->mrmask = mrmask; - empty = list_empty(&encl->add_page_reqs); - list_add_tail(&req->list, &encl->add_page_reqs); - if (empty) - queue_work(sgx_encl_wq, &encl->work); - set_page_dirty(backing); - put_page(backing); return 0; } @@ -509,6 +392,7 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr, { u64 page_type = secinfo->flags & SGX_SECINFO_PAGE_TYPE_MASK; struct sgx_encl_page *encl_page; + struct sgx_epc_page *epc_page; struct sgx_va_page *va_page; int ret; @@ -522,6 +406,12 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr, if (IS_ERR(encl_page)) return PTR_ERR(encl_page); + epc_page = sgx_alloc_page(encl_page, true); + if (IS_ERR(epc_page)) { + kfree(encl_page); + return PTR_ERR(epc_page); + } + mutex_lock(&encl->lock); va_page = sgx_encl_grow(encl, SGX_ENCL_INITIALIZED | SGX_ENCL_DEAD); @@ -535,7 +425,8 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr, if (ret) goto err_out_shrink; - ret = __sgx_encl_add_page(encl, encl_page, data, secinfo, mrmask); + ret = __sgx_encl_add_page(encl, encl_page, epc_page, data, secinfo, + mrmask); if (ret) goto err_out; @@ -549,6 +440,7 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr, sgx_encl_shrink(encl, va_page); err_out_free: + sgx_free_page(epc_page); kfree(encl_page); mutex_unlock(&encl->lock); @@ -592,14 +484,13 @@ static int sgx_encl_page_import_user(void *dst, unsigned long src, * @arg: a user pointer to a struct sgx_enclave_add_page instance * * Add a page to an uninitialized enclave (EADD), and optionally extend the - * enclave's measurement with the contents of the page (EEXTEND). Adding is done - * asynchronously. A success only indicates that the page has been added to a - * work queue. + * enclave's measurement with the contents of the page (EEXTEND). 
* * Return: * 0 on success, * -EINVAL if other than RWX protection bits have been set * -EACCES if the source page is located in a noexec partition + * -ENOMEM if any memory allocation, including EPC, fails * -errno otherwise */ static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg) @@ -702,8 +593,6 @@ static int sgx_encl_init(struct sgx_encl *encl, struct sgx_sigstruct *sigstruct, if (ret) return ret; - flush_work(&encl->work); - mutex_lock(&encl->lock); if (encl->flags & (SGX_ENCL_INITIALIZED | SGX_ENCL_DEAD)) { diff --git a/arch/x86/kernel/cpu/sgx/driver/main.c b/arch/x86/kernel/cpu/sgx/driver/main.c index dfa107247f2d..e740d71e2311 100644 --- a/arch/x86/kernel/cpu/sgx/driver/main.c +++ b/arch/x86/kernel/cpu/sgx/driver/main.c @@ -32,7 +32,6 @@ static int sgx_open(struct inode *inode, struct file *file) return -ENOMEM; kref_init(&encl->refcount); - INIT_LIST_HEAD(&encl->add_page_reqs); INIT_LIST_HEAD(&encl->va_pages); INIT_RADIX_TREE(&encl->page_tree, GFP_KERNEL); mutex_init(&encl->lock); @@ -81,9 +80,6 @@ static int sgx_release(struct inode *inode, struct file *file) encl->flags |= SGX_ENCL_DEAD; mutex_unlock(&encl->lock); - if (encl->work.func) - flush_work(&encl->work); - kref_put(&encl->refcount, sgx_encl_release); return 0; } diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h index d3a1687ed84c..b1f4e4f0fa65 100644 --- a/arch/x86/kernel/cpu/sgx/encl.h +++ b/arch/x86/kernel/cpu/sgx/encl.h @@ -82,8 +82,6 @@ struct sgx_encl { unsigned long ssaframesize; struct list_head va_pages; struct radix_tree_root page_tree; - struct list_head add_page_reqs; - struct work_struct work; struct sgx_encl_page secs; cpumask_t cpumask; }; From patchwork Tue Aug 13 01:12:52 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11091047 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D0F8B112C for ; Tue, 13 Aug 2019 01:12:56 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id BB35F285EF for ; Tue, 13 Aug 2019 01:12:56 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id AFE13285F1; Tue, 13 Aug 2019 01:12:56 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 29CC6285F0 for ; Tue, 13 Aug 2019 01:12:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726543AbfHMBMz (ORCPT ); Mon, 12 Aug 2019 21:12:55 -0400 Received: from mga03.intel.com ([134.134.136.65]:29948 "EHLO mga03.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726479AbfHMBMz (ORCPT ); Mon, 12 Aug 2019 21:12:55 -0400 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga103.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 12 Aug 2019 18:12:55 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,379,1559545200"; d="scan'208";a="176062495" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by 
fmsmga008.fm.intel.com with ESMTP; 12 Aug 2019 18:12:54 -0700 From: Sean Christopherson To: Jarkko Sakkinen Cc: linux-sgx@vger.kernel.org, Andy Lutomirski Subject: [PATCH for_v22 v2 8/8] x86/sgx: Pass userspace source address directly to EADD Date: Mon, 12 Aug 2019 18:12:52 -0700 Message-Id: <20190813011252.4121-9-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.22.0 In-Reply-To: <20190813011252.4121-1-sean.j.christopherson@intel.com> References: <20190813011252.4121-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-sgx-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-sgx@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Invoke EADD with the userspace source address instead of first copying the data to a kernel page to avoid the overhead of alloc_page() and copy_from_user(). Suggested-by: Andy Lutomirski Signed-off-by: Sean Christopherson --- arch/x86/kernel/cpu/sgx/driver/ioctl.c | 148 ++++++------------------- 1 file changed, 33 insertions(+), 115 deletions(-) diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c index 840376cf352f..a55a138826d5 100644 --- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c +++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c @@ -302,71 +302,46 @@ static int sgx_validate_secinfo(struct sgx_secinfo *secinfo) return 0; } -static bool sgx_validate_offset(struct sgx_encl *encl, unsigned long offset) -{ - if (offset & (PAGE_SIZE - 1)) - return false; - - if (offset >= encl->size) - return false; - - return true; -} - -static int sgx_validate_tcs(struct sgx_encl *encl, struct sgx_tcs *tcs) -{ - int i; - - if (tcs->flags & SGX_TCS_RESERVED_MASK) - return -EINVAL; - - if (tcs->flags & SGX_TCS_DBGOPTIN) - return -EINVAL; - - if (!sgx_validate_offset(encl, tcs->ssa_offset)) - return -EINVAL; - - if (!sgx_validate_offset(encl, tcs->fs_offset)) - return -EINVAL; - - if (!sgx_validate_offset(encl, tcs->gs_offset)) - return -EINVAL; - - if ((tcs->fs_limit & 0xFFF) != 0xFFF) - return -EINVAL; - - if ((tcs->gs_limit & 0xFFF) != 0xFFF) - return -EINVAL; - - for (i = 0; i < SGX_TCS_RESERVED_SIZE; i++) - if (tcs->reserved[i]) - return -EINVAL; - - return 0; -} - static int __sgx_encl_add_page(struct sgx_encl *encl, struct sgx_encl_page *encl_page, struct sgx_epc_page *epc_page, - void *data, - struct sgx_secinfo *secinfo, - unsigned long mrmask) + struct sgx_secinfo *secinfo, unsigned long src, + unsigned long prot, unsigned long mrmask) { struct sgx_pageinfo pginfo; + struct vm_area_struct *vma; int ret; int i; pginfo.secs = (unsigned long)sgx_epc_addr(encl->secs.epc_page); pginfo.addr = SGX_ENCL_PAGE_ADDR(encl_page); pginfo.metadata = (unsigned long)secinfo; - pginfo.contents = (unsigned long)data; + pginfo.contents = src; + down_read(¤t->mm->mmap_sem); + + /* Query vma's VM_MAYEXEC as an indirect path_noexec() check. 
*/ + if (encl_page->vm_prot_bits & VM_EXEC) { + vma = find_vma(current->mm, src); + if (!vma) { + up_read(¤t->mm->mmap_sem); + return -EFAULT; + } + + if (!(vma->vm_flags & VM_MAYEXEC)) { + up_read(¤t->mm->mmap_sem); + return -EACCES; + } + } + + __uaccess_begin(); ret = __eadd(&pginfo, sgx_epc_addr(epc_page)); - if (ret) { - if (encls_failed(ret)) - ENCLS_WARN(ret, "EADD"); + __uaccess_end(); + + up_read(¤t->mm->mmap_sem); + + if (ret) return -EFAULT; - } for_each_set_bit(i, &mrmask, 16) { ret = __eextend(sgx_epc_addr(encl->secs.epc_page), @@ -386,9 +361,9 @@ static int __sgx_encl_add_page(struct sgx_encl *encl, return 0; } -static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr, - void *data, struct sgx_secinfo *secinfo, - unsigned int mrmask, unsigned long prot) +static int sgx_encl_add_page(struct sgx_encl *encl, + struct sgx_enclave_add_page *addp, + struct sgx_secinfo *secinfo, unsigned long prot) { u64 page_type = secinfo->flags & SGX_SECINFO_PAGE_TYPE_MASK; struct sgx_encl_page *encl_page; @@ -396,13 +371,7 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr, struct sgx_va_page *va_page; int ret; - if (page_type == SGX_SECINFO_TCS) { - ret = sgx_validate_tcs(encl, data); - if (ret) - return ret; - } - - encl_page = sgx_encl_page_alloc(encl, addr, prot, page_type); + encl_page = sgx_encl_page_alloc(encl, addp->addr, prot, page_type); if (IS_ERR(encl_page)) return PTR_ERR(encl_page); @@ -425,8 +394,8 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr, if (ret) goto err_out_shrink; - ret = __sgx_encl_add_page(encl, encl_page, epc_page, data, secinfo, - mrmask); + ret = __sgx_encl_add_page(encl, encl_page, epc_page, secinfo, + addp->src, prot, addp->mrmask); if (ret) goto err_out; @@ -447,36 +416,6 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr, return ret; } -static int sgx_encl_page_import_user(void *dst, unsigned long src, - unsigned long prot) -{ - struct vm_area_struct *vma; - int ret = 0; - - down_read(¤t->mm->mmap_sem); - - /* Query vma's VM_MAYEXEC as an indirect path_noexec() check. 
*/ - if (prot & PROT_EXEC) { - vma = find_vma(current->mm, src); - if (!vma) { - ret = -EFAULT; - goto out; - } - - if (!(vma->vm_flags & VM_MAYEXEC)) { - ret = -EACCES; - goto out; - } - } - - if (copy_from_user(dst, (void __user *)src, PAGE_SIZE)) - ret = -EFAULT; - -out: - up_read(¤t->mm->mmap_sem); - return ret; -} - /** * sgx_ioc_enclave_add_page - handler for %SGX_IOC_ENCLAVE_ADD_PAGE * @@ -498,10 +437,7 @@ static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg) struct sgx_encl *encl = filep->private_data; struct sgx_enclave_add_page addp; struct sgx_secinfo secinfo; - struct page *data_page; unsigned long prot; - void *data; - int ret; if (!(encl->flags & SGX_ENCL_CREATED)) return -EINVAL; @@ -523,12 +459,6 @@ static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg) if (sgx_validate_secinfo(&secinfo)) return -EINVAL; - data_page = alloc_page(GFP_HIGHUSER); - if (!data_page) - return -ENOMEM; - - data = kmap(data_page); - prot = _calc_vm_trans(secinfo.flags, SGX_SECINFO_R, PROT_READ) | _calc_vm_trans(secinfo.flags, SGX_SECINFO_W, PROT_WRITE) | _calc_vm_trans(secinfo.flags, SGX_SECINFO_X, PROT_EXEC); @@ -537,19 +467,7 @@ static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg) if ((secinfo.flags & SGX_SECINFO_PAGE_TYPE_MASK) == SGX_SECINFO_TCS) prot |= PROT_READ | PROT_WRITE; - ret = sgx_encl_page_import_user(data, addp.src, prot); - if (ret) - goto out; - - ret = sgx_encl_add_page(encl, addp.addr, data, &secinfo, addp.mrmask, - prot); - if (ret) - goto out; - -out: - kunmap(data_page); - __free_page(data_page); - return ret; + return sgx_encl_add_page(encl, &addp, &secinfo, prot); } static int __sgx_get_key_hash(struct crypto_shash *tfm, const void *modulus,
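With the worker gone and the kernel-side copy removed, the end state of
the series is that userspace hands a page-aligned source buffer
straight to the ioctl().  A sketch of the caller side for reference
(the field layout of struct sgx_enclave_add_page -- addr, src, secinfo,
mrmask -- and the UAPI header name are assumptions based on this
version of the out-of-tree driver, not guaranteed by this excerpt):

#include <err.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <asm/sgx.h>		/* UAPI header name is an assumption */

static void add_one_page(int enclave_fd, uint64_t enclave_addr,
			 const void *aligned_src, struct sgx_secinfo *secinfo)
{
	struct sgx_enclave_add_page addp = {
		.addr	 = enclave_addr,			/* page inside the enclave range */
		.src	 = (uint64_t)(uintptr_t)aligned_src,	/* must be 4k aligned after 8/8 */
		.secinfo = (uint64_t)(uintptr_t)secinfo,	/* permissions + page type */
		.mrmask	 = 0xffff,				/* EEXTEND every 256-byte chunk */
	};

	if (ioctl(enclave_fd, SGX_IOC_ENCLAVE_ADD_PAGE, &addp))
		err(1, "SGX_IOC_ENCLAVE_ADD_PAGE");
}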