From patchwork Thu Aug 8 22:13:35 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11084887 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 04A261398 for ; Thu, 8 Aug 2019 22:13:44 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E735C28BC3 for ; Thu, 8 Aug 2019 22:13:43 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id CECC228BD7; Thu, 8 Aug 2019 22:13:43 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 6ED4028BC9 for ; Thu, 8 Aug 2019 22:13:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2390494AbfHHWNn (ORCPT ); Thu, 8 Aug 2019 18:13:43 -0400 Received: from mga03.intel.com ([134.134.136.65]:19058 "EHLO mga03.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2390492AbfHHWNn (ORCPT ); Thu, 8 Aug 2019 18:13:43 -0400 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga103.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 08 Aug 2019 15:13:42 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,363,1559545200"; d="scan'208";a="193247257" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by fmsmga001.fm.intel.com with ESMTP; 08 Aug 2019 15:13:42 -0700 From: Sean Christopherson To: Jarkko Sakkinen Cc: linux-sgx@vger.kernel.org, Andy Lutomirski Subject: [PATCH 
for_v22 1/6] x86/sgx: Validate generic SECINFO immediately after copying from user
Date: Thu, 8 Aug 2019 15:13:35 -0700
Message-Id: <20190808221340.29460-2-sean.j.christopherson@intel.com>
X-Mailer: git-send-email 2.22.0
In-Reply-To: <20190808221340.29460-1-sean.j.christopherson@intel.com>
References: <20190808221340.29460-1-sean.j.christopherson@intel.com>
MIME-Version: 1.0
Sender: linux-sgx-owner@vger.kernel.org
Precedence: bulk
List-ID: X-Mailing-List: linux-sgx@vger.kernel.org
X-Virus-Scanned: ClamAV using ClamSMTP

When adding pages to the enclave, verify that the SECINFO flags provided
by userspace are valid before the protection bits are consumed, and
before a page is allocated, so that an invalid SECINFO is rejected up
front.

Signed-off-by: Sean Christopherson
---
 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
index f2b9f0ece9ca..f14d9abd9b0d 100644
--- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
@@ -518,8 +518,6 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 	struct sgx_va_page *va_page;
 	int ret;
 
-	if (sgx_validate_secinfo(secinfo))
-		return -EINVAL;
 	if (page_type == SGX_SECINFO_TCS) {
 		ret = sgx_validate_tcs(encl, data);
 		if (ret)
@@ -634,6 +632,9 @@ static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg)
 			 sizeof(secinfo)))
 		return -EFAULT;
 
+	if (sgx_validate_secinfo(&secinfo))
+		return -EINVAL;
+
 	data_page = alloc_page(GFP_HIGHUSER);
 	if (!data_page)
 		return -ENOMEM;

From patchwork Thu Aug 8 22:13:36 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11084889
Return-Path:
Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6C8301850 for ; Thu, 8 Aug 2019
22:13:44 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 5B59128BC3 for ; Thu, 8 Aug 2019 22:13:44 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 4DC1528BD0; Thu, 8 Aug 2019 22:13:44 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id ED67C28BCF for ; Thu, 8 Aug 2019 22:13:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2404414AbfHHWNn (ORCPT ); Thu, 8 Aug 2019 18:13:43 -0400 Received: from mga03.intel.com ([134.134.136.65]:19058 "EHLO mga03.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2390491AbfHHWNn (ORCPT ); Thu, 8 Aug 2019 18:13:43 -0400 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga103.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 08 Aug 2019 15:13:42 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,363,1559545200"; d="scan'208";a="193247261" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by fmsmga001.fm.intel.com with ESMTP; 08 Aug 2019 15:13:42 -0700 From: Sean Christopherson To: Jarkko Sakkinen Cc: linux-sgx@vger.kernel.org, Andy Lutomirski Subject: [PATCH for_v22 2/6] x86/sgx: Set SGX_ENCL_PAGE_TCS when allocating encl_page Date: Thu, 8 Aug 2019 15:13:36 -0700 Message-Id: <20190808221340.29460-3-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.22.0 In-Reply-To: <20190808221340.29460-1-sean.j.christopherson@intel.com> References: <20190808221340.29460-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-sgx-owner@vger.kernel.org 
Precedence: bulk List-ID: X-Mailing-List: linux-sgx@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Set SGX_ENCL_PAGE_TCS when encl_page->desc is initialized in sgx_encl_page_alloc() to improve readability, and so that the code isn't affected when the bulk of __sgx_encl_add_page() is rewritten to remove the EADD worker in a future patch. Signed-off-by: Sean Christopherson --- arch/x86/kernel/cpu/sgx/driver/ioctl.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c index f14d9abd9b0d..64deab36ed05 100644 --- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c +++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c @@ -247,7 +247,8 @@ static int sgx_validate_secs(const struct sgx_secs *secs, static struct sgx_encl_page *sgx_encl_page_alloc(struct sgx_encl *encl, unsigned long addr, - unsigned long prot) + unsigned long prot, + u64 page_type) { struct sgx_encl_page *encl_page; int ret; @@ -258,6 +259,8 @@ static struct sgx_encl_page *sgx_encl_page_alloc(struct sgx_encl *encl, if (!encl_page) return ERR_PTR(-ENOMEM); encl_page->desc = addr; + if (page_type == SGX_SECINFO_TCS) + encl_page->desc |= SGX_ENCL_PAGE_TCS; encl_page->encl = encl; encl_page->vm_prot_bits = calc_vm_prot_bits(prot, 0); ret = radix_tree_insert(&encl->page_tree, PFN_DOWN(encl_page->desc), @@ -475,7 +478,6 @@ static int __sgx_encl_add_page(struct sgx_encl *encl, unsigned int mrmask) { unsigned long page_index = sgx_encl_get_index(encl, encl_page); - u64 page_type = secinfo->flags & SGX_SECINFO_PAGE_TYPE_MASK; struct sgx_add_page_req *req = NULL; struct page *backing; void *backing_ptr; @@ -494,8 +496,7 @@ static int __sgx_encl_add_page(struct sgx_encl *encl, backing_ptr = kmap(backing); memcpy(backing_ptr, data, PAGE_SIZE); kunmap(backing); - if (page_type == SGX_SECINFO_TCS) - encl_page->desc |= SGX_ENCL_PAGE_TCS; + memcpy(&req->secinfo, secinfo, sizeof(*secinfo)); req->encl = encl; req->encl_page = 
encl_page; @@ -532,7 +533,7 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr, goto err_out_unlock; } - encl_page = sgx_encl_page_alloc(encl, addr, prot); + encl_page = sgx_encl_page_alloc(encl, addr, prot, page_type); if (IS_ERR(encl_page)) { ret = PTR_ERR(encl_page); goto err_out_shrink; From patchwork Thu Aug 8 22:13:37 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11084891 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BFEEC1398 for ; Thu, 8 Aug 2019 22:13:44 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id AE8E728BC3 for ; Thu, 8 Aug 2019 22:13:44 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id A2DAA28BD0; Thu, 8 Aug 2019 22:13:44 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 4204E28BC9 for ; Thu, 8 Aug 2019 22:13:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2404534AbfHHWNo (ORCPT ); Thu, 8 Aug 2019 18:13:44 -0400 Received: from mga03.intel.com ([134.134.136.65]:19058 "EHLO mga03.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2404325AbfHHWNn (ORCPT ); Thu, 8 Aug 2019 18:13:43 -0400 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga103.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 08 Aug 2019 15:13:42 -0700 
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.64,363,1559545200"; d="scan'208";a="193247264"
Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by fmsmga001.fm.intel.com with ESMTP; 08 Aug 2019 15:13:42 -0700
From: Sean Christopherson
To: Jarkko Sakkinen
Cc: linux-sgx@vger.kernel.org, Andy Lutomirski
Subject: [PATCH for_v22 3/6] x86/sgx: Move encl_page insertion into tree out of alloc flow
Date: Thu, 8 Aug 2019 15:13:37 -0700
Message-Id: <20190808221340.29460-4-sean.j.christopherson@intel.com>
X-Mailer: git-send-email 2.22.0
In-Reply-To: <20190808221340.29460-1-sean.j.christopherson@intel.com>
References: <20190808221340.29460-1-sean.j.christopherson@intel.com>
MIME-Version: 1.0
Sender: linux-sgx-owner@vger.kernel.org
Precedence: bulk
List-ID: X-Mailing-List: linux-sgx@vger.kernel.org
X-Virus-Scanned: ClamAV using ClamSMTP

Move insertion into the page tree out of sgx_encl_page_alloc() so that
the function can be moved out from under encl->lock.  This is a
preparatory step for removing the add page worker, as the encl_page is
needed to allocate its EPC page, and EPC page allocation must be done
without holding encl->lock so that it can trigger reclaim if necessary.

Note, radix_tree_insert() returns -EEXIST if the slot is already in use,
i.e. there's no need to pre-check via radix_tree_lookup().
Signed-off-by: Sean Christopherson --- arch/x86/kernel/cpu/sgx/driver/ioctl.c | 16 +++++++--------- 1 file changed, 7 insertions(+), 9 deletions(-) diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c index 64deab36ed05..126f95afa9a1 100644 --- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c +++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c @@ -251,10 +251,7 @@ static struct sgx_encl_page *sgx_encl_page_alloc(struct sgx_encl *encl, u64 page_type) { struct sgx_encl_page *encl_page; - int ret; - if (radix_tree_lookup(&encl->page_tree, PFN_DOWN(addr))) - return ERR_PTR(-EEXIST); encl_page = kzalloc(sizeof(*encl_page), GFP_KERNEL); if (!encl_page) return ERR_PTR(-ENOMEM); @@ -263,12 +260,7 @@ static struct sgx_encl_page *sgx_encl_page_alloc(struct sgx_encl *encl, encl_page->desc |= SGX_ENCL_PAGE_TCS; encl_page->encl = encl; encl_page->vm_prot_bits = calc_vm_prot_bits(prot, 0); - ret = radix_tree_insert(&encl->page_tree, PFN_DOWN(encl_page->desc), - encl_page); - if (ret) { - kfree(encl_page); - return ERR_PTR(ret); - } + return encl_page; } @@ -539,6 +531,11 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr, goto err_out_shrink; } + ret = radix_tree_insert(&encl->page_tree, PFN_DOWN(encl_page->desc), + encl_page); + if (ret) + goto err_out_free; + ret = __sgx_encl_add_page(encl, encl_page, data, secinfo, mrmask); if (ret) goto err_out; @@ -549,6 +546,7 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr, err_out: radix_tree_delete(&encl_page->encl->page_tree, PFN_DOWN(encl_page->desc)); +err_out_free: kfree(encl_page); err_out_shrink: From patchwork Thu Aug 8 22:13:38 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11084893 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP 
id DF6061709 for ; Thu, 8 Aug 2019 22:13:44 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id CF10E28BC3 for ; Thu, 8 Aug 2019 22:13:44 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id C39B128BD0; Thu, 8 Aug 2019 22:13:44 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 6B0A528BCF for ; Thu, 8 Aug 2019 22:13:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2404567AbfHHWNo (ORCPT ); Thu, 8 Aug 2019 18:13:44 -0400 Received: from mga03.intel.com ([134.134.136.65]:19064 "EHLO mga03.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2404518AbfHHWNo (ORCPT ); Thu, 8 Aug 2019 18:13:44 -0400 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga103.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 08 Aug 2019 15:13:43 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,363,1559545200"; d="scan'208";a="193247268" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by fmsmga001.fm.intel.com with ESMTP; 08 Aug 2019 15:13:42 -0700 From: Sean Christopherson To: Jarkko Sakkinen Cc: linux-sgx@vger.kernel.org, Andy Lutomirski Subject: [PATCH for_v22 4/6] x86/sgx: Allocate encl_page prior to taking encl->lock Date: Thu, 8 Aug 2019 15:13:38 -0700 Message-Id: <20190808221340.29460-5-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.22.0 In-Reply-To: <20190808221340.29460-1-sean.j.christopherson@intel.com> References: <20190808221340.29460-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 
Sender: linux-sgx-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-sgx@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Refactor sgx_encl_add_page() to allocate the encl_page prior to taking encl->lock so that the encl_page can be used to allocate its associated EPC page without having to drop and retake encl->lock. Removal of the add page worker will move EPC page allocation to sgx_encl_add_page(). Signed-off-by: Sean Christopherson --- arch/x86/kernel/cpu/sgx/driver/ioctl.c | 29 ++++++++++++-------------- 1 file changed, 13 insertions(+), 16 deletions(-) diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c index 126f95afa9a1..2d876429ba9a 100644 --- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c +++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c @@ -517,24 +517,22 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr, return ret; } - mutex_lock(&encl->lock); - - va_page = sgx_encl_grow(encl, SGX_ENCL_INITIALIZED | SGX_ENCL_DEAD); - if (IS_ERR(va_page)) { - ret = PTR_ERR(va_page); - goto err_out_unlock; - } - encl_page = sgx_encl_page_alloc(encl, addr, prot, page_type); - if (IS_ERR(encl_page)) { - ret = PTR_ERR(encl_page); - goto err_out_shrink; + if (IS_ERR(encl_page)) + return PTR_ERR(encl_page); + + mutex_lock(&encl->lock); + + va_page = sgx_encl_grow(encl, SGX_ENCL_INITIALIZED | SGX_ENCL_DEAD); + if (IS_ERR(va_page)) { + ret = PTR_ERR(va_page); + goto err_out_free; } ret = radix_tree_insert(&encl->page_tree, PFN_DOWN(encl_page->desc), encl_page); if (ret) - goto err_out_free; + goto err_out_shrink; ret = __sgx_encl_add_page(encl, encl_page, data, secinfo, mrmask); if (ret) @@ -546,13 +544,12 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr, err_out: radix_tree_delete(&encl_page->encl->page_tree, PFN_DOWN(encl_page->desc)); +err_out_shrink: + sgx_encl_shrink(encl, va_page); + err_out_free: kfree(encl_page); -err_out_shrink: - sgx_encl_shrink(encl, va_page); - 
-err_out_unlock: mutex_unlock(&encl->lock); return ret; } From patchwork Thu Aug 8 22:13:39 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11084897 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BA9771398 for ; Thu, 8 Aug 2019 22:13:45 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id A975E28BC3 for ; Thu, 8 Aug 2019 22:13:45 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 9D89128BC9; Thu, 8 Aug 2019 22:13:45 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id D45E928BCF for ; Thu, 8 Aug 2019 22:13:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2404325AbfHHWNo (ORCPT ); Thu, 8 Aug 2019 18:13:44 -0400 Received: from mga03.intel.com ([134.134.136.65]:19058 "EHLO mga03.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2404533AbfHHWNo (ORCPT ); Thu, 8 Aug 2019 18:13:44 -0400 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga103.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 08 Aug 2019 15:13:43 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,363,1559545200"; d="scan'208";a="193247273" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by fmsmga001.fm.intel.com with ESMTP; 08 Aug 2019 15:13:43 -0700 From: Sean Christopherson To: Jarkko Sakkinen Cc: 
linux-sgx@vger.kernel.org, Andy Lutomirski
Subject: [PATCH for_v22 5/6] x86/sgx: Remove the EADD page worker
Date: Thu, 8 Aug 2019 15:13:39 -0700
Message-Id: <20190808221340.29460-6-sean.j.christopherson@intel.com>
X-Mailer: git-send-email 2.22.0
In-Reply-To: <20190808221340.29460-1-sean.j.christopherson@intel.com>
References: <20190808221340.29460-1-sean.j.christopherson@intel.com>
MIME-Version: 1.0
Sender: linux-sgx-owner@vger.kernel.org
Precedence: bulk
List-ID: X-Mailing-List: linux-sgx@vger.kernel.org
X-Virus-Scanned: ClamAV using ClamSMTP

Remove the work queue approach to adding pages to an enclave.  There are
several benefits to fully adding pages within the context of the ioctl()
call:

  - Simplifies the code base

  - Provides userspace with accurate error information, e.g. the ioctl()
    now fails if EPC allocation fails

  - Paves the way for passing the user's source page directly to EADD to
    eliminate the overhead of allocating a kernel page and copying the
    user data into said kernel page.

The downside to removing the worker is that applications with their own
scheduler, e.g. Go's M:N scheduler, can see a significant reduction in
throughput (10x or more) when building multiple enclaves in parallel,
e.g. in the Go case, spinning up several goroutines with each goroutine
building a different enclave.
Suggested-by: Andy Lutomirski Signed-off-by: Sean Christopherson --- arch/x86/kernel/cpu/sgx/driver/ioctl.c | 191 ++++++------------------- arch/x86/kernel/cpu/sgx/driver/main.c | 4 - arch/x86/kernel/cpu/sgx/encl.h | 2 - 3 files changed, 40 insertions(+), 157 deletions(-) diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c index 2d876429ba9a..e083625dcd15 100644 --- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c +++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c @@ -14,14 +14,6 @@ #include #include "driver.h" -struct sgx_add_page_req { - struct sgx_encl *encl; - struct sgx_encl_page *encl_page; - struct sgx_secinfo secinfo; - unsigned long mrmask; - struct list_head list; -}; - static struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl, unsigned int disallowed_flags) { @@ -77,115 +69,6 @@ static void sgx_encl_shrink(struct sgx_encl *encl, struct sgx_va_page *va_page) } } -static bool sgx_process_add_page_req(struct sgx_add_page_req *req, - struct sgx_epc_page *epc_page) -{ - struct sgx_encl_page *encl_page = req->encl_page; - struct sgx_encl *encl = req->encl; - unsigned long page_index = sgx_encl_get_index(encl, encl_page); - struct sgx_secinfo secinfo; - struct sgx_pageinfo pginfo; - struct page *backing; - unsigned long addr; - int ret; - int i; - - if (encl->flags & SGX_ENCL_DEAD) - return false; - - addr = SGX_ENCL_PAGE_ADDR(encl_page); - - backing = sgx_encl_get_backing_page(encl, page_index); - if (IS_ERR(backing)) - return false; - - /* - * The SECINFO field must be 64-byte aligned, copy it to a local - * variable that is guaranteed to be aligned as req->secinfo may - * or may not be 64-byte aligned, e.g. req may have been allocated - * via kzalloc which is not aware of __aligned attributes. 
- */ - memcpy(&secinfo, &req->secinfo, sizeof(secinfo)); - - pginfo.secs = (unsigned long)sgx_epc_addr(encl->secs.epc_page); - pginfo.addr = addr; - pginfo.metadata = (unsigned long)&secinfo; - pginfo.contents = (unsigned long)kmap_atomic(backing); - ret = __eadd(&pginfo, sgx_epc_addr(epc_page)); - kunmap_atomic((void *)(unsigned long)pginfo.contents); - - put_page(backing); - - if (ret) { - if (encls_failed(ret)) - ENCLS_WARN(ret, "EADD"); - return false; - } - - for_each_set_bit(i, &req->mrmask, 16) { - ret = __eextend(sgx_epc_addr(encl->secs.epc_page), - sgx_epc_addr(epc_page) + (i * 0x100)); - if (ret) { - if (encls_failed(ret)) - ENCLS_WARN(ret, "EEXTEND"); - return false; - } - } - - encl_page->encl = encl; - encl_page->epc_page = epc_page; - encl->secs_child_cnt++; - sgx_mark_page_reclaimable(encl_page->epc_page); - - return true; -} - -static void sgx_add_page_worker(struct work_struct *work) -{ - struct sgx_add_page_req *req; - bool skip_rest = false; - bool is_empty = false; - struct sgx_encl *encl; - struct sgx_epc_page *epc_page; - - encl = container_of(work, struct sgx_encl, work); - - do { - schedule(); - - mutex_lock(&encl->lock); - if (encl->flags & SGX_ENCL_DEAD) - skip_rest = true; - - req = list_first_entry(&encl->add_page_reqs, - struct sgx_add_page_req, list); - list_del(&req->list); - is_empty = list_empty(&encl->add_page_reqs); - mutex_unlock(&encl->lock); - - if (skip_rest) - goto next; - - epc_page = sgx_alloc_page(req->encl_page, true); - - mutex_lock(&encl->lock); - - if (IS_ERR(epc_page)) { - sgx_encl_destroy(encl); - skip_rest = true; - } else if (!sgx_process_add_page_req(req, epc_page)) { - sgx_free_page(epc_page); - sgx_encl_destroy(encl); - skip_rest = true; - } - - mutex_unlock(&encl->lock); - -next: - kfree(req); - } while (!is_empty); -} - static u32 sgx_calc_ssaframesize(u32 miscselect, u64 xfrm) { u32 size_max = PAGE_SIZE; @@ -298,8 +181,6 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs) encl->backing 
= backing; - INIT_WORK(&encl->work, sgx_add_page_worker); - secs_epc = sgx_alloc_page(&encl->secs, true); if (IS_ERR(secs_epc)) { ret = PTR_ERR(secs_epc); @@ -465,40 +346,42 @@ static int sgx_validate_tcs(struct sgx_encl *encl, struct sgx_tcs *tcs) static int __sgx_encl_add_page(struct sgx_encl *encl, struct sgx_encl_page *encl_page, + struct sgx_epc_page *epc_page, void *data, struct sgx_secinfo *secinfo, - unsigned int mrmask) + unsigned long mrmask) { - unsigned long page_index = sgx_encl_get_index(encl, encl_page); - struct sgx_add_page_req *req = NULL; - struct page *backing; - void *backing_ptr; - int empty; + struct sgx_pageinfo pginfo; + int ret; + int i; - req = kzalloc(sizeof(*req), GFP_KERNEL); - if (!req) - return -ENOMEM; + pginfo.secs = (unsigned long)sgx_epc_addr(encl->secs.epc_page); + pginfo.addr = SGX_ENCL_PAGE_ADDR(encl_page); + pginfo.metadata = (unsigned long)secinfo; + pginfo.contents = (unsigned long)data; - backing = sgx_encl_get_backing_page(encl, page_index); - if (IS_ERR(backing)) { - kfree(req); - return PTR_ERR(backing); + ret = __eadd(&pginfo, sgx_epc_addr(epc_page)); + if (ret) { + if (encls_failed(ret)) + ENCLS_WARN(ret, "EADD"); + return -EFAULT; } - backing_ptr = kmap(backing); - memcpy(backing_ptr, data, PAGE_SIZE); - kunmap(backing); + for_each_set_bit(i, &mrmask, 16) { + ret = __eextend(sgx_epc_addr(encl->secs.epc_page), + sgx_epc_addr(epc_page) + (i * 0x100)); + if (ret) { + if (encls_failed(ret)) + ENCLS_WARN(ret, "EEXTEND"); + return -EFAULT; + } + } + + encl_page->encl = encl; + encl_page->epc_page = epc_page; + encl->secs_child_cnt++; + sgx_mark_page_reclaimable(encl_page->epc_page); - memcpy(&req->secinfo, secinfo, sizeof(*secinfo)); - req->encl = encl; - req->encl_page = encl_page; - req->mrmask = mrmask; - empty = list_empty(&encl->add_page_reqs); - list_add_tail(&req->list, &encl->add_page_reqs); - if (empty) - queue_work(sgx_encl_wq, &encl->work); - set_page_dirty(backing); - put_page(backing); return 0; } @@ -508,6 
+391,7 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr, { u64 page_type = secinfo->flags & SGX_SECINFO_PAGE_TYPE_MASK; struct sgx_encl_page *encl_page; + struct sgx_epc_page *epc_page; struct sgx_va_page *va_page; int ret; @@ -521,6 +405,12 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr, if (IS_ERR(encl_page)) return PTR_ERR(encl_page); + epc_page = sgx_alloc_page(encl_page, true); + if (IS_ERR(epc_page)) { + kfree(encl_page); + return PTR_ERR(epc_page); + } + mutex_lock(&encl->lock); va_page = sgx_encl_grow(encl, SGX_ENCL_INITIALIZED | SGX_ENCL_DEAD); @@ -534,7 +424,8 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr, if (ret) goto err_out_shrink; - ret = __sgx_encl_add_page(encl, encl_page, data, secinfo, mrmask); + ret = __sgx_encl_add_page(encl, encl_page, epc_page, data, secinfo, + mrmask); if (ret) goto err_out; @@ -548,6 +439,7 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr, sgx_encl_shrink(encl, va_page); err_out_free: + sgx_free_page(epc_page); kfree(encl_page); mutex_unlock(&encl->lock); @@ -591,14 +483,13 @@ static int sgx_encl_page_import_user(void *dst, unsigned long src, * @arg: a user pointer to a struct sgx_enclave_add_page instance * * Add a page to an uninitialized enclave (EADD), and optionally extend the - * enclave's measurement with the contents of the page (EEXTEND). Adding is done - * asynchronously. A success only indicates that the page has been added to a - * work queue. + * enclave's measurement with the contents of the page (EEXTEND). 
* * Return: * 0 on success, * -EINVAL if other than RWX protection bits have been set * -EACCES if the source page is located in a noexec partition + * -ENOMEM if any memory allocation, including EPC, fails * -errno otherwise */ static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg) @@ -701,8 +592,6 @@ static int sgx_encl_init(struct sgx_encl *encl, struct sgx_sigstruct *sigstruct, if (ret) return ret; - flush_work(&encl->work); - mutex_lock(&encl->lock); if (encl->flags & (SGX_ENCL_INITIALIZED | SGX_ENCL_DEAD)) { diff --git a/arch/x86/kernel/cpu/sgx/driver/main.c b/arch/x86/kernel/cpu/sgx/driver/main.c index dfa107247f2d..e740d71e2311 100644 --- a/arch/x86/kernel/cpu/sgx/driver/main.c +++ b/arch/x86/kernel/cpu/sgx/driver/main.c @@ -32,7 +32,6 @@ static int sgx_open(struct inode *inode, struct file *file) return -ENOMEM; kref_init(&encl->refcount); - INIT_LIST_HEAD(&encl->add_page_reqs); INIT_LIST_HEAD(&encl->va_pages); INIT_RADIX_TREE(&encl->page_tree, GFP_KERNEL); mutex_init(&encl->lock); @@ -81,9 +80,6 @@ static int sgx_release(struct inode *inode, struct file *file) encl->flags |= SGX_ENCL_DEAD; mutex_unlock(&encl->lock); - if (encl->work.func) - flush_work(&encl->work); - kref_put(&encl->refcount, sgx_encl_release); return 0; } diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h index 379fe4c22b5d..79885aae8869 100644 --- a/arch/x86/kernel/cpu/sgx/encl.h +++ b/arch/x86/kernel/cpu/sgx/encl.h @@ -81,8 +81,6 @@ struct sgx_encl { unsigned long ssaframesize; struct list_head va_pages; struct radix_tree_root page_tree; - struct list_head add_page_reqs; - struct work_struct work; struct sgx_encl_page secs; cpumask_t cpumask; }; From patchwork Thu Aug 8 22:13:40 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11084895 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org 
[172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6D21D1850 for ; Thu, 8 Aug 2019 22:13:45 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 589E828BC3 for ; Thu, 8 Aug 2019 22:13:45 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 4D35028BC9; Thu, 8 Aug 2019 22:13:45 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B4BBF28BD3 for ; Thu, 8 Aug 2019 22:13:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2404518AbfHHWNo (ORCPT ); Thu, 8 Aug 2019 18:13:44 -0400 Received: from mga03.intel.com ([134.134.136.65]:19064 "EHLO mga03.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2404325AbfHHWNo (ORCPT ); Thu, 8 Aug 2019 18:13:44 -0400 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga103.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 08 Aug 2019 15:13:43 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,363,1559545200"; d="scan'208";a="193247279" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by fmsmga001.fm.intel.com with ESMTP; 08 Aug 2019 15:13:43 -0700 From: Sean Christopherson To: Jarkko Sakkinen Cc: linux-sgx@vger.kernel.org, Andy Lutomirski Subject: [PATCH for_v22 6/6] x86/sgx: Pass userspace source address directly to EADD Date: Thu, 8 Aug 2019 15:13:40 -0700 Message-Id: <20190808221340.29460-7-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.22.0 In-Reply-To: <20190808221340.29460-1-sean.j.christopherson@intel.com> 
References: <20190808221340.29460-1-sean.j.christopherson@intel.com>
MIME-Version: 1.0
Sender: linux-sgx-owner@vger.kernel.org
Precedence: bulk
List-ID: 
X-Mailing-List: linux-sgx@vger.kernel.org

Invoke EADD with the userspace source address instead of first copying
the data to a kernel page to avoid the overhead of alloc_page() and
copy_from_user().

Since EADD can now fault due to consuming a userspace address, drop the
TCS page validation and let hardware generate a fault if the TCS is bad.
Because the vast majority of the TCS is reserved bytes, verifying the
TCS essentially requires reading the entire page, which runs counter to
the goal of invoking EADD with the userspace address.

Suggested-by: Andy Lutomirski
Signed-off-by: Sean Christopherson
---
 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 148 ++++++-------------------
 1 file changed, 33 insertions(+), 115 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
index e083625dcd15..3b4297441e27 100644
--- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
@@ -301,71 +301,46 @@ static int sgx_validate_secinfo(struct sgx_secinfo *secinfo)
 	return 0;
 }
 
-static bool sgx_validate_offset(struct sgx_encl *encl, unsigned long offset)
-{
-	if (offset & (PAGE_SIZE - 1))
-		return false;
-
-	if (offset >= encl->size)
-		return false;
-
-	return true;
-}
-
-static int sgx_validate_tcs(struct sgx_encl *encl, struct sgx_tcs *tcs)
-{
-	int i;
-
-	if (tcs->flags & SGX_TCS_RESERVED_MASK)
-		return -EINVAL;
-
-	if (tcs->flags & SGX_TCS_DBGOPTIN)
-		return -EINVAL;
-
-	if (!sgx_validate_offset(encl, tcs->ssa_offset))
-		return -EINVAL;
-
-	if (!sgx_validate_offset(encl, tcs->fs_offset))
-		return -EINVAL;
-
-	if (!sgx_validate_offset(encl, tcs->gs_offset))
-		return -EINVAL;
-
-	if ((tcs->fs_limit & 0xFFF) != 0xFFF)
-		return -EINVAL;
-
-	if ((tcs->gs_limit & 0xFFF) != 0xFFF)
-		return -EINVAL;
-
-	for (i = 0; i < SGX_TCS_RESERVED_SIZE; i++)
-		if (tcs->reserved[i])
-			return -EINVAL;
-
-	return 0;
-}
-
 static int __sgx_encl_add_page(struct sgx_encl *encl,
 			       struct sgx_encl_page *encl_page,
 			       struct sgx_epc_page *epc_page,
-			       void *data,
-			       struct sgx_secinfo *secinfo,
-			       unsigned long mrmask)
+			       struct sgx_secinfo *secinfo, unsigned long src,
+			       unsigned long prot, unsigned long mrmask)
 {
 	struct sgx_pageinfo pginfo;
+	struct vm_area_struct *vma;
 	int ret;
 	int i;
 
 	pginfo.secs = (unsigned long)sgx_epc_addr(encl->secs.epc_page);
 	pginfo.addr = SGX_ENCL_PAGE_ADDR(encl_page);
 	pginfo.metadata = (unsigned long)secinfo;
-	pginfo.contents = (unsigned long)data;
+	pginfo.contents = src;
+
+	down_read(&current->mm->mmap_sem);
+
+	/* Query vma's VM_MAYEXEC as an indirect path_noexec() check. */
+	if (encl_page->vm_prot_bits & VM_EXEC) {
+		vma = find_vma(current->mm, src);
+		if (!vma) {
+			up_read(&current->mm->mmap_sem);
+			return -EFAULT;
+		}
+
+		if (!(vma->vm_flags & VM_MAYEXEC)) {
+			up_read(&current->mm->mmap_sem);
+			return -EACCES;
+		}
+	}
+
+	__uaccess_begin();
 	ret = __eadd(&pginfo, sgx_epc_addr(epc_page));
-	if (ret) {
-		if (encls_failed(ret))
-			ENCLS_WARN(ret, "EADD");
+	__uaccess_end();
+
+	up_read(&current->mm->mmap_sem);
+
+	if (ret)
 		return -EFAULT;
-	}
 
 	for_each_set_bit(i, &mrmask, 16) {
 		ret = __eextend(sgx_epc_addr(encl->secs.epc_page),
@@ -385,9 +360,9 @@ static int __sgx_encl_add_page(struct sgx_encl *encl,
 	return 0;
 }
 
-static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
-			     void *data, struct sgx_secinfo *secinfo,
-			     unsigned int mrmask, unsigned long prot)
+static int sgx_encl_add_page(struct sgx_encl *encl,
+			     struct sgx_enclave_add_page *addp,
+			     struct sgx_secinfo *secinfo, unsigned long prot)
 {
 	u64 page_type = secinfo->flags & SGX_SECINFO_PAGE_TYPE_MASK;
 	struct sgx_encl_page *encl_page;
@@ -395,13 +370,7 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 	struct sgx_va_page *va_page;
 	int ret;
 
-	if (page_type == SGX_SECINFO_TCS) {
-		ret = sgx_validate_tcs(encl, data);
-		if (ret)
-			return ret;
-	}
-
-	encl_page = sgx_encl_page_alloc(encl, addr, prot, page_type);
+	encl_page = sgx_encl_page_alloc(encl, addp->addr, prot, page_type);
 	if (IS_ERR(encl_page))
 		return PTR_ERR(encl_page);
 
@@ -424,8 +393,8 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 	if (ret)
 		goto err_out_shrink;
 
-	ret = __sgx_encl_add_page(encl, encl_page, epc_page, data, secinfo,
-				  mrmask);
+	ret = __sgx_encl_add_page(encl, encl_page, epc_page, secinfo,
				  addp->src, prot, addp->mrmask);
 	if (ret)
 		goto err_out;
 
@@ -446,36 +415,6 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 	return ret;
 }
 
-static int sgx_encl_page_import_user(void *dst, unsigned long src,
-				     unsigned long prot)
-{
-	struct vm_area_struct *vma;
-	int ret = 0;
-
-	down_read(&current->mm->mmap_sem);
-
-	/* Query vma's VM_MAYEXEC as an indirect path_noexec() check. */
-	if (prot & PROT_EXEC) {
-		vma = find_vma(current->mm, src);
-		if (!vma) {
-			ret = -EFAULT;
-			goto out;
-		}
-
-		if (!(vma->vm_flags & VM_MAYEXEC)) {
-			ret = -EACCES;
-			goto out;
-		}
-	}
-
-	if (copy_from_user(dst, (void __user *)src, PAGE_SIZE))
-		ret = -EFAULT;
-
-out:
-	up_read(&current->mm->mmap_sem);
-	return ret;
-}
-
 /**
  * sgx_ioc_enclave_add_page - handler for %SGX_IOC_ENCLAVE_ADD_PAGE
  *
@@ -497,10 +436,7 @@ static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg)
 	struct sgx_encl *encl = filep->private_data;
 	struct sgx_enclave_add_page addp;
 	struct sgx_secinfo secinfo;
-	struct page *data_page;
 	unsigned long prot;
-	void *data;
-	int ret;
 
 	if (!(encl->flags & SGX_ENCL_CREATED))
 		return -EINVAL;
@@ -522,12 +458,6 @@ static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg)
 	if (sgx_validate_secinfo(&secinfo))
 		return -EINVAL;
 
-	data_page = alloc_page(GFP_HIGHUSER);
-	if (!data_page)
-		return -ENOMEM;
-
-	data = kmap(data_page);
-
 	prot = _calc_vm_trans(secinfo.flags, SGX_SECINFO_R, PROT_READ)  |
 	       _calc_vm_trans(secinfo.flags, SGX_SECINFO_W, PROT_WRITE) |
 	       _calc_vm_trans(secinfo.flags, SGX_SECINFO_X, PROT_EXEC);
@@ -536,19 +466,7 @@ static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg)
 	if ((secinfo.flags & SGX_SECINFO_PAGE_TYPE_MASK) == SGX_SECINFO_TCS)
 		prot |= PROT_READ | PROT_WRITE;
 
-	ret = sgx_encl_page_import_user(data, addp.src, prot);
-	if (ret)
-		goto out;
-
-	ret = sgx_encl_add_page(encl, addp.addr, data, &secinfo, addp.mrmask,
-				prot);
-	if (ret)
-		goto out;
-
-out:
-	kunmap(data_page);
-	__free_page(data_page);
-	return ret;
+	return sgx_encl_add_page(encl, &addp, &secinfo, prot);
 }
 
 static int __sgx_get_key_hash(struct crypto_shash *tfm, const void *modulus,