From patchwork Sat Jan 28 04:55:26 2023
X-Patchwork-Submitter: Haitao Huang
X-Patchwork-Id: 13119672
From: Haitao Huang
To: linux-sgx@vger.kernel.org, jarkko@kernel.org, dave.hansen@linux.intel.com,
    reinette.chatre@intel.com, vijay.dhanraj@intel.com
Subject: [RFC PATCH v4 1/4] x86/sgx: Export sgx_encl_eaug_page
Date: Fri, 27 Jan 2023 20:55:26 -0800
Message-Id: <20230128045529.15749-2-haitao.huang@linux.intel.com>
In-Reply-To: <20230128045529.15749-1-haitao.huang@linux.intel.com>
References: <20230128045529.15749-1-haitao.huang@linux.intel.com>
X-Mailing-List: linux-sgx@vger.kernel.org

sgx_encl_eaug_page() will be called by both the page fault handler and the
fops->fadvise() callback, so drop its static qualifier and declare it in the
driver-internal header.

Signed-off-by: Haitao Huang
Acked-by: Jarkko Sakkinen
Reviewed-by: Jarkko Sakkinen
---
 arch/x86/kernel/cpu/sgx/encl.c | 4 ++--
 arch/x86/kernel/cpu/sgx/encl.h | 2 ++
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 2a0e90fe2abc..0185c5ab48dd 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -311,8 +311,8 @@ struct sgx_encl_page *sgx_encl_load_page(struct sgx_encl *encl,
  * Returns: Appropriate vm_fault_t: VM_FAULT_NOPAGE when PTE was installed
  * successfully, VM_FAULT_SIGBUS or VM_FAULT_OOM as error otherwise.
  */
-static vm_fault_t sgx_encl_eaug_page(struct vm_area_struct *vma,
-				     struct sgx_encl *encl, unsigned long addr)
+vm_fault_t sgx_encl_eaug_page(struct vm_area_struct *vma,
+			      struct sgx_encl *encl, unsigned long addr)
 {
 	vm_fault_t vmret = VM_FAULT_SIGBUS;
 	struct sgx_pageinfo pginfo = {0};

diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index f94ff14c9486..9f19b06c3ae3 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -125,5 +125,7 @@ struct sgx_encl_page *sgx_encl_load_page(struct sgx_encl *encl,
 					 unsigned long addr);
 struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl, bool reclaim);
 void sgx_encl_shrink(struct sgx_encl *encl, struct sgx_va_page *va_page);
+vm_fault_t sgx_encl_eaug_page(struct vm_area_struct *vma,
+			      struct sgx_encl *encl, unsigned long addr);
 
 #endif /* _X86_ENCL_H */

From patchwork Sat Jan 28 04:55:27 2023
X-Patchwork-Submitter: Haitao Huang
X-Patchwork-Id: 13119673
From: Haitao Huang
To: linux-sgx@vger.kernel.org, jarkko@kernel.org, dave.hansen@linux.intel.com,
    reinette.chatre@intel.com, vijay.dhanraj@intel.com
Subject: [RFC PATCH v4 2/4] x86/sgx: Implement support for MADV_WILLNEED
Date: Fri, 27 Jan 2023 20:55:27 -0800
Message-Id: <20230128045529.15749-3-haitao.huang@linux.intel.com>
In-Reply-To: <20230128045529.15749-2-haitao.huang@linux.intel.com>
References: <20230128045529.15749-1-haitao.huang@linux.intel.com>
 <20230128045529.15749-2-haitao.huang@linux.intel.com>
X-Mailing-List: linux-sgx@vger.kernel.org

Support madvise(..., MADV_WILLNEED) by adding EPC pages with EAUG in the
newly added fops->fadvise() callback implementation, sgx_fadvise().
Change the return type and values of sgx_encl_eaug_page() so that more
specific error codes are returned for different treatment by the page fault
handler and the fadvise callback. On any error, sgx_fadvise() discontinues
further operations and returns normally. The page fault handler allows the
fault to be retried by returning VM_FAULT_NOPAGE when sgx_encl_eaug_page()
returns -EBUSY.

Signed-off-by: Haitao Huang
Reviewed-by: Jarkko Sakkinen
---
 arch/x86/kernel/cpu/sgx/driver.c | 74 ++++++++++++++++++++++++++++++++
 arch/x86/kernel/cpu/sgx/encl.c   | 59 ++++++++++++++-----------
 arch/x86/kernel/cpu/sgx/encl.h   |  4 +-
 3 files changed, 111 insertions(+), 26 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver.c b/arch/x86/kernel/cpu/sgx/driver.c
index aa9b8b868867..3a88daddc1a1 100644
--- a/arch/x86/kernel/cpu/sgx/driver.c
+++ b/arch/x86/kernel/cpu/sgx/driver.c
@@ -2,6 +2,7 @@
 /* Copyright(c) 2016-20 Intel Corporation. */
 
 #include
+#include
 #include
 #include
@@ -9,6 +10,7 @@
 #include
 #include "driver.h"
 #include "encl.h"
+#include "encls.h"
 
 u64 sgx_attributes_reserved_mask;
 u64 sgx_xfrm_reserved_mask = ~0x3;
@@ -97,10 +99,81 @@ static int sgx_mmap(struct file *file, struct vm_area_struct *vma)
 	vma->vm_ops = &sgx_vm_ops;
 	vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP | VM_IO;
 	vma->vm_private_data = encl;
+	vma->vm_pgoff = PFN_DOWN(vma->vm_start - encl->base);
 
 	return 0;
 }
 
+/*
+ * Add new pages to the enclave sequentially with ENCLS[EAUG] for the
+ * WILLNEED advice. Only do this to existing VMAs in the same enclave
+ * and reject the request otherwise.
+ */
+static int sgx_fadvise(struct file *file, loff_t offset, loff_t len, int advice)
+{
+	struct sgx_encl *encl = file->private_data;
+	unsigned long start = offset + encl->base;
+	struct vm_area_struct *vma = NULL;
+	unsigned long end = start + len;
+	unsigned long pos;
+	int ret = -EINVAL;
+
+	if (!cpu_feature_enabled(X86_FEATURE_SGX2))
+		return -EINVAL;
+	/* Only support WILLNEED */
+	if (advice != POSIX_FADV_WILLNEED)
+		return -EINVAL;
+
+	if (offset + len < offset)
+		return -EINVAL;
+	if (start < encl->base)
+		return -EINVAL;
+	if (end < start)
+		return -EINVAL;
+	if (end > encl->base + encl->size)
+		return -EINVAL;
+
+	if (!test_bit(SGX_ENCL_INITIALIZED, &encl->flags))
+		return -EINVAL;
+
+	mmap_read_lock(current->mm);
+
+	vma = find_vma(current->mm, start);
+	if (!vma)
+		goto unlock;
+	if (vma->vm_private_data != encl)
+		goto unlock;
+
+	pos = start;
+	if (pos < vma->vm_start || end > vma->vm_end) {
+		/* Don't allow any gaps */
+		goto unlock;
+	}
+
+	/* Here: vm_start <= pos < end <= vm_end */
+	while (pos < end) {
+		/* Skip pages that already have a backing EPC page. */
+		if (!xa_load(&encl->page_array, PFN_DOWN(pos))) {
+			if (signal_pending(current)) {
+				if (pos == start)
+					ret = -ERESTARTSYS;
+				else
+					ret = -EINTR;
+				goto unlock;
+			}
+			ret = sgx_encl_eaug_page(vma, encl, pos);
+			/* It's OK to not finish */
+			if (ret)
+				break;
+		}
+		pos += PAGE_SIZE;
+		cond_resched();
+	}
+	ret = 0;
+
+unlock:
+	mmap_read_unlock(current->mm);
+	return ret;
+}
+
 static unsigned long sgx_get_unmapped_area(struct file *file,
 					   unsigned long addr,
 					   unsigned long len,
@@ -133,6 +206,7 @@ static const struct file_operations sgx_encl_fops = {
 	.compat_ioctl = sgx_compat_ioctl,
 #endif
 	.mmap = sgx_mmap,
+	.fadvise = sgx_fadvise,
 	.get_unmapped_area = sgx_get_unmapped_area,
 };

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 0185c5ab48dd..592cfea4c9e4 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -299,20 +299,17 @@ struct sgx_encl_page *sgx_encl_load_page(struct sgx_encl *encl,
 }
 
 /**
- * sgx_encl_eaug_page() - Dynamically add page to initialized enclave
- * @vma:	VMA obtained from fault info from where page is accessed
- * @encl:	enclave accessing the page
- * @addr:	address that triggered the page fault
+ * sgx_encl_eaug_page() - Dynamically add an EPC page to initialized enclave
+ * @vma:	the VMA into which the page is to be added
+ * @encl:	the enclave for which the page is to be added
+ * @addr:	the start address of the page to be added
  *
- * When an initialized enclave accesses a page with no backing EPC page
- * on a SGX2 system then the EPC can be added dynamically via the SGX2
- * ENCLS[EAUG] instruction.
- *
- * Returns: Appropriate vm_fault_t: VM_FAULT_NOPAGE when PTE was installed
- * successfully, VM_FAULT_SIGBUS or VM_FAULT_OOM as error otherwise.
+ * Returns: 0 when EAUG succeeded and a PTE was installed, -EBUSY when the
+ * caller should wait on the reclaimer to free EPC, -ENOMEM when out of RAM,
+ * -EFAULT for all other failures.
  */
-vm_fault_t sgx_encl_eaug_page(struct vm_area_struct *vma,
-			      struct sgx_encl *encl, unsigned long addr)
+int sgx_encl_eaug_page(struct vm_area_struct *vma,
+		       struct sgx_encl *encl, unsigned long addr)
 {
 	vm_fault_t vmret = VM_FAULT_SIGBUS;
 	struct sgx_pageinfo pginfo = {0};
@@ -321,10 +318,10 @@ vm_fault_t sgx_encl_eaug_page(struct vm_area_struct *vma,
 	struct sgx_va_page *va_page;
 	unsigned long phys_addr;
 	u64 secinfo_flags;
-	int ret;
+	int ret = -EFAULT;
 
 	if (!test_bit(SGX_ENCL_INITIALIZED, &encl->flags))
-		return VM_FAULT_SIGBUS;
+		return -EFAULT;
 
 	/*
 	 * Ignore internal permission checking for dynamically added pages.
@@ -335,21 +332,21 @@ vm_fault_t sgx_encl_eaug_page(struct vm_area_struct *vma,
 	secinfo_flags = SGX_SECINFO_R | SGX_SECINFO_W | SGX_SECINFO_X;
 	encl_page = sgx_encl_page_alloc(encl, addr - encl->base, secinfo_flags);
 	if (IS_ERR(encl_page))
-		return VM_FAULT_OOM;
+		return -ENOMEM;
 
 	mutex_lock(&encl->lock);
 
 	epc_page = sgx_alloc_epc_page(encl_page, false);
 	if (IS_ERR(epc_page)) {
 		if (PTR_ERR(epc_page) == -EBUSY)
-			vmret = VM_FAULT_NOPAGE;
+			ret = -EBUSY;
 		goto err_out_unlock;
 	}
 
 	va_page = sgx_encl_grow(encl, false);
 	if (IS_ERR(va_page)) {
 		if (PTR_ERR(va_page) == -EBUSY)
-			vmret = VM_FAULT_NOPAGE;
+			ret = -EBUSY;
 		goto err_out_epc;
 	}
 
@@ -362,16 +359,20 @@ vm_fault_t sgx_encl_eaug_page(struct vm_area_struct *vma,
 	 * If ret == -EBUSY then page was created in another flow while
 	 * running without encl->lock
 	 */
-	if (ret)
+	if (ret) {
+		ret = -EFAULT;
 		goto err_out_shrink;
+	}
 
 	pginfo.secs = (unsigned long)sgx_get_epc_virt_addr(encl->secs.epc_page);
 	pginfo.addr = encl_page->desc & PAGE_MASK;
 	pginfo.metadata = 0;
 
 	ret = __eaug(&pginfo, sgx_get_epc_virt_addr(epc_page));
-	if (ret)
+	if (ret) {
+		ret = -EFAULT;
 		goto err_out;
+	}
 
 	encl_page->encl = encl;
 	encl_page->epc_page = epc_page;
@@ -388,10 +389,10 @@ vm_fault_t sgx_encl_eaug_page(struct vm_area_struct *vma,
 	vmret = vmf_insert_pfn(vma, addr, PFN_DOWN(phys_addr));
 	if (vmret != VM_FAULT_NOPAGE) {
 		mutex_unlock(&encl->lock);
-		return VM_FAULT_SIGBUS;
+		return -EFAULT;
 	}
 	mutex_unlock(&encl->lock);
-	return VM_FAULT_NOPAGE;
+	return 0;
 
 err_out:
 	xa_erase(&encl->page_array, PFN_DOWN(encl_page->desc));
@@ -404,7 +405,7 @@ vm_fault_t sgx_encl_eaug_page(struct vm_area_struct *vma,
 	mutex_unlock(&encl->lock);
 	kfree(encl_page);
-	return vmret;
+	return ret;
 }
 
 static vm_fault_t sgx_vma_fault(struct vm_fault *vmf)
@@ -434,8 +435,18 @@ static vm_fault_t sgx_vma_fault(struct vm_fault *vmf)
 	 * enclave that will be checked for right away.
 	 */
 	if (cpu_feature_enabled(X86_FEATURE_SGX2) &&
-	    (!xa_load(&encl->page_array, PFN_DOWN(addr))))
-		return sgx_encl_eaug_page(vma, encl, addr);
+	    (!xa_load(&encl->page_array, PFN_DOWN(addr)))) {
+		switch (sgx_encl_eaug_page(vma, encl, addr)) {
+		case 0:
+		case -EBUSY:
+			return VM_FAULT_NOPAGE;
+		case -ENOMEM:
+			return VM_FAULT_OOM;
+		case -EFAULT:
+		default:
+			return VM_FAULT_SIGBUS;
+		}
+	}
 
 	mutex_lock(&encl->lock);

diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index 9f19b06c3ae3..e5a507871fa3 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -125,7 +125,7 @@ struct sgx_encl_page *sgx_encl_load_page(struct sgx_encl *encl,
 					 unsigned long addr);
 struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl, bool reclaim);
 void sgx_encl_shrink(struct sgx_encl *encl, struct sgx_va_page *va_page);
-vm_fault_t sgx_encl_eaug_page(struct vm_area_struct *vma,
-			      struct sgx_encl *encl, unsigned long addr);
+int sgx_encl_eaug_page(struct vm_area_struct *vma,
+		       struct sgx_encl *encl, unsigned long addr);
 
 #endif /* _X86_ENCL_H */

From patchwork Sat Jan 28 04:55:28 2023
X-Patchwork-Submitter: Haitao Huang
X-Patchwork-Id: 13119674
From: Haitao Huang
To: linux-sgx@vger.kernel.org, jarkko@kernel.org, dave.hansen@linux.intel.com,
    reinette.chatre@intel.com, vijay.dhanraj@intel.com
Subject: [RFC PATCH v4 3/4] selftests/sgx: add len field for EACCEPT op
Date: Fri, 27 Jan 2023 20:55:28 -0800
Message-Id: <20230128045529.15749-4-haitao.huang@linux.intel.com>
In-Reply-To: <20230128045529.15749-3-haitao.huang@linux.intel.com>
References: <20230128045529.15749-1-haitao.huang@linux.intel.com>
 <20230128045529.15749-2-haitao.huang@linux.intel.com>
 <20230128045529.15749-3-haitao.huang@linux.intel.com>
X-Mailing-List: linux-sgx@vger.kernel.org

Add a len field to the EACCEPT op so we can EACCEPT multiple pages inside
the enclave without an EEXIT, preparing for tests that use MADV_WILLNEED
on ranges bigger than a single page.

Signed-off-by: Haitao Huang
Tested-by: Jarkko Sakkinen # NUC7
Reviewed-by: Jarkko Sakkinen
---
 tools/testing/selftests/sgx/defines.h   |  1 +
 tools/testing/selftests/sgx/main.c      | 15 +++++++++++++++
 tools/testing/selftests/sgx/test_encl.c | 18 ++++++++++++------
 3 files changed, 28 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/sgx/defines.h b/tools/testing/selftests/sgx/defines.h
index d8587c971941..8578e773d3d8 100644
--- a/tools/testing/selftests/sgx/defines.h
+++ b/tools/testing/selftests/sgx/defines.h
@@ -60,6 +60,7 @@ struct encl_op_eaccept {
 	struct encl_op_header header;
 	uint64_t epc_addr;
 	uint64_t flags;
+	uint64_t len;
 	uint64_t ret;
 };
 
diff --git a/tools/testing/selftests/sgx/main.c b/tools/testing/selftests/sgx/main.c
index e596b45bc5f8..e457f2d35461 100644
--- a/tools/testing/selftests/sgx/main.c
+++ b/tools/testing/selftests/sgx/main.c
@@ -493,6 +493,7 @@ TEST_F_TIMEOUT(enclave, unclobbered_vdso_oversubscribed_remove, TIMEOUT_DEFAULT)
 	eaccept_op.flags = SGX_SECINFO_TRIM | SGX_SECINFO_MODIFIED;
 	eaccept_op.header.type = ENCL_OP_EACCEPT;
+	eaccept_op.len = PAGE_SIZE;
 
 	TH_LOG("Entering enclave to run EACCEPT for each page of %zd bytes may take a while ...",
 	       heap->size);
@@ -916,6 +917,7 @@ TEST_F(enclave, epcm_permissions)
 	 * EPCM permissions changed from kernel, need to EACCEPT from enclave.
 	 */
 	eaccept_op.epc_addr = data_start;
+	eaccept_op.len = PAGE_SIZE;
 	eaccept_op.flags = SGX_SECINFO_R | SGX_SECINFO_REG | SGX_SECINFO_PR;
 	eaccept_op.ret = 0;
 	eaccept_op.header.type = ENCL_OP_EACCEPT;
@@ -1092,6 +1094,7 @@ TEST_F(enclave, augment)
 	self->run.tcs = self->encl.encl_base + PAGE_SIZE;
 
 	eaccept_op.epc_addr = self->encl.encl_base + total_size;
+	eaccept_op.len = PAGE_SIZE;
 	eaccept_op.flags = SGX_SECINFO_R | SGX_SECINFO_W | SGX_SECINFO_REG | SGX_SECINFO_PENDING;
 	eaccept_op.ret = 0;
 	eaccept_op.header.type = ENCL_OP_EACCEPT;
@@ -1194,6 +1197,7 @@ TEST_F(enclave, augment_via_eaccept)
 	 * without a #PF). All should be transparent to userspace.
 	 */
 	eaccept_op.epc_addr = self->encl.encl_base + total_size;
+	eaccept_op.len = PAGE_SIZE;
 	eaccept_op.flags = SGX_SECINFO_R | SGX_SECINFO_W | SGX_SECINFO_REG | SGX_SECINFO_PENDING;
 	eaccept_op.ret = 0;
 	eaccept_op.header.type = ENCL_OP_EACCEPT;
@@ -1299,6 +1303,7 @@ TEST_F_TIMEOUT(enclave, augment_via_eaccept_long, TIMEOUT_DEFAULT)
 	 * without a #PF). All should be transparent to userspace.
 	 */
 	eaccept_op.flags = SGX_SECINFO_R | SGX_SECINFO_W | SGX_SECINFO_REG | SGX_SECINFO_PENDING;
+	eaccept_op.len = PAGE_SIZE;
 	eaccept_op.ret = 0;
 	eaccept_op.header.type = ENCL_OP_EACCEPT;
@@ -1451,6 +1456,7 @@ TEST_F(enclave, tcs_create)
 	 */
 	eaccept_op.epc_addr = (unsigned long)stack_end;
+	eaccept_op.len = PAGE_SIZE;
 	eaccept_op.flags = SGX_SECINFO_R | SGX_SECINFO_W | SGX_SECINFO_REG | SGX_SECINFO_PENDING;
 	eaccept_op.ret = 0;
 	eaccept_op.header.type = ENCL_OP_EACCEPT;
@@ -1471,6 +1477,7 @@ TEST_F(enclave, tcs_create)
 	EXPECT_EQ(eaccept_op.ret, 0);
 
 	eaccept_op.epc_addr = (unsigned long)ssa;
+	eaccept_op.len = PAGE_SIZE;
 
 	EXPECT_EQ(ENCL_CALL(&eaccept_op, &self->run, true), 0);
 
@@ -1481,6 +1488,7 @@ TEST_F(enclave, tcs_create)
 	EXPECT_EQ(eaccept_op.ret, 0);
 
 	eaccept_op.epc_addr = (unsigned long)tcs;
+	eaccept_op.len = PAGE_SIZE;
 
 	EXPECT_EQ(ENCL_CALL(&eaccept_op, &self->run, true), 0);
 
@@ -1533,6 +1541,7 @@ TEST_F(enclave, tcs_create)
 	/* EACCEPT new TCS page from enclave. */
 	eaccept_op.epc_addr = (unsigned long)tcs;
+	eaccept_op.len = PAGE_SIZE;
 	eaccept_op.flags = SGX_SECINFO_TCS | SGX_SECINFO_MODIFIED;
 	eaccept_op.ret = 0;
 	eaccept_op.header.type = ENCL_OP_EACCEPT;
@@ -1601,6 +1610,7 @@ TEST_F(enclave, tcs_create)
 	self->run.tcs = self->encl.encl_base;
 
 	eaccept_op.epc_addr = (unsigned long)stack_end;
+	eaccept_op.len = PAGE_SIZE;
 	eaccept_op.flags = SGX_SECINFO_TRIM | SGX_SECINFO_MODIFIED;
 	eaccept_op.ret = 0;
 	eaccept_op.header.type = ENCL_OP_EACCEPT;
@@ -1614,6 +1624,7 @@ TEST_F(enclave, tcs_create)
 	EXPECT_EQ(eaccept_op.ret, 0);
 
 	eaccept_op.epc_addr = (unsigned long)tcs;
+	eaccept_op.len = PAGE_SIZE;
 	eaccept_op.ret = 0;
 
 	EXPECT_EQ(ENCL_CALL(&eaccept_op, &self->run, true), 0);
 
@@ -1625,6 +1636,7 @@ TEST_F(enclave, tcs_create)
 	EXPECT_EQ(eaccept_op.ret, 0);
 
 	eaccept_op.epc_addr = (unsigned long)ssa;
+	eaccept_op.len = PAGE_SIZE;
 	eaccept_op.ret = 0;
 
 	EXPECT_EQ(ENCL_CALL(&eaccept_op, &self->run, true), 0);
 
@@ -1653,6 +1665,7 @@ TEST_F(enclave, tcs_create)
 	 * trigger dynamic add of regular page at that location.
 	 */
 	eaccept_op.epc_addr = (unsigned long)tcs;
+	eaccept_op.len = PAGE_SIZE;
 	eaccept_op.flags = SGX_SECINFO_R | SGX_SECINFO_W | SGX_SECINFO_REG | SGX_SECINFO_PENDING;
 	eaccept_op.ret = 0;
 	eaccept_op.header.type = ENCL_OP_EACCEPT;
@@ -2022,6 +2035,7 @@ TEST_F(enclave, remove_added_page_invalid_access_after_eaccept)
 	EXPECT_EQ(ioc.count, 4096);
 
 	eaccept_op.epc_addr = (unsigned long)data_start;
+	eaccept_op.len = PAGE_SIZE;
 	eaccept_op.ret = 0;
 	eaccept_op.flags = SGX_SECINFO_TRIM | SGX_SECINFO_MODIFIED;
 	eaccept_op.header.type = ENCL_OP_EACCEPT;
@@ -2112,6 +2126,7 @@ TEST_F(enclave, remove_untouched_page)
 	 */
 	eaccept_op.epc_addr = data_start;
+	eaccept_op.len = PAGE_SIZE;
 	eaccept_op.flags = SGX_SECINFO_TRIM | SGX_SECINFO_MODIFIED;
 	eaccept_op.ret = 0;
 	eaccept_op.header.type = ENCL_OP_EACCEPT;
 
diff --git a/tools/testing/selftests/sgx/test_encl.c b/tools/testing/selftests/sgx/test_encl.c
index c0d6397295e3..439f1adbd357 100644
--- a/tools/testing/selftests/sgx/test_encl.c
+++ b/tools/testing/selftests/sgx/test_encl.c
@@ -37,12 +37,18 @@ static void do_encl_eaccept(void *_op)
 	int rax;
 
 	secinfo.flags = op->flags;
-
-	asm volatile(".byte 0x0f, 0x01, 0xd7"
-		     : "=a" (rax)
-		     : "a" (EACCEPT),
-		       "b" (&secinfo),
-		       "c" (op->epc_addr));
+	for (uint64_t addr = op->epc_addr;
+	     addr < op->epc_addr + op->len; addr += 4096) {
+		asm volatile(".byte 0x0f, 0x01, 0xd7"
+			     : "=a" (rax)
+			     : "a" (EACCEPT),
+			       "b" (&secinfo),
+			       "c" (addr));
+		if (rax) {
+			op->ret = rax;
+			return;
+		}
+	}
 
 	op->ret = rax;
 }
From patchwork Sat Jan 28 04:55:29 2023
X-Patchwork-Submitter: Haitao Huang
X-Patchwork-Id: 13119675
From: Haitao Huang
To: linux-sgx@vger.kernel.org, jarkko@kernel.org, dave.hansen@linux.intel.com,
    reinette.chatre@intel.com, vijay.dhanraj@intel.com
Subject: [RFC PATCH v4 4/4] selftests/sgx: Add test for madvise(..., WILLNEED)
Date: Fri, 27 Jan 2023 20:55:29 -0800
Message-Id:
 <20230128045529.15749-5-haitao.huang@linux.intel.com>
In-Reply-To: <20230128045529.15749-4-haitao.huang@linux.intel.com>
References: <20230128045529.15749-1-haitao.huang@linux.intel.com>
 <20230128045529.15749-2-haitao.huang@linux.intel.com>
 <20230128045529.15749-3-haitao.huang@linux.intel.com>
 <20230128045529.15749-4-haitao.huang@linux.intel.com>
X-Mailing-List: linux-sgx@vger.kernel.org

Measure and compare the run time for EAUG'ing different numbers of EPC
pages with and without a preceding madvise(..., MADV_WILLNEED) call.

Signed-off-by: Haitao Huang
Tested-by: Jarkko Sakkinen
Reviewed-by: Jarkko Sakkinen
---
 tools/testing/selftests/sgx/main.c | 167 +++++++++++++++++++++++++++++
 1 file changed, 167 insertions(+)

diff --git a/tools/testing/selftests/sgx/main.c b/tools/testing/selftests/sgx/main.c
index e457f2d35461..e3432e73af69 100644
--- a/tools/testing/selftests/sgx/main.c
+++ b/tools/testing/selftests/sgx/main.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1360,6 +1361,172 @@ TEST_F_TIMEOUT(enclave, augment_via_eaccept_long, TIMEOUT_DEFAULT)
 	munmap(addr, DYNAMIC_HEAP_SIZE);
 }
 
+static int eaccept_range(struct _test_data_enclave *self, void *addr,
+			 unsigned long size, uint64_t flags,
+			 struct __test_metadata *_metadata)
+{
+	struct encl_op_eaccept eaccept_op;
+
+	self->run.exception_vector = 0;
+	self->run.exception_error_code = 0;
+	self->run.exception_addr = 0;
+
+	/*
+	 * Run EACCEPT on every page to trigger the #PF->EAUG->EACCEPT(again
+	 * without a #PF). All should be transparent to userspace.
+	 */
+	eaccept_op.flags = flags;
+	eaccept_op.ret = 0;
+	eaccept_op.header.type = ENCL_OP_EACCEPT;
+	eaccept_op.len = size;
+	eaccept_op.epc_addr = (uint64_t)(addr);
+
+	EXPECT_EQ(ENCL_CALL(&eaccept_op, &self->run, true), 0);
+
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+	ASSERT_EQ(eaccept_op.ret, 0);
+	ASSERT_EQ(self->run.function, EEXIT);
+
+	return 0;
+}
+
+static int trim_remove_range(struct _test_data_enclave *self, void *addr,
+			     unsigned long size, struct __test_metadata *_metadata)
+{
+	int ret, errno_save;
+	struct sgx_enclave_remove_pages remove_ioc;
+	struct sgx_enclave_modify_types modt_ioc;
+	unsigned long offset;
+	unsigned long count;
+
+	if ((uint64_t)addr <= self->encl.encl_base)
+		return -1;
+
+	offset = (uint64_t)addr - self->encl.encl_base;
+
+	memset(&modt_ioc, 0, sizeof(modt_ioc));
+	modt_ioc.offset = offset;
+	modt_ioc.length = size;
+	modt_ioc.page_type = SGX_PAGE_TYPE_TRIM;
+	count = 0;
+	do {
+		ret = ioctl(self->encl.fd, SGX_IOC_ENCLAVE_MODIFY_TYPES, &modt_ioc);
+
+		errno_save = ret == -1 ? errno : 0;
+		if (errno_save != EAGAIN)
+			break;
+
+		EXPECT_EQ(modt_ioc.result, 0);
+
+		count += modt_ioc.count;
+		modt_ioc.offset += modt_ioc.count;
+		modt_ioc.length -= modt_ioc.count;
+		modt_ioc.result = 0;
+		modt_ioc.count = 0;
+	} while (modt_ioc.length != 0);
+
+	EXPECT_EQ(ret, 0);
+	EXPECT_EQ(errno_save, 0);
+	EXPECT_EQ(modt_ioc.result, 0);
+	count += modt_ioc.count;
+	EXPECT_EQ(count, size);
+
+	EXPECT_EQ(eaccept_range(self, addr, size,
+				SGX_SECINFO_TRIM | SGX_SECINFO_MODIFIED,
+				_metadata), 0);
+
+	/* Complete page removal. */
+	memset(&remove_ioc, 0, sizeof(remove_ioc));
+	remove_ioc.offset = offset;
+	remove_ioc.length = size;
+	count = 0;
+	do {
+		ret = ioctl(self->encl.fd, SGX_IOC_ENCLAVE_REMOVE_PAGES, &remove_ioc);
+
+		errno_save = ret == -1 ? errno : 0;
+		if (errno_save != EAGAIN)
+			break;
+
+		count += remove_ioc.count;
+		remove_ioc.offset += remove_ioc.count;
+		remove_ioc.length -= remove_ioc.count;
+		remove_ioc.count = 0;
+	} while (remove_ioc.length != 0);
+
+	EXPECT_EQ(ret, 0);
+	EXPECT_EQ(errno_save, 0);
+	count += remove_ioc.count;
+	EXPECT_EQ(count, size);
+
+	return 0;
+}
+
+/*
+ * Compare performance with and without a madvise() call before EACCEPT'ing
+ * regions of different sizes.
+ */
+TEST_F_TIMEOUT(enclave, augment_via_madvise, TIMEOUT_DEFAULT)
+{
+	unsigned long advise_size = PAGE_SIZE;
+	unsigned long max_advise_size = get_total_epc_mem() * 3UL;
+	int speed_up_percent;
+	clock_t start;
+	double time_used1, time_used2;
+	size_t total_size = 0;
+	unsigned long i;
+	void *addr;
+
+	if (!sgx2_supported())
+		SKIP(return, "SGX2 not supported");
+
+	ASSERT_TRUE(setup_test_encl_dynamic(ENCL_HEAP_SIZE_DEFAULT,
+					    max_advise_size, &self->encl, _metadata));
+
+	memset(&self->run, 0, sizeof(self->run));
+	self->run.tcs = self->encl.encl_base;
+
+	for (i = 0; i < self->encl.nr_segments; i++) {
+		struct encl_segment *seg = &self->encl.segment_tbl[i];
+
+		total_size += seg->size;
+	}
+
+	for (i = 1; i < 52 && advise_size < max_advise_size; i++) {
+		addr = mmap((void *)self->encl.encl_base + total_size, advise_size,
+			    PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED,
+			    self->encl.fd, 0);
+		EXPECT_NE(addr, MAP_FAILED);
+
+		start = clock();
+		EXPECT_EQ(eaccept_range(self, addr, advise_size,
+					SGX_SECINFO_R | SGX_SECINFO_W
+					| SGX_SECINFO_REG
+					| SGX_SECINFO_PENDING,
+					_metadata), 0);
+		time_used1 = (double)clock() - start;
+
+		EXPECT_EQ(trim_remove_range(self, addr, advise_size, _metadata), 0);
+
+		start = clock();
+		EXPECT_EQ(madvise(addr, advise_size, MADV_WILLNEED), 0);
+		EXPECT_EQ(eaccept_range(self, addr, advise_size,
+					SGX_SECINFO_R | SGX_SECINFO_W
+					| SGX_SECINFO_REG
+					| SGX_SECINFO_PENDING,
+					_metadata), 0);
+		time_used2 = (double)clock() - start;
+
+		speed_up_percent = (int)((time_used1 - time_used2) / time_used1 * 100);
+		TH_LOG("madvise speed up for eaug'ing %10ld pages: %d%%",
+		       advise_size / PAGE_SIZE, speed_up_percent);
+		EXPECT_GE(speed_up_percent, 0);
+		EXPECT_EQ(trim_remove_range(self, addr, advise_size, _metadata), 0);
+		munmap(addr, advise_size);
+		advise_size = (advise_size << 1UL);
+	}
+	encl_delete(&self->encl);
+}
+
 /*
  * SGX2 page type modification test in two phases:
  * Phase 1: