From patchwork Thu Jun  6 02:11:44 2019
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 10978307
From: Sean Christopherson
To: Jarkko Sakkinen
Cc: Andy Lutomirski, Cedric Xing, Stephen Smalley, James Morris,
    "Serge E. Hallyn", LSM List, Paul Moore, Eric Paris,
    selinux@vger.kernel.org, Jethro Beekman, Dave Hansen, Thomas Gleixner,
    Linus Torvalds, LKML, X86 ML, linux-sgx@vger.kernel.org, Andrew Morton,
    nhorman@redhat.com, npmccallum@redhat.com, Serge Ayoun, Shay Katz-zamir,
    Haitao Huang, Andy Shevchenko, Kai Svahn, Borislav Petkov,
    Josh Triplett, Kai Huang, David Rientjes, William Roberts, Philip Tricca
Subject: [RFC PATCH v2 4/5] LSM: x86/sgx: Introduce ->enclave_load() hook for Intel SGX
Date: Wed,  5 Jun 2019 19:11:44 -0700
Message-Id: <20190606021145.12604-5-sean.j.christopherson@intel.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190606021145.12604-1-sean.j.christopherson@intel.com>
References: <20190606021145.12604-1-sean.j.christopherson@intel.com>
X-Mailing-List: linux-sgx@vger.kernel.org

enclave_load() is roughly analogous to the existing file_mprotect().

Due to the nature of SGX and its Enclave Page Cache (EPC), all enclave
VMAs are backed by a single file, i.e. /dev/sgx/enclave, that must be
MAP_SHARED.  Furthermore, all enclaves need read, write and execute
VMAs.  As a result, the existing/standard call to file_mprotect() does
not provide any meaningful security for enclaves, since an LSM can only
deny/grant access to the EPC as a whole.

security_enclave_load() is called when SGX is first loading an enclave
page, i.e. copying a page from normal memory into the EPC.  Although
the prototype for enclave_load() is similar to file_mprotect(), e.g.
SGX could theoretically use file_mprotect() and set reqprot=prot, a
separate hook is desirable as the semantics of an enclave's protection
bits are different from those of VMAs, e.g. an enclave page tracks the
maximal set of protections, whereas file_mprotect() operates on the
actual protections being provided.  In other words, LSMs will likely
want to implement different policies for enclave page protections.

Note, extensive discussion yielded no sane alternative to some form of
SGX-specific LSM hook[1].

[1] https://lkml.kernel.org/r/CALCETrXf8mSK45h7sTK5Wf+pXLVn=Bjsc_RLpgO-h-qdzBRo5Q@mail.gmail.com

Signed-off-by: Sean Christopherson
---
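As a purely illustrative sketch (not part of this patch), a minimal LSM
backend for the new hook could look roughly like the code below.  The
module name "sgx_wx_demo" and its deny-W+X policy are hypothetical and
exist only for this example; only the enclave_load() prototype and the
standard LSM registration pattern come from the framework changes in
the diff.

/*
 * Hypothetical example: reject enclave pages whose *maximal*
 * protections would ever allow the page to be both writable and
 * executable.
 */
#include <linux/kernel.h>
#include <linux/lsm_hooks.h>
#include <linux/mm.h>

static int sgx_wx_demo_enclave_load(struct vm_area_struct *vma,
				    unsigned long prot)
{
	/*
	 * @prot is the maximal protection set for the enclave page, not
	 * the protections of the current mapping, so W+X here means the
	 * page could become simultaneously writable and executable at
	 * some point in its lifetime.
	 */
	if ((prot & VM_WRITE) && (prot & VM_EXEC))
		return -EACCES;

	return 0;
}

static struct security_hook_list sgx_wx_demo_hooks[] __lsm_ro_after_init = {
#ifdef CONFIG_INTEL_SGX
	LSM_HOOK_INIT(enclave_load, sgx_wx_demo_enclave_load),
#endif
};

static int __init sgx_wx_demo_init(void)
{
	security_add_hooks(sgx_wx_demo_hooks, ARRAY_SIZE(sgx_wx_demo_hooks),
			   "sgx_wx_demo");
	return 0;
}

DEFINE_LSM(sgx_wx_demo) = {
	.name = "sgx_wx_demo",
	.init = sgx_wx_demo_init,
};

Whether such a policy is expressed this way or via an SELinux-style
permission check is exactly the kind of per-LSM decision the separate
hook is intended to enable.
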
 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 12 ++++++------
 include/linux/lsm_hooks.h              | 13 +++++++++++++
 include/linux/security.h               | 12 ++++++++++++
 security/security.c                    |  7 +++++++
 4 files changed, 38 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
index 44b2d73de7c3..29c0df672250 100644
--- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include <linux/security.h>
 #include
 #include
 #include
@@ -582,9 +583,6 @@ static int sgx_encl_page_copy(void *dst, unsigned long src, unsigned long prot)
 	struct vm_area_struct *vma;
 	int ret;
 
-	if (!(prot & VM_EXEC))
-		return 0;
-
 	/* Hold mmap_sem across copy_from_user() to avoid a TOCTOU race. */
 	down_read(&current->mm->mmap_sem);
 
@@ -599,15 +597,17 @@ static int sgx_encl_page_copy(void *dst, unsigned long src, unsigned long prot)
 	 * but with some future proofing against other cases that may deny
 	 * execute permissions.
 	 */
-	if (!(vma->vm_flags & VM_MAYEXEC)) {
+	if ((prot & VM_EXEC) && !(vma->vm_flags & VM_MAYEXEC)) {
 		ret = -EACCES;
 		goto out;
 	}
 
+	ret = security_enclave_load(vma, prot);
+	if (ret)
+		goto out;
+
 	if (copy_from_user(dst, (void __user *)src, PAGE_SIZE))
 		ret = -EFAULT;
-	else
-		ret = 0;
 
 out:
 	up_read(&current->mm->mmap_sem);
diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h
index 47f58cfb6a19..c6f47a7eef70 100644
--- a/include/linux/lsm_hooks.h
+++ b/include/linux/lsm_hooks.h
@@ -1446,6 +1446,12 @@
  * @bpf_prog_free_security:
  *	Clean up the security information stored inside bpf prog.
  *
+ * Security hooks for Intel SGX enclaves.
+ *
+ * @enclave_load:
+ *	@vma: the source memory region of the enclave page being loaded.
+ *	@prot: the (maximal) protections of the enclave page.
+ *	Return 0 if permission is granted.
  */
 union security_list_options {
 	int (*binder_set_context_mgr)(struct task_struct *mgr);
@@ -1807,6 +1813,10 @@ union security_list_options {
 	int (*bpf_prog_alloc_security)(struct bpf_prog_aux *aux);
 	void (*bpf_prog_free_security)(struct bpf_prog_aux *aux);
 #endif /* CONFIG_BPF_SYSCALL */
+
+#ifdef CONFIG_INTEL_SGX
+	int (*enclave_load)(struct vm_area_struct *vma, unsigned long prot);
+#endif /* CONFIG_INTEL_SGX */
 };
 
 struct security_hook_heads {
@@ -2046,6 +2056,9 @@ struct security_hook_heads {
 	struct hlist_head bpf_prog_alloc_security;
 	struct hlist_head bpf_prog_free_security;
 #endif /* CONFIG_BPF_SYSCALL */
+#ifdef CONFIG_INTEL_SGX
+	struct hlist_head enclave_load;
+#endif /* CONFIG_INTEL_SGX */
 } __randomize_layout;
 
 /*
diff --git a/include/linux/security.h b/include/linux/security.h
index 659071c2e57c..0b6d1eb7368b 100644
--- a/include/linux/security.h
+++ b/include/linux/security.h
@@ -1829,5 +1829,17 @@ static inline void security_bpf_prog_free(struct bpf_prog_aux *aux)
 #endif /* CONFIG_SECURITY */
 #endif /* CONFIG_BPF_SYSCALL */
 
+#ifdef CONFIG_INTEL_SGX
+#ifdef CONFIG_SECURITY
+int security_enclave_load(struct vm_area_struct *vma, unsigned long prot);
+#else
+static inline int security_enclave_load(struct vm_area_struct *vma,
+					unsigned long prot)
+{
+	return 0;
+}
+#endif /* CONFIG_SECURITY */
+#endif /* CONFIG_INTEL_SGX */
+
 #endif /* ! __LINUX_SECURITY_H */
diff --git a/security/security.c b/security/security.c
index 613a5c00e602..c6f7f26969b2 100644
--- a/security/security.c
+++ b/security/security.c
@@ -2359,3 +2359,10 @@ void security_bpf_prog_free(struct bpf_prog_aux *aux)
 	call_void_hook(bpf_prog_free_security, aux);
 }
 #endif /* CONFIG_BPF_SYSCALL */
+
+#ifdef CONFIG_INTEL_SGX
+int security_enclave_load(struct vm_area_struct *vma, unsigned long prot)
+{
+	return call_int_hook(enclave_load, 0, vma, prot);
+}
+#endif /* CONFIG_INTEL_SGX */