From patchwork Sun Jul 7 23:41:31 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: "Xing, Cedric"
X-Patchwork-Id: 11034439
From: Cedric Xing
To: linux-sgx@vger.kernel.org,
    linux-security-module@vger.kernel.org, selinux@vger.kernel.org,
    cedric.xing@intel.com
Subject: [RFC PATCH v3 1/4] x86/sgx: Add SGX specific LSM hooks
Date: Sun, 7 Jul 2019 16:41:31 -0700
Message-Id: <3280c19f6f5c718fb17c7463fc9f620cd06a05cc.1562542383.git.cedric.xing@intel.com>
X-Mailer: git-send-email 2.17.1
X-Mailing-List: linux-sgx@vger.kernel.org

SGX enclaves are loaded from pages in regular memory. Given the ability to
create executable pages, the newly added SGX subsystem may present a
backdoor for adversaries to circumvent LSM policies, e.g. by creating an
executable enclave page from a modified regular page that LSM would never
have allowed to become executable. This raises the primary question of
whether an enclave page should be allowed to be created from a given
source page in regular memory.

A related question is whether to grant or deny an mprotect() request on a
given enclave page/range. mprotect() is traditionally covered by the
security_file_mprotect() hook; however, enclave pages have a different
lifespan than either MAP_PRIVATE or MAP_SHARED pages. MAP_PRIVATE pages
have the same lifespan as the VMA, and MAP_SHARED pages have the same
lifespan as the backing file (on disk), but enclave pages live as long as
the enclave's file descriptor. For example, enclave pages can be
munmap()'ed and then mmap()'ed again without losing their contents (like
MAP_SHARED), yet all enclave pages are lost once the enclave's file
descriptor is closed (like MAP_PRIVATE). Consequently, LSM modules need a
new data structure for tracking protections of enclave pages/ranges so
that they can make proper decisions at mmap()/mprotect() time.

The last question, which is orthogonal to the two above, is whether or
not to allow a given enclave to launch/run.
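The lifespan argument above implies a concrete shape for the tracking structure LSMs would need. The following is a rough userspace sketch under assumed, illustrative names (none of these functions or types are kernel API): a per-enclave sorted list of loaded ranges that outlives any VMA and is consulted at mmap()/mprotect() time.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Userspace sketch only: one record per loaded enclave range, keyed by
 * enclave offset, kept for the lifetime of the enclave fd rather than
 * the lifetime of any VMA. All names are illustrative. */
struct encl_range {
	struct encl_range *next;
	size_t start, end;	/* [start, end) inside the enclave */
	unsigned int prot;	/* PROT_*-style bits allowed for the range */
};

/* Record a range; returns 0 on success, -1 on allocation failure.
 * This sketch assumes non-overlapping insertions and keeps the list
 * sorted by start address. */
int encl_track(struct encl_range **head, size_t start, size_t end,
	       unsigned int prot)
{
	struct encl_range **pp = head;
	struct encl_range *n = malloc(sizeof(*n));

	if (!n)
		return -1;
	n->start = start;
	n->end = end;
	n->prot = prot;
	while (*pp && (*pp)->start < start)
		pp = &(*pp)->next;
	n->next = *pp;
	*pp = n;
	return 0;
}

/* What an mmap()/mprotect() check would consult: the protection recorded
 * for the range covering addr, or 0 if the address was never loaded. */
unsigned int encl_prot(const struct encl_range *head, size_t addr)
{
	for (; head; head = head->next)
		if (addr >= head->start && addr < head->end)
			return head->prot;
	return 0;
}
```

The key property being modeled is that the records survive munmap() and die only with the enclave itself, which is exactly what neither the VMA nor the backing file can express today.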
Enclave pages are not visible to the rest of the system, so they offer,
to some extent, a better place for malicious software to hide. Thus, it
is sometimes desirable to whitelist/blacklist enclaves by their
measurements, signing public keys, or image files.

To address the questions above, two new LSM hooks are added for enclaves:

· security_enclave_load() – This hook allows an LSM to decide whether or
  not to allow instantiation of a range of enclave pages using the
  specified VMA. It is invoked when a range of enclave pages is about to
  be loaded. It serves three purposes: 1) indicate to the LSM that the
  file struct in question is an enclave; 2) allow the LSM to decide
  whether or not to instantiate those pages; and 3) allow the LSM to
  initialize internal data structures for tracking origins/protections of
  those pages.

· security_enclave_init() – This hook allows whitelisting/blacklisting,
  or whatever checks are deemed appropriate, before an enclave is allowed
  to run. An LSM module may opt to use the file backing the SIGSTRUCT as
  a proxy to dictate allowed protections for anonymous pages.

mprotect() of enclave pages continues to be governed by
security_file_mprotect(), with the expectation that the LSM is able to
distinguish between regular and enclave pages inside the hook. For
mmap(), the SGX subsystem is expected to invoke security_file_mprotect()
explicitly to check the requested protections against those recorded for
existing enclave pages.

Signed-off-by: Cedric Xing
---
 include/linux/lsm_hooks.h | 27 +++++++++++++++++++++++++++
 include/linux/security.h  | 23 +++++++++++++++++++++++
 security/security.c       | 17 +++++++++++++++++
 3 files changed, 67 insertions(+)

diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h
index 47f58cfb6a19..9d9e44200683 100644
--- a/include/linux/lsm_hooks.h
+++ b/include/linux/lsm_hooks.h
@@ -1446,6 +1446,22 @@
  * @bpf_prog_free_security:
  *	Clean up the security information stored inside bpf prog.
  *
+ * @enclave_load:
+ *	Decide if a range of pages shall be allowed to be loaded into an
+ *	enclave
+ *
+ *	@encl points to the file identifying the target enclave
+ *	@start target range starting address
+ *	@end target range ending address
+ *	@flags contains protections being requested for the target range
+ *	@source points to the VMA containing the source pages to be loaded
+ *
+ * @enclave_init:
+ *	Decide if an enclave shall be allowed to launch
+ *
+ *	@encl points to the file identifying the target enclave being launched
+ *	@sigstruct contains a copy of the SIGSTRUCT in kernel memory
+ *	@source points to the VMA backing SIGSTRUCT in user memory
  */
 union security_list_options {
	int (*binder_set_context_mgr)(struct task_struct *mgr);
@@ -1807,6 +1823,13 @@ union security_list_options {
	int (*bpf_prog_alloc_security)(struct bpf_prog_aux *aux);
	void (*bpf_prog_free_security)(struct bpf_prog_aux *aux);
 #endif /* CONFIG_BPF_SYSCALL */
+
+#ifdef CONFIG_INTEL_SGX
+	int (*enclave_load)(struct file *encl, size_t start, size_t end,
+			    size_t flags, struct vm_area_struct *source);
+	int (*enclave_init)(struct file *encl, struct sgx_sigstruct *sigstruct,
+			    struct vm_area_struct *source);
+#endif
 };

 struct security_hook_heads {
@@ -2046,6 +2069,10 @@ struct security_hook_heads {
	struct hlist_head bpf_prog_alloc_security;
	struct hlist_head bpf_prog_free_security;
 #endif /* CONFIG_BPF_SYSCALL */
+#ifdef CONFIG_INTEL_SGX
+	struct hlist_head enclave_load;
+	struct hlist_head enclave_init;
+#endif
 } __randomize_layout;

 /*
diff --git a/include/linux/security.h b/include/linux/security.h
index 659071c2e57c..52c200810004 100644
--- a/include/linux/security.h
+++ b/include/linux/security.h
@@ -1829,5 +1829,28 @@ static inline void security_bpf_prog_free(struct bpf_prog_aux *aux)
 #endif /* CONFIG_SECURITY */
 #endif /* CONFIG_BPF_SYSCALL */

+#ifdef CONFIG_INTEL_SGX
+struct sgx_sigstruct;
+#ifdef CONFIG_SECURITY
+int security_enclave_load(struct file *encl, size_t start, size_t end,
+			  size_t flags, struct vm_area_struct *source);
+int security_enclave_init(struct file *encl, struct sgx_sigstruct *sigstruct,
+			  struct vm_area_struct *source);
+#else
+static inline int security_enclave_load(struct file *encl, size_t start,
+					size_t end, size_t flags,
+					struct vm_area_struct *src)
+{
+	return 0;
+}
+
+static inline int security_enclave_init(struct file *encl,
+					struct sgx_sigstruct *sigstruct,
+					struct vm_area_struct *src)
+{
+	return 0;
+}
+#endif /* CONFIG_SECURITY */
+#endif /* CONFIG_INTEL_SGX */
+
 #endif /* ! __LINUX_SECURITY_H */
diff --git a/security/security.c b/security/security.c
index f493db0bf62a..72c10f5e4f95 100644
--- a/security/security.c
+++ b/security/security.c
@@ -1420,6 +1420,7 @@ int security_file_mprotect(struct vm_area_struct *vma, unsigned long reqprot,
 {
	return call_int_hook(file_mprotect, 0, vma, reqprot, prot);
 }
+EXPORT_SYMBOL(security_file_mprotect);

 int security_file_lock(struct file *file, unsigned int cmd)
 {
@@ -2355,3 +2356,19 @@ void security_bpf_prog_free(struct bpf_prog_aux *aux)
	call_void_hook(bpf_prog_free_security, aux);
 }
 #endif /* CONFIG_BPF_SYSCALL */
+
+#ifdef CONFIG_INTEL_SGX
+int security_enclave_load(struct file *encl, size_t start, size_t end,
+			  size_t flags, struct vm_area_struct *src)
+{
+	return call_int_hook(enclave_load, 0, encl, start, end, flags, src);
+}
+EXPORT_SYMBOL(security_enclave_load);
+
+int security_enclave_init(struct file *encl, struct sgx_sigstruct *sigstruct,
+			  struct vm_area_struct *src)
+{
+	return call_int_hook(enclave_init, 0, encl, sigstruct, src);
+}
+EXPORT_SYMBOL(security_enclave_init);
+#endif /* CONFIG_INTEL_SGX */

From patchwork Sun Jul 7 23:41:32 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: "Xing, Cedric"
X-Patchwork-Id: 11034443
From: Cedric Xing
To: linux-sgx@vger.kernel.org, linux-security-module@vger.kernel.org,
    selinux@vger.kernel.org, cedric.xing@intel.com
Subject: [RFC PATCH v3 2/4] x86/64: Call LSM hooks from SGX subsystem/module
Date: Sun, 7 Jul 2019 16:41:32 -0700
X-Mailer: git-send-email 2.17.1
X-Mailing-List: linux-sgx@vger.kernel.org

It's straightforward to call the new LSM hooks from the SGX
subsystem/module. There are three places where LSM hooks are invoked:

1) sgx_mmap() invokes security_file_mprotect() to validate the requested
   protection. This is necessary because security_mmap_file(), invoked by
   the mmap() syscall, only validates protections against the
   /dev/sgx/enclave file, not against the files from which the pages were
   loaded.

2) security_enclave_load() is invoked upon loading of every enclave page
   by the EADD ioctl. Please note that if pages are EADD'ed in batch, the
   SGX subsystem/module is responsible for dividing the pages into chunks
   so that each chunk is loaded from a single VMA.

3) security_enclave_init() is invoked before initializing (EINIT) every
   enclave.

Signed-off-by: Cedric Xing
---
 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 80 +++++++++++++++++++++++---
 arch/x86/kernel/cpu/sgx/driver/main.c  | 16 +++++-
 2 files changed, 85 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
index b186fb7b48d5..4f5abf9819a7 100644
--- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
@@ -1,7 +1,7 @@
 // SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause)
 // Copyright(c) 2016-19 Intel Corporation.
-#include
+#include
 #include
 #include
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include

 #include "driver.h"

 struct sgx_add_page_req {
@@ -575,6 +576,46 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
	return ret;
 }

+static int sgx_encl_prepare_page(struct file *filp, unsigned long dst,
+				 unsigned long src, void *buf)
+{
+	struct vm_area_struct *vma;
+	unsigned long prot;
+	int rc;
+
+	if (dst & (PAGE_SIZE - 1))
+		return -EINVAL;
+
+	rc = down_read_killable(&current->mm->mmap_sem);
+	if (rc)
+		return rc;
+
+	vma = find_vma(current->mm, dst);
+	if (vma && dst >= vma->vm_start)
+		prot = _calc_vm_trans(vma->vm_flags, VM_READ, PROT_READ) |
+		       _calc_vm_trans(vma->vm_flags, VM_WRITE, PROT_WRITE) |
+		       _calc_vm_trans(vma->vm_flags, VM_EXEC, PROT_EXEC);
+	else
+		prot = 0;
+
+	vma = find_vma(current->mm, src);
+	if (!vma || src < vma->vm_start || src + PAGE_SIZE > vma->vm_end)
+		rc = -EFAULT;
+
+	if (!rc && !(vma->vm_flags & VM_MAYEXEC))
+		rc = -EACCES;
+
+	if (!rc && copy_from_user(buf, (void __user *)src, PAGE_SIZE))
+		rc = -EFAULT;
+
+	if (!rc)
+		rc = security_enclave_load(filp, dst, dst + PAGE_SIZE, prot, vma);
+
+	up_read(&current->mm->mmap_sem);
+
+	return rc;
+}
+
 /**
  * sgx_ioc_enclave_add_page - handler for %SGX_IOC_ENCLAVE_ADD_PAGE
  *
@@ -613,10 +654,9 @@ static long sgx_ioc_enclave_add_page(struct file *filep, unsigned int cmd,

	data = kmap(data_page);

-	if (copy_from_user((void *)data, (void __user *)addp->src, PAGE_SIZE)) {
-		ret = -EFAULT;
+	ret = sgx_encl_prepare_page(filep, addp->addr, addp->src, data);
+	if (ret)
		goto out;
-	}

	ret = sgx_encl_add_page(encl, addp->addr, data, &secinfo, addp->mrmask);
	if (ret)
@@ -718,6 +758,31 @@ static int sgx_encl_init(struct sgx_encl *encl, struct sgx_sigstruct *sigstruct,
	return ret;
 }

+static int sgx_encl_prepare_sigstruct(struct file *filp, unsigned long src,
+				      struct sgx_sigstruct *ss)
+{
+	struct vm_area_struct *vma;
+	int rc;
+
+	rc = down_read_killable(&current->mm->mmap_sem);
+	if (rc)
+		return rc;
+
+	vma = find_vma(current->mm, src);
+	if (!vma || src < vma->vm_start || src + sizeof(*ss) > vma->vm_end)
+		rc = -EFAULT;
+
+	if (!rc && copy_from_user(ss, (void __user *)src, sizeof(*ss)))
+		rc = -EFAULT;
+
+	if (!rc)
+		rc = security_enclave_init(filp, ss, vma);
+
+	up_read(&current->mm->mmap_sem);
+
+	return rc;
+}
+
 /**
  * sgx_ioc_enclave_init - handler for %SGX_IOC_ENCLAVE_INIT
  *
@@ -753,12 +818,9 @@ static long sgx_ioc_enclave_init(struct file *filep, unsigned int cmd,
			((unsigned long)sigstruct + PAGE_SIZE / 2);
	memset(einittoken, 0, sizeof(*einittoken));

-	if (copy_from_user(sigstruct, (void __user *)initp->sigstruct,
-			   sizeof(*sigstruct))) {
-		ret = -EFAULT;
+	ret = sgx_encl_prepare_sigstruct(filep, initp->sigstruct, sigstruct);
+	if (ret)
		goto out;
-	}
-
	ret = sgx_encl_init(encl, sigstruct, einittoken);
diff --git a/arch/x86/kernel/cpu/sgx/driver/main.c b/arch/x86/kernel/cpu/sgx/driver/main.c
index 58ba6153070b..8848711a55bd 100644
--- a/arch/x86/kernel/cpu/sgx/driver/main.c
+++ b/arch/x86/kernel/cpu/sgx/driver/main.c
@@ -63,14 +63,26 @@ static long sgx_compat_ioctl(struct file *filep, unsigned int cmd,
 static int sgx_mmap(struct file *file, struct vm_area_struct *vma)
 {
	struct sgx_encl *encl = file->private_data;
+	unsigned long prot;
+	int rc;

	vma->vm_ops = &sgx_vm_ops;
	vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP | VM_IO;
	vma->vm_private_data = encl;

-	kref_get(&encl->refcount);
+	prot = vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC);
+	vma->vm_flags &= ~prot;

-	return 0;
+	prot = _calc_vm_trans(prot, VM_READ, PROT_READ) |
+	       _calc_vm_trans(prot, VM_WRITE, PROT_WRITE) |
+	       _calc_vm_trans(prot, VM_EXEC, PROT_EXEC);
+	rc = security_file_mprotect(vma, prot, prot);
+	if (!rc) {
+		vma->vm_flags |= calc_vm_prot_bits(prot, 0);
+		kref_get(&encl->refcount);
+	}
+
+	return rc;
 }

 static unsigned long sgx_get_unmapped_area(struct file *file,

From patchwork Sun Jul 7 23:41:33 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: "Xing, Cedric"
X-Patchwork-Id: 11034459
From: Cedric Xing
To: linux-sgx@vger.kernel.org, linux-security-module@vger.kernel.org,
    selinux@vger.kernel.org, cedric.xing@intel.com
Subject: [RFC PATCH v3 3/4] X86/sgx: Introduce EMA as a new LSM module
Date: Sun, 7 Jul 2019 16:41:33 -0700
Message-Id: <41e1a1a2f66226d88d45675434eb34dde5d0f50d.1562542383.git.cedric.xing@intel.com>
X-Mailer: git-send-email 2.17.1
X-Mailing-List: linux-sgx@vger.kernel.org

As enclave pages have a different lifespan than existing MAP_PRIVATE and
MAP_SHARED pages, a new data structure outside of the VMA is required to
track their protections and/or origins. The Enclave Memory Area (or EMA
for short) is introduced to address that need.

EMAs are maintained by a new LSM module named "ema", which is similar in
spirit to the "capability" LSM module. This new "ema" module has
LSM_ORDER_FIRST, so it will always be dispatched before other
LSM_ORDER_MUTABLE modules (e.g. selinux, apparmor, etc.). It is
responsible for initializing EMA maps and inserting and freeing EMA
nodes, and it offers APIs for other LSM modules to query/update EMAs.
Details can be found in include/linux/lsm_ema.h and security/commonema.c.

Signed-off-by: Cedric Xing
---
 include/linux/lsm_ema.h | 97 ++++++++++++++
 security/Makefile       |  1 +
 security/commonema.c    | 277 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 375 insertions(+)
 create mode 100644 include/linux/lsm_ema.h
 create mode 100644 security/commonema.c

diff --git a/include/linux/lsm_ema.h b/include/linux/lsm_ema.h
new file mode 100644
index 000000000000..59fc4ea6fa78
--- /dev/null
+++ b/include/linux/lsm_ema.h
@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) */
+/**
+ * Enclave Memory Area interface for LSM modules
+ *
+ * Copyright(c) 2016-19 Intel Corporation.
+ */
+
+#ifndef _LSM_EMA_H_
+#define _LSM_EMA_H_
+
+#include
+#include
+#include
+#include
+
+/**
+ * ema - Enclave Memory Area structure for LSM modules
+ *
+ * Data structure to track origins of enclave pages
+ *
+ * @link:
+ *	Link to adjacent EMAs.
EMAs are sorted by their addresses in ascending + * order + * @start: + * Starting address + * @end: + * Ending address + * @source: + * File from which this range was loaded from, or NULL if not loaded from + * any files + */ +struct ema { + struct list_head link; + size_t start; + size_t end; + struct file *source; +}; + +#define ema_data(ema, offset) \ + ((void *)((char *)((struct ema *)(ema) + 1) + offset)) + +/** + * ema_map - LSM Enclave Memory Map structure for LSM modules + * + * Container for EMAs of an enclave + * + * @list: + * Head of a list of sorted EMAs + * @lock: + * Acquire before querying/updateing the list EMAs + */ +struct ema_map { + struct list_head list; + struct mutex lock; +}; + +size_t __init ema_request_blob(size_t blob_size); +struct ema_map *ema_get_map(struct file *encl); +int ema_apply_to_range(struct ema_map *map, size_t start, size_t end, + int (*cb)(struct ema *ema, void *arg), void *arg); +void ema_remove_range(struct ema_map *map, size_t start, size_t end); + +static inline int __must_check ema_lock_map(struct ema_map *map) +{ + return mutex_lock_interruptible(&map->lock); +} + +static inline void ema_unlock_map(struct ema_map *map) +{ + mutex_unlock(&map->lock); +} + +static inline int ema_lock_apply_to_range(struct ema_map *map, + size_t start, size_t end, + int (*cb)(struct ema *, void *), + void *arg) +{ + int rc = ema_lock_map(map); + if (!rc) { + rc = ema_apply_to_range(map, start, end, cb, arg); + ema_unlock_map(map); + } + return rc; +} + +static inline int ema_lock_remove_range(struct ema_map *map, + size_t start, size_t end) +{ + int rc = ema_lock_map(map); + if (!rc) { + ema_remove_range(map, start, end); + ema_unlock_map(map); + } + return rc; +} + +#endif /* _LSM_EMA_H_ */ diff --git a/security/Makefile b/security/Makefile index c598b904938f..b66d03a94853 100644 --- a/security/Makefile +++ b/security/Makefile @@ -28,6 +28,7 @@ obj-$(CONFIG_SECURITY_YAMA) += yama/ obj-$(CONFIG_SECURITY_LOADPIN) += loadpin/ 
obj-$(CONFIG_SECURITY_SAFESETID) += safesetid/ obj-$(CONFIG_CGROUP_DEVICE) += device_cgroup.o +obj-$(CONFIG_INTEL_SGX) += commonema.o # Object integrity file lists subdir-$(CONFIG_INTEGRITY) += integrity diff --git a/security/commonema.c b/security/commonema.c new file mode 100644 index 000000000000..c5b0bdfdc013 --- /dev/null +++ b/security/commonema.c @@ -0,0 +1,277 @@ +// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) +// Copyright(c) 2016-18 Intel Corporation. + +#include +#include +#include + +static struct kmem_cache *_map_cache; +static struct kmem_cache *_node_cache; +static size_t _data_size __lsm_ro_after_init; + +static struct lsm_blob_sizes ema_blob_sizes __lsm_ro_after_init = { + .lbs_file = sizeof(atomic_long_t), +}; + +static atomic_long_t *_map_file(struct file *encl) +{ + return (void *)((char *)(encl->f_security) + ema_blob_sizes.lbs_file); +} + +static struct ema_map *_alloc_map(void) +{ + struct ema_map *m; + + m = kmem_cache_zalloc(_map_cache, GFP_KERNEL); + if (likely(m)) { + INIT_LIST_HEAD(&m->list); + mutex_init(&m->lock); + } + return m; +} + +static struct ema *_new_ema(size_t start, size_t end, struct file *src) +{ + struct ema *ema; + + if (unlikely(!_node_cache)) { + struct kmem_cache *c; + + c = kmem_cache_create("lsm-ema", sizeof(*ema) + _data_size, + __alignof__(typeof(*ema)), SLAB_PANIC, + NULL); + if (atomic_long_cmpxchg((atomic_long_t *)&_node_cache, 0, + (long)c)) + kmem_cache_destroy(c); + } + + ema = kmem_cache_zalloc(_node_cache, GFP_KERNEL); + if (likely(ema)) { + INIT_LIST_HEAD(&ema->link); + ema->start = start; + ema->end = end; + if (src) + ema->source = get_file(src); + } + return ema; +} + +static void _free_ema(struct ema *ema) +{ + if (ema->source) + fput(ema->source); + kmem_cache_free(_node_cache, ema); +} + +static void _free_map(struct ema_map *map) +{ + struct ema *p, *n; + + WARN_ON(mutex_is_locked(&map->lock)); + list_for_each_entry_safe(p, n, &map->list, link) + _free_ema(p); + kmem_cache_free(_map_cache, 
map); +} + +static struct ema_map *_init_map(struct file *encl) +{ + struct ema_map *m = ema_get_map(encl); + if (!m) { + m = _alloc_map(); + if (atomic_long_cmpxchg(_map_file(encl), 0, (long)m)) { + _free_map(m); + m = ema_get_map(encl); + } + } + return m; +} + +static inline struct ema *_next_ema(struct ema *p, struct ema_map *map) +{ + p = list_next_entry(p, link); + return &p->link == &map->list ? NULL : p; +} + +static inline struct ema *_find_ema(struct ema_map *map, size_t a) +{ + struct ema *p; + + WARN_ON(!mutex_is_locked(&map->lock)); + + list_for_each_entry(p, &map->list, link) + if (a < p->end) + break; + return &p->link == &map->list ? NULL : p; +} + +static struct ema *_split_ema(struct ema *p, size_t at) +{ + typeof(p) n; + + if (at <= p->start || at >= p->end) + return p; + + n = _new_ema(p->start, at, p->source); + if (likely(n)) { + memcpy(n + 1, p + 1, _data_size); + p->start = at; + list_add_tail(&n->link, &p->link); + } + return n; +} + +static int _merge_ema(struct ema *p, struct ema_map *map) +{ + typeof(p) prev = list_prev_entry(p, link); + + WARN_ON(!mutex_is_locked(&map->lock)); + + if (&prev->link == &map->list || prev->end != p->start || + prev->source != p->source || memcmp(prev + 1, p + 1, _data_size)) + return 0; + + p->start = prev->start; + fput(prev->source); + _free_ema(prev); + return 1; +} + +static inline int _insert_ema(struct ema_map *map, struct ema *n) +{ + typeof(n) p = _find_ema(map, n->start); + + if (!p) + list_add_tail(&n->link, &map->list); + else if (n->end <= p->start) + list_add_tail(&n->link, &p->link); + else + return -EEXIST; + + _merge_ema(n, map); + if (p) + _merge_ema(p, map); + return 0; +} + +static void ema_file_free_security(struct file *encl) +{ + struct ema_map *m = ema_get_map(encl); + if (m) + _free_map(m); +} + +static int ema_enclave_load(struct file *encl, size_t start, size_t end, + size_t flags, struct vm_area_struct *vma) +{ + struct ema_map *m; + struct ema *ema; + int rc; + + m = 
_init_map(encl); + if (unlikely(!m)) + return -ENOMEM; + + ema = _new_ema(start, end, vma ? vma->vm_file : NULL); + if (unlikely(!ema)) + return -ENOMEM; + + rc = ema_lock_map(m); + if (!rc) { + rc = _insert_ema(m, ema); + ema_unlock_map(m); + } + if (rc) + _free_ema(ema); + return rc; +} + +static int ema_enclave_init(struct file *encl, struct sgx_sigstruct *sigstruct, + struct vm_area_struct *vma) +{ + if (unlikely(!_init_map(encl))) + return -ENOMEM; + return 0; +} + +static struct security_hook_list ema_hooks[] __lsm_ro_after_init = { + LSM_HOOK_INIT(file_free_security, ema_file_free_security), + LSM_HOOK_INIT(enclave_load, ema_enclave_load), + LSM_HOOK_INIT(enclave_init, ema_enclave_init), +}; + +static int __init ema_init(void) +{ + _map_cache = kmem_cache_create("lsm-ema_map", sizeof(struct ema_map), + __alignof__(struct ema_map), SLAB_PANIC, + NULL); + security_add_hooks(ema_hooks, ARRAY_SIZE(ema_hooks), "ema"); + return 0; +} + +DEFINE_LSM(ema) = { + .name = "ema", + .order = LSM_ORDER_FIRST, + .init = ema_init, + .blobs = &ema_blob_sizes, +}; + +/* ema_request_blob shall only be called from LSM module init function */ +size_t __init ema_request_blob(size_t size) +{ + typeof(_data_size) offset = _data_size; + _data_size += size; + return offset; +} + +struct ema_map *ema_get_map(struct file *encl) +{ + return (struct ema_map *)atomic_long_read(_map_file(encl)); +} + +/** + * Invoke a callback function on every EMA falls within range, split EMAs as + * needed + */ +int ema_apply_to_range(struct ema_map *map, size_t start, size_t end, + int (*cb)(struct ema *, void *), void *arg) +{ + struct ema *ema; + int rc; + + ema = _find_ema(map, start); + while (ema && end > ema->start) { + if (start > ema->start) + _split_ema(ema, start); + if (end < ema->end) + ema = _split_ema(ema, end); + + rc = (*cb)(ema, arg); + _merge_ema(ema, map); + if (rc) + return rc; + + ema = _next_ema(ema, map); + } + + if (ema) + _merge_ema(ema, map); + return 0; +} + +/* Remove all 
EMAs falling within range, split EMAs as needed */
+void ema_remove_range(struct ema_map *map, size_t start, size_t end)
+{
+	struct ema *ema, *n;
+
+	ema = _find_ema(map, start);
+	while (ema && end > ema->start) {
+		if (start > ema->start)
+			_split_ema(ema, start);
+		if (end < ema->end)
+			ema = _split_ema(ema, end);
+
+		n = _next_ema(ema, map);
+		_free_ema(ema);
+		ema = n;
+	}
+}

From patchwork Sun Jul 7 23:41:34 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: "Xing, Cedric"
X-Patchwork-Id: 11034445
ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 07 Jul 2019 16:41:47 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.63,464,1557212400"; d="scan'208";a="340295316" Received: from bxing-mobl.amr.corp.intel.com (HELO ubt18m.amr.corp.intel.com) ([10.251.135.59]) by orsmga005.jf.intel.com with ESMTP; 07 Jul 2019 16:41:47 -0700 From: Cedric Xing To: linux-sgx@vger.kernel.org, linux-security-module@vger.kernel.org, selinux@vger.kernel.org, cedric.xing@intel.com Subject: [RFC PATCH v3 4/4] x86/sgx: Implement SGX specific hooks in SELinux Date: Sun, 7 Jul 2019 16:41:34 -0700 Message-Id: <3a9efc8d3c27490dbcfe802ce3facddd62f47872.1562542383.git.cedric.xing@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: References: In-Reply-To: References: MIME-Version: 1.0 Sender: linux-sgx-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-sgx@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP This patch governs enclave page protections in a similar way to how current SELinux governs protections for regular memory pages. In summary: · All pages are allowed PROT_READ/PROT_WRITE upon request. · For pages that are EADD’ed, PROT_EXEC will be granted initially if PROT_EXEC could also be granted to the VMA containing the source pages, or if the calling process has ENCLAVE_EXECANON permission. Afterwards, PROT_EXEC will be removed once PROT_WRITE is requested/granted, and could be granted again if the backing file has EXECMOD or the calling process has PROCMEM. For anonymous pages, backing file is considered to be the file containing SIGSTRUCT. · For pages that are EAUG’ed, they are considered modified initially so PROT_EXEC will not be granted unless the file containing SIGSTRUCT has EXECMOD, or the calling process has EXECMEM. Besides, launch control is implemented as EXECUTE permission on the SIGSTRUCT file. That is, · SIGSTRUCT file has EXECUTE – Enclave is allowed to launch. But this is granted only if the enclosing VMA has the same content as the disk file (i.e. 
vma->anon_vma == NULL). · SIGSTRUCT file has EXECMOD – All anonymous enclave pages are allowed PROT_EXEC. In all cases, simultaneous WX requires EXECMEM on the calling process. Implementation wise, 2 bits are associated with every EMA by SELinux. · sourced – Set if EMA is loaded from some memory page (i.e. EADD’ed), cleared otherwise. When cleared, the backing file is considered to be the file containing SIGSTRUCT. · modified – Set if EMA has ever been mapped writable, as result of mmap()/mprotect() syscalls. When set, FILE__EXECMOD is required on the backing file for the range to be executable. Both bits are initialized at selinux_enclave_load() and checked in selinux_file_mprotect(). SGX subsystem is expected to invoke security_file_mprotect() upon mmap() to not bypass the check. mmap() shall be treated as mprotect() from PROT_NONE to the requested protection. selinux_enclave_init() determines if an enclave is allowed to launch, using the criteria described earlier. This implementation does NOT accept SIGSTRUCT in anonymous memory. The backing file is also cached in struct file_security_struct and will serve as the base for decisions for anonymous pages. There’s one new process permission – PROCESS2__ENCLAVE_EXECANON introduced by this patch. It is equivalent to FILE__EXECUTE for all enclave pages loaded from anonymous mappings. 
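The PROT_EXEC decision table above can be summarized in a small
userspace sketch. This is purely illustrative: the helper and the
boolean permission flags are hypothetical stand-ins for the real
avc_has_perm()/file_has_perm() policy lookups performed by the patch.

```c
#include <stdbool.h>

/* Page origin, mirroring the "sourced" bit plus the EADD source type */
enum page_origin { EADD_FILE, EADD_ANON, EAUG };

/*
 * Permissions held by the caller and backing file. In the real patch
 * these are SELinux policy lookups, not booleans.
 */
struct perms {
	bool file_execute;	/* FILE__EXECUTE on the EADD source file */
	bool file_execmod;	/* FILE__EXECMOD on the backing file */
	bool execanon;		/* PROCESS2__ENCLAVE_EXECANON */
	bool execmem;		/* PROCESS__EXECMEM */
};

/* May PROT_EXEC be granted to this enclave page? */
static bool exec_allowed(enum page_origin origin, bool modified,
			 const struct perms *p)
{
	switch (origin) {
	case EAUG:
		/* EAUG'ed pages are treated as self-modifying code */
		return p->file_execmod;
	case EADD_ANON:
		/* anonymous source needs ENCLAVE_EXECANON or EXECMEM... */
		if (!p->execanon && !p->execmem)
			return false;
		/* ...plus EXECMOD once the page has ever been writable */
		return !modified || p->file_execmod;
	case EADD_FILE:
		/* file source needs EXECUTE, plus EXECMOD if modified */
		return p->file_execute && (!modified || p->file_execmod);
	}
	return false;
}
```

Note that the simultaneous-WX EXECMEM check is orthogonal to this table
and is applied in addition to it.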
Signed-off-by: Cedric Xing <cedric.xing@intel.com>
---
 security/selinux/hooks.c            | 236 +++++++++++++++++++++++++++-
 security/selinux/include/classmap.h |   3 +-
 security/selinux/include/objsec.h   |   7 +
 3 files changed, 243 insertions(+), 3 deletions(-)

diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
index 94de51628fdc..c7fe1d47654d 100644
--- a/security/selinux/hooks.c
+++ b/security/selinux/hooks.c
@@ -3499,6 +3499,13 @@ static int selinux_file_alloc_security(struct file *file)
 	return file_alloc_security(file);
 }
 
+static void selinux_file_free_security(struct file *file)
+{
+	long f = atomic_long_read(&selinux_file(file)->encl_ss);
+
+	if (f)
+		fput((struct file *)f);
+}
+
 /*
  * Check whether a task has the ioctl permission and cmd
  * operation to an inode.
@@ -3666,19 +3673,23 @@ static int selinux_mmap_file(struct file *file, unsigned long reqprot,
 			    (flags & MAP_TYPE) == MAP_SHARED);
 }
 
+#ifdef CONFIG_INTEL_SGX
+static int enclave_mprotect(struct vm_area_struct *, size_t);
+#endif
+
 static int selinux_file_mprotect(struct vm_area_struct *vma,
 				 unsigned long reqprot,
 				 unsigned long prot)
 {
 	const struct cred *cred = current_cred();
 	u32 sid = cred_sid(cred);
+	int rc = 0;
 
 	if (selinux_state.checkreqprot)
 		prot = reqprot;
 
 	if (default_noexec &&
 	    (prot & PROT_EXEC) && !(vma->vm_flags & VM_EXEC)) {
-		int rc = 0;
 		if (vma->vm_start >= vma->vm_mm->start_brk &&
 		    vma->vm_end <= vma->vm_mm->brk) {
 			rc = avc_has_perm(&selinux_state,
@@ -3705,7 +3716,12 @@ static int selinux_file_mprotect(struct vm_area_struct *vma,
 		return rc;
 	}
 
-	return file_map_prot_check(vma->vm_file, prot, vma->vm_flags&VM_SHARED);
+	rc = file_map_prot_check(vma->vm_file, prot, vma->vm_flags&VM_SHARED);
+#ifdef CONFIG_INTEL_SGX
+	if (!rc)
+		rc = enclave_mprotect(vma, prot);
+#endif
+	return rc;
 }
 
 static int selinux_file_lock(struct file *file, unsigned int cmd)
@@ -6740,6 +6756,213 @@ static void selinux_bpf_prog_free(struct bpf_prog_aux *aux)
 }
 #endif
 
+#ifdef CONFIG_INTEL_SGX
+static size_t ema__blob __lsm_ro_after_init;
+
+static inline struct ema_security_struct *selinux_ema(struct ema *ema)
+{
+	return ema_data(ema, ema__blob);
+}
+
+static int ema__chk_X_cb(struct ema *ema, void *a)
+{
+	struct file_security_struct *fsec = selinux_file(a);
+	struct ema_security_struct *esec = selinux_ema(ema);
+	struct file *ess = (struct file *)atomic_long_read(&fsec->encl_ss);
+	int rc;
+
+	if (!esec->sourced) {
+		/* EAUG'ed pages */
+		rc = file_has_perm(current_cred(), ess, FILE__EXECMOD);
+	} else if (!ema->source) {
+		/* EADD'ed anonymous pages */
+		u32 sid = current_sid();
+
+		rc = avc_has_perm(&selinux_state, sid, sid, SECCLASS_PROCESS2,
+				  PROCESS2__ENCLAVE_EXECANON, NULL);
+		if (rc)
+			rc = avc_has_perm(&selinux_state, sid, sid,
+					  SECCLASS_PROCESS, PROCESS__EXECMEM,
+					  NULL);
+		if (!rc && esec->modified)
+			rc = file_has_perm(current_cred(), ess, FILE__EXECMOD);
+	} else {
+		/* EADD'ed pages from files */
+		u32 av = FILE__EXECUTE;
+
+		if (esec->modified)
+			av |= FILE__EXECMOD;
+		rc = file_has_perm(current_cred(), ema->source, av);
+	}
+
+	return rc;
+}
+
+static int ema__set_M_cb(struct ema *ema, void *a)
+{
+	selinux_ema(ema)->modified = 1;
+	return 0;
+}
+
+static int enclave_mprotect(struct vm_area_struct *vma, size_t prot)
+{
+	struct ema_map *m;
+	int rc;
+
+	/* is vma an enclave vma? */
+	if (!vma->vm_file)
+		return 0;
+	m = ema_get_map(vma->vm_file);
+	if (!m)
+		return 0;
+
+	/* WX requires EXECMEM */
+	if ((prot & PROT_WRITE) && (prot & PROT_EXEC)) {
+		rc = avc_has_perm(&selinux_state, current_sid(), current_sid(),
+				  SECCLASS_PROCESS, PROCESS__EXECMEM, NULL);
+		if (rc)
+			return rc;
+	}
+
+	rc = ema_lock_map(m);
+	if (rc)
+		return rc;
+
+	if ((prot & PROT_EXEC) && !(vma->vm_flags & VM_EXEC))
+		rc = ema_apply_to_range(m, vma->vm_start, vma->vm_end,
+					ema__chk_X_cb, vma->vm_file);
+	if (!rc && (prot & PROT_WRITE) && !(vma->vm_flags & VM_WRITE))
+		rc = ema_apply_to_range(m, vma->vm_start, vma->vm_end,
+					ema__set_M_cb, NULL);
+
+	ema_unlock_map(m);
+
+	return rc;
+}
+
+static int enclave_load_prot_check(struct file *encl, size_t prot,
+				   struct vm_area_struct *vma)
+{
+	struct file_security_struct *fsec = selinux_file(encl);
+	struct file *ess;
+	const struct cred *cred = current_cred();
+	u32 sid = cred_sid(cred);
+	int rc;
+	int modified = 0;
+
+	/* R/W without X are always allowed */
+	if (!(prot & PROT_EXEC))
+		return 0;
+
+	if (!vma) {
+		ess = (struct file *)atomic_long_read(&fsec->encl_ss);
+		WARN_ON(!ess);
+		if (unlikely(!ess))
+			return -EPERM;
+
+		/* For EAUG, X is considered self-modifying code */
+		rc = file_has_perm(cred, ess, FILE__EXECMOD);
+	} else if (!vma->vm_file || IS_PRIVATE(file_inode(vma->vm_file))) {
+		/* EADD from anonymous pages requires ENCLAVE_EXECANON */
+		if (!(prot & PROT_WRITE) &&
+		    avc_has_perm(&selinux_state, sid, sid, SECCLASS_PROCESS2,
+				 PROCESS2__ENCLAVE_EXECANON, NULL)) {
+			/* On failure, trigger EXECMEM check at the end */
+			prot |= PROT_WRITE;
+		}
+		rc = 0;
+	} else {
+		/* EADD from file requires EXECUTE */
+		u32 av = FILE__EXECUTE;
+
+		/* EXECMOD required for modified private mapping */
+		if (vma->anon_vma) {
+			av |= FILE__EXECMOD;
+			modified = 1;
+		}
+
+		rc = file_has_perm(cred, vma->vm_file, av);
+	}
+
+	/* WX requires EXECMEM additionally */
+	if (!rc && (prot & PROT_WRITE))
+		rc = avc_has_perm(&selinux_state, sid, sid, SECCLASS_PROCESS,
+				  PROCESS__EXECMEM, NULL);
+
+	return rc ? rc : modified;
+}
+
+static int ema__set_cb(struct ema *ema, void *a)
+{
+	struct ema_security_struct *esec = selinux_ema(ema);
+	struct ema_security_struct *s = a;
+
+	esec->modified = s->modified;
+	esec->sourced = s->sourced;
+	return 0;
+}
+
+static int selinux_enclave_load(struct file *encl, size_t start, size_t end,
+				size_t flags, struct vm_area_struct *src)
+{
+	struct ema_map *m;
+	size_t prot;
+	int rc;
+
+	m = ema_get_map(encl);
+	WARN_ON(!m);
+	if (unlikely(!m))
+		return -EPERM;
+
+	prot = flags & (PROT_READ | PROT_WRITE | PROT_EXEC);
+
+	/* check if @prot could be granted */
+	rc = enclave_load_prot_check(encl, prot, src);
+
+	/* initialize ema */
+	if (rc >= 0) {
+		struct ema_security_struct esec = { 0 };
+
+		if ((prot & PROT_WRITE) || rc)
+			esec.modified = 1;
+		if (src)
+			esec.sourced = 1;
+
+		rc = ema_lock_apply_to_range(m, start, end,
+					     ema__set_cb, &esec);
+	}
+
+	/* remove ema on error */
+	if (rc)
+		ema_remove_range(m, start, end);
+
+	return rc;
+}
+
+static int selinux_enclave_init(struct file *encl,
+				struct sgx_sigstruct *sigstruct,
+				struct vm_area_struct *src)
+{
+	struct file_security_struct *fsec = selinux_file(encl);
+	int rc;
+
+	/* Is @src mapped shared, or mapped privately and not modified? */
+	if (!src->vm_file || src->anon_vma)
+		return -EACCES;
+
+	/* EXECUTE grants enclaves permission to launch */
+	rc = file_has_perm(current_cred(), src->vm_file, FILE__EXECUTE);
+	if (rc)
+		return rc;
+
+	/* Store SIGSTRUCT file for future use */
+	if (atomic_long_cmpxchg(&fsec->encl_ss, 0, (long)src->vm_file))
+		return -EEXIST;
+
+	get_file(src->vm_file);
+	return 0;
+}
+#endif
+
 struct lsm_blob_sizes selinux_blob_sizes __lsm_ro_after_init = {
 	.lbs_cred = sizeof(struct task_security_struct),
 	.lbs_file = sizeof(struct file_security_struct),
@@ -6822,6 +7045,7 @@ static struct security_hook_list selinux_hooks[] __lsm_ro_after_init = {
 	LSM_HOOK_INIT(file_permission, selinux_file_permission),
 	LSM_HOOK_INIT(file_alloc_security, selinux_file_alloc_security),
+	LSM_HOOK_INIT(file_free_security, selinux_file_free_security),
 	LSM_HOOK_INIT(file_ioctl, selinux_file_ioctl),
 	LSM_HOOK_INIT(mmap_file, selinux_mmap_file),
 	LSM_HOOK_INIT(mmap_addr, selinux_mmap_addr),
@@ -6982,6 +7206,11 @@ static struct security_hook_list selinux_hooks[] __lsm_ro_after_init = {
 	LSM_HOOK_INIT(bpf_map_free_security, selinux_bpf_map_free),
 	LSM_HOOK_INIT(bpf_prog_free_security, selinux_bpf_prog_free),
 #endif
+
+#ifdef CONFIG_INTEL_SGX
+	LSM_HOOK_INIT(enclave_load, selinux_enclave_load),
+	LSM_HOOK_INIT(enclave_init, selinux_enclave_init),
+#endif
 };
 
 static __init int selinux_init(void)
@@ -7007,6 +7236,9 @@ static __init int selinux_init(void)
 	hashtab_cache_init();
 
+#ifdef CONFIG_INTEL_SGX
+	ema__blob = ema_request_blob(sizeof(struct ema_security_struct));
+#endif
 	security_add_hooks(selinux_hooks, ARRAY_SIZE(selinux_hooks), "selinux");
 
 	if (avc_add_callback(selinux_netcache_avc_callback, AVC_CALLBACK_RESET))
diff --git a/security/selinux/include/classmap.h b/security/selinux/include/classmap.h
index 201f7e588a29..0d3161a52577 100644
--- a/security/selinux/include/classmap.h
+++ b/security/selinux/include/classmap.h
@@ -51,7 +51,8 @@ struct security_class_mapping secclass_map[] = {
 	    "execmem", "execstack", "execheap", "setkeycreate",
 	    "setsockcreate", "getrlimit", NULL } },
 	{ "process2",
-	  { "nnp_transition", "nosuid_transition", NULL } },
+	  { "nnp_transition", "nosuid_transition",
+	    "enclave_execanon", NULL } },
 	{ "system",
 	  { "ipc_info", "syslog_read", "syslog_mod", "syslog_console",
 	    "module_request", "module_load", NULL } },
diff --git a/security/selinux/include/objsec.h b/security/selinux/include/objsec.h
index 91c5395dd20c..8d1ce9c6d6fa 100644
--- a/security/selinux/include/objsec.h
+++ b/security/selinux/include/objsec.h
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include "flask.h"
@@ -68,6 +69,7 @@ struct file_security_struct {
 	u32 fown_sid;		/* SID of file owner (for SIGIO) */
 	u32 isid;		/* SID of inode at the time of file open */
 	u32 pseqno;		/* Policy seqno at the time of file open */
+	atomic_long_t encl_ss;	/* Enclave sigstruct file */
 };
 
 struct superblock_security_struct {
@@ -154,6 +156,11 @@ struct bpf_security_struct {
 	u32 sid;  /* SID of bpf obj creater */
 };
 
+struct ema_security_struct {
+	int modified:1;	/* Set when W is granted */
+	int sourced:1;	/* Set if loaded from source in regular memory */
+};
+
 extern struct lsm_blob_sizes selinux_blob_sizes;
 
 static inline struct task_security_struct *selinux_cred(const struct cred *cred)
{