From patchwork Fri May 31 23:31:51 2019
From: Sean Christopherson
To: Jarkko Sakkinen
Cc: Andy Lutomirski, Cedric Xing, Stephen Smalley, James Morris,
    "Serge E. Hallyn", LSM List, Paul Moore, Eric Paris,
    selinux@vger.kernel.org, Jethro Beekman, Dave Hansen, Thomas Gleixner,
    Linus Torvalds, LKML, X86 ML, linux-sgx@vger.kernel.org, Andrew Morton,
    nhorman@redhat.com, npmccallum@redhat.com, Serge Ayoun, Shay Katz-zamir,
    Haitao Huang, Andy Shevchenko, Kai Svahn, Borislav Petkov, Josh Triplett,
    Kai Huang, David Rientjes, William Roberts, Philip Tricca
Subject: [RFC PATCH 1/9] x86/sgx: Remove unused local variable in sgx_encl_release()
Date: Fri, 31 May 2019 16:31:51 -0700
Message-Id: <20190531233159.30992-2-sean.j.christopherson@intel.com>
In-Reply-To: <20190531233159.30992-1-sean.j.christopherson@intel.com>
References: <20190531233159.30992-1-sean.j.christopherson@intel.com>

Signed-off-by: Sean Christopherson
---
 arch/x86/kernel/cpu/sgx/encl.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 7216bdf07bd0..f23ea0fbaa47 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -463,7 +463,6 @@ EXPORT_SYMBOL_GPL(sgx_encl_destroy);
 void sgx_encl_release(struct kref *ref)
 {
 	struct sgx_encl *encl = container_of(ref, struct sgx_encl, refcount);
-	struct sgx_encl_mm *encl_mm;
 
 	if (encl->pm_notifier.notifier_call)
 		unregister_pm_notifier(&encl->pm_notifier);

From patchwork Fri May 31 23:31:52 2019
From: Sean Christopherson
To: Jarkko Sakkinen
Subject: [RFC PATCH 2/9] x86/sgx: Do not naturally align MAP_FIXED address
Date: Fri, 31 May 2019 16:31:52 -0700
Message-Id: <20190531233159.30992-3-sean.j.christopherson@intel.com>
In-Reply-To: <20190531233159.30992-1-sean.j.christopherson@intel.com>
References: <20190531233159.30992-1-sean.j.christopherson@intel.com>

SGX enclaves have an associated Enclave Linear Range (ELRANGE) that is
tracked and enforced by the CPU using a base+mask approach, similar to
other hardware range registers such as the variable MTRRs.  As a result,
the ELRANGE must be naturally sized and aligned.

To reduce boilerplate code that would be needed in every userspace
enclave loader, the SGX driver naturally aligns the mmap() address and
also requires the range to be naturally sized.  Unfortunately, SGX fails
to grant a waiver to the MAP_FIXED case, e.g. it incorrectly rejects
mmap() if userspace is attempting to map a small slice of an existing
enclave.

Signed-off-by: Sean Christopherson
---
 arch/x86/kernel/cpu/sgx/driver/main.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver/main.c b/arch/x86/kernel/cpu/sgx/driver/main.c
index afe844aa81d6..129d356aff30 100644
--- a/arch/x86/kernel/cpu/sgx/driver/main.c
+++ b/arch/x86/kernel/cpu/sgx/driver/main.c
@@ -79,7 +79,13 @@ static unsigned long sgx_get_unmapped_area(struct file *file,
 					   unsigned long pgoff,
 					   unsigned long flags)
 {
-	if (len < 2 * PAGE_SIZE || len & (len - 1) || flags & MAP_PRIVATE)
+	if (flags & MAP_PRIVATE)
+		return -EINVAL;
+
+	if (flags & MAP_FIXED)
+		return addr;
+
+	if (len < 2 * PAGE_SIZE || len & (len - 1))
 		return -EINVAL;
 
 	addr = current->mm->get_unmapped_area(file, addr, 2 * len, pgoff,
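As an illustration of the behavior change (this snippet is not part of the
patch; the enclave fd, base address and offset are hypothetical), a userspace
loader can now re-map an individual page of an already-created enclave
mapping with MAP_FIXED:

#include <stdint.h>
#include <sys/types.h>
#include <sys/mman.h>

/*
 * Hypothetical helper: re-establish a single-page, MAP_SHARED mapping at a
 * fixed address inside an enclave's ELRANGE.  Before this patch the driver's
 * sgx_get_unmapped_area() rejected the request because 4096 bytes is smaller
 * than 2 * PAGE_SIZE; with the MAP_FIXED waiver the address is passed through
 * unmodified.
 */
static void *remap_enclave_slice(int enclave_fd, uint8_t *encl_base,
				 size_t page_offset)
{
	return mmap(encl_base + page_offset, 4096, PROT_READ | PROT_WRITE,
		    MAP_SHARED | MAP_FIXED, enclave_fd, (off_t)page_offset);
}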
From patchwork Fri May 31 23:31:53 2019
From: Sean Christopherson
To: Jarkko Sakkinen
Subject: [RFC PATCH 3/9] x86/sgx: Allow userspace to add multiple pages in single ioctl()
Date: Fri, 31 May 2019 16:31:53 -0700
Message-Id: <20190531233159.30992-4-sean.j.christopherson@intel.com>
In-Reply-To: <20190531233159.30992-1-sean.j.christopherson@intel.com>
References: <20190531233159.30992-1-sean.j.christopherson@intel.com>

...to improve performance when building enclaves by reducing the number
of user<->system transitions.

Rather than provide arbitrary batching, e.g. with per-page SECINFO and
mrmask, take advantage of the fact that any sane enclave will have large
swaths of pages with identical properties, e.g. code vs. data sections.

For simplicity and stability in the initial implementation, loop over
the existing add page flow instead of taking a more aggressive approach,
which would require tracking transitions between VMAs and holding
mmap_sem for an extended duration.

Signed-off-by: Sean Christopherson
---
 arch/x86/include/uapi/asm/sgx.h        |  21 ++---
 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 104 +++++++++++++++----------
 2 files changed, 74 insertions(+), 51 deletions(-)

diff --git a/arch/x86/include/uapi/asm/sgx.h b/arch/x86/include/uapi/asm/sgx.h
index 9ed690a38c70..4a12d6abbcb7 100644
--- a/arch/x86/include/uapi/asm/sgx.h
+++ b/arch/x86/include/uapi/asm/sgx.h
@@ -12,8 +12,8 @@
 #define SGX_IOC_ENCLAVE_CREATE \
 	_IOW(SGX_MAGIC, 0x00, struct sgx_enclave_create)
-#define SGX_IOC_ENCLAVE_ADD_PAGE \
-	_IOW(SGX_MAGIC, 0x01, struct sgx_enclave_add_page)
+#define SGX_IOC_ENCLAVE_ADD_PAGES \
+	_IOW(SGX_MAGIC, 0x01, struct sgx_enclave_add_pages)
 #define SGX_IOC_ENCLAVE_INIT \
 	_IOW(SGX_MAGIC, 0x02, struct sgx_enclave_init)
 #define SGX_IOC_ENCLAVE_SET_ATTRIBUTE \
@@ -32,21 +32,22 @@ struct sgx_enclave_create {
 };
 
 /**
- * struct sgx_enclave_add_page - parameter structure for the
- * %SGX_IOC_ENCLAVE_ADD_PAGE ioctl
- * @addr:	address within the ELRANGE
- * @src:	address for the page data
- * @secinfo:	address for the SECINFO data
- * @mrmask:	bitmask for the measured 256 byte chunks
+ * struct sgx_enclave_add_pages - parameter structure for the
+ * %SGX_IOC_ENCLAVE_ADD_PAGES ioctl
+ * @addr:	start address within the ELRANGE
+ * @src:	start address for the pages' data
+ * @secinfo:	address for the SECINFO data (common to all pages)
+ * @nr_pages:	number of pages (must be virtually contiguous)
+ * @mrmask:	bitmask for the measured 256 byte chunks (common to all pages)
 */
-struct sgx_enclave_add_page {
+struct sgx_enclave_add_pages {
 	__u64	addr;
 	__u64	src;
 	__u64	secinfo;
+	__u32	nr_pages;
 	__u16	mrmask;
 } __attribute__((__packed__));
 
-
 /**
  * struct sgx_enclave_init - parameter structure for the
  * %SGX_IOC_ENCLAVE_INIT ioctl
diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
index a27ec26a9350..6acfcbdeca9a 100644
--- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
@@ -487,10 +487,9 @@ static int sgx_validate_tcs(struct sgx_encl *encl, struct sgx_tcs *tcs)
 	return 0;
 }
 
-static int __sgx_encl_add_page(struct sgx_encl *encl,
+static int sgx_encl_queue_page(struct sgx_encl *encl,
 			       struct sgx_encl_page *encl_page,
-			       void *data,
-			       struct sgx_secinfo *secinfo,
+			       void *data, struct sgx_secinfo *secinfo,
 			       unsigned int mrmask)
 {
 	unsigned long page_index = sgx_encl_get_index(encl, encl_page);
@@ -529,9 +528,9 @@ static int __sgx_encl_add_page(struct sgx_encl *encl,
 	return 0;
 }
 
-static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
-			     void *data, struct sgx_secinfo *secinfo,
-			     unsigned int mrmask)
+static int __sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
+			       void *data, struct sgx_secinfo *secinfo,
+			       unsigned int mrmask)
 {
 	u64 page_type = secinfo->flags & SGX_SECINFO_PAGE_TYPE_MASK;
 	struct sgx_encl_page *encl_page;
@@ -563,7 +562,7 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 		goto out;
 	}
 
-	ret = __sgx_encl_add_page(encl, encl_page, data, secinfo, mrmask);
+	ret = sgx_encl_queue_page(encl, encl_page, data, secinfo, mrmask);
 	if (ret) {
 		radix_tree_delete(&encl_page->encl->page_tree,
 				  PFN_DOWN(encl_page->desc));
@@ -575,56 +574,79 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 	return ret;
 }
 
-/**
- * sgx_ioc_enclave_add_page - handler for %SGX_IOC_ENCLAVE_ADD_PAGE
- *
- * @filep:	open file to /dev/sgx
- * @cmd:	the command value
- * @arg:	pointer to an &sgx_enclave_add_page instance
- *
- * Add a page to an uninitialized enclave (EADD), and optionally extend the
- * enclave's measurement with the contents of the page (EEXTEND). EADD and
- * EEXTEND are done asynchronously via worker threads. A successful
- * sgx_ioc_enclave_add_page() only indicates the page has been added to the
- * work queue, it does not guarantee adding the page to the enclave will
- * succeed.
- *
- * Return:
- *   0 on success,
- *   -errno otherwise
- */
-static long sgx_ioc_enclave_add_page(struct file *filep, unsigned int cmd,
-				     unsigned long arg)
+static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
+			     unsigned long src, struct sgx_secinfo *secinfo,
+			     unsigned int mrmask)
 {
-	struct sgx_enclave_add_page *addp = (void *)arg;
-	struct sgx_encl *encl = filep->private_data;
-	struct sgx_secinfo secinfo;
 	struct page *data_page;
 	void *data;
 	int ret;
 
-	if (copy_from_user(&secinfo, (void __user *)addp->secinfo,
-			   sizeof(secinfo)))
-		return -EFAULT;
-
 	data_page = alloc_page(GFP_HIGHUSER);
 	if (!data_page)
 		return -ENOMEM;
 
 	data = kmap(data_page);
-	if (copy_from_user((void *)data, (void __user *)addp->src, PAGE_SIZE)) {
+	if (copy_from_user((void *)data, (void __user *)src, PAGE_SIZE)) {
 		ret = -EFAULT;
 		goto out;
 	}
 
-	ret = sgx_encl_add_page(encl, addp->addr, data, &secinfo, addp->mrmask);
-	if (ret)
-		goto out;
-
+	ret = __sgx_encl_add_page(encl, addr, data, secinfo, mrmask);
 out:
 	kunmap(data_page);
 	__free_page(data_page);
+
+	return ret;
+}
+
+/**
+ * sgx_ioc_enclave_add_pages - handler for %SGX_IOC_ENCLAVE_ADD_PAGES
+ *
+ * @filep:	open file to /dev/sgx
+ * @cmd:	the command value
+ * @arg:	pointer to an &sgx_enclave_add_page instance
+ *
+ * Add a range of pages to an uninitialized enclave (EADD), and optionally
+ * extend the enclave's measurement with the contents of the page (EEXTEND).
+ * The range of pages must be virtually contiguous. The SECINFO and
+ * measurement mask are applied to all pages, i.e. pages with different
+ * properties must be added in separate calls.
+ *
+ * EADD and EEXTEND are done asynchronously via worker threads. A successful
+ * sgx_ioc_enclave_add_page() only indicates the pages have been added to the
+ * work queue, it does not guarantee adding the pages to the enclave will
+ * succeed.
+ *
+ * Return:
+ *   0 on success,
+ *   -errno otherwise
+ */
+static long sgx_ioc_enclave_add_pages(struct file *filep, unsigned int cmd,
+				      unsigned long arg)
+{
+	struct sgx_enclave_add_pages *addp = (void *)arg;
+	struct sgx_encl *encl = filep->private_data;
+	struct sgx_secinfo secinfo;
+	unsigned int i;
+	int ret;
+
+	if (copy_from_user(&secinfo, (void __user *)addp->secinfo,
+			   sizeof(secinfo)))
+		return -EFAULT;
+
+	for (i = 0, ret = 0; i < addp->nr_pages && !ret; i++) {
+		if (signal_pending(current))
+			return -ERESTARTSYS;
+
+		if (need_resched())
+			cond_resched();
+
+		ret = sgx_encl_add_page(encl, addp->addr + i*PAGE_SIZE,
+					addp->src + i*PAGE_SIZE,
+					&secinfo, addp->mrmask);
+	}
 	return ret;
 }
 
@@ -823,8 +845,8 @@ long sgx_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
 	case SGX_IOC_ENCLAVE_CREATE:
 		handler = sgx_ioc_enclave_create;
 		break;
-	case SGX_IOC_ENCLAVE_ADD_PAGE:
-		handler = sgx_ioc_enclave_add_page;
+	case SGX_IOC_ENCLAVE_ADD_PAGES:
+		handler = sgx_ioc_enclave_add_pages;
 		break;
 	case SGX_IOC_ENCLAVE_INIT:
 		handler = sgx_ioc_enclave_init;
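To illustrate how the batched ABI is meant to be used (this snippet is not
part of the patch and assumes the patched <asm/sgx.h> is installed; the
helper name and its arguments are made up), a loader might add an entire
region of identical pages with a single call:

#include <stdint.h>
#include <sys/ioctl.h>
#include <asm/sgx.h>	/* struct sgx_enclave_add_pages, SGX_IOC_ENCLAVE_ADD_PAGES */

/*
 * Add nr_pages virtually contiguous pages that share one SECINFO and one
 * measurement mask, replacing nr_pages individual ADD_PAGE ioctls.
 */
static int add_region(int enclave_fd, uint64_t encl_addr, const void *src,
		      uint32_t nr_pages, const void *secinfo)
{
	struct sgx_enclave_add_pages addp = {
		.addr     = encl_addr,
		.src      = (uint64_t)(uintptr_t)src,
		.secinfo  = (uint64_t)(uintptr_t)secinfo,
		.nr_pages = nr_pages,
		.mrmask   = 0xffff,	/* EEXTEND every 256-byte chunk */
	};

	return ioctl(enclave_fd, SGX_IOC_ENCLAVE_ADD_PAGES, &addp);
}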
From patchwork Fri May 31 23:31:54 2019
From: Sean Christopherson
To: Jarkko Sakkinen
Subject: [RFC PATCH 4/9] mm: Introduce vm_ops->mprotect()
Date: Fri, 31 May 2019 16:31:54 -0700
Message-Id: <20190531233159.30992-5-sean.j.christopherson@intel.com>
In-Reply-To: <20190531233159.30992-1-sean.j.christopherson@intel.com>
References: <20190531233159.30992-1-sean.j.christopherson@intel.com>

SGX will use the mprotect() hook to prevent userspace from circumventing
various security checks, i.e. Linux Security Modules.

Enclaves are built by copying data from normal memory into the Enclave
Page Cache (EPC).  Due to the nature of SGX, the EPC is represented by a
single file that must be MAP_SHARED, i.e. mprotect() only ever sees a
single MAP_SHARED vm_file.  Furthermore, all enclaves will need read,
write and execute pages in the EPC.  As a result, LSM policies cannot be
meaningfully applied, e.g. an LSM can deny access to the EPC as a whole,
but can't deny PROT_EXEC on a page that originated in a non-EXECUTE file
(which is long gone by the time mprotect() is called).

By hooking mprotect(), SGX can make explicit LSM upcalls while an
enclave is being built, i.e. when the kernel has a handle to the origin
of each enclave page, and enforce the result of the LSM policy whenever
userspace maps the enclave page in the future.

Alternatively, SGX could play games with MAY_{READ,WRITE,EXEC}, but that
approach is quite ugly, e.g. it would require userspace to call an SGX
ioctl() prior to using mprotect() to extend a page's protections.

Signed-off-by: Sean Christopherson
---
 include/linux/mm.h |  2 ++
 mm/mprotect.c      | 15 +++++++++++----
 2 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0e8834ac32b7..50a42364a885 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -458,6 +458,8 @@ struct vm_operations_struct {
 	void (*close)(struct vm_area_struct * area);
 	int (*split)(struct vm_area_struct * area, unsigned long addr);
 	int (*mremap)(struct vm_area_struct * area);
+	int (*mprotect)(struct vm_area_struct * area, unsigned long start,
+			unsigned long end, unsigned long prot);
 	vm_fault_t (*fault)(struct vm_fault *vmf);
 	vm_fault_t (*huge_fault)(struct vm_fault *vmf,
 			enum page_entry_size pe_size);
diff --git a/mm/mprotect.c b/mm/mprotect.c
index bf38dfbbb4b4..e466ca5e4fe0 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -547,13 +547,20 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 			goto out;
 		}
 
-		error = security_file_mprotect(vma, reqprot, prot);
-		if (error)
-			goto out;
-
 		tmp = vma->vm_end;
 		if (tmp > end)
 			tmp = end;
+
+		if (vma->vm_ops && vma->vm_ops->mprotect) {
+			error = vma->vm_ops->mprotect(vma, nstart, tmp, prot);
+			if (error)
+				goto out;
+		}
+
+		error = security_file_mprotect(vma, reqprot, prot);
+		if (error)
+			goto out;
+
 		error = mprotect_fixup(vma, &prev, nstart, tmp, newflags);
 		if (error)
 			goto out;
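For drivers other than SGX the new hook is opt-in.  A minimal sketch of how a
driver could veto permission changes on its VMAs follows (not from this
series; the "my_" names are hypothetical):

#include <linux/mm.h>
#include <linux/mman.h>
#include <linux/errno.h>

/*
 * Hypothetical driver callback: do_mprotect_pkey() invokes it before the
 * protection change is applied, so returning an error vetoes the change.
 */
static int my_vma_mprotect(struct vm_area_struct *vma, unsigned long start,
			   unsigned long end, unsigned long prot)
{
	/* Example policy: never allow these mappings to become executable. */
	if (prot & PROT_EXEC)
		return -EACCES;

	return 0;
}

static const struct vm_operations_struct my_vm_ops = {
	.mprotect = my_vma_mprotect,
	/* .open, .close, .fault, ... as for any other driver VMA */
};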
From patchwork Fri May 31 23:31:55 2019
From: Sean Christopherson
To: Jarkko Sakkinen
Subject: [RFC PATCH 5/9] x86/sgx: Restrict mapping without an enclave page to PROT_NONE
Date: Fri, 31 May 2019 16:31:55 -0700
Message-Id: <20190531233159.30992-6-sean.j.christopherson@intel.com>
In-Reply-To: <20190531233159.30992-1-sean.j.christopherson@intel.com>
References: <20190531233159.30992-1-sean.j.christopherson@intel.com>

To support LSM integration, SGX will require userspace to explicitly
specify the allowed protections for each page.  The allowed protections
will be supplied to and modified by LSMs (based on their policies).

To prevent userspace from circumventing the allowed protections, do not
allow PROT_{READ,WRITE,EXEC} mappings to an enclave without an
associated enclave page (which will track the allowed protections).

Signed-off-by: Sean Christopherson
---
 arch/x86/kernel/cpu/sgx/driver/main.c |  5 +++++
 arch/x86/kernel/cpu/sgx/encl.c        | 30 +++++++++++++++++++++++++++
 arch/x86/kernel/cpu/sgx/encl.h        |  3 +++
 3 files changed, 38 insertions(+)

diff --git a/arch/x86/kernel/cpu/sgx/driver/main.c b/arch/x86/kernel/cpu/sgx/driver/main.c
index 129d356aff30..65a87c2fdf02 100644
--- a/arch/x86/kernel/cpu/sgx/driver/main.c
+++ b/arch/x86/kernel/cpu/sgx/driver/main.c
@@ -63,6 +63,11 @@ static long sgx_compat_ioctl(struct file *filep, unsigned int cmd,
 static int sgx_mmap(struct file *file, struct vm_area_struct *vma)
 {
 	struct sgx_encl *encl = file->private_data;
+	int ret;
+
+	ret = sgx_map_allowed(encl, vma->vm_start, vma->vm_end, vma->vm_flags);
+	if (ret)
+		return ret;
 
 	vma->vm_ops = &sgx_vm_ops;
 	vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP | VM_IO;
diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index f23ea0fbaa47..955d4f430adc 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -235,6 +235,35 @@ static void sgx_vma_close(struct vm_area_struct *vma)
 	kref_put(&encl->refcount, sgx_encl_release);
 }
 
+int sgx_map_allowed(struct sgx_encl *encl, unsigned long start,
+		    unsigned long end, unsigned long prot)
+{
+	struct sgx_encl_page *page;
+	unsigned long addr;
+
+	prot &= (VM_READ | VM_WRITE | VM_EXEC);
+	if (!prot || !encl)
+		return 0;
+
+	mutex_lock(&encl->lock);
+
+	for (addr = start; addr < end; addr += PAGE_SIZE) {
+		page = radix_tree_lookup(&encl->page_tree, addr >> PAGE_SHIFT);
+		if (!page)
+			return -EACCES;
+	}
+
+	mutex_unlock(&encl->lock);
+
+	return 0;
+}
+
+static int sgx_vma_mprotect(struct vm_area_struct *vma, unsigned long start,
+			    unsigned long end, unsigned long prot)
+{
+	return sgx_map_allowed(vma->vm_private_data, start, end, prot);
+}
+
 static unsigned int sgx_vma_fault(struct vm_fault *vmf)
 {
 	unsigned long addr = (unsigned long)vmf->address;
@@ -372,6 +401,7 @@ static int sgx_vma_access(struct vm_area_struct *vma, unsigned long addr,
 const struct vm_operations_struct sgx_vm_ops = {
 	.close = sgx_vma_close,
 	.open = sgx_vma_open,
+	.mprotect = sgx_vma_mprotect,
 	.fault = sgx_vma_fault,
 	.access = sgx_vma_access,
 };
diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index c557f0374d74..6e310e3b3fff 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -106,6 +106,9 @@ static inline unsigned long sgx_pcmd_offset(pgoff_t page_index)
 		sizeof(struct sgx_pcmd);
 }
 
+int sgx_map_allowed(struct sgx_encl *encl, unsigned long start,
+		    unsigned long end, unsigned long prot);
+
 enum sgx_encl_mm_iter {
 	SGX_ENCL_MM_ITER_DONE		= 0,
 	SGX_ENCL_MM_ITER_NEXT		= 1,
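The practical effect for a loader (an illustrative flow, not part of the
patch, with made-up helper names) is that the ELRANGE can still be reserved
up front, but only as PROT_NONE; real protections are requested only for
ranges whose enclave pages already exist:

#include <stddef.h>
#include <sys/mman.h>

/*
 * Reserving the whole enclave range is still fine because PROT_NONE does
 * not require any backing enclave pages.
 */
static void *reserve_elrange(int enclave_fd, size_t encl_size)
{
	return mmap(NULL, encl_size, PROT_NONE, MAP_SHARED, enclave_fd, 0);
}

/*
 * After SGX_IOC_ENCLAVE_ADD_PAGES has populated [offset, offset + len),
 * upgrading the protections succeeds; on a range with no enclave pages the
 * new sgx_map_allowed() check makes this fail with EACCES.
 */
static int map_added_region(void *encl_base, size_t offset, size_t len)
{
	return mprotect((char *)encl_base + offset, len,
			PROT_READ | PROT_WRITE);
}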
From patchwork Fri May 31 23:31:56 2019
From: Sean Christopherson
To: Jarkko Sakkinen
Subject: [RFC PATCH 6/9] x86/sgx: Require userspace to provide allowed prots to ADD_PAGES
Date: Fri, 31 May 2019 16:31:56 -0700
Message-Id: <20190531233159.30992-7-sean.j.christopherson@intel.com>
In-Reply-To: <20190531233159.30992-1-sean.j.christopherson@intel.com>
References: <20190531233159.30992-1-sean.j.christopherson@intel.com>

...to support (the equivalent of) existing Linux Security Module
functionality.

Because SGX manually manages EPC memory, all enclave VMAs are backed by
the same vm_file, i.e. /dev/sgx/enclave, so that SGX can implement the
necessary hooks to move pages in/out of the EPC.  And because EPC pages
for any given enclave are fundamentally shared between processes, i.e.
CoW semantics are not possible with EPC pages, /dev/sgx/enclave must
always be MAP_SHARED.  Lastly, all real world enclaves will need read,
write and execute permissions to EPC pages.  As a result, SGX does not
play nice with existing LSM behavior as it is impossible to apply
policies to enclaves with any reasonable granularity, e.g. an LSM can
deny access to the EPC altogether, but can't deny potentially dangerous
behavior such as mapping pages RW->RX or RWX.

To give LSMs enough information to implement their policies without
having to resort to ugly things, e.g. holding a reference to the vm_file
of each enclave page, require userspace to explicitly state the allowed
protections for each page (region), i.e. take ALLOW_{READ,WRITE,EXEC} in
the ADD_PAGES ioctl.  The ALLOW_* flags will be passed to LSMs so that
they can make informed decisions when the enclave is being built, i.e.
when the source vm_file is available.  For example, SELinux's EXECMOD
permission can be required if an enclave is requesting both ALLOW_WRITE
and ALLOW_EXEC.

Update the mmap()/mprotect() hooks to enforce the ALLOW_* protections,
a la the standard VM_MAY{READ,WRITE,EXEC} flags.

The ALLOW_EXEC flag also has a second (important) use in that it can be
used to prevent loading an enclave from a noexec file system: on SGX2
hardware (regardless of kernel support for SGX2), userspace could EADD
from a noexec path using read-only permissions and later mprotect() and
ENCLU[EMODPE] the page to gain execute permissions.  By requiring
ALLOW_EXEC up front, SGX will be able to enforce noexec paths when
building the enclave.

Signed-off-by: Sean Christopherson
---
 arch/x86/include/uapi/asm/sgx.h        |  9 ++++++++-
 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 23 +++++++++++++++++------
 arch/x86/kernel/cpu/sgx/encl.c         |  2 +-
 arch/x86/kernel/cpu/sgx/encl.h         |  1 +
 4 files changed, 27 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/uapi/asm/sgx.h b/arch/x86/include/uapi/asm/sgx.h
index 4a12d6abbcb7..4489e92fa0dc 100644
--- a/arch/x86/include/uapi/asm/sgx.h
+++ b/arch/x86/include/uapi/asm/sgx.h
@@ -31,6 +31,11 @@ struct sgx_enclave_create {
 	__u64	src;
 };
 
+/* Supported flags for struct sgx_enclave_add_pages. */
+#define SGX_ALLOW_READ	VM_READ
+#define SGX_ALLOW_WRITE	VM_WRITE
+#define SGX_ALLOW_EXEC	VM_EXEC
+
 /**
  * struct sgx_enclave_add_pages - parameter structure for the
  * %SGX_IOC_ENCLAVE_ADD_PAGES ioctl
@@ -39,6 +44,7 @@ struct sgx_enclave_create {
  * @secinfo:	address for the SECINFO data (common to all pages)
  * @nr_pages:	number of pages (must be virtually contiguous)
  * @mrmask:	bitmask for the measured 256 byte chunks (common to all pages)
+ * @flags:	flags, e.g. SGX_ALLOW_{READ,WRITE,EXEC} (common to all pages)
 */
 struct sgx_enclave_add_pages {
 	__u64	addr;
@@ -46,7 +52,8 @@ struct sgx_enclave_add_pages {
 	__u64	secinfo;
 	__u32	nr_pages;
 	__u16	mrmask;
-} __attribute__((__packed__));
+	__u16	flags;
+};
 
 /**
  * struct sgx_enclave_init - parameter structure for the
diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
index 6acfcbdeca9a..c30acd3fbbdd 100644
--- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
@@ -235,7 +235,8 @@ static int sgx_validate_secs(const struct sgx_secs *secs,
 }
 
 static struct sgx_encl_page *sgx_encl_page_alloc(struct sgx_encl *encl,
-						 unsigned long addr)
+						 unsigned long addr,
+						 unsigned long allowed_prot)
 {
 	struct sgx_encl_page *encl_page;
 	int ret;
@@ -247,6 +248,7 @@ static struct sgx_encl_page *sgx_encl_page_alloc(struct sgx_encl *encl,
 		return ERR_PTR(-ENOMEM);
 	encl_page->desc = addr;
 	encl_page->encl = encl;
+	encl_page->allowed_prot = allowed_prot;
 	ret = radix_tree_insert(&encl->page_tree, PFN_DOWN(encl_page->desc),
 				encl_page);
 	if (ret) {
@@ -530,7 +532,7 @@ static int sgx_encl_queue_page(struct sgx_encl *encl,
 
 static int __sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 			       void *data, struct sgx_secinfo *secinfo,
-			       unsigned int mrmask)
+			       unsigned int mrmask, unsigned long allowed_prot)
 {
 	u64 page_type = secinfo->flags & SGX_SECINFO_PAGE_TYPE_MASK;
 	struct sgx_encl_page *encl_page;
@@ -556,7 +558,7 @@ static int __sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 		goto out;
 	}
 
-	encl_page = sgx_encl_page_alloc(encl, addr);
+	encl_page = sgx_encl_page_alloc(encl, addr, allowed_prot);
 	if (IS_ERR(encl_page)) {
 		ret = PTR_ERR(encl_page);
 		goto out;
@@ -576,12 +578,20 @@ static int __sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 
 static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 			     unsigned long src, struct sgx_secinfo *secinfo,
-			     unsigned int mrmask)
+			     unsigned int mrmask, unsigned int flags)
 {
+	unsigned long prot = secinfo->flags & (VM_READ | VM_WRITE | VM_EXEC);
+	unsigned long allowed_prot = flags & (VM_READ | VM_WRITE | VM_EXEC);
 	struct page *data_page;
 	void *data;
 	int ret;
 
+	BUILD_BUG_ON(SGX_SECINFO_R != VM_READ || SGX_SECINFO_W != VM_WRITE ||
+		     SGX_SECINFO_X != VM_EXEC);
+
+	if (prot & ~allowed_prot)
+		return -EACCES;
+
 	data_page = alloc_page(GFP_HIGHUSER);
 	if (!data_page)
 		return -ENOMEM;
@@ -593,7 +603,8 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 		goto out;
 	}
 
-	ret = __sgx_encl_add_page(encl, addr, data, secinfo, mrmask);
+	ret = __sgx_encl_add_page(encl, addr, data, secinfo, mrmask,
+				  allowed_prot);
 out:
 	kunmap(data_page);
 	__free_page(data_page);
@@ -645,7 +656,7 @@ static long sgx_ioc_enclave_add_pages(struct file *filep, unsigned int cmd,
 
 		ret = sgx_encl_add_page(encl, addp->addr + i*PAGE_SIZE,
 					addp->src + i*PAGE_SIZE,
-					&secinfo, addp->mrmask);
+					&secinfo, addp->mrmask, addp->flags);
 	}
 	return ret;
 }
diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 955d4f430adc..e5847571a265 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -249,7 +249,7 @@ int sgx_map_allowed(struct sgx_encl *encl, unsigned long start,
 
 	for (addr = start; addr < end; addr += PAGE_SIZE) {
 		page = radix_tree_lookup(&encl->page_tree, addr >> PAGE_SHIFT);
-		if (!page)
+		if (!page || (prot & ~page->allowed_prot))
 			return -EACCES;
 	}
 
diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index 6e310e3b3fff..7cca076a4987 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -41,6 +41,7 @@ enum sgx_encl_page_desc {
 
 struct sgx_encl_page {
 	unsigned long desc;
+	unsigned long allowed_prot;
 	struct sgx_epc_page *epc_page;
 	struct sgx_va_page *va_page;
 	struct sgx_encl *encl;
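As a usage sketch (not part of the patch; the helper name is invented and
the patched <asm/sgx.h> is assumed), a loader would declare up front the
maximal protections each region may ever need, which is what the LSM gets to
judge while the source file is still known:

#include <sys/ioctl.h>
#include <asm/sgx.h>

static int add_code_and_data(int enclave_fd,
			     struct sgx_enclave_add_pages *code,
			     struct sgx_enclave_add_pages *data)
{
	int ret;

	/* Code pages: may be mapped read/execute, never writable. */
	code->flags = SGX_ALLOW_READ | SGX_ALLOW_EXEC;
	ret = ioctl(enclave_fd, SGX_IOC_ENCLAVE_ADD_PAGES, code);
	if (ret)
		return ret;

	/*
	 * Data pages: read/write only; a later mprotect(PROT_EXEC) on them
	 * will be refused regardless of LSM policy.
	 */
	data->flags = SGX_ALLOW_READ | SGX_ALLOW_WRITE;
	return ioctl(enclave_fd, SGX_IOC_ENCLAVE_ADD_PAGES, data);
}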
From patchwork Fri May 31 23:31:57 2019
From: Sean Christopherson
To: Jarkko Sakkinen
Subject: [RFC PATCH 7/9] x86/sgx: Enforce noexec filesystem restriction for enclaves
Date: Fri, 31 May 2019 16:31:57 -0700
Message-Id: <20190531233159.30992-8-sean.j.christopherson@intel.com>
In-Reply-To: <20190531233159.30992-1-sean.j.christopherson@intel.com>
References: <20190531233159.30992-1-sean.j.christopherson@intel.com>

Do not allow an enclave page to be mapped with PROT_EXEC if the source
page is backed by a file on a noexec file system.

Signed-off-by: Sean Christopherson
---
 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 26 ++++++++++++++++++++++++--
 1 file changed, 24 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
index c30acd3fbbdd..5f71be7cbb01 100644
--- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
@@ -576,6 +576,27 @@ static int __sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 	return ret;
 }
 
+static int sgx_encl_page_protect(unsigned long src, unsigned long prot,
+				 unsigned long *allowed_prot)
+{
+	struct vm_area_struct *vma;
+
+	if (!(*allowed_prot & VM_EXEC))
+		goto do_check;
+
+	down_read(&current->mm->mmap_sem);
+	vma = find_vma(current->mm, src);
+	if (!vma || (vma->vm_file && path_noexec(&vma->vm_file->f_path)))
+		*allowed_prot &= ~VM_EXEC;
+	up_read(&current->mm->mmap_sem);
+
+do_check:
+	if (prot & ~*allowed_prot)
+		return -EACCES;
+
+	return 0;
+}
+
 static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 			     unsigned long src, struct sgx_secinfo *secinfo,
 			     unsigned int mrmask, unsigned int flags)
@@ -589,8 +610,9 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 	BUILD_BUG_ON(SGX_SECINFO_R != VM_READ || SGX_SECINFO_W != VM_WRITE ||
 		     SGX_SECINFO_X != VM_EXEC);
 
-	if (prot & ~allowed_prot)
-		return -EACCES;
+	ret = sgx_encl_page_protect(src, prot, &allowed_prot);
+	if (ret)
+		return ret;
 
 	data_page = alloc_page(GFP_HIGHUSER);
 	if (!data_page)

From patchwork Fri May 31 23:31:58 2019
From: Sean Christopherson
To: Jarkko Sakkinen
Subject: [RFC PATCH 8/9] LSM: x86/sgx: Introduce ->enclave_load() hook for Intel SGX
Date: Fri, 31 May 2019 16:31:58 -0700
Message-Id: <20190531233159.30992-9-sean.j.christopherson@intel.com>
In-Reply-To: <20190531233159.30992-1-sean.j.christopherson@intel.com>
References: <20190531233159.30992-1-sean.j.christopherson@intel.com>

enclave_load() is roughly analogous to the existing file_mprotect().

Due to the nature of SGX and its Enclave Page Cache (EPC), all enclave
VMAs are backed by a single file, i.e. /dev/sgx/enclave, that must be
MAP_SHARED.  Furthermore, all enclaves need read, write and execute
VMAs.  As a result, file_mprotect() does not provide any meaningful
security for enclaves since an LSM can only deny/grant access to the
EPC as a whole.

security_enclave_load() is called when SGX is first loading an enclave
page, i.e. copying a page from normal memory into the EPC.  The notable
difference from file_mprotect() is the allowed_prot parameter, which is
essentially an SGX-specific version of a VMA's MAY_{READ,WRITE,EXEC}
flags.  The purpose of allowed_prot is to enable checks such as
SELinux's FILE__EXECMOD permission without having to track and update
VMAs across multiple mm structs, i.e. SGX can ensure userspace doesn't
overstep its bounds simply by vetting what is maximally allowed at
build time and then restricting each enclave VMA's protections
accordingly.

An alternative to the allowed_prot approach would be to use an
enclave's SIGSTRUCT (a smallish structure that can uniquely identify
an enclave) as a proxy for the enclave.  For example, SGX could take
and hold a reference to the file containing the SIGSTRUCT (if it's in
a file) and call security_enclave_load() during mprotect().  While the
SIGSTRUCT approach would provide better precision, the actual value
added was deemed to be negligible.  On the other hand, pinning a file
for the lifetime of the enclave is ugly, and essentially caching LSM
policies in each page's allowed_prot avoids having to make an extra
LSM upcall during mprotect().

Note, extensive discussion yielded no sane alternative to some form of
SGX specific LSM hook[1].

[1] https://lkml.kernel.org/r/CALCETrXf8mSK45h7sTK5Wf+pXLVn=Bjsc_RLpgO-h-qdzBRo5Q@mail.gmail.com

Signed-off-by: Sean Christopherson
---
 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 14 +++++++++-----
 include/linux/lsm_hooks.h              | 16 ++++++++++++++++
 include/linux/security.h               |  2 ++
 security/security.c                    |  8 ++++++++
 4 files changed, 35 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
index 5f71be7cbb01..260417ecbcff 100644
--- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -580,21 +581,24 @@ static int sgx_encl_page_protect(unsigned long src, unsigned long prot,
 				 unsigned long *allowed_prot)
 {
 	struct vm_area_struct *vma;
+	int ret = 0;
 
-	if (!(*allowed_prot & VM_EXEC))
+	if (!(*allowed_prot & VM_EXEC) && !IS_ENABLED(CONFIG_SECURITY))
 		goto do_check;
 
 	down_read(&current->mm->mmap_sem);
 	vma = find_vma(current->mm, src);
 	if (!vma || (vma->vm_file && path_noexec(&vma->vm_file->f_path)))
 		*allowed_prot &= ~VM_EXEC;
+#ifdef CONFIG_SECURITY
+	ret = security_enclave_load(vma, prot, allowed_prot);
+#endif
 	up_read(&current->mm->mmap_sem);
 
 do_check:
-	if (prot & ~*allowed_prot)
-		return -EACCES;
-
-	return 0;
+	if (!ret && (prot & ~*allowed_prot))
+		ret = -EACCES;
+	return ret;
 }
 
 static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h
index 47f58cfb6a19..0562775424a0 100644
--- a/include/linux/lsm_hooks.h
+++ b/include/linux/lsm_hooks.h
@@ -1446,6 +1446,14 @@
  * @bpf_prog_free_security:
  *	Clean up the security information stored inside bpf prog.
  *
+ * Security hooks for Intel SGX enclaves.
+ *
+ * @enclave_load:
+ *	On success, returns 0 and optionally adjusts @allowed_prot
+ *	@vma: the source memory region of the enclave page being loaded.
+ *	@prot: the initial protection of the enclave page.
+ *	@allowed_prot: the maximum protections of the enclave page.
+ *	Return 0 if permission is granted.
 */
 union security_list_options {
 	int (*binder_set_context_mgr)(struct task_struct *mgr);
@@ -1807,6 +1815,11 @@ union security_list_options {
 	int (*bpf_prog_alloc_security)(struct bpf_prog_aux *aux);
 	void (*bpf_prog_free_security)(struct bpf_prog_aux *aux);
 #endif /* CONFIG_BPF_SYSCALL */
+
+#ifdef CONFIG_INTEL_SGX
+	int (*enclave_load)(struct vm_area_struct *vma, unsigned long prot,
+			    unsigned long *allowed_prot);
+#endif /* CONFIG_INTEL_SGX */
 };
 
 struct security_hook_heads {
@@ -2046,6 +2059,9 @@ struct security_hook_heads {
 	struct hlist_head bpf_prog_alloc_security;
 	struct hlist_head bpf_prog_free_security;
 #endif /* CONFIG_BPF_SYSCALL */
+#ifdef CONFIG_INTEL_SGX
+	struct hlist_head enclave_load;
+#endif /* CONFIG_INTEL_SGX */
 } __randomize_layout;
 
 /*
diff --git a/include/linux/security.h b/include/linux/security.h
index 659071c2e57c..2f7925eeef7e 100644
--- a/include/linux/security.h
+++ b/include/linux/security.h
@@ -392,6 +392,8 @@ void security_inode_invalidate_secctx(struct inode *inode);
 int security_inode_notifysecctx(struct inode *inode, void *ctx, u32 ctxlen);
 int security_inode_setsecctx(struct dentry *dentry, void *ctx, u32 ctxlen);
 int security_inode_getsecctx(struct inode *inode, void **ctx, u32 *ctxlen);
+int security_enclave_load(struct vm_area_struct *vma, unsigned long prot,
+			  unsigned long *allowed_prot);
 
 #else /* CONFIG_SECURITY */
 
 static inline int call_lsm_notifier(enum lsm_event event, void *data)
diff --git a/security/security.c b/security/security.c
index 613a5c00e602..07ed6763571e 100644
--- a/security/security.c
+++ b/security/security.c
@@ -2359,3 +2359,11 @@ void security_bpf_prog_free(struct bpf_prog_aux *aux)
 	call_void_hook(bpf_prog_free_security, aux);
 }
 #endif /* CONFIG_BPF_SYSCALL */
+
+#ifdef CONFIG_INTEL_SGX
+int security_enclave_load(struct vm_area_struct *vma, unsigned long prot,
+			  unsigned long *allowed_prot)
+{
+	return call_int_hook(enclave_load, 0, vma, prot, allowed_prot);
+}
+#endif /* CONFIG_INTEL_SGX */
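To make the shape of the new hook concrete, a minimal sketch of an LSM wiring
it up might look like the following (not part of the series; the "example"
names are invented and CONFIG_INTEL_SGX is assumed):

#include <linux/lsm_hooks.h>
#include <linux/mm.h>

#ifdef CONFIG_INTEL_SGX
static int example_enclave_load(struct vm_area_struct *vma, unsigned long prot,
				unsigned long *allowed_prot)
{
	/*
	 * Sample policy: enclave pages may never be writable and executable
	 * at the same time; trimming allowed_prot enforces that for the
	 * lifetime of the page.
	 */
	if ((*allowed_prot & VM_WRITE) && (*allowed_prot & VM_EXEC))
		*allowed_prot &= ~VM_EXEC;

	/* Returning non-zero here would reject loading the page entirely. */
	return 0;
}
#endif

/* Registered via security_add_hooks() in the LSM's init routine (omitted). */
static struct security_hook_list example_hooks[] __lsm_ro_after_init = {
#ifdef CONFIG_INTEL_SGX
	LSM_HOOK_INIT(enclave_load, example_enclave_load),
#endif
};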
From patchwork Fri May 31 23:31:59 2019
From: Sean Christopherson
To: Jarkko Sakkinen
Subject: [RFC PATCH 9/9] security/selinux: Add enclave_load() implementation
Date: Fri, 31 May 2019 16:31:59 -0700
Message-Id: <20190531233159.30992-10-sean.j.christopherson@intel.com>
In-Reply-To: <20190531233159.30992-1-sean.j.christopherson@intel.com>
References: <20190531233159.30992-1-sean.j.christopherson@intel.com>

The goal of selinux_enclave_load() is to provide a facsimile of the
existing selinux_file_mprotect() and file_map_prot_check() policies,
but tailored to the unique properties of SGX.  For example, an enclave
page is technically backed by a MAP_SHARED file, but the "file" is
essentially shared memory that is never persisted anywhere and also
requires execute permissions (for some pages).

The basic concept is to require appropriate execute permissions on the
source of the enclave for pages that are requesting PROT_EXEC, e.g. if
an enclave page is being loaded from a regular file, require
FILE__EXECUTE and/or FILE__EXECMOD, and if it's coming from an
anonymous/private mapping, require PROCESS__EXECMEM since the process
is essentially executing from the mapping, albeit in a roundabout way.

Note, FILE__READ and FILE__WRITE are intentionally not required even if
the source page is backed by a regular file.  Writes to the enclave
page are contained to the EPC, i.e. they never hit the original file,
and read permissions have already been vetted (or the VMA doesn't have
PROT_READ, in which case loading the page into the enclave will fail).

Signed-off-by: Sean Christopherson
---
 security/selinux/hooks.c | 85 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 85 insertions(+)

diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
index 3ec702cf46ca..f436a055dda7 100644
--- a/security/selinux/hooks.c
+++ b/security/selinux/hooks.c
@@ -6726,6 +6726,87 @@ static void selinux_bpf_prog_free(struct bpf_prog_aux *aux)
 }
 #endif
 
+#ifdef CONFIG_INTEL_SGX
+int selinux_enclave_load(struct vm_area_struct *vma, unsigned long prot,
+			 unsigned long *allowed_prot)
+{
+	const struct cred *cred = current_cred();
+	u32 sid = cred_sid(cred);
+	int rc;
+
+	/* SGX is supported only in 64-bit kernels. */
+	WARN_ON_ONCE(!default_noexec);
+
+	/*
+	 * SGX is responsible for checking @prot vs @allowed_prot, and SELinux
+	 * only cares about execute related permissions for enclaves.
+	 */
+	if (!(*allowed_prot & PROT_EXEC))
+		return 0;
+
+	/*
+	 * Loading an executable enclave page from a VMA that is not executable
+	 * itself requires EXECUTE permissions on the source file, or if there
+	 * is no regular source file, EXECMEM since the page is being loaded
+	 * from a non-executable anonymous mapping.
+	 */
+	if (!(vma->vm_flags & VM_EXEC)) {
+		if (vma->vm_file && !IS_PRIVATE(file_inode(vma->vm_file)))
+			rc = file_has_perm(cred, vma->vm_file, FILE__EXECUTE);
+		else
+			rc = avc_has_perm(&selinux_state,
+					  sid, sid, SECCLASS_PROCESS,
+					  PROCESS__EXECMEM, NULL);
+
+		/*
+		 * Reject the load if the enclave *needs* the page to be
+		 * executable, otherwise prevent it from becoming executable.
+		 */
+		if (rc) {
+			if (prot & PROT_EXEC)
+				return rc;
+
+			*allowed_prot &= ~PROT_EXEC;
+		}
+	}
+
+	/*
+	 * An enclave page that may do RW->RX or W+X requires EXECMOD (backed
+	 * by a regular file) or EXECMEM (loaded from an anonymous mapping).
+	 * Note, this hybrid EXECMOD and EXECMEM behavior is intentional and
+	 * reflects the nature of enclaves and the EPC, e.g. EPC is effectively
+	 * a non-persistent shared file, but each enclave is a private domain
+	 * within that shared file, so delegate to the source of the enclave.
+	 */
+	if ((*allowed_prot & PROT_EXEC) && (*allowed_prot & PROT_WRITE)) {
+		if (vma->vm_file && !IS_PRIVATE(file_inode(vma->vm_file)))
+			rc = file_has_perm(cred, vma->vm_file, FILE__EXECMOD);
+		else
+			rc = avc_has_perm(&selinux_state,
+					  sid, sid, SECCLASS_PROCESS,
+					  PROCESS__EXECMEM, NULL);
+		/*
+		 * Clear ALLOW_EXEC instead of ALLOW_WRITE if permissions are
+		 * lacking and @prot has neither PROT_WRITE nor PROT_EXEC.  If
+		 * userspace wanted RX they would have requested RX, and due to
+		 * lack of permissions they can never get RW->RX, i.e. the only
+		 * useful transition is R->RW.
+		 */
+		if (rc) {
+			if ((prot & PROT_EXEC) && (prot & PROT_WRITE))
+				return rc;
+
+			if (prot & PROT_EXEC)
+				*allowed_prot &= ~PROT_WRITE;
+			else
+				*allowed_prot &= ~PROT_EXEC;
+		}
+	}
+
+	return 0;
+}
+#endif
+
 struct lsm_blob_sizes selinux_blob_sizes __lsm_ro_after_init = {
 	.lbs_cred = sizeof(struct task_security_struct),
 	.lbs_file = sizeof(struct file_security_struct),
@@ -6968,6 +7049,10 @@ static struct security_hook_list selinux_hooks[] __lsm_ro_after_init = {
 	LSM_HOOK_INIT(bpf_map_free_security, selinux_bpf_map_free),
 	LSM_HOOK_INIT(bpf_prog_free_security, selinux_bpf_prog_free),
 #endif
+
+#ifdef CONFIG_INTEL_SGX
+	LSM_HOOK_INIT(enclave_load, selinux_enclave_load),
+#endif
 };
 
 static __init int selinux_init(void)