From patchwork Wed Jun 19 22:23:50 2019 X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11005495 From: Sean Christopherson To: Jarkko Sakkinen Cc: linux-sgx@vger.kernel.org, linux-security-module@vger.kernel.org, selinux@vger.kernel.org, Bill Roberts , Casey Schaufler , James Morris , Dave Hansen , Cedric Xing , Andy Lutomirski , Jethro Beekman , "Dr . Greg Wettstein" , Stephen Smalley Subject: [RFC PATCH v4 01/12] x86/sgx: Use mmu_notifier.release() instead of per-vma refcounting Date: Wed, 19 Jun 2019 15:23:50 -0700 Message-Id: <20190619222401.14942-2-sean.j.christopherson@intel.com> In-Reply-To: <20190619222401.14942-1-sean.j.christopherson@intel.com> References: <20190619222401.14942-1-sean.j.christopherson@intel.com> Using per-vma refcounting to track mm_structs associated with an enclave requires hooking .vm_close(), which in turn prevents the mm from merging vmas (precisely to allow refcounting). Avoid refcounting encl_mm altogether by registering an mmu_notifier at .mmap(), removing the dying encl_mm at mmu_notifier.release() and protecting mm_list during reclaim via a per-enclave SRCU. Removing refcounting/vm_close() allows merging of enclave vmas, at the cost of delaying removal of encl_mm structs from mm_list, i.e. an mm is disassociated from an enclave when the mm exits or the enclave dies, as opposed to when the last vma (in a given mm) is closed. The impact of delaying encl_mm removal is its memory footprint and whatever overhead is incurred during EPC reclaim (to walk an mm's vmas).
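For reference, the reclaim-side walk that replaces the old refcounted iterator follows the standard SRCU read-side pattern; a minimal sketch (the helper name is hypothetical and the layout is simplified, the real changes are in the reclaim.c hunks below):

    /*
     * Sketch: iterate an SRCU-protected mm_list.  Readers hold the enclave's
     * SRCU read lock; writers remove entries with list_del_rcu() and call
     * synchronize_srcu() before freeing them.
     */
    static void sgx_walk_encl_mms(struct sgx_encl *encl)
    {
    	struct sgx_encl_mm *encl_mm;
    	int idx;

    	idx = srcu_read_lock(&encl->srcu);
    	list_for_each_entry_rcu(encl_mm, &encl->mm_list, list) {
    		if (!mmget_not_zero(encl_mm->mm))
    			continue;	/* the mm is already exiting */
    		/* ... operate on encl_mm->mm ... */
    		mmput_async(encl_mm->mm);
    	}
    	srcu_read_unlock(&encl->srcu, idx);
    }
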
Practically speaking, a stale encl_mm will exist for a meaningful amount of time if and only if the enclave is mapped in a long-lived process and then passed off to another long-lived process. It is expected that the vast majority of use cases will not encounter this condition, e.g. even using a daemon to build enclaves should not result in a stale encl_mm as the builder should never need to mmap() the enclave. Even if there are scenarios that lead to defunct encl_mms, the cost is likely far outweighed by the benefits of reducing the number of vmas across all enclaves. Note, using SRCU to protect mm_list is not strictly necessary, i.e. the existing walker with encl_mm refcounting could be massaged to work with mmu_notifier.release(), but the resulting code is subtle and fragile (I never actually got it working). The primary issue is that an encl_mm can't be moved off the list until its refcount goes to zero, otherwise the custom walker goes off into the weeds. The refcount requirement then prevents using mm_list to identify if an mmu_notifier.release() has fired, i.e. another mechanism is needed to guard against races between exit_mmap() and sgx_release(). Cc: Dave Hansen Cc: Andy Lutomirski Signed-off-by: Sean Christopherson --- arch/x86/Kconfig | 2 + arch/x86/kernel/cpu/sgx/driver/ioctl.c | 14 -- arch/x86/kernel/cpu/sgx/driver/main.c | 38 ++++ arch/x86/kernel/cpu/sgx/encl.c | 234 +++++++++++-------------- arch/x86/kernel/cpu/sgx/encl.h | 19 +- arch/x86/kernel/cpu/sgx/reclaim.c | 71 +++----- 6 files changed, 182 insertions(+), 196 deletions(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index a0fd17c32521..940c52762f24 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -1918,6 +1918,8 @@ config X86_INTEL_MEMORY_PROTECTION_KEYS config INTEL_SGX bool "Intel SGX core functionality" depends on X86_64 && CPU_SUP_INTEL + select MMU_NOTIFIER + select SRCU ---help--- Intel(R) SGX is a set of CPU instructions that can be used by applications to set aside private regions of code and data, referred diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c index d17c60dca114..3552d642b26f 100644 --- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c +++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c @@ -276,7 +276,6 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs) { unsigned long encl_size = secs->size + PAGE_SIZE; struct sgx_epc_page *secs_epc; - struct sgx_encl_mm *encl_mm; unsigned long ssaframesize; struct sgx_pageinfo pginfo; struct sgx_secinfo secinfo; @@ -311,12 +310,6 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs) INIT_WORK(&encl->work, sgx_add_page_worker); - encl_mm = sgx_encl_mm_add(encl, current->mm); - if (IS_ERR(encl_mm)) { - ret = PTR_ERR(encl_mm); - goto err_out; - } - secs_epc = sgx_alloc_page(&encl->secs, true); if (IS_ERR(secs_epc)) { ret = PTR_ERR(secs_epc); @@ -369,13 +362,6 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs) encl->backing = NULL; } - if (!list_empty(&encl->mm_list)) { - encl_mm = list_first_entry(&encl->mm_list, struct sgx_encl_mm, - list); - list_del(&encl_mm->list); - kfree(encl_mm); - } - mutex_unlock(&encl->lock); return ret; } diff --git a/arch/x86/kernel/cpu/sgx/driver/main.c b/arch/x86/kernel/cpu/sgx/driver/main.c index 0c831ee5e2de..07aa5f91b2dd 100644 --- a/arch/x86/kernel/cpu/sgx/driver/main.c +++ b/arch/x86/kernel/cpu/sgx/driver/main.c @@ -25,6 +25,7 @@ u32 sgx_xsave_size_tbl[64]; static int sgx_open(struct inode *inode, struct file *file) { 
struct sgx_encl *encl; + int ret; encl = kzalloc(sizeof(*encl), GFP_KERNEL); if (!encl) @@ -38,6 +39,12 @@ static int sgx_open(struct inode *inode, struct file *file) INIT_LIST_HEAD(&encl->mm_list); spin_lock_init(&encl->mm_lock); + ret = init_srcu_struct(&encl->srcu); + if (ret) { + kfree(encl); + return ret; + } + file->private_data = encl; return 0; @@ -46,6 +53,32 @@ static int sgx_open(struct inode *inode, struct file *file) static int sgx_release(struct inode *inode, struct file *file) { struct sgx_encl *encl = file->private_data; + struct sgx_encl_mm *encl_mm; + + /* + * Objects can't be *moved* off an RCU protected list (deletion is ok), + * nor can the object be freed until after synchronize_srcu(). + */ +restart: + spin_lock(&encl->mm_lock); + if (list_empty(&encl->mm_list)) { + encl_mm = NULL; + } else { + encl_mm = list_first_entry(&encl->mm_list, struct sgx_encl_mm, + list); + list_del_rcu(&encl_mm->list); + } + spin_unlock(&encl->mm_lock); + + if (encl_mm) { + synchronize_srcu(&encl->srcu); + + mmu_notifier_unregister(&encl_mm->mmu_notifier, encl_mm->mm); + + sgx_encl_mm_release(encl_mm); + + goto restart; + } kref_put(&encl->refcount, sgx_encl_release); @@ -63,6 +96,11 @@ static long sgx_compat_ioctl(struct file *filep, unsigned int cmd, static int sgx_mmap(struct file *file, struct vm_area_struct *vma) { struct sgx_encl *encl = file->private_data; + int ret; + + ret = sgx_encl_mm_add(encl, vma->vm_mm); + if (ret) + return ret; vma->vm_ops = &sgx_vm_ops; vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP | VM_IO; diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c index 9566eb72d417..c6436bbd4a68 100644 --- a/arch/x86/kernel/cpu/sgx/encl.c +++ b/arch/x86/kernel/cpu/sgx/encl.c @@ -132,103 +132,125 @@ static struct sgx_encl_page *sgx_encl_load_page(struct sgx_encl *encl, return entry; } -struct sgx_encl_mm *sgx_encl_mm_add(struct sgx_encl *encl, - struct mm_struct *mm) +static void sgx_encl_mm_release_wq(struct work_struct *work) +{ + struct sgx_encl_mm *encl_mm = + container_of(work, struct sgx_encl_mm, release_work); + + sgx_encl_mm_release(encl_mm); +} + +/* + * Being a call_srcu() callback, this needs to be short, and sgx_encl_release() + * is anything but short. Do the final freeing in yet another async callback. + */ +static void sgx_encl_mm_release_delayed(struct rcu_head *rcu) +{ + struct sgx_encl_mm *encl_mm = + container_of(rcu, struct sgx_encl_mm, rcu); + + INIT_WORK(&encl_mm->release_work, sgx_encl_mm_release_wq); + schedule_work(&encl_mm->release_work); +} + +static void sgx_mmu_notifier_release(struct mmu_notifier *mn, + struct mm_struct *mm) +{ + struct sgx_encl_mm *encl_mm = + container_of(mn, struct sgx_encl_mm, mmu_notifier); + struct sgx_encl_mm *tmp = NULL; + + /* + * The enclave itself can remove encl_mm. Note, objects can't be moved + * off an RCU protected list, but deletion is ok. + */ + spin_lock(&encl_mm->encl->mm_lock); + list_for_each_entry(tmp, &encl_mm->encl->mm_list, list) { + if (tmp == encl_mm) { + list_del_rcu(&encl_mm->list); + break; + } + } + spin_unlock(&encl_mm->encl->mm_lock); + + if (tmp == encl_mm) { + synchronize_srcu(&encl_mm->encl->srcu); + + /* + * Delay freeing encl_mm until after mmu_notifier releases any + * SRCU locks. synchronize_srcu() must be called from process + * context, i.e. we can't throw mmu_notifier_unregister() in a + * work queue and be done with it. 
+ */ + mmu_notifier_unregister_no_release(mn, mm); + mmu_notifier_call_srcu(&encl_mm->rcu, + &sgx_encl_mm_release_delayed); + } +} + +static const struct mmu_notifier_ops sgx_mmu_notifier_ops = { + .release = sgx_mmu_notifier_release, +}; + +static struct sgx_encl_mm *sgx_encl_find_mm(struct sgx_encl *encl, + struct mm_struct *mm) +{ + struct sgx_encl_mm *encl_mm = NULL; + struct sgx_encl_mm *tmp; + int idx; + + idx = srcu_read_lock(&encl->srcu); + + list_for_each_entry_rcu(tmp, &encl->mm_list, list) { + if (tmp->mm == mm) { + encl_mm = tmp; + break; + } + } + + srcu_read_unlock(&encl->srcu, idx); + + return encl_mm; +} + +int sgx_encl_mm_add(struct sgx_encl *encl, struct mm_struct *mm) { struct sgx_encl_mm *encl_mm; + int ret; + + lockdep_assert_held_exclusive(&mm->mmap_sem); + + /* + * mm_structs are kept on mm_list until the mm or the enclave dies, + * i.e. once an mm is off the list, it's gone for good, therefore it's + * impossible to get a false positive on @mm due to a stale mm_list. + */ + if (sgx_encl_find_mm(encl, mm)) + return 0; encl_mm = kzalloc(sizeof(*encl_mm), GFP_KERNEL); if (!encl_mm) - return ERR_PTR(-ENOMEM); + return -ENOMEM; encl_mm->encl = encl; encl_mm->mm = mm; - kref_init(&encl_mm->refcount); + encl_mm->mmu_notifier.ops = &sgx_mmu_notifier_ops; + + ret = __mmu_notifier_register(&encl_mm->mmu_notifier, mm); + if (ret) { + kfree(encl_mm); + return ret; + } + + kref_get(&encl->refcount); spin_lock(&encl->mm_lock); - list_add(&encl_mm->list, &encl->mm_list); + list_add_rcu(&encl_mm->list, &encl->mm_list); spin_unlock(&encl->mm_lock); - return encl_mm; -} + synchronize_srcu(&encl->srcu); -void sgx_encl_mm_release(struct kref *ref) -{ - struct sgx_encl_mm *encl_mm = - container_of(ref, struct sgx_encl_mm, refcount); - - spin_lock(&encl_mm->encl->mm_lock); - list_del(&encl_mm->list); - spin_unlock(&encl_mm->encl->mm_lock); - - kfree(encl_mm); -} - -static struct sgx_encl_mm *sgx_encl_get_mm(struct sgx_encl *encl, - struct mm_struct *mm) -{ - struct sgx_encl_mm *encl_mm = NULL; - struct sgx_encl_mm *prev_mm = NULL; - int iter; - - while (true) { - encl_mm = sgx_encl_next_mm(encl, prev_mm, &iter); - if (prev_mm) - kref_put(&prev_mm->refcount, sgx_encl_mm_release); - prev_mm = encl_mm; - - if (iter == SGX_ENCL_MM_ITER_DONE) - break; - - if (iter == SGX_ENCL_MM_ITER_RESTART) - continue; - - if (mm == encl_mm->mm) - return encl_mm; - } - - return NULL; -} - -static void sgx_vma_open(struct vm_area_struct *vma) -{ - struct sgx_encl *encl = vma->vm_private_data; - struct sgx_encl_mm *encl_mm; - - if (!encl) - return; - - if (encl->flags & SGX_ENCL_DEAD) - goto error; - - encl_mm = sgx_encl_get_mm(encl, vma->vm_mm); - if (!encl_mm) { - encl_mm = sgx_encl_mm_add(encl, vma->vm_mm); - if (IS_ERR(encl_mm)) - goto error; - } - - return; - -error: - vma->vm_private_data = NULL; -} - -static void sgx_vma_close(struct vm_area_struct *vma) -{ - struct sgx_encl *encl = vma->vm_private_data; - struct sgx_encl_mm *encl_mm; - - if (!encl) - return; - - encl_mm = sgx_encl_get_mm(encl, vma->vm_mm); - if (encl_mm) { - kref_put(&encl_mm->refcount, sgx_encl_mm_release); - - /* Release kref for the VMA. 
*/ - kref_put(&encl_mm->refcount, sgx_encl_mm_release); - } + return 0; } static unsigned int sgx_vma_fault(struct vm_fault *vmf) @@ -366,8 +388,6 @@ static int sgx_vma_access(struct vm_area_struct *vma, unsigned long addr, } const struct vm_operations_struct sgx_vm_ops = { - .close = sgx_vma_close, - .open = sgx_vma_open, .fault = sgx_vma_fault, .access = sgx_vma_access, }; @@ -465,7 +485,7 @@ void sgx_encl_release(struct kref *ref) if (encl->backing) fput(encl->backing); - WARN(!list_empty(&encl->mm_list), "sgx: mm_list non-empty"); + WARN_ONCE(!list_empty(&encl->mm_list), "sgx: mm_list non-empty"); kfree(encl); } @@ -503,46 +523,6 @@ struct page *sgx_encl_get_backing_page(struct sgx_encl *encl, pgoff_t index) return shmem_read_mapping_page_gfp(mapping, index, gfpmask); } -/** - * sgx_encl_next_mm() - Iterate to the next mm - * @encl: an enclave - * @mm: an mm list entry - * @iter: iterator status - * - * Return: the enclave mm or NULL - */ -struct sgx_encl_mm *sgx_encl_next_mm(struct sgx_encl *encl, - struct sgx_encl_mm *encl_mm, int *iter) -{ - struct list_head *entry; - - WARN(!encl, "%s: encl is NULL", __func__); - WARN(!iter, "%s: iter is NULL", __func__); - - spin_lock(&encl->mm_lock); - - entry = encl_mm ? encl_mm->list.next : encl->mm_list.next; - WARN(!entry, "%s: entry is NULL", __func__); - - if (entry == &encl->mm_list) { - spin_unlock(&encl->mm_lock); - *iter = SGX_ENCL_MM_ITER_DONE; - return NULL; - } - - encl_mm = list_entry(entry, struct sgx_encl_mm, list); - - if (!kref_get_unless_zero(&encl_mm->refcount)) { - spin_unlock(&encl->mm_lock); - *iter = SGX_ENCL_MM_ITER_RESTART; - return NULL; - } - - spin_unlock(&encl->mm_lock); - *iter = SGX_ENCL_MM_ITER_NEXT; - return encl_mm; -} - static int sgx_encl_test_and_clear_young_cb(pte_t *ptep, pgtable_t token, unsigned long addr, void *data) { diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h index c557f0374d74..0904b3c20ed0 100644 --- a/arch/x86/kernel/cpu/sgx/encl.h +++ b/arch/x86/kernel/cpu/sgx/encl.h @@ -9,9 +9,11 @@ #include #include #include +#include #include #include #include +#include #include /** @@ -57,8 +59,10 @@ enum sgx_encl_flags { struct sgx_encl_mm { struct sgx_encl *encl; struct mm_struct *mm; - struct kref refcount; struct list_head list; + struct mmu_notifier mmu_notifier; + struct work_struct release_work; + struct rcu_head rcu; }; struct sgx_encl { @@ -72,6 +76,7 @@ struct sgx_encl { spinlock_t mm_lock; struct file *backing; struct kref refcount; + struct srcu_struct srcu; unsigned long base; unsigned long size; unsigned long ssaframesize; @@ -118,11 +123,13 @@ void sgx_encl_destroy(struct sgx_encl *encl); void sgx_encl_release(struct kref *ref); pgoff_t sgx_encl_get_index(struct sgx_encl *encl, struct sgx_encl_page *page); struct page *sgx_encl_get_backing_page(struct sgx_encl *encl, pgoff_t index); -struct sgx_encl_mm *sgx_encl_next_mm(struct sgx_encl *encl, - struct sgx_encl_mm *encl_mm, int *iter); -struct sgx_encl_mm *sgx_encl_mm_add(struct sgx_encl *encl, - struct mm_struct *mm); -void sgx_encl_mm_release(struct kref *ref); +int sgx_encl_mm_add(struct sgx_encl *encl, struct mm_struct *mm); +static inline void sgx_encl_mm_release(struct sgx_encl_mm *encl_mm) +{ + kref_put(&encl_mm->encl->refcount, sgx_encl_release); + + kfree(encl_mm); +} int sgx_encl_test_and_clear_young(struct mm_struct *mm, struct sgx_encl_page *page); struct sgx_encl_page *sgx_encl_reserve_page(struct sgx_encl *encl, diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c index 
f192ade93245..e9427220415b 100644 --- a/arch/x86/kernel/cpu/sgx/reclaim.c +++ b/arch/x86/kernel/cpu/sgx/reclaim.c @@ -140,23 +140,13 @@ static bool sgx_reclaimer_evict(struct sgx_epc_page *epc_page) { struct sgx_encl_page *page = epc_page->owner; struct sgx_encl *encl = page->encl; - struct sgx_encl_mm *encl_mm = NULL; - struct sgx_encl_mm *prev_mm = NULL; + struct sgx_encl_mm *encl_mm; bool ret = true; - int iter; + int idx; - while (true) { - encl_mm = sgx_encl_next_mm(encl, prev_mm, &iter); - if (prev_mm) - kref_put(&prev_mm->refcount, sgx_encl_mm_release); - prev_mm = encl_mm; - - if (iter == SGX_ENCL_MM_ITER_DONE) - break; - - if (iter == SGX_ENCL_MM_ITER_RESTART) - continue; + idx = srcu_read_lock(&encl->srcu); + list_for_each_entry_rcu(encl_mm, &encl->mm_list, list) { if (!mmget_not_zero(encl_mm->mm)) continue; @@ -164,14 +154,14 @@ static bool sgx_reclaimer_evict(struct sgx_epc_page *epc_page) ret = !sgx_encl_test_and_clear_young(encl_mm->mm, page); up_read(&encl_mm->mm->mmap_sem); - mmput(encl_mm->mm); + mmput_async(encl_mm->mm); - if (!ret || (encl->flags & SGX_ENCL_DEAD)) { - kref_put(&encl_mm->refcount, sgx_encl_mm_release); + if (!ret || (encl->flags & SGX_ENCL_DEAD)) break; - } } + srcu_read_unlock(&encl->srcu, idx); + /* * Do not reclaim this page if it has been recently accessed by any * mm_struct *and* if the enclave is still alive. No need to take @@ -195,24 +185,13 @@ static void sgx_reclaimer_block(struct sgx_epc_page *epc_page) struct sgx_encl_page *page = epc_page->owner; unsigned long addr = SGX_ENCL_PAGE_ADDR(page); struct sgx_encl *encl = page->encl; - struct sgx_encl_mm *encl_mm = NULL; - struct sgx_encl_mm *prev_mm = NULL; + struct sgx_encl_mm *encl_mm; struct vm_area_struct *vma; - int iter; - int ret; + int idx, ret; - while (true) { - encl_mm = sgx_encl_next_mm(encl, prev_mm, &iter); - if (prev_mm) - kref_put(&prev_mm->refcount, sgx_encl_mm_release); - prev_mm = encl_mm; - - if (iter == SGX_ENCL_MM_ITER_DONE) - break; - - if (iter == SGX_ENCL_MM_ITER_RESTART) - continue; + idx = srcu_read_lock(&encl->srcu); + list_for_each_entry_rcu(encl_mm, &encl->mm_list, list) { if (!mmget_not_zero(encl_mm->mm)) continue; @@ -224,9 +203,11 @@ static void sgx_reclaimer_block(struct sgx_epc_page *epc_page) up_read(&encl_mm->mm->mmap_sem); - mmput(encl_mm->mm); + mmput_async(encl_mm->mm); } + srcu_read_unlock(&encl->srcu, idx); + mutex_lock(&encl->lock); if (!(encl->flags & SGX_ENCL_DEAD)) { @@ -289,32 +270,24 @@ static void sgx_ipi_cb(void *info) static const cpumask_t *sgx_encl_ewb_cpumask(struct sgx_encl *encl) { cpumask_t *cpumask = &encl->cpumask; - struct sgx_encl_mm *encl_mm = NULL; - struct sgx_encl_mm *prev_mm = NULL; - int iter; + struct sgx_encl_mm *encl_mm; + int idx; cpumask_clear(cpumask); - while (true) { - encl_mm = sgx_encl_next_mm(encl, prev_mm, &iter); - if (prev_mm) - kref_put(&prev_mm->refcount, sgx_encl_mm_release); - prev_mm = encl_mm; - - if (iter == SGX_ENCL_MM_ITER_DONE) - break; - - if (iter == SGX_ENCL_MM_ITER_RESTART) - continue; + idx = srcu_read_lock(&encl->srcu); + list_for_each_entry_rcu(encl_mm, &encl->mm_list, list) { if (!mmget_not_zero(encl_mm->mm)) continue; cpumask_or(cpumask, cpumask, mm_cpumask(encl_mm->mm)); - mmput(encl_mm->mm); + mmput_async(encl_mm->mm); } + srcu_read_unlock(&encl->srcu, idx); + return cpumask; } From patchwork Wed Jun 19 22:23:51 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11005443 Return-Path: Received: 
From: Sean Christopherson To: Jarkko Sakkinen Cc: linux-sgx@vger.kernel.org, linux-security-module@vger.kernel.org, selinux@vger.kernel.org, Bill Roberts , Casey Schaufler , James Morris , Dave Hansen , Cedric Xing , Andy Lutomirski , Jethro Beekman , "Dr . Greg Wettstein" , Stephen Smalley Subject: [RFC PATCH v4 02/12] x86/sgx: Do not naturally align MAP_FIXED address Date: Wed, 19 Jun 2019 15:23:51 -0700 Message-Id: <20190619222401.14942-3-sean.j.christopherson@intel.com> In-Reply-To: <20190619222401.14942-1-sean.j.christopherson@intel.com> References: <20190619222401.14942-1-sean.j.christopherson@intel.com> SGX enclaves have an associated Enclave Linear Range (ELRANGE) that is tracked and enforced by the CPU using a base+mask approach, similar to hardware range registers such as the variable MTRRs. As a result, the ELRANGE must be naturally sized and aligned. To reduce boilerplate code that would be needed in every userspace enclave loader, the SGX driver naturally aligns the mmap() address and also requires the range to be naturally sized. Unfortunately, SGX fails to grant a waiver to the MAP_FIXED case, e.g. incorrectly rejects mmap() if userspace is attempting to map a small slice of an existing enclave.
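Concretely, the case being fixed is a loader (re)mapping a small slice of an already-created enclave at a fixed address; a hedged userspace sketch (the variable names and permissions are illustrative only, the pattern mirrors the selftest changes later in this series):

    /* Remap a single page of an existing enclave at its fixed address. */
    void *page = mmap((void *)(encl_base + page_offset), 4096,
    		      PROT_READ | PROT_WRITE,
    		      MAP_SHARED | MAP_FIXED, enclave_fd, 0);
    if (page == MAP_FAILED)
    	perror("mmap");	/* rejected by sgx_get_unmapped_area() without this fix */
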
Signed-off-by: Sean Christopherson --- arch/x86/kernel/cpu/sgx/driver/main.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/arch/x86/kernel/cpu/sgx/driver/main.c b/arch/x86/kernel/cpu/sgx/driver/main.c index 07aa5f91b2dd..29384cdd0842 100644 --- a/arch/x86/kernel/cpu/sgx/driver/main.c +++ b/arch/x86/kernel/cpu/sgx/driver/main.c @@ -115,7 +115,13 @@ static unsigned long sgx_get_unmapped_area(struct file *file, unsigned long pgoff, unsigned long flags) { - if (len < 2 * PAGE_SIZE || len & (len - 1) || flags & MAP_PRIVATE) + if (flags & MAP_PRIVATE) + return -EINVAL; + + if (flags & MAP_FIXED) + return addr; + + if (len < 2 * PAGE_SIZE || len & (len - 1)) return -EINVAL; addr = current->mm->get_unmapped_area(file, addr, 2 * len, pgoff, From patchwork Wed Jun 19 22:23:52 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11005419 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9958313AF for ; Wed, 19 Jun 2019 22:24:14 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8412B286AE for ; Wed, 19 Jun 2019 22:24:14 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 75D5D288B4; Wed, 19 Jun 2019 22:24:14 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0988E286AE for ; Wed, 19 Jun 2019 22:24:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726482AbfFSWYN (ORCPT ); Wed, 19 Jun 2019 18:24:13 -0400 Received: from mga18.intel.com ([134.134.136.126]:40155 "EHLO mga18.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726246AbfFSWYN (ORCPT ); Wed, 19 Jun 2019 18:24:13 -0400 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga106.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 19 Jun 2019 15:24:13 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.63,394,1557212400"; d="scan'208";a="150743753" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.36]) by orsmga007.jf.intel.com with ESMTP; 19 Jun 2019 15:24:13 -0700 From: Sean Christopherson To: Jarkko Sakkinen Cc: linux-sgx@vger.kernel.org, linux-security-module@vger.kernel.org, selinux@vger.kernel.org, Bill Roberts , Casey Schaufler , James Morris , Dave Hansen , Cedric Xing , Andy Lutomirski , Jethro Beekman , "Dr . 
Greg Wettstein" , Stephen Smalley Subject: [RFC PATCH v4 03/12] selftests: x86/sgx: Mark the enclave loader as not needing an exec stack Date: Wed, 19 Jun 2019 15:23:52 -0700 Message-Id: <20190619222401.14942-4-sean.j.christopherson@intel.com> In-Reply-To: <20190619222401.14942-1-sean.j.christopherson@intel.com> References: <20190619222401.14942-1-sean.j.christopherson@intel.com> The SGX enclave loader doesn't need an executable stack, but linkers will assume it does due to the lack of .note.GNU-stack sections in the loader's assembly code. As a result, the kernel tags the loader as having "read implies exec", and so adds PROT_EXEC to all mmap()s, even those for mapping EPC regions. This will cause problems in the future when userspace needs to explicitly state a page's protection bits when the page is added to an enclave, e.g. adding TCS pages as R+W will cause mmap() to fail when the kernel tacks on +X. Explicitly tell the linker that an executable stack is not needed. Alternatively, each .S file could add .note.GNU-stack, but the loader should never need an executable stack so zap it in one fell swoop. Signed-off-by: Sean Christopherson --- tools/testing/selftests/x86/sgx/Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/testing/selftests/x86/sgx/Makefile b/tools/testing/selftests/x86/sgx/Makefile index 1fd6f2708e81..10136b73096b 100644 --- a/tools/testing/selftests/x86/sgx/Makefile +++ b/tools/testing/selftests/x86/sgx/Makefile @@ -2,7 +2,7 @@ top_srcdir = ../../../../.. include ../../lib.mk -HOST_CFLAGS := -Wall -Werror -g $(INCLUDES) -fPIC +HOST_CFLAGS := -Wall -Werror -g $(INCLUDES) -fPIC -z noexecstack ENCL_CFLAGS := -Wall -Werror -static -nostdlib -nostartfiles -fPIC \ -fno-stack-protector -mrdrnd $(INCLUDES) From patchwork Wed Jun 19 22:23:53 2019 X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11005431
From: Sean Christopherson To: Jarkko Sakkinen Cc: linux-sgx@vger.kernel.org, linux-security-module@vger.kernel.org, selinux@vger.kernel.org, Bill Roberts , Casey Schaufler , James Morris , Dave Hansen , Cedric Xing , Andy Lutomirski , Jethro Beekman , "Dr . Greg Wettstein" , Stephen Smalley Subject: [RFC PATCH v4 04/12] x86/sgx: Require userspace to define enclave pages' protection bits Date: Wed, 19 Jun 2019 15:23:53 -0700 Message-Id: <20190619222401.14942-5-sean.j.christopherson@intel.com> In-Reply-To: <20190619222401.14942-1-sean.j.christopherson@intel.com> References: <20190619222401.14942-1-sean.j.christopherson@intel.com> Existing Linux Security Module policies restrict userspace's ability to map memory, e.g. may require privileged permissions to map a page that is simultaneously writable and executable. Said permissions are often tied to the file which backs the mapped memory, i.e. vm_file. For reasons explained below, SGX does not allow LSMs to enforce policies using existing LSM hooks such as file_mprotect(). Explicitly track the protection bits for an enclave page (separate from the vma/pte bits) and require userspace to explicitly define a page's protection bits when the page is added to the enclave. Enclave page protection bits pave the way to adding a security_enclave_load() LSM hook as an SGX equivalent to security_file_mprotect(), e.g. SGX can pass the page's protection bits and source vma to LSMs. The source vma will allow LSMs to tie permissions to files, e.g. the file containing the enclave's code and initial data, and the protection bits will allow LSMs to make decisions based on the capabilities of the process, e.g. if a process is allowed to load unmeasured code or load code from anonymous memory. Due to the nature of the Enclave Page Cache, and because the EPC is manually managed by SGX, all enclave vmas are backed by the same file, i.e. /dev/sgx/enclave. Specifically, a single file allows SGX to use file op hooks to move pages in/out of the EPC. Furthermore, EPC pages for any given enclave are fundamentally shared between processes, i.e. CoW semantics are not possible with EPC pages due to hardware restrictions such as 1:1 mappings between virtual and physical addresses (within the enclave). Lastly, all real world enclaves will need read, write and execute permissions to EPC pages. As a result, SGX does not play nice with existing LSM behavior as it is impossible to apply policies to enclaves with reasonable granularity, e.g. an LSM can deny access to EPC altogether, but can't deny potentially unwanted behavior such as mapping pages WX, loading code from anonymous memory, loading unmeasured code, etc... For example, because all (practical) enclaves need RW pages for data and RX pages for code, SELinux's existing policies will require all enclaves to have FILE__READ, FILE__WRITE and FILE__EXECUTE permissions on /dev/sgx/enclave.
Witholding FILE__WRITE or FILE__EXECUTE in an attempt to deny RW->RX or RWX would prevent running *any* enclave, even those that cleanly separate RW and RX pages. And because /dev/sgx/enclave requires MAP_SHARED, the anonymous/CoW checks that would trigger FILE__EXECMOD or PROCESS__EXECMEM permissions will never fire. Taking protection bits has a second use in that it can be used to prevent loading an enclave from a noexec file system. On SGX2 hardware, regardless of kernel support for SGX2, userspace could EADD a page from a noexec path using read-only permissions and later mprotect() and ENCLU[EMODPE] the page to gain execute permissions. By requiring the enclave's page protections up front, SGX will be able to enforce noexec paths when building enclaves. To prevent userspace from circumventing the allowed protections, do not allow PROT_{READ,WRITE,EXEC} mappings to an enclave without an associated enclave page, i.e. prevent creating a mapping with unchecked protection bits. Many alternatives[1][2] have been explored, most notably the concept of having SGX check (at load time) and save the permissions of the enclave loader. The permissions would then be enforced by SGX at run time, e.g. via mmap()/mprotect() hooks of some form. The basic functionality of pre-checking permissions is relatively straightforward, but supporting LSM auditing is complex and fraught with pitfalls. If auditing is done at the time of denial then the audit logs will potentially show a large number of false positives. Auditing when the denial is enforced, e.g. at mprotect(), suffers from its own problems, e.g.: - Requires LSMs to pre-generate audit messages so that they can be replayed by SGX when the denial is actually enforced. - System changes can result in stale audit messages, e.g. if files are removed from the system, an LSM profile is modified, etc... - A process could log what is essentially a false positive denial, e.g. if the current process has the requisite capability but the original enclave loader did not. Signed-off-by: Sean Christopherson --- arch/x86/include/uapi/asm/sgx.h | 6 ++-- arch/x86/kernel/cpu/sgx/driver/ioctl.c | 15 +++++--- arch/x86/kernel/cpu/sgx/driver/main.c | 49 ++++++++++++++++++++++++++ arch/x86/kernel/cpu/sgx/encl.h | 1 + tools/testing/selftests/x86/sgx/main.c | 32 +++++++++++++++-- 5 files changed, 94 insertions(+), 9 deletions(-) diff --git a/arch/x86/include/uapi/asm/sgx.h b/arch/x86/include/uapi/asm/sgx.h index 6dba9f282232..67a3babbb24d 100644 --- a/arch/x86/include/uapi/asm/sgx.h +++ b/arch/x86/include/uapi/asm/sgx.h @@ -35,15 +35,17 @@ struct sgx_enclave_create { * @src: address for the page data * @secinfo: address for the SECINFO data * @mrmask: bitmask for the measured 256 byte chunks + * @prot: maximal PROT_{READ,WRITE,EXEC} protections for the page */ struct sgx_enclave_add_page { __u64 addr; __u64 src; __u64 secinfo; - __u64 mrmask; + __u16 mrmask; + __u8 prot; + __u8 pad; }; - /** * struct sgx_enclave_init - parameter structure for the * %SGX_IOC_ENCLAVE_INIT ioctl diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c index 3552d642b26f..e18d2afd2aad 100644 --- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c +++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c @@ -2,6 +2,7 @@ // Copyright(c) 2016-19 Intel Corporation. 
#include +#include #include #include #include @@ -235,7 +236,8 @@ static int sgx_validate_secs(const struct sgx_secs *secs, } static struct sgx_encl_page *sgx_encl_page_alloc(struct sgx_encl *encl, - unsigned long addr) + unsigned long addr, + unsigned long prot) { struct sgx_encl_page *encl_page; int ret; @@ -247,6 +249,7 @@ static struct sgx_encl_page *sgx_encl_page_alloc(struct sgx_encl *encl, return ERR_PTR(-ENOMEM); encl_page->desc = addr; encl_page->encl = encl; + encl_page->vm_prot_bits = calc_vm_prot_bits(prot, 0); ret = radix_tree_insert(&encl->page_tree, PFN_DOWN(encl_page->desc), encl_page); if (ret) { @@ -517,7 +520,7 @@ static int __sgx_encl_add_page(struct sgx_encl *encl, static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr, void *data, struct sgx_secinfo *secinfo, - unsigned int mrmask) + unsigned int mrmask, unsigned long prot) { u64 page_type = secinfo->flags & SGX_SECINFO_PAGE_TYPE_MASK; struct sgx_encl_page *encl_page; @@ -543,7 +546,7 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr, goto out; } - encl_page = sgx_encl_page_alloc(encl, addr); + encl_page = sgx_encl_page_alloc(encl, addr, prot); if (IS_ERR(encl_page)) { ret = PTR_ERR(encl_page); goto out; @@ -584,6 +587,7 @@ static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg) struct sgx_enclave_add_page addp; struct sgx_secinfo secinfo; struct page *data_page; + unsigned long prot; void *data; int ret; @@ -605,7 +609,10 @@ static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg) goto out; } - ret = sgx_encl_add_page(encl, addp.addr, data, &secinfo, addp.mrmask); + prot = addp.prot & (PROT_READ | PROT_WRITE | PROT_EXEC); + + ret = sgx_encl_add_page(encl, addp.addr, data, &secinfo, addp.mrmask, + prot); if (ret) goto out; diff --git a/arch/x86/kernel/cpu/sgx/driver/main.c b/arch/x86/kernel/cpu/sgx/driver/main.c index 29384cdd0842..dabfe2a7245a 100644 --- a/arch/x86/kernel/cpu/sgx/driver/main.c +++ b/arch/x86/kernel/cpu/sgx/driver/main.c @@ -93,15 +93,64 @@ static long sgx_compat_ioctl(struct file *filep, unsigned int cmd, } #endif +/* + * Returns the AND of VM_{READ,WRITE,EXEC} permissions across all pages + * covered by the specific VMA. A non-existent (or yet to be added) enclave + * page is considered to have no RWX permissions, i.e. is inaccessible. + */ +static unsigned long sgx_allowed_rwx(struct sgx_encl *encl, + struct vm_area_struct *vma) +{ + unsigned long allowed_rwx = VM_READ | VM_WRITE | VM_EXEC; + unsigned long idx, idx_start, idx_end; + struct sgx_encl_page *page; + + idx_start = PFN_DOWN(vma->vm_start); + idx_end = PFN_DOWN(vma->vm_end - 1); + + for (idx = idx_start; idx <= idx_end; ++idx) { + /* + * No need to take encl->lock, vm_prot_bits is set prior to + * insertion and never changes, and racing with adding pages is + * a userspace bug. + */ + rcu_read_lock(); + page = radix_tree_lookup(&encl->page_tree, idx); + rcu_read_unlock(); + + /* Do not allow R|W|X to a non-existent page. 
*/ + if (!page) + allowed_rwx = 0; + else + allowed_rwx &= page->vm_prot_bits; + if (!allowed_rwx) + break; + } + + return allowed_rwx; +} + static int sgx_mmap(struct file *file, struct vm_area_struct *vma) { struct sgx_encl *encl = file->private_data; + unsigned long allowed_rwx; int ret; + allowed_rwx = sgx_allowed_rwx(encl, vma); + if (vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC) & ~allowed_rwx) + return -EACCES; + ret = sgx_encl_mm_add(encl, vma->vm_mm); if (ret) return ret; + if (!(allowed_rwx & VM_READ)) + vma->vm_flags &= ~VM_MAYREAD; + if (!(allowed_rwx & VM_WRITE)) + vma->vm_flags &= ~VM_MAYWRITE; + if (!(allowed_rwx & VM_EXEC)) + vma->vm_flags &= ~VM_MAYEXEC; + vma->vm_ops = &sgx_vm_ops; vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP | VM_IO; vma->vm_private_data = encl; diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h index 0904b3c20ed0..5ad018c8d74c 100644 --- a/arch/x86/kernel/cpu/sgx/encl.h +++ b/arch/x86/kernel/cpu/sgx/encl.h @@ -43,6 +43,7 @@ enum sgx_encl_page_desc { struct sgx_encl_page { unsigned long desc; + unsigned long vm_prot_bits; struct sgx_epc_page *epc_page; struct sgx_va_page *va_page; struct sgx_encl *encl; diff --git a/tools/testing/selftests/x86/sgx/main.c b/tools/testing/selftests/x86/sgx/main.c index e2265f841fb0..77e93f8e8a59 100644 --- a/tools/testing/selftests/x86/sgx/main.c +++ b/tools/testing/selftests/x86/sgx/main.c @@ -2,6 +2,7 @@ // Copyright(c) 2016-18 Intel Corporation. #include +#include #include #include #include @@ -18,6 +19,8 @@ #include "../../../../../arch/x86/kernel/cpu/sgx/arch.h" #include "../../../../../arch/x86/include/uapi/asm/sgx.h" +#define PAGE_SIZE 4096 + static const uint64_t MAGIC = 0x1122334455667788ULL; struct vdso_symtab { @@ -135,8 +138,7 @@ static bool encl_create(int dev_fd, unsigned long bin_size, for (secs->size = 4096; secs->size < bin_size; ) secs->size <<= 1; - base = mmap(NULL, secs->size, PROT_READ | PROT_WRITE | PROT_EXEC, - MAP_SHARED, dev_fd, 0); + base = mmap(NULL, secs->size, PROT_NONE, MAP_SHARED, dev_fd, 0); if (base == MAP_FAILED) { perror("mmap"); return false; @@ -147,7 +149,7 @@ static bool encl_create(int dev_fd, unsigned long bin_size, ioc.src = (unsigned long)secs; rc = ioctl(dev_fd, SGX_IOC_ENCLAVE_CREATE, &ioc); if (rc) { - fprintf(stderr, "ECREATE failed rc=%d.\n", rc); + fprintf(stderr, "ECREATE failed rc=%d, err=%d.\n", rc, errno); munmap(base, secs->size); return false; } @@ -160,8 +162,14 @@ static bool encl_add_page(int dev_fd, unsigned long addr, void *data, { struct sgx_enclave_add_page ioc; struct sgx_secinfo secinfo; + unsigned long prot; int rc; + if (flags == SGX_SECINFO_TCS) + prot = PROT_READ | PROT_WRITE; + else + prot = PROT_READ | PROT_WRITE | PROT_EXEC; + memset(&secinfo, 0, sizeof(secinfo)); secinfo.flags = flags; @@ -169,6 +177,7 @@ static bool encl_add_page(int dev_fd, unsigned long addr, void *data, ioc.mrmask = 0xFFFF; ioc.addr = addr; ioc.src = (uint64_t)data; + ioc.prot = prot; rc = ioctl(dev_fd, SGX_IOC_ENCLAVE_ADD_PAGE, &ioc); if (rc) { @@ -184,6 +193,7 @@ static bool encl_load(struct sgx_secs *secs, unsigned long bin_size) struct sgx_enclave_init ioc; uint64_t offset; uint64_t flags; + void *addr; int dev_fd; int rc; @@ -215,6 +225,22 @@ static bool encl_load(struct sgx_secs *secs, unsigned long bin_size) goto out_map; } + addr = mmap((void *)secs->base, PAGE_SIZE, PROT_READ | PROT_WRITE, + MAP_SHARED | MAP_FIXED, dev_fd, 0); + if (addr == MAP_FAILED) { + fprintf(stderr, "mmap() failed on TCS, errno=%d.\n", errno); + return false; + 
} + + addr = mmap((void *)(secs->base + PAGE_SIZE), bin_size - PAGE_SIZE, + PROT_READ | PROT_WRITE | PROT_EXEC, + MAP_SHARED | MAP_FIXED, dev_fd, 0); + if (addr == MAP_FAILED) { + fprintf(stderr, "mmap() failed, errno=%d.\n", errno); + return false; + } + + close(dev_fd); return true; out_map: From patchwork Wed Jun 19 22:23:54 2019 X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11005441 From: Sean Christopherson To: Jarkko Sakkinen Cc: linux-sgx@vger.kernel.org, linux-security-module@vger.kernel.org, selinux@vger.kernel.org, Bill Roberts , Casey Schaufler , James Morris , Dave Hansen , Cedric Xing , Andy Lutomirski , Jethro Beekman , "Dr . Greg Wettstein" , Stephen Smalley Subject: [RFC PATCH v4 05/12] x86/sgx: Enforce noexec filesystem restriction for enclaves Date: Wed, 19 Jun 2019 15:23:54 -0700 Message-Id: <20190619222401.14942-6-sean.j.christopherson@intel.com> In-Reply-To: <20190619222401.14942-1-sean.j.christopherson@intel.com> References: <20190619222401.14942-1-sean.j.christopherson@intel.com> Do not allow an enclave page to be mapped with PROT_EXEC if the source vma does not have VM_MAYEXEC. This effectively enforces noexec as do_mmap() clears VM_MAYEXEC if the vma is being loaded from a noexec path, i.e. prevents executing a file by loading it into an enclave.
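From the loader's point of view, the check keys off the vma backing the source buffer passed via sgx_enclave_add_page.src; a minimal userspace sketch of the intended effect (the file path, descriptors and surrounding variables are illustrative, not from this series):

    /* Source buffer comes from a file-backed mapping of the enclave binary. */
    int bin_fd = open("/path/to/enclave.bin", O_RDONLY);
    void *src = mmap(NULL, bin_size, PROT_READ, MAP_PRIVATE, bin_fd, 0);

    struct sgx_enclave_add_page addp = {
    	.addr    = encl_addr,
    	.src     = (uint64_t)src + offset,
    	.secinfo = (uint64_t)&secinfo,
    	.mrmask  = 0xffff,
    	.prot    = PROT_READ | PROT_EXEC,
    };
    /*
     * If /path/to/enclave.bin resides on a filesystem mounted noexec, the
     * source mapping lacks VM_MAYEXEC, so the ioctl() is expected to fail
     * with -EACCES because .prot includes PROT_EXEC.
     */
    ret = ioctl(sgx_fd, SGX_IOC_ENCLAVE_ADD_PAGE, &addp);
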
Signed-off-by: Sean Christopherson --- arch/x86/kernel/cpu/sgx/driver/ioctl.c | 42 +++++++++++++++++++++++--- 1 file changed, 37 insertions(+), 5 deletions(-) diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c index e18d2afd2aad..1fca70a36ce3 100644 --- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c +++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c @@ -564,6 +564,39 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr, return ret; } +static int sgx_encl_page_copy(void *dst, unsigned long src, unsigned long prot) +{ + struct vm_area_struct *vma; + int ret; + + /* Hold mmap_sem across copy_from_user() to avoid a TOCTOU race. */ + down_read(¤t->mm->mmap_sem); + + /* Query vma's VM_MAYEXEC as an indirect path_noexec() check. */ + if (prot & PROT_EXEC) { + vma = find_vma(current->mm, src); + if (!vma) { + ret = -EFAULT; + goto out; + } + + if (!(vma->vm_flags & VM_MAYEXEC)) { + ret = -EACCES; + goto out; + } + } + + if (copy_from_user(dst, (void __user *)src, PAGE_SIZE)) + ret = -EFAULT; + else + ret = 0; + +out: + up_read(¤t->mm->mmap_sem); + + return ret; +} + /** * sgx_ioc_enclave_add_page - handler for %SGX_IOC_ENCLAVE_ADD_PAGE * @@ -604,13 +637,12 @@ static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg) data = kmap(data_page); - if (copy_from_user((void *)data, (void __user *)addp.src, PAGE_SIZE)) { - ret = -EFAULT; - goto out; - } - prot = addp.prot & (PROT_READ | PROT_WRITE | PROT_EXEC); + ret = sgx_encl_page_copy(data, addp.src, prot); + if (ret) + goto out; + ret = sgx_encl_add_page(encl, addp.addr, data, &secinfo, addp.mrmask, prot); if (ret) From patchwork Wed Jun 19 22:23:55 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11005453 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0A9141986 for ; Wed, 19 Jun 2019 22:24:19 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id EE00E286AE for ; Wed, 19 Jun 2019 22:24:18 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id E2475288B7; Wed, 19 Jun 2019 22:24:18 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 6AEB8286AE for ; Wed, 19 Jun 2019 22:24:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730819AbfFSWYR (ORCPT ); Wed, 19 Jun 2019 18:24:17 -0400 Received: from mga18.intel.com ([134.134.136.126]:40155 "EHLO mga18.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730747AbfFSWYP (ORCPT ); Wed, 19 Jun 2019 18:24:15 -0400 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga106.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 19 Jun 2019 15:24:13 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.63,394,1557212400"; d="scan'208";a="150743764" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.36]) by orsmga007.jf.intel.com with ESMTP; 19 Jun 2019 15:24:13 -0700 From: Sean 
Christopherson To: Jarkko Sakkinen Cc: linux-sgx@vger.kernel.org, linux-security-module@vger.kernel.org, selinux@vger.kernel.org, Bill Roberts , Casey Schaufler , James Morris , Dave Hansen , Cedric Xing , Andy Lutomirski , Jethro Beekman , "Dr . Greg Wettstein" , Stephen Smalley Subject: [RFC PATCH v4 06/12] mm: Introduce vm_ops->may_mprotect() Date: Wed, 19 Jun 2019 15:23:55 -0700 Message-Id: <20190619222401.14942-7-sean.j.christopherson@intel.com> In-Reply-To: <20190619222401.14942-1-sean.j.christopherson@intel.com> References: <20190619222401.14942-1-sean.j.christopherson@intel.com> SGX will use ->may_mprotect() to invoke an SGX variant of the existing file_mprotect() and mmap_file() LSM hooks. The name may_mprotect() is intended to reflect the hook's purpose as a way to restrict mprotect() as opposed to a wholesale replacement. Due to the nature of SGX and its Enclave Page Cache (EPC), all enclave VMAs are backed by a single file, i.e. /dev/sgx/enclave, that must be MAP_SHARED. Furthermore, all enclaves need read, write and execute VMAs. As a result, applying W^X restrictions on /dev/sgx/enclave using existing LSM hooks is for all intents and purposes impossible, e.g. denying either W or X would deny access to *any* enclave. By hooking mprotect(), SGX can invoke an SGX specific LSM hook, which in turn allows LSMs to enforce W^X policies. Alternatively, SGX could provide a helper to identify enclaves given a vma or file. LSMs could then check if a mapping is for an enclave and take action accordingly. A second alternative would be to have SGX implement its own LSM hooks for file_mprotect() and mmap_file(), using them to "forward" the call to the SGX specific hook. The major con to both alternatives is that they provide zero flexibility for the SGX specific LSM hook. The "is_sgx_enclave()" helper doesn't allow SGX to supply any additional information whatsoever, and the mmap_file() hook is called before the final address is known, e.g. SGX can't provide any information about the specific enclave being mapped.
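As a usage sketch (a hypothetical driver, not code from this series), a vm_ops implementation can use the new hook to veto specific transitions before the generic security_file_mprotect() check runs:

    /* Sketch: refuse to let userspace mprotect() this mapping to executable. */
    static int foo_may_mprotect(struct vm_area_struct *vma, unsigned long start,
    			    unsigned long end, unsigned long prot)
    {
    	if (prot & PROT_EXEC)
    		return -EACCES;
    	return 0;
    }

    static const struct vm_operations_struct foo_vm_ops = {
    	/* .fault, .access, etc. as usual for the driver */
    	.may_mprotect	= foo_may_mprotect,
    };
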
Signed-off-by: Sean Christopherson --- include/linux/mm.h | 2 ++ mm/mprotect.c | 15 +++++++++++---- 2 files changed, 13 insertions(+), 4 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 0e8834ac32b7..b11ec420c8d7 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -458,6 +458,8 @@ struct vm_operations_struct { void (*close)(struct vm_area_struct * area); int (*split)(struct vm_area_struct * area, unsigned long addr); int (*mremap)(struct vm_area_struct * area); + int (*may_mprotect)(struct vm_area_struct *vma, unsigned long start, + unsigned long end, unsigned long prot); vm_fault_t (*fault)(struct vm_fault *vmf); vm_fault_t (*huge_fault)(struct vm_fault *vmf, enum page_entry_size pe_size); diff --git a/mm/mprotect.c b/mm/mprotect.c index bf38dfbbb4b4..18732543b295 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -547,13 +547,20 @@ static int do_mprotect_pkey(unsigned long start, size_t len, goto out; } - error = security_file_mprotect(vma, reqprot, prot); - if (error) - goto out; - tmp = vma->vm_end; if (tmp > end) tmp = end; + + if (vma->vm_ops && vma->vm_ops->may_mprotect) { + error = vma->vm_ops->may_mprotect(vma, nstart, tmp, prot); + if (error) + goto out; + } + + error = security_file_mprotect(vma, reqprot, prot); + if (error) + goto out; + error = mprotect_fixup(vma, &prev, nstart, tmp, newflags); if (error) goto out; From patchwork Wed Jun 19 22:23:56 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11005485 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B1A391398 for ; Wed, 19 Jun 2019 22:24:22 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9FB07288B3 for ; Wed, 19 Jun 2019 22:24:22 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 94171288B9; Wed, 19 Jun 2019 22:24:22 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=unavailable version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 106E4288B3 for ; Wed, 19 Jun 2019 22:24:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730810AbfFSWYR (ORCPT ); Wed, 19 Jun 2019 18:24:17 -0400 Received: from mga18.intel.com ([134.134.136.126]:40162 "EHLO mga18.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730785AbfFSWYQ (ORCPT ); Wed, 19 Jun 2019 18:24:16 -0400 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga106.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 19 Jun 2019 15:24:13 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.63,394,1557212400"; d="scan'208";a="150743768" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.36]) by orsmga007.jf.intel.com with ESMTP; 19 Jun 2019 15:24:13 -0700 From: Sean Christopherson To: Jarkko Sakkinen Cc: linux-sgx@vger.kernel.org, linux-security-module@vger.kernel.org, selinux@vger.kernel.org, Bill Roberts , Casey Schaufler , James Morris , Dave Hansen , Cedric Xing , Andy Lutomirski , Jethro Beekman , "Dr . 
Greg Wettstein" , Stephen Smalley Subject: [RFC PATCH v4 07/12] LSM: x86/sgx: Introduce ->enclave_map() hook for Intel SGX Date: Wed, 19 Jun 2019 15:23:56 -0700 Message-Id: <20190619222401.14942-8-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190619222401.14942-1-sean.j.christopherson@intel.com> References: <20190619222401.14942-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-sgx-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-sgx@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP enclave_map() is an SGX specific variant of file_mprotect() and mmap_file(), and is provided so that LSMs can apply W^X restrictions to enclaves. Due to the nature of SGX and its Enclave Page Cache (EPC), all enclave VMAs are backed by a single file, i.e. /dev/sgx/enclave, that must be MAP_SHARED. Furthermore, all enclaves need read, write and execute VMAs. As a result, applying W^X restrictions on /dev/sgx/enclave using existing LSM hooks is for all intents and purposes impossible, e.g. denying either W or X would deny access to any enclave. Note, extensive discussion yielded no sane alternative to some form of SGX specific LSM hook[1]. [1] https://lkml.kernel.org/r/CALCETrXf8mSK45h7sTK5Wf+pXLVn=Bjsc_RLpgO-h-qdzBRo5Q@mail.gmail.com Signed-off-by: Sean Christopherson --- arch/x86/kernel/cpu/sgx/driver/main.c | 9 ++++++++- arch/x86/kernel/cpu/sgx/encl.c | 12 ++++++++++++ include/linux/lsm_hooks.h | 12 ++++++++++++ include/linux/security.h | 11 +++++++++++ security/security.c | 7 +++++++ 5 files changed, 50 insertions(+), 1 deletion(-) diff --git a/arch/x86/kernel/cpu/sgx/driver/main.c b/arch/x86/kernel/cpu/sgx/driver/main.c index dabfe2a7245a..4379a2fb1f82 100644 --- a/arch/x86/kernel/cpu/sgx/driver/main.c +++ b/arch/x86/kernel/cpu/sgx/driver/main.c @@ -133,13 +133,20 @@ static unsigned long sgx_allowed_rwx(struct sgx_encl *encl, static int sgx_mmap(struct file *file, struct vm_area_struct *vma) { struct sgx_encl *encl = file->private_data; - unsigned long allowed_rwx; + unsigned long allowed_rwx, prot; int ret; allowed_rwx = sgx_allowed_rwx(encl, vma); if (vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC) & ~allowed_rwx) return -EACCES; + prot = _calc_vm_trans(vma->vm_flags, VM_READ, PROT_READ) | + _calc_vm_trans(vma->vm_flags, VM_WRITE, PROT_WRITE) | + _calc_vm_trans(vma->vm_flags, VM_EXEC, PROT_EXEC); + ret = security_enclave_map(prot); + if (ret) + return ret; + ret = sgx_encl_mm_add(encl, vma->vm_mm); if (ret) return ret; diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c index c6436bbd4a68..059d90dcaa27 100644 --- a/arch/x86/kernel/cpu/sgx/encl.c +++ b/arch/x86/kernel/cpu/sgx/encl.c @@ -2,6 +2,7 @@ // Copyright(c) 2016-18 Intel Corporation. #include +#include #include #include #include @@ -387,9 +388,20 @@ static int sgx_vma_access(struct vm_area_struct *vma, unsigned long addr, return ret < 0 ? 
ret : i; } +#ifdef CONFIG_SECURITY +static int sgx_vma_mprotect(struct vm_area_struct *vma, unsigned long start, + unsigned long end, unsigned long prot) +{ + return security_enclave_map(prot); +} +#endif + const struct vm_operations_struct sgx_vm_ops = { .fault = sgx_vma_fault, .access = sgx_vma_access, +#ifdef CONFIG_SECURITY + .may_mprotect = sgx_vma_mprotect, +#endif }; /** diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h index 47f58cfb6a19..7c1357105e61 100644 --- a/include/linux/lsm_hooks.h +++ b/include/linux/lsm_hooks.h @@ -1446,6 +1446,11 @@ * @bpf_prog_free_security: * Clean up the security information stored inside bpf prog. * + * Security hooks for Intel SGX enclaves. + * + * @enclave_map: + * @prot contains the protection that will be applied by the kernel. + * Return 0 if permission is granted. */ union security_list_options { int (*binder_set_context_mgr)(struct task_struct *mgr); @@ -1807,6 +1812,10 @@ union security_list_options { int (*bpf_prog_alloc_security)(struct bpf_prog_aux *aux); void (*bpf_prog_free_security)(struct bpf_prog_aux *aux); #endif /* CONFIG_BPF_SYSCALL */ + +#ifdef CONFIG_INTEL_SGX + int (*enclave_map)(unsigned long prot); +#endif /* CONFIG_INTEL_SGX */ }; struct security_hook_heads { @@ -2046,6 +2055,9 @@ struct security_hook_heads { struct hlist_head bpf_prog_alloc_security; struct hlist_head bpf_prog_free_security; #endif /* CONFIG_BPF_SYSCALL */ +#ifdef CONFIG_INTEL_SGX + struct hlist_head enclave_map; +#endif /* CONFIG_INTEL_SGX */ } __randomize_layout; /* diff --git a/include/linux/security.h b/include/linux/security.h index 659071c2e57c..6a1f54ba6794 100644 --- a/include/linux/security.h +++ b/include/linux/security.h @@ -1829,5 +1829,16 @@ static inline void security_bpf_prog_free(struct bpf_prog_aux *aux) #endif /* CONFIG_SECURITY */ #endif /* CONFIG_BPF_SYSCALL */ +#ifdef CONFIG_INTEL_SGX +#ifdef CONFIG_SECURITY +int security_enclave_map(unsigned long prot); +#else +static inline int security_enclave_map(unsigned long prot) +{ + return 0; +} +#endif /* CONFIG_SECURITY */ +#endif /* CONFIG_INTEL_SGX */ + #endif /* ! 
__LINUX_SECURITY_H */ diff --git a/security/security.c b/security/security.c index 613a5c00e602..03951e08bdfc 100644 --- a/security/security.c +++ b/security/security.c @@ -2359,3 +2359,10 @@ void security_bpf_prog_free(struct bpf_prog_aux *aux) call_void_hook(bpf_prog_free_security, aux); } #endif /* CONFIG_BPF_SYSCALL */ + +#ifdef CONFIG_INTEL_SGX +int security_enclave_map(unsigned long prot) +{ + return call_int_hook(enclave_map, 0, prot); +} +#endif /* CONFIG_INTEL_SGX */ From patchwork Wed Jun 19 22:23:57 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11005455 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6AE421398 for ; Wed, 19 Jun 2019 22:24:19 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 5A843288B3 for ; Wed, 19 Jun 2019 22:24:19 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 4F0D8288B9; Wed, 19 Jun 2019 22:24:19 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id DE3C0288B3 for ; Wed, 19 Jun 2019 22:24:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730833AbfFSWYS (ORCPT ); Wed, 19 Jun 2019 18:24:18 -0400 Received: from mga18.intel.com ([134.134.136.126]:40162 "EHLO mga18.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726482AbfFSWYR (ORCPT ); Wed, 19 Jun 2019 18:24:17 -0400 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga106.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 19 Jun 2019 15:24:13 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.63,394,1557212400"; d="scan'208";a="150743771" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.36]) by orsmga007.jf.intel.com with ESMTP; 19 Jun 2019 15:24:13 -0700 From: Sean Christopherson To: Jarkko Sakkinen Cc: linux-sgx@vger.kernel.org, linux-security-module@vger.kernel.org, selinux@vger.kernel.org, Bill Roberts , Casey Schaufler , James Morris , Dave Hansen , Cedric Xing , Andy Lutomirski , Jethro Beekman , "Dr . Greg Wettstein" , Stephen Smalley Subject: [RFC PATCH v4 08/12] security/selinux: Require SGX_MAPWX to map enclave page WX Date: Wed, 19 Jun 2019 15:23:57 -0700 Message-Id: <20190619222401.14942-9-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190619222401.14942-1-sean.j.christopherson@intel.com> References: <20190619222401.14942-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-sgx-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-sgx@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Hook enclave_map() to require a new per-process capability, SGX_MAPWX, when mapping an enclave as simultaneously writable and executable. Note, @prot contains the actual protection bits that will be set by the kernel, not the maximal protection bits specified by userspace when the page was first loaded into the enclave. 
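For illustration, a minimal userspace sketch of how the two cases reach the hook, not part of the patch: it assumes an fd opened on /dev/sgx/enclave for an already-built enclave and MAP_FIXED mappings inside it, and the helper names are hypothetical. The first mapping needs no new permission; the second triggers the sgx_mapwx check added below. Both remain subject to the driver's own check that the requested bits fall within the pages' maximal protections.

#include <sys/mman.h>
#include <sys/types.h>

/* Hypothetical helpers; enclave_fd refers to /dev/sgx/enclave. */
static void *map_enclave_rx(int enclave_fd, void *addr, size_t len, off_t off)
{
	/* R+X only: enclave_map() is called with a prot lacking PROT_WRITE,
	 * so no process2:sgx_mapwx permission is required. */
	return mmap(addr, len, PROT_READ | PROT_EXEC,
		    MAP_SHARED | MAP_FIXED, enclave_fd, off);
}

static void *map_enclave_rwx(int enclave_fd, void *addr, size_t len, off_t off)
{
	/* Simultaneous W+X: selinux_enclave_map() sees PROT_WRITE and
	 * PROT_EXEC together and requires process2:sgx_mapwx. */
	return mmap(addr, len, PROT_READ | PROT_WRITE | PROT_EXEC,
		    MAP_SHARED | MAP_FIXED, enclave_fd, off);
}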
Signed-off-by: Sean Christopherson --- security/selinux/hooks.c | 21 +++++++++++++++++++++ security/selinux/include/classmap.h | 3 ++- 2 files changed, 23 insertions(+), 1 deletion(-) diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c index 3ec702cf46ca..fc239e541b62 100644 --- a/security/selinux/hooks.c +++ b/security/selinux/hooks.c @@ -6726,6 +6726,23 @@ static void selinux_bpf_prog_free(struct bpf_prog_aux *aux) } #endif +#ifdef CONFIG_INTEL_SGX +static int selinux_enclave_map(unsigned long prot) +{ + const struct cred *cred = current_cred(); + u32 sid = cred_sid(cred); + + /* SGX is supported only in 64-bit kernels. */ + WARN_ON_ONCE(!default_noexec); + + if ((prot & PROT_EXEC) && (prot & PROT_WRITE)) + return avc_has_perm(&selinux_state, sid, sid, + SECCLASS_PROCESS2, PROCESS2__SGX_MAPWX, + NULL); + return 0; +} +#endif + struct lsm_blob_sizes selinux_blob_sizes __lsm_ro_after_init = { .lbs_cred = sizeof(struct task_security_struct), .lbs_file = sizeof(struct file_security_struct), @@ -6968,6 +6985,10 @@ static struct security_hook_list selinux_hooks[] __lsm_ro_after_init = { LSM_HOOK_INIT(bpf_map_free_security, selinux_bpf_map_free), LSM_HOOK_INIT(bpf_prog_free_security, selinux_bpf_prog_free), #endif + +#ifdef CONFIG_INTEL_SGX + LSM_HOOK_INIT(enclave_map, selinux_enclave_map), +#endif }; static __init int selinux_init(void) diff --git a/security/selinux/include/classmap.h b/security/selinux/include/classmap.h index 201f7e588a29..cfd91e879bdf 100644 --- a/security/selinux/include/classmap.h +++ b/security/selinux/include/classmap.h @@ -51,7 +51,8 @@ struct security_class_mapping secclass_map[] = { "execmem", "execstack", "execheap", "setkeycreate", "setsockcreate", "getrlimit", NULL } }, { "process2", - { "nnp_transition", "nosuid_transition", NULL } }, + { "nnp_transition", "nosuid_transition", + "sgx_mapwx", NULL } }, { "system", { "ipc_info", "syslog_read", "syslog_mod", "syslog_console", "module_request", "module_load", NULL } }, From patchwork Wed Jun 19 22:23:58 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11005493 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4A71029B1 for ; Wed, 19 Jun 2019 22:24:23 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3B4CD288B6 for ; Wed, 19 Jun 2019 22:24:23 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 2F6CF286AE; Wed, 19 Jun 2019 22:24:23 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=unavailable version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8FAFA288B7 for ; Wed, 19 Jun 2019 22:24:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730844AbfFSWYV (ORCPT ); Wed, 19 Jun 2019 18:24:21 -0400 Received: from mga18.intel.com ([134.134.136.126]:40155 "EHLO mga18.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730796AbfFSWYR (ORCPT ); Wed, 19 Jun 2019 18:24:17 -0400 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from 
orsmga007.jf.intel.com ([10.7.209.58]) by orsmga106.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 19 Jun 2019 15:24:13 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.63,394,1557212400"; d="scan'208";a="150743774" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.36]) by orsmga007.jf.intel.com with ESMTP; 19 Jun 2019 15:24:13 -0700 From: Sean Christopherson To: Jarkko Sakkinen Cc: linux-sgx@vger.kernel.org, linux-security-module@vger.kernel.org, selinux@vger.kernel.org, Bill Roberts , Casey Schaufler , James Morris , Dave Hansen , Cedric Xing , Andy Lutomirski , Jethro Beekman , "Dr . Greg Wettstein" , Stephen Smalley Subject: [RFC PATCH v4 09/12] LSM: x86/sgx: Introduce ->enclave_load() hook for Intel SGX Date: Wed, 19 Jun 2019 15:23:58 -0700 Message-Id: <20190619222401.14942-10-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190619222401.14942-1-sean.j.christopherson@intel.com> References: <20190619222401.14942-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-sgx-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-sgx@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP enclave_load() is roughly analogous to the existing file_mprotect(). Due to the nature of SGX and its Enclave Page Cache (EPC), all enclave VMAs are backed by a single file, i.e. /dev/sgx/enclave, that must be MAP_SHARED. Furthermore, all enclaves need read, write and execute VMAs. As a result, the existing/standard call to file_mprotect() does not provide any meaningful security for enclaves since an LSM can only deny/grant access to the EPC as a whole. security_enclave_load() is called when SGX is first loading an enclave page, i.e. copying a page from normal memory into the EPC. Although the prototype for enclave_load() is similar to file_mprotect(), e.g. SGX could theoretically use file_mprotect() and set reqprot=prot, a separate hook is desirable as the semantics of an enclave's protection bits are different than those of vmas, e.g. an enclave page tracks the maximal set of protections, whereas file_mprotect() operates on the actual protections being provided. Enclaves also have unique security properties, e.g. measured code, that LSMs may want to consider. In other words, LSMs will likely want to implement different policies for enclave page protections. Note, extensive discussion yielded no sane alternative to some form of SGX specific LSM hook[1]. 
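As a rough sketch of how an LSM other than SELinux could consume the proposed hook (concrete SELinux and AppArmor implementations follow later in the series), the fragment below registers an enclave_load handler with an arbitrary example policy. The example_* names are hypothetical, the policy shown is purely illustrative, and registration via security_add_hooks() appears only in the trailing comment.

#include <linux/errno.h>
#include <linux/lsm_hooks.h>
#include <linux/mm.h>
#include <linux/mman.h>

/* Hypothetical example LSM, not part of this series. */
static int example_enclave_load(struct vm_area_struct *vma, unsigned long prot,
				bool measured)
{
	/* Arbitrary policy: only measured, file-backed code may be loaded
	 * with executable (maximal) protections. */
	if (!(prot & PROT_EXEC))
		return 0;
	if (!measured || !vma->vm_file)
		return -EACCES;
	return 0;
}

static struct security_hook_list example_hooks[] __lsm_ro_after_init = {
#ifdef CONFIG_INTEL_SGX
	LSM_HOOK_INIT(enclave_load, example_enclave_load),
#endif
};

/* Registered from the LSM's init function via:
 *	security_add_hooks(example_hooks, ARRAY_SIZE(example_hooks), "example");
 */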
[1] https://lkml.kernel.org/r/CALCETrXf8mSK45h7sTK5Wf+pXLVn=Bjsc_RLpgO-h-qdzBRo5Q@mail.gmail.com Signed-off-by: Sean Christopherson --- arch/x86/kernel/cpu/sgx/driver/ioctl.c | 32 ++++++++++++++------------ include/linux/lsm_hooks.h | 8 +++++++ include/linux/security.h | 7 ++++++ security/security.c | 5 ++++ 4 files changed, 37 insertions(+), 15 deletions(-) diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c index 1fca70a36ce3..ae1b4d69441c 100644 --- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c +++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c @@ -9,6 +9,7 @@ #include #include #include +#include #include #include #include @@ -564,7 +565,8 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr, return ret; } -static int sgx_encl_page_copy(void *dst, unsigned long src, unsigned long prot) +static int sgx_encl_page_copy(void *dst, unsigned long src, unsigned long prot, + u16 mrmask) { struct vm_area_struct *vma; int ret; @@ -572,24 +574,24 @@ static int sgx_encl_page_copy(void *dst, unsigned long src, unsigned long prot) /* Hold mmap_sem across copy_from_user() to avoid a TOCTOU race. */ down_read(¤t->mm->mmap_sem); + vma = find_vma(current->mm, src); + if (!vma) { + ret = -EFAULT; + goto out; + } + /* Query vma's VM_MAYEXEC as an indirect path_noexec() check. */ - if (prot & PROT_EXEC) { - vma = find_vma(current->mm, src); - if (!vma) { - ret = -EFAULT; - goto out; - } - - if (!(vma->vm_flags & VM_MAYEXEC)) { - ret = -EACCES; - goto out; - } + if ((prot & PROT_EXEC) && !(vma->vm_flags & VM_MAYEXEC)) { + ret = -EACCES; + goto out; } + ret = security_enclave_load(vma, prot, mrmask == 0xffff); + if (ret) + goto out; + if (copy_from_user(dst, (void __user *)src, PAGE_SIZE)) ret = -EFAULT; - else - ret = 0; out: up_read(¤t->mm->mmap_sem); @@ -639,7 +641,7 @@ static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg) prot = addp.prot & (PROT_READ | PROT_WRITE | PROT_EXEC); - ret = sgx_encl_page_copy(data, addp.src, prot); + ret = sgx_encl_page_copy(data, addp.src, prot, addp.mrmask); if (ret) goto out; diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h index 7c1357105e61..3bc92c65f287 100644 --- a/include/linux/lsm_hooks.h +++ b/include/linux/lsm_hooks.h @@ -1451,6 +1451,11 @@ * @enclave_map: * @prot contains the protection that will be applied by the kernel. * Return 0 if permission is granted. + * + * @enclave_load: + * @vma: the source memory region of the enclave page being loaded. + * @prot: the (maximal) protections of the enclave page. + * Return 0 if permission is granted. 
*/ union security_list_options { int (*binder_set_context_mgr)(struct task_struct *mgr); @@ -1815,6 +1820,8 @@ union security_list_options { #ifdef CONFIG_INTEL_SGX int (*enclave_map)(unsigned long prot); + int (*enclave_load)(struct vm_area_struct *vma, unsigned long prot, + bool measured); #endif /* CONFIG_INTEL_SGX */ }; @@ -2057,6 +2064,7 @@ struct security_hook_heads { #endif /* CONFIG_BPF_SYSCALL */ #ifdef CONFIG_INTEL_SGX struct hlist_head enclave_map; + struct hlist_head enclave_load; #endif /* CONFIG_INTEL_SGX */ } __randomize_layout; diff --git a/include/linux/security.h b/include/linux/security.h index 6a1f54ba6794..572ddfc53039 100644 --- a/include/linux/security.h +++ b/include/linux/security.h @@ -1832,11 +1832,18 @@ static inline void security_bpf_prog_free(struct bpf_prog_aux *aux) #ifdef CONFIG_INTEL_SGX #ifdef CONFIG_SECURITY int security_enclave_map(unsigned long prot); +int security_enclave_load(struct vm_area_struct *vma, unsigned long prot, + bool measured); #else static inline int security_enclave_map(unsigned long prot) { return 0; } +static inline int security_enclave_load(struct vm_area_struct *vma, + unsigned long prot, bool measured) +{ + return 0; +} #endif /* CONFIG_SECURITY */ #endif /* CONFIG_INTEL_SGX */ diff --git a/security/security.c b/security/security.c index 03951e08bdfc..00f483beb1cc 100644 --- a/security/security.c +++ b/security/security.c @@ -2365,4 +2365,9 @@ int security_enclave_map(unsigned long prot) { return call_int_hook(enclave_map, 0, prot); } +int security_enclave_load(struct vm_area_struct *vma, unsigned long prot, + bool measured) +{ + return call_int_hook(enclave_load, 0, vma, prot, measured); +} #endif /* CONFIG_INTEL_SGX */ From patchwork Wed Jun 19 22:23:59 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11005473 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C51F328DC for ; Wed, 19 Jun 2019 22:24:20 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B0F2F288AA for ; Wed, 19 Jun 2019 22:24:20 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id A56AF288B6; Wed, 19 Jun 2019 22:24:20 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=unavailable version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 36C2E288AA for ; Wed, 19 Jun 2019 22:24:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730829AbfFSWYS (ORCPT ); Wed, 19 Jun 2019 18:24:18 -0400 Received: from mga18.intel.com ([134.134.136.126]:40157 "EHLO mga18.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730752AbfFSWYR (ORCPT ); Wed, 19 Jun 2019 18:24:17 -0400 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga106.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 19 Jun 2019 15:24:13 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.63,394,1557212400"; d="scan'208";a="150743777" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.36]) by 
orsmga007.jf.intel.com with ESMTP; 19 Jun 2019 15:24:13 -0700 From: Sean Christopherson To: Jarkko Sakkinen Cc: linux-sgx@vger.kernel.org, linux-security-module@vger.kernel.org, selinux@vger.kernel.org, Bill Roberts , Casey Schaufler , James Morris , Dave Hansen , Cedric Xing , Andy Lutomirski , Jethro Beekman , "Dr . Greg Wettstein" , Stephen Smalley Subject: [RFC PATCH v4 10/12] security/selinux: Add enclave_load() implementation Date: Wed, 19 Jun 2019 15:23:59 -0700 Message-Id: <20190619222401.14942-11-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190619222401.14942-1-sean.j.christopherson@intel.com> References: <20190619222401.14942-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-sgx-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-sgx@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP The goal of selinux_enclave_load() is to provide a facsimile of the existing selinux_file_mprotect() and file_map_prot_check() policies, but tailored to the unique properties of SGX. For example, an enclave page is technically backed by a MAP_SHARED file, but the "file" is essentially shared memory that is never persisted anywhere and also requires execute permissions (for some pages). Enclaves are also less privileged than normal user code, e.g. SYSCALL instructions #UD if attempted in an enclave. For this reason, add SGX specific permissions instead of reusing existing permissions such as FILE__EXECUTE so that policies can allow running code in an enclave, or allow dynamically loading code in an enclave, without having to grant the same capability to normal user code outside of the enclave.

Intended use of each permission:

 - SGX_EXECDIRTY: dynamically load code within the enclave itself
 - SGX_EXECUNMR: load unmeasured code into the enclave, e.g. Graphene
 - SGX_EXECANON: load code from anonymous memory (likely Graphene)
 - SGX_EXECUTE: load an enclave from a file, i.e. normal behavior

Note, equivalents to FILE__READ and FILE__WRITE are intentionally never required. Writes to the enclave page are contained to the EPC, i.e. never hit the original file, and read permissions have already been vetted (or the VMA doesn't have PROT_READ, in which case loading the page into the enclave will fail). 
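To make the intended mapping from load paths to permissions concrete, here is a hedged userspace-view sketch. The ioctl name and the addr/src/prot/mrmask fields are taken from this series' driver code; the UAPI header providing them is assumed, and fields not relevant to the LSM checks (e.g. the SECINFO pointer) are omitted for brevity.

#include <sys/ioctl.h>
#include <sys/mman.h>
/* Assumes this series' SGX UAPI header for struct sgx_enclave_add_page and
 * SGX_IOC_ENCLAVE_ADD_PAGE. */

static int add_exec_page(int enclave_fd, unsigned long enclave_addr,
			 void *src_page)
{
	struct sgx_enclave_add_page addp = {
		.addr   = enclave_addr,
		.src    = (unsigned long)src_page,
		.prot   = PROT_READ | PROT_EXEC, /* maximal protections */
		.mrmask = 0xffff,                /* fully measured */
	};

	/*
	 * Permission exercised by selinux_enclave_load() for this call:
	 *  - src_page in a shared or unmodified private file mapping:
	 *    file { sgx_execute }
	 *  - src_page in anonymous or CoW'd private memory:
	 *    process2 { sgx_execanon }
	 *  - .mrmask != 0xffff (unmeasured content): process2 { sgx_execunmr }
	 *  - .prot also containing PROT_WRITE: process2 { sgx_execdirty }
	 * Pages added without PROT_EXEC require none of the above.
	 */
	return ioctl(enclave_fd, SGX_IOC_ENCLAVE_ADD_PAGE, &addp);
}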
Signed-off-by: Sean Christopherson --- security/selinux/hooks.c | 55 +++++++++++++++++++++++++++-- security/selinux/include/classmap.h | 5 +-- 2 files changed, 55 insertions(+), 5 deletions(-) diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c index fc239e541b62..8a431168e454 100644 --- a/security/selinux/hooks.c +++ b/security/selinux/hooks.c @@ -6727,6 +6727,12 @@ static void selinux_bpf_prog_free(struct bpf_prog_aux *aux) #endif #ifdef CONFIG_INTEL_SGX +static inline int sgx_has_perm(u32 sid, u32 requested) +{ + return avc_has_perm(&selinux_state, sid, sid, + SECCLASS_PROCESS2, requested, NULL); +} + static int selinux_enclave_map(unsigned long prot) { const struct cred *cred = current_cred(); @@ -6736,11 +6742,53 @@ static int selinux_enclave_map(unsigned long prot) WARN_ON_ONCE(!default_noexec); if ((prot & PROT_EXEC) && (prot & PROT_WRITE)) - return avc_has_perm(&selinux_state, sid, sid, - SECCLASS_PROCESS2, PROCESS2__SGX_MAPWX, - NULL); + return sgx_has_perm(sid, PROCESS2__SGX_MAPWX); + return 0; } + +static int selinux_enclave_load(struct vm_area_struct *vma, unsigned long prot, + bool measured) +{ + const struct cred *cred = current_cred(); + u32 sid = cred_sid(cred); + int ret; + + /* SGX is supported only in 64-bit kernels. */ + WARN_ON_ONCE(!default_noexec); + + /* Only executable enclave pages are restricted in any way. */ + if (!(prot & PROT_EXEC)) + return 0; + + /* + * WX at load time only requires EXECDIRTY, e.g. to allow W->X. Actual + * WX mappings require MAPWX (see selinux_enclave_map()). + */ + if (prot & PROT_WRITE) { + ret = sgx_has_perm(sid, PROCESS2__SGX_EXECDIRTY); + if (ret) + goto out; + } + if (!measured) { + ret = sgx_has_perm(sid, PROCESS2__SGX_EXECUNMR); + if (ret) + goto out; + } + + if (!vma->vm_file || IS_PRIVATE(file_inode(vma->vm_file)) || + vma->anon_vma) + /* + * Loading enclave code from an anonymous mapping or from a + * modified private file mapping. + */ + ret = sgx_has_perm(sid, PROCESS2__SGX_EXECANON); + else + /* Loading from a shared or unmodified private file mapping. 
*/ + ret = file_has_perm(cred, vma->vm_file, FILE__SGX_EXECUTE); +out: + return ret; +} #endif struct lsm_blob_sizes selinux_blob_sizes __lsm_ro_after_init = { @@ -6988,6 +7036,7 @@ static struct security_hook_list selinux_hooks[] __lsm_ro_after_init = { #ifdef CONFIG_INTEL_SGX LSM_HOOK_INIT(enclave_map, selinux_enclave_map), + LSM_HOOK_INIT(enclave_load, selinux_enclave_load), #endif }; diff --git a/security/selinux/include/classmap.h b/security/selinux/include/classmap.h index cfd91e879bdf..baa1757be46a 100644 --- a/security/selinux/include/classmap.h +++ b/security/selinux/include/classmap.h @@ -7,7 +7,7 @@ #define COMMON_FILE_PERMS COMMON_FILE_SOCK_PERMS, "unlink", "link", \ "rename", "execute", "quotaon", "mounton", "audit_access", \ - "open", "execmod" + "open", "execmod", "sgx_execute" #define COMMON_SOCK_PERMS COMMON_FILE_SOCK_PERMS, "bind", "connect", \ "listen", "accept", "getopt", "setopt", "shutdown", "recvfrom", \ @@ -52,7 +52,8 @@ struct security_class_mapping secclass_map[] = { "setsockcreate", "getrlimit", NULL } }, { "process2", { "nnp_transition", "nosuid_transition", - "sgx_mapwx", NULL } }, + "sgx_mapwx", "sgx_execdirty", "sgx_execanon", "sgx_execunmr", + NULL } }, { "system", { "ipc_info", "syslog_read", "syslog_mod", "syslog_console", "module_request", "module_load", NULL } }, From patchwork Wed Jun 19 22:24:00 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11005467 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5C1682D47 for ; Wed, 19 Jun 2019 22:24:20 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 4CCC6288B3 for ; Wed, 19 Jun 2019 22:24:20 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 4171D288BB; Wed, 19 Jun 2019 22:24:20 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=unavailable version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E19BA288B3 for ; Wed, 19 Jun 2019 22:24:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726482AbfFSWYS (ORCPT ); Wed, 19 Jun 2019 18:24:18 -0400 Received: from mga18.intel.com ([134.134.136.126]:40155 "EHLO mga18.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726322AbfFSWYS (ORCPT ); Wed, 19 Jun 2019 18:24:18 -0400 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga106.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 19 Jun 2019 15:24:13 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.63,394,1557212400"; d="scan'208";a="150743780" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.36]) by orsmga007.jf.intel.com with ESMTP; 19 Jun 2019 15:24:13 -0700 From: Sean Christopherson To: Jarkko Sakkinen Cc: linux-sgx@vger.kernel.org, linux-security-module@vger.kernel.org, selinux@vger.kernel.org, Bill Roberts , Casey Schaufler , James Morris , Dave Hansen , Cedric Xing , Andy Lutomirski , Jethro Beekman , "Dr . 
Greg Wettstein" , Stephen Smalley Subject: [RFC PATCH v4 11/12] security/apparmor: Add enclave_load() implementation Date: Wed, 19 Jun 2019 15:24:00 -0700 Message-Id: <20190619222401.14942-12-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190619222401.14942-1-sean.j.christopherson@intel.com> References: <20190619222401.14942-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-sgx-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-sgx@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Require execute permissions when loading an enclave from a file. Signed-off-by: Sean Christopherson --- security/apparmor/include/audit.h | 2 ++ security/apparmor/lsm.c | 14 ++++++++++++++ 2 files changed, 16 insertions(+) diff --git a/security/apparmor/include/audit.h b/security/apparmor/include/audit.h index ee559bc2acb8..84470483e04d 100644 --- a/security/apparmor/include/audit.h +++ b/security/apparmor/include/audit.h @@ -107,6 +107,8 @@ enum audit_type { #define OP_PROF_LOAD "profile_load" #define OP_PROF_RM "profile_remove" +#define OP_ENCL_LOAD "enclave_load" + struct apparmor_audit_data { int error; diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c index 87500bde5a92..2ed1157e1f58 100644 --- a/security/apparmor/lsm.c +++ b/security/apparmor/lsm.c @@ -517,6 +517,17 @@ static int apparmor_file_mprotect(struct vm_area_struct *vma, !(vma->vm_flags & VM_SHARED) ? MAP_PRIVATE : 0); } +#ifdef CONFIG_INTEL_SGX +static int apparmor_enclave_load(struct vm_area_struct *vma, unsigned long prot, + bool measured) +{ + if (!(prot & PROT_EXEC)) + return 0; + + return common_file_perm(OP_ENCL_LOAD, vma->vm_file, AA_EXEC_MMAP); +} +#endif + static int apparmor_sb_mount(const char *dev_name, const struct path *path, const char *type, unsigned long flags, void *data) { @@ -1243,6 +1254,9 @@ static struct security_hook_list apparmor_hooks[] __lsm_ro_after_init = { LSM_HOOK_INIT(secid_to_secctx, apparmor_secid_to_secctx), LSM_HOOK_INIT(secctx_to_secid, apparmor_secctx_to_secid), LSM_HOOK_INIT(release_secctx, apparmor_release_secctx), +#ifdef CONFIG_INTEL_SGX + LSM_HOOK_INIT(enclave_load, apparmor_enclave_load), +#endif }; /* From patchwork Wed Jun 19 22:24:01 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11005483 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 41D9E13AF for ; Wed, 19 Jun 2019 22:24:22 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 325A3288B4 for ; Wed, 19 Jun 2019 22:24:22 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 26B1A288B7; Wed, 19 Jun 2019 22:24:22 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=unavailable version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9A584288B4 for ; Wed, 19 Jun 2019 22:24:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726322AbfFSWYU (ORCPT ); Wed, 19 Jun 2019 18:24:20 -0400 Received: from mga18.intel.com ([134.134.136.126]:40157 "EHLO 
mga18.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730815AbfFSWYS (ORCPT ); Wed, 19 Jun 2019 18:24:18 -0400 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga106.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 19 Jun 2019 15:24:13 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.63,394,1557212400"; d="scan'208";a="150743782" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.36]) by orsmga007.jf.intel.com with ESMTP; 19 Jun 2019 15:24:13 -0700 From: Sean Christopherson To: Jarkko Sakkinen Cc: linux-sgx@vger.kernel.org, linux-security-module@vger.kernel.org, selinux@vger.kernel.org, Bill Roberts , Casey Schaufler , James Morris , Dave Hansen , Cedric Xing , Andy Lutomirski , Jethro Beekman , "Dr . Greg Wettstein" , Stephen Smalley Subject: [RFC PATCH v4 12/12] LSM: x86/sgx: Show line of sight to LSM support SGX2's EAUG Date: Wed, 19 Jun 2019 15:24:01 -0700 Message-Id: <20190619222401.14942-13-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190619222401.14942-1-sean.j.christopherson@intel.com> References: <20190619222401.14942-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-sgx-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-sgx@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Wire up a theoretical EAUG flag to show that the proposed LSM model is extensible to SGX2, i.e. that SGX can communicate to LSMs that an EAUG'd page is being mapped executable, as opposed to having to require userspace to state that an EAUG'd page *may* be mapped executable in the future. Signed-off-by: Sean Christopherson --- arch/x86/kernel/cpu/sgx/driver/main.c | 10 +++++--- arch/x86/kernel/cpu/sgx/encl.c | 33 ++++++++++++++++++++++++++- arch/x86/kernel/cpu/sgx/encl.h | 2 ++ include/linux/lsm_hooks.h | 2 +- include/linux/security.h | 4 ++-- security/security.c | 4 ++-- security/selinux/hooks.c | 4 +++- 7 files changed, 49 insertions(+), 10 deletions(-) diff --git a/arch/x86/kernel/cpu/sgx/driver/main.c b/arch/x86/kernel/cpu/sgx/driver/main.c index 4379a2fb1f82..b478c0f45279 100644 --- a/arch/x86/kernel/cpu/sgx/driver/main.c +++ b/arch/x86/kernel/cpu/sgx/driver/main.c @@ -99,7 +99,8 @@ static long sgx_compat_ioctl(struct file *filep, unsigned int cmd, * page is considered to have no RWX permissions, i.e. is inaccessible. 
*/ static unsigned long sgx_allowed_rwx(struct sgx_encl *encl, - struct vm_area_struct *vma) + struct vm_area_struct *vma, + bool *eaug) { unsigned long allowed_rwx = VM_READ | VM_WRITE | VM_EXEC; unsigned long idx, idx_start, idx_end; @@ -123,6 +124,8 @@ static unsigned long sgx_allowed_rwx(struct sgx_encl *encl, allowed_rwx = 0; else allowed_rwx &= page->vm_prot_bits; + if (page->vm_prot_bits & SGX_VM_EAUG) + *eaug = true; if (!allowed_rwx) break; } @@ -134,16 +137,17 @@ static int sgx_mmap(struct file *file, struct vm_area_struct *vma) { struct sgx_encl *encl = file->private_data; unsigned long allowed_rwx, prot; + bool eaug = false; int ret; - allowed_rwx = sgx_allowed_rwx(encl, vma); + allowed_rwx = sgx_allowed_rwx(encl, vma, &eaug); if (vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC) & ~allowed_rwx) return -EACCES; prot = _calc_vm_trans(vma->vm_flags, VM_READ, PROT_READ) | _calc_vm_trans(vma->vm_flags, VM_WRITE, PROT_WRITE) | _calc_vm_trans(vma->vm_flags, VM_EXEC, PROT_EXEC); - ret = security_enclave_map(prot); + ret = security_enclave_map(prot, eaug); if (ret) return ret; diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c index 059d90dcaa27..2e64676a8144 100644 --- a/arch/x86/kernel/cpu/sgx/encl.c +++ b/arch/x86/kernel/cpu/sgx/encl.c @@ -389,10 +389,41 @@ static int sgx_vma_access(struct vm_area_struct *vma, unsigned long addr, } #ifdef CONFIG_SECURITY +static bool is_eaug_range(struct sgx_encl *encl, unsigned long start, + unsigned long end) +{ + unsigned long idx, idx_start, idx_end; + struct sgx_encl_page *page; + + /* Enclave is dead or inaccessible. */ + if (!encl) + return false; + + idx_start = PFN_DOWN(start); + idx_end = PFN_DOWN(end - 1); + + for (idx = idx_start; idx <= idx_end; ++idx) { + /* + * No need to take encl->lock, vm_prot_bits is set prior to + * insertion and never changes, and racing with adding pages is + * a userspace bug. + */ + rcu_read_lock(); + page = radix_tree_lookup(&encl->page_tree, idx); + rcu_read_unlock(); + + /* Non-existent page can only be PROT_NONE, bail early. 
*/ + if (!page || page->vm_prot_bits & SGX_VM_EAUG) + return true; + } + + return false; +} static int sgx_vma_mprotect(struct vm_area_struct *vma, unsigned long start, unsigned long end, unsigned long prot) { - return security_enclave_map(prot); + return security_enclave_map(prot, + is_eaug_range(vma->vm_private_data, start, end)); } #endif diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h index 5ad018c8d74c..dae1a22dc87c 100644 --- a/arch/x86/kernel/cpu/sgx/encl.h +++ b/arch/x86/kernel/cpu/sgx/encl.h @@ -41,6 +41,8 @@ enum sgx_encl_page_desc { #define SGX_ENCL_PAGE_VA_OFFSET(encl_page) \ ((encl_page)->desc & SGX_ENCL_PAGE_VA_OFFSET_MASK) +#define SGX_VM_EAUG BIT(3) + struct sgx_encl_page { unsigned long desc; unsigned long vm_prot_bits; diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h index 3bc92c65f287..d7da732cf56e 100644 --- a/include/linux/lsm_hooks.h +++ b/include/linux/lsm_hooks.h @@ -1819,7 +1819,7 @@ union security_list_options { #endif /* CONFIG_BPF_SYSCALL */ #ifdef CONFIG_INTEL_SGX - int (*enclave_map)(unsigned long prot); + int (*enclave_map)(unsigned long prot, bool eaug); int (*enclave_load)(struct vm_area_struct *vma, unsigned long prot, bool measured); #endif /* CONFIG_INTEL_SGX */ diff --git a/include/linux/security.h b/include/linux/security.h index 572ddfc53039..c55e14d776c8 100644 --- a/include/linux/security.h +++ b/include/linux/security.h @@ -1831,11 +1831,11 @@ static inline void security_bpf_prog_free(struct bpf_prog_aux *aux) #ifdef CONFIG_INTEL_SGX #ifdef CONFIG_SECURITY -int security_enclave_map(unsigned long prot); +int security_enclave_map(unsigned long prot, bool eaug); int security_enclave_load(struct vm_area_struct *vma, unsigned long prot, bool measured); #else -static inline int security_enclave_map(unsigned long prot) +static inline int security_enclave_map(unsigned long prot, bool eaug) { return 0; } diff --git a/security/security.c b/security/security.c index 00f483beb1cc..f276f05341f2 100644 --- a/security/security.c +++ b/security/security.c @@ -2361,9 +2361,9 @@ void security_bpf_prog_free(struct bpf_prog_aux *aux) #endif /* CONFIG_BPF_SYSCALL */ #ifdef CONFIG_INTEL_SGX -int security_enclave_map(unsigned long prot) +int security_enclave_map(unsigned long prot, bool eaug) { - return call_int_hook(enclave_map, 0, prot); + return call_int_hook(enclave_map, 0, prot, eaug); } int security_enclave_load(struct vm_area_struct *vma, unsigned long prot, bool measured) diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c index 8a431168e454..f349419d4c12 100644 --- a/security/selinux/hooks.c +++ b/security/selinux/hooks.c @@ -6733,7 +6733,7 @@ static inline int sgx_has_perm(u32 sid, u32 requested) SECCLASS_PROCESS2, requested, NULL); } -static int selinux_enclave_map(unsigned long prot) +static int selinux_enclave_map(unsigned long prot, bool eaug) { const struct cred *cred = current_cred(); u32 sid = cred_sid(cred); @@ -6743,6 +6743,8 @@ static int selinux_enclave_map(unsigned long prot) if ((prot & PROT_EXEC) && (prot & PROT_WRITE)) return sgx_has_perm(sid, PROCESS2__SGX_MAPWX); + else if (eaug && (prot & PROT_EXEC)) + return sgx_has_perm(sid, PROCESS2__SGX_EXECDIRTY); return 0; }
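Finally, for a sense of how the eaug bit would be exercised once SGX2 support lands, a purely illustrative userspace sketch; the SGX2 UAPI is not part of this series and the helper name is hypothetical. It assumes a runtime that EAUGs pages into a reserved region and then maps them executable, e.g. for dynamically loaded code.

#include <sys/mman.h>
#include <sys/types.h>

/*
 * Map an EAUG'd (dynamically added) enclave region executable. With this
 * patch, sgx_mmap()/sgx_vma_mprotect() report eaug=true to
 * security_enclave_map(), so SELinux requires process2:sgx_execdirty for the
 * PROT_EXEC below, and process2:sgx_mapwx instead if PROT_WRITE were also
 * requested (the W^X check takes precedence).
 */
static void *map_eaug_region_exec(int enclave_fd, void *addr, size_t len,
				  off_t off)
{
	return mmap(addr, len, PROT_READ | PROT_EXEC,
		    MAP_SHARED | MAP_FIXED, enclave_fd, off);
}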