From patchwork Fri May 31 23:31:55 2019
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 10971035
From: Sean Christopherson
To: Jarkko Sakkinen
Cc: Andy Lutomirski, Cedric Xing, Stephen Smalley, James Morris,
    "Serge E. Hallyn", LSM List, Paul Moore, Eric Paris,
    selinux@vger.kernel.org, Jethro Beekman, Dave Hansen,
    Thomas Gleixner, Linus Torvalds, LKML, X86 ML,
    linux-sgx@vger.kernel.org, Andrew Morton, nhorman@redhat.com,
    npmccallum@redhat.com, Serge Ayoun, Shay Katz-zamir,
    Haitao Huang, Andy Shevchenko, Kai Svahn, Borislav Petkov,
    Josh Triplett, Kai Huang, David Rientjes, William Roberts,
    Philip Tricca
Subject: [RFC PATCH 5/9] x86/sgx: Restrict mapping without an enclave page to PROT_NONE
Date: Fri, 31 May 2019 16:31:55 -0700
Message-Id: <20190531233159.30992-6-sean.j.christopherson@intel.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190531233159.30992-1-sean.j.christopherson@intel.com>
References: <20190531233159.30992-1-sean.j.christopherson@intel.com>

To support LSM integration, SGX will require userspace to explicitly
specify the allowed protections for each page.  The allowed protections
will be supplied to and modified by LSMs (based on their policies).

To prevent userspace from circumventing the allowed protections, do not
allow PROT_{READ,WRITE,EXEC} mappings to an enclave without an
associated enclave page (which will track the allowed protections).
Signed-off-by: Sean Christopherson
---
 arch/x86/kernel/cpu/sgx/driver/main.c |  5 +++++
 arch/x86/kernel/cpu/sgx/encl.c        | 30 +++++++++++++++++++++++++++
 arch/x86/kernel/cpu/sgx/encl.h        |  3 +++
 3 files changed, 38 insertions(+)

diff --git a/arch/x86/kernel/cpu/sgx/driver/main.c b/arch/x86/kernel/cpu/sgx/driver/main.c
index 129d356aff30..65a87c2fdf02 100644
--- a/arch/x86/kernel/cpu/sgx/driver/main.c
+++ b/arch/x86/kernel/cpu/sgx/driver/main.c
@@ -63,6 +63,11 @@ static long sgx_compat_ioctl(struct file *filep, unsigned int cmd,
 static int sgx_mmap(struct file *file, struct vm_area_struct *vma)
 {
 	struct sgx_encl *encl = file->private_data;
+	int ret;
+
+	ret = sgx_map_allowed(encl, vma->vm_start, vma->vm_end, vma->vm_flags);
+	if (ret)
+		return ret;
 
 	vma->vm_ops = &sgx_vm_ops;
 	vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP | VM_IO;
diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index f23ea0fbaa47..955d4f430adc 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -235,6 +235,35 @@ static void sgx_vma_close(struct vm_area_struct *vma)
 	kref_put(&encl->refcount, sgx_encl_release);
 }
 
+int sgx_map_allowed(struct sgx_encl *encl, unsigned long start,
+		    unsigned long end, unsigned long prot)
+{
+	struct sgx_encl_page *page;
+	unsigned long addr;
+
+	prot &= (VM_READ | VM_WRITE | VM_EXEC);
+	if (!prot || !encl)
+		return 0;
+
+	mutex_lock(&encl->lock);
+
+	for (addr = start; addr < end; addr += PAGE_SIZE) {
+		page = radix_tree_lookup(&encl->page_tree, addr >> PAGE_SHIFT);
+		if (!page)
+			return -EACCES;
+	}
+
+	mutex_unlock(&encl->lock);
+
+	return 0;
+}
+
+static int sgx_vma_mprotect(struct vm_area_struct *vma, unsigned long start,
+			    unsigned long end, unsigned long prot)
+{
+	return sgx_map_allowed(vma->vm_private_data, start, end, prot);
+}
+
 static unsigned int sgx_vma_fault(struct vm_fault *vmf)
 {
 	unsigned long addr = (unsigned long)vmf->address;
@@ -372,6 +401,7 @@ static int sgx_vma_access(struct vm_area_struct *vma, unsigned long addr,
 const struct vm_operations_struct sgx_vm_ops = {
 	.close = sgx_vma_close,
 	.open = sgx_vma_open,
+	.mprotect = sgx_vma_mprotect,
 	.fault = sgx_vma_fault,
 	.access = sgx_vma_access,
 };
diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index c557f0374d74..6e310e3b3fff 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -106,6 +106,9 @@ static inline unsigned long sgx_pcmd_offset(pgoff_t page_index)
 		sizeof(struct sgx_pcmd);
 }
 
+int sgx_map_allowed(struct sgx_encl *encl, unsigned long start,
+		    unsigned long end, unsigned long prot);
+
 enum sgx_encl_mm_iter {
 	SGX_ENCL_MM_ITER_DONE		= 0,
 	SGX_ENCL_MM_ITER_NEXT		= 1,
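
A note on the sgx_map_allowed() hunk above: the -EACCES path returns from
inside the loop while encl->lock is still held, so the mutex is never
released on that path.  Below is a minimal sketch of the same range walk
that balances the lock on every exit path.  It reuses the identifiers and
the radix-tree lookup from the hunk above and is only an illustration of
the check, not the code as submitted:

int sgx_map_allowed(struct sgx_encl *encl, unsigned long start,
		    unsigned long end, unsigned long prot)
{
	struct sgx_encl_page *page;
	unsigned long addr;
	int ret = 0;

	/* Only VM_READ/VM_WRITE/VM_EXEC mappings need backing enclave pages. */
	prot &= (VM_READ | VM_WRITE | VM_EXEC);
	if (!prot || !encl)
		return 0;

	mutex_lock(&encl->lock);

	/* Every page in [start, end) must already have an enclave page. */
	for (addr = start; addr < end; addr += PAGE_SIZE) {
		page = radix_tree_lookup(&encl->page_tree, addr >> PAGE_SHIFT);
		if (!page) {
			ret = -EACCES;
			break;
		}
	}

	mutex_unlock(&encl->lock);

	return ret;
}

With this shape, the mmap() and mprotect() callers still see -EACCES as
soon as any page in the requested range lacks a backing enclave page,
and encl->lock is always released before returning.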