From patchwork Tue Aug 27 00:11:28 2019
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11115863
From: Sean Christopherson
To: Jarkko Sakkinen
Cc: linux-sgx@vger.kernel.org
Subject: [PATCH 4/4] x86/sgx: Take encl->lock inside of mm->mmap_sem for EADD
Date: Mon, 26 Aug 2019 17:11:28 -0700
Message-Id: <20190827001128.25066-5-sean.j.christopherson@intel.com>
In-Reply-To: <20190827001128.25066-1-sean.j.christopherson@intel.com>
References: <20190827001128.25066-1-sean.j.christopherson@intel.com>
List-ID: <linux-sgx.vger.kernel.org>

Reverse the order in which encl->lock and mm->mmap_sem are taken during
ENCLAVE_ADD_PAGE so as to adhere to SGX's lock ordering requirements.
Attempting to acquire mm->mmap_sem while holding encl->lock can result
in deadlock.

Refactor EEXTEND and the final bookkeeping out of __sgx_encl_add_page()
so that mm->mmap_sem can be dropped after EADD without spreading the
lock/unlock across multiple functions.

Reported-by: Jarkko Sakkinen
Signed-off-by: Sean Christopherson
Acked-by: Jarkko Sakkinen
---
 arch/x86/kernel/cpu/sgx/ioctl.c | 55 ++++++++++++++++++++-------------
 1 file changed, 33 insertions(+), 22 deletions(-)
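Note: a condensed sketch of the locking flow in sgx_encl_add_page() after
this patch, distilled from the diff below (error unwinding and the radix
tree cleanup paths are omitted for brevity):

	down_read(&current->mm->mmap_sem);	/* outer: mmap_sem */
	mutex_lock(&encl->lock);		/* inner: encl->lock */

	/* Insert before EADD; an OOM here can still be unwound. */
	ret = radix_tree_insert(&encl->page_tree, PFN_DOWN(encl_page->desc),
				encl_page);

	/* EADD needs mmap_sem held for the find_vma()/VM_MAYEXEC check. */
	ret = __sgx_encl_add_page(encl, encl_page, epc_page, secinfo,
				  addp->src);
	up_read(&current->mm->mmap_sem);

	/* EEXTEND and the final bookkeeping run under encl->lock alone. */
	ret = __sgx_encl_extend(encl, epc_page, addp->mrmask);

	mutex_unlock(&encl->lock);
	return 0;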
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 170ed538b02b..4a9ae1090433 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -317,43 +317,40 @@ static int sgx_validate_secinfo(struct sgx_secinfo *secinfo)
 static int __sgx_encl_add_page(struct sgx_encl *encl,
 			       struct sgx_encl_page *encl_page,
 			       struct sgx_epc_page *epc_page,
-			       struct sgx_secinfo *secinfo, unsigned long src,
-			       unsigned long mrmask)
+			       struct sgx_secinfo *secinfo, unsigned long src)
 {
 	struct sgx_pageinfo pginfo;
 	struct vm_area_struct *vma;
 	int ret;
-	int i;
 
 	pginfo.secs = (unsigned long)sgx_epc_addr(encl->secs.epc_page);
 	pginfo.addr = SGX_ENCL_PAGE_ADDR(encl_page);
 	pginfo.metadata = (unsigned long)secinfo;
 	pginfo.contents = src;
 
-	down_read(&current->mm->mmap_sem);
-
 	/* Query vma's VM_MAYEXEC as an indirect path_noexec() check. */
 	if (encl_page->vm_max_prot_bits & VM_EXEC) {
 		vma = find_vma(current->mm, src);
-		if (!vma) {
-			up_read(&current->mm->mmap_sem);
+		if (!vma)
 			return -EFAULT;
-		}
 
-		if (!(vma->vm_flags & VM_MAYEXEC)) {
-			up_read(&current->mm->mmap_sem);
+		if (!(vma->vm_flags & VM_MAYEXEC))
 			return -EACCES;
-		}
 	}
 
 	__uaccess_begin();
 	ret = __eadd(&pginfo, sgx_epc_addr(epc_page));
 	__uaccess_end();
 
-	up_read(&current->mm->mmap_sem);
+	return ret ? -EFAULT : 0;
+}
 
-	if (ret)
-		return -EFAULT;
+static int __sgx_encl_extend(struct sgx_encl *encl,
+			     struct sgx_epc_page *epc_page,
+			     unsigned long mrmask)
+{
+	int ret;
+	int i;
 
 	for_each_set_bit(i, &mrmask, 16) {
 		ret = __eextend(sgx_epc_addr(encl->secs.epc_page),
@@ -364,12 +361,6 @@ static int __sgx_encl_add_page(struct sgx_encl *encl,
 			return -EFAULT;
 		}
 	}
-
-	encl_page->encl = encl;
-	encl_page->epc_page = epc_page;
-	encl->secs_child_cnt++;
-	sgx_mark_page_reclaimable(encl_page->epc_page);
-
 	return 0;
 }
 
@@ -398,19 +389,39 @@ static int sgx_encl_add_page(struct sgx_encl *encl,
 		goto err_out_free;
 	}
 
+	down_read(&current->mm->mmap_sem);
+
 	mutex_lock(&encl->lock);
 
+	/*
+	 * Insert prior to EADD in case of OOM. EADD modifies MRENCLAVE, i.e.
+	 * can't be gracefully unwound, while failure on EADD/EXTEND is limited
+	 * to userspace errors (or kernel/hardware bugs).
+	 */
 	ret = radix_tree_insert(&encl->page_tree, PFN_DOWN(encl_page->desc),
 				encl_page);
-	if (ret)
+	if (ret) {
+		up_read(&current->mm->mmap_sem);
 		goto err_out_shrink;
+	}
 
 	ret = __sgx_encl_add_page(encl, encl_page, epc_page, secinfo,
-				  addp->src, addp->mrmask);
+				  addp->src);
+	up_read(&current->mm->mmap_sem);
+
+	if (ret)
+		goto err_out;
+
+	ret = __sgx_encl_extend(encl, epc_page, addp->mrmask);
 	if (ret)
 		goto err_out;
 
+	encl_page->encl = encl;
+	encl_page->epc_page = epc_page;
+	encl->secs_child_cnt++;
+	sgx_mark_page_reclaimable(encl_page->epc_page);
 	mutex_unlock(&encl->lock);
+
 	return 0;
 
 err_out: