From patchwork Thu Dec 15 16:13:48 2016
X-Patchwork-Submitter: "Huang, Haitao"
X-Patchwork-Id: 9476489
From: "Huang, Haitao"
To: Jarkko Sakkinen , "intel-sgx-kernel-dev@lists.01.org"
Subject: Re: [intel-sgx-kernel-dev] [PATCH v9 06/10] intel_sgx: kill the
 enclave when any of its VMAs are closed
Date: Thu, 15 Dec 2016 16:13:48 +0000
Message-ID: <2EFE7EC26817864D8C91BD43A5737DD4533BE5E8@fmsmsx118.amr.corp.intel.com>
References: <20161215144959.31245-1-jarkko.sakkinen@linux.intel.com>
 <20161215144959.31245-7-jarkko.sakkinen@linux.intel.com>
In-Reply-To: <20161215144959.31245-7-jarkko.sakkinen@linux.intel.com>
List-Id: "Project: Intel® Software Guard Extensions for Linux*: https://01.org/intel-software-guard-extensions"
Cc: "Li, Xun", "Shanahan, Mark", "Xing, Cedric", "Gu, Zongmin",
, "Zhao, Yebin Andy" , "Gu, Junjun" Errors-To: intel-sgx-kernel-dev-bounces@lists.01.org Sender: "intel-sgx-kernel-dev" X-Virus-Scanned: ClamAV using ClamSMTP Just wonder if this will be an issue for SGX2.0 where we may trim some pages after being loaded during enclave life. I haven't looked at SGX2 branch for a while and not sure we cause VMA closes. Just throw out here for team to discuss. Haitao -----Original Message----- From: intel-sgx-kernel-dev [mailto:intel-sgx-kernel-dev-bounces@lists.01.org] On Behalf Of Jarkko Sakkinen Sent: Thursday, December 15, 2016 8:50 AM To: intel-sgx-kernel-dev@lists.01.org Subject: [intel-sgx-kernel-dev] [PATCH v9 06/10] intel_sgx: kill the enclave when any of its VMAs are closed ELRANGE protects from non-enclave access but still closing any of the VMAs leaves the enclave and remaining VMAs to a state that is not wanted for any legit application using enclaves. Therefore it is in this situation advisable to kill the whole enclave. One could also consider hardening the vma_open callback in a way that it would also kill the enclave if any new VMAs areis created after the first EADD or in the corner case after the first EINIT (if enclave does only have the SECS page). This makes sense because you could enforce that EPCM entries and PTEs stay in sync, which is a good property. However, most of the time you want to implement a relocator for the user space that runs inside the enclave because the ELRANGE position varies based on which address mmap() gives. Since the enclave code is signed and measured the relocation can only happen after EINIT. That's why we will not do such enforcement. The ramifications of this are not too bad. In the worst case you could end up crashing your user space process in a safe way (#GP due to EPCM/PTE mismatch). 
Signed-off-by: Jarkko Sakkinen
---
 drivers/platform/x86/intel_sgx.h       |  2 +-
 drivers/platform/x86/intel_sgx_ioctl.c | 10 ++++----
 drivers/platform/x86/intel_sgx_util.c  | 13 ++++------
 drivers/platform/x86/intel_sgx_vma.c   | 43 ++++++++++++++--------------------
 4 files changed, 30 insertions(+), 38 deletions(-)

diff --git a/drivers/platform/x86/intel_sgx.h b/drivers/platform/x86/intel_sgx.h
index 464763d..018cd03 100644
--- a/drivers/platform/x86/intel_sgx.h
+++ b/drivers/platform/x86/intel_sgx.h
@@ -130,12 +130,12 @@ enum sgx_encl_flags {
 	SGX_ENCL_DEBUG		= BIT(1),
 	SGX_ENCL_SECS_EVICTED	= BIT(2),
 	SGX_ENCL_SUSPEND	= BIT(3),
+	SGX_ENCL_DEAD		= BIT(4),
 };
 
 struct sgx_encl {
 	unsigned int flags;
 	unsigned int secs_child_cnt;
-	unsigned int vma_cnt;
 	struct mutex lock;
 	struct task_struct *owner;
 	struct mm_struct *mm;
diff --git a/drivers/platform/x86/intel_sgx_ioctl.c b/drivers/platform/x86/intel_sgx_ioctl.c
index b377200..d0113b0 100644
--- a/drivers/platform/x86/intel_sgx_ioctl.c
+++ b/drivers/platform/x86/intel_sgx_ioctl.c
@@ -255,7 +255,10 @@ static bool sgx_process_add_page_req(struct sgx_add_page_req *req)
 
 	mutex_lock(&encl->lock);
 
-	if (!encl->vma_cnt || sgx_find_encl(encl->mm, encl_page->addr, &vma))
+	if (encl->flags & SGX_ENCL_DEAD)
+		goto out;
+
+	if (sgx_find_encl(encl->mm, encl_page->addr, &vma))
 		goto out;
 
 	backing = sgx_get_backing(encl, encl_page);
@@ -317,7 +320,7 @@ static void sgx_add_page_worker(struct work_struct *work)
 	do {
 		schedule();
 
-		if (encl->flags & SGX_ENCL_SUSPEND)
+		if (encl->flags & SGX_ENCL_DEAD)
 			skip_rest = true;
 
 		mutex_lock(&encl->lock);
@@ -578,7 +581,7 @@ static long sgx_ioc_enclave_create(struct file *filep, unsigned int cmd,
 		up_read(&current->mm->mmap_sem);
 		goto out;
 	}
-	encl->vma_cnt++;
 	vma->vm_private_data = encl;
 
 	up_read(&current->mm->mmap_sem);
@@ -682,7 +684,7 @@ static int __encl_add_page(struct sgx_encl *encl,
 
 	mutex_lock(&encl->lock);
 
-	if (encl->flags & SGX_ENCL_INITIALIZED) {
+	if (encl->flags & (SGX_ENCL_INITIALIZED | SGX_ENCL_DEAD)) {
 		ret = -EINVAL;
 		goto out;
 	}
diff --git a/drivers/platform/x86/intel_sgx_util.c b/drivers/platform/x86/intel_sgx_util.c
index 3878d9a..d5238c2 100644
--- a/drivers/platform/x86/intel_sgx_util.c
+++ b/drivers/platform/x86/intel_sgx_util.c
@@ -135,21 +135,18 @@ void sgx_zap_tcs_ptes(struct sgx_encl *encl, struct vm_area_struct *vma)
 
 bool sgx_pin_mm(struct sgx_encl *encl)
 {
-	if (encl->flags & SGX_ENCL_SUSPEND)
-		return false;
-
 	mutex_lock(&encl->lock);
-	if (encl->vma_cnt) {
-		atomic_inc(&encl->mm->mm_count);
-	} else {
+	if (encl->flags & SGX_ENCL_DEAD) {
 		mutex_unlock(&encl->lock);
 		return false;
 	}
+
+	atomic_inc(&encl->mm->mm_count);
 	mutex_unlock(&encl->lock);
 
 	down_read(&encl->mm->mmap_sem);
 
-	if (!encl->vma_cnt) {
+	if (encl->flags & SGX_ENCL_DEAD) {
 		sgx_unpin_mm(encl);
 		return false;
 	}
@@ -177,7 +174,7 @@ void sgx_invalidate(struct sgx_encl *encl)
 			break;
 	}
 
-	encl->vma_cnt = 0;
+	encl->flags |= SGX_ENCL_DEAD;
 }
 
 int sgx_find_encl(struct mm_struct *mm, unsigned long addr,
diff --git a/drivers/platform/x86/intel_sgx_vma.c b/drivers/platform/x86/intel_sgx_vma.c
index 0649978..f848fd1 100644
--- a/drivers/platform/x86/intel_sgx_vma.c
+++ b/drivers/platform/x86/intel_sgx_vma.c
@@ -70,48 +70,41 @@
 
 static void sgx_vma_open(struct vm_area_struct *vma)
 {
-	struct sgx_encl *encl;
-
-	/* Was vm_private_data nullified as a result of the previous fork? */
-	encl = vma->vm_private_data;
-	if (!encl)
-		goto out_fork;
+	struct sgx_encl *encl = vma->vm_private_data;
 
-	/* Was the process forked? mm_struct changes when the process is
-	 * forked.
+	/* When forking for the second time vm_private_data is already set to
+	 * NULL.
 	 */
+	if (!encl) {
+		zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
+		return;
+	}
+
+	/* Invalidate enclave when the process has been forked. */
 	mutex_lock(&encl->lock);
 	if (encl->mm != vma->vm_mm) {
-		mutex_unlock(&encl->lock);
-		goto out_fork;
+		sgx_invalidate(encl);
+		vma->vm_private_data = NULL;
 	}
-	encl->vma_cnt++;
 	mutex_unlock(&encl->lock);
 
-	kref_get(&encl->refcount);
-	return;
-out_fork:
-	zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
-	vma->vm_private_data = NULL;
+	if (vma->vm_private_data)
+		kref_get(&encl->refcount);
 }
 
 static void sgx_vma_close(struct vm_area_struct *vma)
 {
 	struct sgx_encl *encl = vma->vm_private_data;
 
-	/* If process was forked, VMA is still there but
-	 * vm_private_data is set to NULL.
+	/* When forking for the second time vm_private_data is already set to
+	 * NULL.
 	 */
 	if (!encl)
 		return;
 
 	mutex_lock(&encl->lock);
-	encl->vma_cnt--;
-	vma->vm_private_data = NULL;
-
-	sgx_zap_tcs_ptes(encl, vma);
 	zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
-
+	encl->flags |= SGX_ENCL_DEAD;
 	mutex_unlock(&encl->lock);
 
 	kref_put(&encl->refcount, sgx_encl_release);
@@ -187,7 +180,7 @@ static struct sgx_encl_page *sgx_vma_do_fault(struct vm_area_struct *vma,
 
 	mutex_lock(&encl->lock);
 
-	if (!encl->vma_cnt) {
+	if (encl->flags & SGX_ENCL_DEAD) {
 		entry = ERR_PTR(-EFAULT);
 		goto out;
 	}
@@ -370,7 +363,7 @@ static int sgx_vma_access(struct vm_area_struct *vma, unsigned long addr,
 
 	if (!(encl->flags & SGX_ENCL_DEBUG) ||
 	    !(encl->flags & SGX_ENCL_INITIALIZED) ||
-	    (encl->flags & SGX_ENCL_SUSPEND))
+	    (encl->flags & SGX_ENCL_DEAD))
 		return -EFAULT;
 
 	sgx_dbg(encl, "%s addr=0x%lx, len=%d\n", op_str, addr, len);
--
2.9.3
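
(Back to the SGX2 trimming question at the top of this mail: if trimming were
ever implemented by unmapping part of the enclave range from user space, my
understanding is that the kernel would split the VMA and invoke the driver's
close callback for the removed piece, which with this patch marks the whole
enclave dead. Below is a minimal user-space sketch of such a partial unmap,
illustrative only and using plain anonymous memory rather than a real enclave
mapping.)

/* Illustrative only: "trim" a page out of the middle of a larger mapping.
 * For a driver-backed VMA this is the kind of operation that splits the VMA
 * and ends up calling the driver's close callback for the removed piece.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
	const size_t page = (size_t)sysconf(_SC_PAGESIZE);
	const size_t len = 16 * page;

	char *base = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (base == MAP_FAILED) {
		perror("mmap");
		return EXIT_FAILURE;
	}

	/* Unmap one page in the middle of the mapping. */
	if (munmap(base + 4 * page, page)) {
		perror("munmap");
		return EXIT_FAILURE;
	}

	printf("mapped %zu pages at %p, then unmapped page 4\n",
	       len / page, (void *)base);

	munmap(base, len);
	return EXIT_SUCCESS;
}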