From patchwork Wed Jun 14 17:37:28 2017
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 9787107
From: Sean Christopherson
To: intel-sgx-kernel-dev@lists.01.org
Date: Wed, 14 Jun 2017 10:37:28 -0700
Message-Id: <1497461858-20309-3-git-send-email-sean.j.christopherson@intel.com>
In-Reply-To: <1497461858-20309-1-git-send-email-sean.j.christopherson@intel.com>
References: <1497461858-20309-1-git-send-email-sean.j.christopherson@intel.com>
Subject: [intel-sgx-kernel-dev] [RFC][PATCH 02/12] intel_sgx: walk pages via radix then VMA tree to zap TCS

In sgx_zap_tcs_ptes(), iterate over an enclave's pages using its radix
tree, i.e. page_tree, instead of its linked list, i.e. load_list.  This
removes a dependency on sgx_encl's load_list, which will allow for the
removal of load_list entirely.

Walk the pages looking for loaded TCS pages and search for a TCS page's
VMA only when one is found.  Walking the pages before the VMAs improves
the performance of sgx_zap_tcs_ptes() from O(m * log(m) * n) to
O(log(m) * n), where m is the number of VMAs and n is the number of
pages.  This mitigates any potential performance regression that would
occur due to walking all of an enclave's pages as opposed to only its
loaded pages.
Signed-off-by: Sean Christopherson
---
 drivers/platform/x86/intel_sgx/sgx.h      |  2 --
 drivers/platform/x86/intel_sgx/sgx_util.c | 31 ++++++++++++-------------------
 2 files changed, 12 insertions(+), 21 deletions(-)

diff --git a/drivers/platform/x86/intel_sgx/sgx.h b/drivers/platform/x86/intel_sgx/sgx.h
index 78d048e..4c18f9f 100644
--- a/drivers/platform/x86/intel_sgx/sgx.h
+++ b/drivers/platform/x86/intel_sgx/sgx.h
@@ -195,8 +195,6 @@ void sgx_insert_pte(struct sgx_encl *encl,
 		    struct vm_area_struct *vma);
 int sgx_eremove(struct sgx_epc_page *epc_page);
 struct vm_area_struct *sgx_find_vma(struct sgx_encl *encl, unsigned long addr);
-void sgx_zap_tcs_ptes(struct sgx_encl *encl,
-		      struct vm_area_struct *vma);
 void sgx_invalidate(struct sgx_encl *encl, bool flush_cpus);
 void sgx_flush_cpus(struct sgx_encl *encl);
 int sgx_find_encl(struct mm_struct *mm, unsigned long addr,
diff --git a/drivers/platform/x86/intel_sgx/sgx_util.c b/drivers/platform/x86/intel_sgx/sgx_util.c
index 021e789..d6d96b4 100644
--- a/drivers/platform/x86/intel_sgx/sgx_util.c
+++ b/drivers/platform/x86/intel_sgx/sgx_util.c
@@ -108,33 +108,26 @@ struct vm_area_struct *sgx_find_vma(struct sgx_encl *encl, unsigned long addr)
 	return NULL;
 }
 
-void sgx_zap_tcs_ptes(struct sgx_encl *encl, struct vm_area_struct *vma)
+static void sgx_zap_tcs_ptes(struct sgx_encl *encl)
 {
-	struct sgx_epc_page *tmp;
+	struct vm_area_struct *vma;
 	struct sgx_encl_page *entry;
+	struct radix_tree_iter iter;
+	void **slot;
 
-	list_for_each_entry(tmp, &encl->load_list, list) {
-		entry = tmp->encl_page;
-		if ((entry->flags & SGX_ENCL_PAGE_TCS) &&
-		    entry->addr >= vma->vm_start &&
-		    entry->addr < vma->vm_end)
-			zap_vma_ptes(vma, entry->addr, PAGE_SIZE);
+	radix_tree_for_each_slot(slot, &encl->page_tree, &iter, 0) {
+		entry = *slot;
+		if (entry->epc_page && (entry->flags & SGX_ENCL_PAGE_TCS)) {
+			vma = sgx_find_vma(encl, entry->addr);
+			if (vma)
+				zap_vma_ptes(vma, entry->addr, PAGE_SIZE);
+		}
 	}
 }
 
 void sgx_invalidate(struct sgx_encl *encl, bool flush_cpus)
 {
-	struct vm_area_struct *vma;
-	unsigned long addr;
-
-	for (addr = encl->base; addr < (encl->base + encl->size);
-	     addr = vma->vm_end) {
-		vma = sgx_find_vma(encl, addr);
-		if (vma)
-			sgx_zap_tcs_ptes(encl, vma);
-		else
-			break;
-	}
+	sgx_zap_tcs_ptes(encl);
 	encl->flags |= SGX_ENCL_DEAD;