From patchwork Thu Dec 15 22:24:12 2016
X-Patchwork-Submitter: Jarkko Sakkinen
X-Patchwork-Id: 9477021
From: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
To: intel-sgx-kernel-dev@lists.01.org
Date: Fri, 16 Dec 2016 00:24:12 +0200
Message-Id: <20161215222412.24004-1-jarkko.sakkinen@linux.intel.com>
Subject: [intel-sgx-kernel-dev] [PATCH RFC] intel_sgx: simplify sgx_write_pages()

Now that the sgx_ewb() flow has sane error recovery, we can simplify
sgx_write_pages() significantly by moving the pinning of the backing
page into sgx_ewb(). This was not possible before, as in some
situations pinning could legitimately fail.

[Marked as RFC because it requires a pending patch set. I implemented
this to show the benefits of the introduced recovery flow.]
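In concrete terms, the EWB call site in sgx_write_pages() collapses to
the following shape (excerpted from the diff below); sgx_ewb() now pins
and releases the backing page internally:

		evma = sgx_find_vma(encl, entry->addr);
		if (evma) {
			/* A true return means the EPC page is freed
			 * with SGX_FREE_SKIP_EREMOVE.
			 */
			if (sgx_ewb(encl, entry))
				free_flags = SGX_FREE_SKIP_EREMOVE;
			encl->secs_child_cnt--;
		}

		sgx_free_encl_page(entry, encl, free_flags);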
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
---
 drivers/platform/x86/intel_sgx_page_cache.c | 63 ++++++++++++-----------------
 1 file changed, 26 insertions(+), 37 deletions(-)

diff --git a/drivers/platform/x86/intel_sgx_page_cache.c b/drivers/platform/x86/intel_sgx_page_cache.c
index 36d4d54..f62e5e7 100644
--- a/drivers/platform/x86/intel_sgx_page_cache.c
+++ b/drivers/platform/x86/intel_sgx_page_cache.c
@@ -233,41 +233,52 @@ static void sgx_etrack(struct sgx_epc_page *epc_page)
 }
 
 static int __sgx_ewb(struct sgx_encl *encl,
-		     struct sgx_encl_page *encl_page,
-		     struct page *backing)
+		     struct sgx_encl_page *encl_page)
 {
 	struct sgx_page_info pginfo;
+	struct page *backing;
 	void *epc;
 	void *va;
 	int ret;
 
-	pginfo.srcpge = (unsigned long)kmap_atomic(backing);
+	backing = sgx_get_backing(encl, encl_page);
+	if (IS_ERR(backing)) {
+		ret = PTR_ERR(backing);
+		sgx_warn(encl, "pinning the backing page for EWB failed with %d\n",
+			 ret);
+		return ret;
+	}
+
 	epc = sgx_get_epc_page(encl_page->epc_page);
 	va = sgx_get_epc_page(encl_page->va_page->epc_page);
 
+	pginfo.srcpge = (unsigned long)kmap_atomic(backing);
 	pginfo.pcmd = (unsigned long)&encl_page->pcmd;
 	pginfo.linaddr = 0;
 	pginfo.secs = 0;
 	ret = __ewb(&pginfo, epc,
 		    (void *)((unsigned long)va + encl_page->va_offset));
+	kunmap_atomic((void *)(unsigned long)pginfo.srcpge);
 
 	sgx_put_epc_page(va);
 	sgx_put_epc_page(epc);
-	kunmap_atomic((void *)(unsigned long)pginfo.srcpge);
+	sgx_put_backing(backing, true);
 
 	return ret;
 }
 
 static bool sgx_ewb(struct sgx_encl *encl,
-		    struct sgx_encl_page *entry,
-		    struct page *backing)
+		    struct sgx_encl_page *entry)
 {
-	int ret = __sgx_ewb(encl, entry, backing);
+	int ret = __sgx_ewb(encl, entry);
+
+	if (ret < 0)
+		return false;
 
 	if (ret == SGX_NOT_TRACKED) {
 		/* slow path, IPI needed */
 		smp_call_function(sgx_ipi_cb, NULL, 1);
-		ret = __sgx_ewb(encl, entry, backing);
+		ret = __sgx_ewb(encl, entry);
 	}
 
 	if (ret) {
@@ -294,11 +305,8 @@ static void sgx_write_pages(struct sgx_encl *encl, struct list_head *src)
 {
 	struct sgx_encl_page *entry;
 	struct sgx_encl_page *tmp;
-	struct page *pages[SGX_NR_SWAP_CLUSTER_MAX + 1];
 	struct vm_area_struct *evma;
 	unsigned int free_flags;
-	int cnt = 0;
-	int i = 0;
 
 	if (list_empty(src))
 		return;
@@ -316,25 +324,14 @@ static void sgx_write_pages(struct sgx_encl *encl, struct list_head *src)
 			continue;
 		}
 
-		pages[cnt] = sgx_get_backing(encl, entry);
-		if (IS_ERR(pages[cnt])) {
-			list_del(&entry->load_list);
-			list_add_tail(&entry->load_list, &encl->load_list);
-			entry->flags &= ~SGX_ENCL_PAGE_RESERVED;
-			continue;
-		}
-
 		zap_vma_ptes(evma, entry->addr, PAGE_SIZE);
 		sgx_eblock(entry->epc_page);
-		cnt++;
 	}
 
 	/* ETRACK */
 	sgx_etrack(encl->secs_page.epc_page);
 
 	/* EWB */
-	i = 0;
-
 	while (!list_empty(src)) {
 		entry = list_first_entry(src, struct sgx_encl_page,
 					 load_list);
@@ -344,29 +341,21 @@ static void sgx_write_pages(struct sgx_encl *encl, struct list_head *src)
 
 		evma = sgx_find_vma(encl, entry->addr);
 		if (evma) {
-			if (sgx_ewb(encl, entry, pages[i]))
+			if (sgx_ewb(encl, entry))
 				free_flags = SGX_FREE_SKIP_EREMOVE;
 			encl->secs_child_cnt--;
 		}
 
 		sgx_free_encl_page(entry, encl, free_flags);
-		sgx_put_backing(pages[i++], evma);
 	}
 
-	/* Allow SECS page eviction only when the encl is initialized. */
-	if (!encl->secs_child_cnt &&
-	    (encl->flags & SGX_ENCL_INITIALIZED)) {
-		pages[cnt] = sgx_get_backing(encl, &encl->secs_page);
-		if (!IS_ERR(pages[cnt])) {
-			free_flags = 0;
-			if (sgx_ewb(encl, &encl->secs_page, pages[cnt]))
-				free_flags = SGX_FREE_SKIP_EREMOVE;
-
-			encl->flags |= SGX_ENCL_SECS_EVICTED;
+	if (!encl->secs_child_cnt && (encl->flags & SGX_ENCL_INITIALIZED)) {
+		free_flags = 0;
+		if (sgx_ewb(encl, &encl->secs_page))
+			free_flags = SGX_FREE_SKIP_EREMOVE;
 
-			sgx_free_encl_page(&encl->secs_page, encl, free_flags);
-			sgx_put_backing(pages[cnt], true);
-		}
+		encl->flags |= SGX_ENCL_SECS_EVICTED;
+		sgx_free_encl_page(&encl->secs_page, encl, free_flags);
 	}
 
 	mutex_unlock(&encl->lock);
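For ease of review, here is how __sgx_ewb() and sgx_ewb() read with this
patch applied, assembled from the hunks above (the tail of sgx_ewb()
past "if (ret) {" is outside the diff context and elided):

static int __sgx_ewb(struct sgx_encl *encl,
		     struct sgx_encl_page *encl_page)
{
	struct sgx_page_info pginfo;
	struct page *backing;
	void *epc;
	void *va;
	int ret;

	/* Pinning of the backing page now lives here instead of in
	 * sgx_write_pages().
	 */
	backing = sgx_get_backing(encl, encl_page);
	if (IS_ERR(backing)) {
		ret = PTR_ERR(backing);
		sgx_warn(encl, "pinning the backing page for EWB failed with %d\n",
			 ret);
		return ret;
	}

	epc = sgx_get_epc_page(encl_page->epc_page);
	va = sgx_get_epc_page(encl_page->va_page->epc_page);

	pginfo.srcpge = (unsigned long)kmap_atomic(backing);
	pginfo.pcmd = (unsigned long)&encl_page->pcmd;
	pginfo.linaddr = 0;
	pginfo.secs = 0;
	ret = __ewb(&pginfo, epc,
		    (void *)((unsigned long)va + encl_page->va_offset));
	kunmap_atomic((void *)(unsigned long)pginfo.srcpge);

	sgx_put_epc_page(va);
	sgx_put_epc_page(epc);
	sgx_put_backing(backing, true);

	return ret;
}

static bool sgx_ewb(struct sgx_encl *encl,
		    struct sgx_encl_page *entry)
{
	int ret = __sgx_ewb(encl, entry);

	/* Pinning failed: the caller frees the page with EREMOVE. */
	if (ret < 0)
		return false;

	if (ret == SGX_NOT_TRACKED) {
		/* slow path, IPI needed */
		smp_call_function(sgx_ipi_cb, NULL, 1);
		ret = __sgx_ewb(encl, entry);
	}

	if (ret) {
		/* ... error handling unchanged by this patch, elided ... */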