From patchwork Mon Sep 16 10:17:54 2019
X-Patchwork-Submitter: Jarkko Sakkinen
X-Patchwork-Id: 11146721
From: Jarkko Sakkinen
To: linux-sgx@vger.kernel.org
Cc: Jarkko Sakkinen, Sean Christopherson, Shay Katz-zamir, Serge Ayoun
Subject: [PATCH v3 08/17] x86/sgx: Calculate page index in sgx_reclaimer_write()
Date: Mon, 16 Sep 2019 13:17:54 +0300
Message-Id: <20190916101803.30726-9-jarkko.sakkinen@linux.intel.com>
In-Reply-To: <20190916101803.30726-1-jarkko.sakkinen@linux.intel.com>
References: <20190916101803.30726-1-jarkko.sakkinen@linux.intel.com>
X-Mailing-List: linux-sgx@vger.kernel.org

Move page index
calculation for backing storage to sgx_reclaimer_write(), as it somewhat
simplifies the flow and also makes it easier to control, since the
high-level write logic is consolidated into a single function.

Cc: Sean Christopherson
Cc: Shay Katz-zamir
Cc: Serge Ayoun
Signed-off-by: Jarkko Sakkinen
---
 arch/x86/kernel/cpu/sgx/reclaim.c | 33 +++++++++----------------------
 1 file changed, 9 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
index 353888256b2b..e758a06919e4 100644
--- a/arch/x86/kernel/cpu/sgx/reclaim.c
+++ b/arch/x86/kernel/cpu/sgx/reclaim.c
@@ -222,26 +222,15 @@ static void sgx_reclaimer_block(struct sgx_epc_page *epc_page)
 
 static int __sgx_encl_ewb(struct sgx_encl *encl, struct sgx_epc_page *epc_page,
 			  struct sgx_va_page *va_page, unsigned int va_offset,
-			  unsigned int pt)
+			  unsigned int page_index)
 {
-	struct sgx_encl_page *encl_page = epc_page->owner;
 	struct sgx_pageinfo pginfo;
 	unsigned long pcmd_offset;
 	struct page *backing;
-	pgoff_t page_index;
 	pgoff_t pcmd_index;
 	struct page *pcmd;
 	int ret;
 
-	if (pt != SGX_SECINFO_SECS && pt != SGX_SECINFO_TCS &&
-	    pt != SGX_SECINFO_REG)
-		return -EINVAL;
-
-	if (pt == SGX_SECINFO_SECS)
-		page_index = PFN_DOWN(encl->size);
-	else
-		page_index = SGX_ENCL_PAGE_INDEX(encl_page);
-
 	pcmd_index = sgx_pcmd_index(encl, page_index);
 	pcmd_offset = sgx_pcmd_offset(page_index);
 
@@ -308,7 +297,8 @@ static const cpumask_t *sgx_encl_ewb_cpumask(struct sgx_encl *encl)
 	return cpumask;
 }
 
-static void sgx_encl_ewb(struct sgx_epc_page *epc_page, unsigned int pt)
+static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
+			 unsigned int page_index)
 {
 	struct sgx_encl_page *encl_page = epc_page->owner;
 	struct sgx_encl *encl = encl_page->encl;
@@ -325,7 +315,8 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page, unsigned int pt)
 		if (sgx_va_page_full(va_page))
 			list_move_tail(&va_page->list, &encl->va_pages);
 
-		ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset, pt);
+		ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
+				     page_index);
 		if (ret == SGX_NOT_TRACKED) {
 			ret = __etrack(sgx_epc_addr(encl->secs.epc_page));
 			if (ret) {
@@ -335,7 +326,7 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page, unsigned int pt)
 			}
 
 			ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
-					     pt);
+					     page_index);
 			if (ret == SGX_NOT_TRACKED) {
 				/*
 				 * Slow path, send IPIs to kick cpus out of the
@@ -347,7 +338,7 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page, unsigned int pt)
 				on_each_cpu_mask(sgx_encl_ewb_cpumask(encl),
 						 sgx_ipi_cb, NULL, 1);
 				ret = __sgx_encl_ewb(encl, epc_page, va_page,
-						     va_offset, pt);
+						     va_offset, page_index);
 			}
 		}
 
@@ -364,17 +355,11 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
 {
 	struct sgx_encl_page *encl_page = epc_page->owner;
 	struct sgx_encl *encl = encl_page->encl;
-	unsigned int pt;
 	int ret;
 
-	if (encl_page->desc & SGX_ENCL_PAGE_TCS)
-		pt = SGX_SECINFO_TCS;
-	else
-		pt = SGX_SECINFO_REG;
-
 	mutex_lock(&encl->lock);
 
-	sgx_encl_ewb(epc_page, pt);
+	sgx_encl_ewb(epc_page, SGX_ENCL_PAGE_INDEX(encl_page));
 	if (atomic_read(&encl->flags) & SGX_ENCL_DEAD) {
 		ret = __eremove(sgx_epc_addr(epc_page));
 		WARN(ret, "EREMOVE returned %d\n", ret);
@@ -386,7 +371,7 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
 
 	if (!encl->secs_child_cnt &&
 	    (atomic_read(&encl->flags) &
 	     (SGX_ENCL_DEAD | SGX_ENCL_INITIALIZED))) {
-		sgx_encl_ewb(encl->secs.epc_page, SGX_SECINFO_SECS);
+		sgx_encl_ewb(encl->secs.epc_page, PFN_DOWN(encl->size));
 		sgx_free_page(encl->secs.epc_page);
 		encl->secs.epc_page = NULL;