From patchwork Mon Sep 16 10:17:55 2019
X-Patchwork-Submitter: Jarkko Sakkinen
X-Patchwork-Id: 11146723
From: Jarkko Sakkinen
To: linux-sgx@vger.kernel.org
Cc: Jarkko Sakkinen, Sean Christopherson, Shay Katz-zamir, Serge Ayoun
Subject: [PATCH v3 09/17] x86/sgx: Move SGX_ENCL_DEAD check to sgx_reclaimer_write()
Date: Mon, 16 Sep 2019 13:17:55 +0300
Message-Id: <20190916101803.30726-10-jarkko.sakkinen@linux.intel.com>
In-Reply-To: <20190916101803.30726-1-jarkko.sakkinen@linux.intel.com>
References: <20190916101803.30726-1-jarkko.sakkinen@linux.intel.com>
X-Mailer: git-send-email 2.20.1
X-Mailing-List: linux-sgx@vger.kernel.org

Do enclave state checks only in sgx_reclaimer_write(). Checking the
enclave state is not part of the sgx_encl_ewb() flow, and the check is
done differently for the SECS than for addressable pages.
Cc: Sean Christopherson
Cc: Shay Katz-zamir
Cc: Serge Ayoun
Signed-off-by: Jarkko Sakkinen
---
 arch/x86/kernel/cpu/sgx/reclaim.c | 69 +++++++++++++++----------------
 1 file changed, 34 insertions(+), 35 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
index e758a06919e4..a3e36f959c74 100644
--- a/arch/x86/kernel/cpu/sgx/reclaim.c
+++ b/arch/x86/kernel/cpu/sgx/reclaim.c
@@ -308,47 +308,45 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
 
 	encl_page->desc &= ~SGX_ENCL_PAGE_RECLAIMED;
 
-	if (!(atomic_read(&encl->flags) & SGX_ENCL_DEAD)) {
-		va_page = list_first_entry(&encl->va_pages, struct sgx_va_page,
-					   list);
-		va_offset = sgx_alloc_va_slot(va_page);
-		if (sgx_va_page_full(va_page))
-			list_move_tail(&va_page->list, &encl->va_pages);
+	va_page = list_first_entry(&encl->va_pages, struct sgx_va_page,
+				   list);
+	va_offset = sgx_alloc_va_slot(va_page);
+	if (sgx_va_page_full(va_page))
+		list_move_tail(&va_page->list, &encl->va_pages);
+
+	ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
+			     page_index);
+	if (ret == SGX_NOT_TRACKED) {
+		ret = __etrack(sgx_epc_addr(encl->secs.epc_page));
+		if (ret) {
+			if (encls_failed(ret) ||
+			    encls_returned_code(ret))
+				ENCLS_WARN(ret, "ETRACK");
+		}
 
 		ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
 				     page_index);
 		if (ret == SGX_NOT_TRACKED) {
-			ret = __etrack(sgx_epc_addr(encl->secs.epc_page));
-			if (ret) {
-				if (encls_failed(ret) ||
-				    encls_returned_code(ret))
-					ENCLS_WARN(ret, "ETRACK");
-			}
-
-			ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
-					     page_index);
-			if (ret == SGX_NOT_TRACKED) {
-				/*
-				 * Slow path, send IPIs to kick cpus out of the
-				 * enclave. Note, it's imperative that the cpu
-				 * mask is generated *after* ETRACK, else we'll
-				 * miss cpus that entered the enclave between
-				 * generating the mask and incrementing epoch.
-				 */
-				on_each_cpu_mask(sgx_encl_ewb_cpumask(encl),
-						 sgx_ipi_cb, NULL, 1);
-				ret = __sgx_encl_ewb(encl, epc_page, va_page,
-						     va_offset, page_index);
-			}
+			/*
+			 * Slow path, send IPIs to kick cpus out of the
+			 * enclave. Note, it's imperative that the cpu
+			 * mask is generated *after* ETRACK, else we'll
+			 * miss cpus that entered the enclave between
+			 * generating the mask and incrementing epoch.
+			 */
+			on_each_cpu_mask(sgx_encl_ewb_cpumask(encl),
+					 sgx_ipi_cb, NULL, 1);
+			ret = __sgx_encl_ewb(encl, epc_page, va_page,
					     va_offset, page_index);
 		}
+	}
 
-		if (ret)
-			if (encls_failed(ret) || encls_returned_code(ret))
-				ENCLS_WARN(ret, "EWB");
+	if (ret)
+		if (encls_failed(ret) || encls_returned_code(ret))
+			ENCLS_WARN(ret, "EWB");
 
-		encl_page->desc |= va_offset;
-		encl_page->va_page = va_page;
-	}
+	encl_page->desc |= va_offset;
+	encl_page->va_page = va_page;
 }
 
 static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
@@ -359,10 +357,11 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
 
 	mutex_lock(&encl->lock);
 
-	sgx_encl_ewb(epc_page, SGX_ENCL_PAGE_INDEX(encl_page));
 	if (atomic_read(&encl->flags) & SGX_ENCL_DEAD) {
 		ret = __eremove(sgx_epc_addr(epc_page));
 		WARN(ret, "EREMOVE returned %d\n", ret);
+	} else {
+		sgx_encl_ewb(epc_page, SGX_ENCL_PAGE_INDEX(encl_page));
 	}
 
 	encl_page->epc_page = NULL;
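
Editor's note: because most of sgx_encl_ewb() only loses one level of
indentation, the interleaved hunks above are awkward to read as a whole.
The following is an illustrative paraphrase of the post-patch flow, not
the applied code: the ETRACK error reporting and the nested EWB warning
are compressed, and the local-variable setup at the top of each function
(which lies outside the hunks) is assumed from the surrounding driver
code. Every type and helper used here already appears in the diff.

/*
 * Sketch of the post-patch control flow (paraphrase, not the applied
 * code): error reporting is compressed, locals assumed from the driver.
 */
static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
			 unsigned int page_index)
{
	struct sgx_encl_page *encl_page = epc_page->owner;
	struct sgx_encl *encl = encl_page->encl;
	struct sgx_va_page *va_page;
	unsigned int va_offset;
	int ret;

	encl_page->desc &= ~SGX_ENCL_PAGE_RECLAIMED;

	/* No SGX_ENCL_DEAD check here any more: pick a VA slot and go. */
	va_page = list_first_entry(&encl->va_pages, struct sgx_va_page, list);
	va_offset = sgx_alloc_va_slot(va_page);
	if (sgx_va_page_full(va_page))
		list_move_tail(&va_page->list, &encl->va_pages);

	ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset, page_index);
	if (ret == SGX_NOT_TRACKED) {
		/* Increment the tracking epoch and retry once. */
		__etrack(sgx_epc_addr(encl->secs.epc_page));
		ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
				     page_index);
		if (ret == SGX_NOT_TRACKED) {
			/* Slow path: IPI cpus out of the enclave, retry. */
			on_each_cpu_mask(sgx_encl_ewb_cpumask(encl),
					 sgx_ipi_cb, NULL, 1);
			ret = __sgx_encl_ewb(encl, epc_page, va_page,
					     va_offset, page_index);
		}
	}

	if (ret && (encls_failed(ret) || encls_returned_code(ret)))
		ENCLS_WARN(ret, "EWB");

	encl_page->desc |= va_offset;
	encl_page->va_page = va_page;
}

static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
{
	struct sgx_encl_page *encl_page = epc_page->owner;
	struct sgx_encl *encl = encl_page->encl;
	int ret;

	mutex_lock(&encl->lock);

	/* The enclave state check now lives here, in the caller. */
	if (atomic_read(&encl->flags) & SGX_ENCL_DEAD) {
		/* Dead enclave: just remove the page from the EPC. */
		ret = __eremove(sgx_epc_addr(epc_page));
		WARN(ret, "EREMOVE returned %d\n", ret);
	} else {
		/* Live enclave: write the page back to regular memory. */
		sgx_encl_ewb(epc_page, SGX_ENCL_PAGE_INDEX(encl_page));
	}

	encl_page->epc_page = NULL;
	/* rest of the function is not touched by this patch */
}

The point of the move is visible in the else branch: EWB is attempted
only for live enclaves, dead enclaves take the EREMOVE path, and
sgx_encl_ewb() itself no longer needs to know about SGX_ENCL_DEAD.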