From patchwork Thu Apr 28 20:11:24 2022
X-Patchwork-Submitter: Reinette Chatre
X-Patchwork-Id: 12831121
From: Reinette Chatre
To: dave.hansen@linux.intel.com, jarkko@kernel.org, linux-sgx@vger.kernel.org
Cc: haitao.huang@intel.com
Subject: [RFC PATCH 1/4] x86/sgx: Do not free backing memory on ENCLS[ELDU] failure
Date: Thu, 28 Apr 2022 13:11:24 -0700
Message-Id: <6fad9ec14ee94eaeb6d287988db60875da83b7bb.1651171455.git.reinette.chatre@intel.com>

Recent commit 08999b2489b4 ("x86/sgx: Free backing memory after faulting
the enclave page") frees the backing storage after it becomes unneeded
because its data has been loaded back into the EPC (Enclave Page Cache).

The backing storage is freed after running ENCLS[ELDU], whether
ENCLS[ELDU] succeeded or not. If ENCLS[ELDU] failed, the data within
that page is thus lost.

Exit with an error, without removing the backing storage, if it could
not be restored to the enclave.
Fixes: 08999b2489b4 ("x86/sgx: Free backing memory after faulting the enclave page")
Signed-off-by: Reinette Chatre
---
 arch/x86/kernel/cpu/sgx/encl.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 1a2cbe44b8d9..e5d2661800ac 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -81,6 +81,10 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
 			ENCLS_WARN(ret, "ELDU");

 		ret = -EFAULT;
+		kunmap_atomic(pcmd_page);
+		kunmap_atomic((void *)(unsigned long)pginfo.contents);
+		sgx_encl_put_backing(&b, false);
+		return ret;
 	}

 	memset(pcmd_page + b.pcmd_offset, 0, sizeof(struct sgx_pcmd));

From patchwork Thu Apr 28 20:11:25 2022
X-Patchwork-Submitter: Reinette Chatre
X-Patchwork-Id: 12831123
From: Reinette Chatre
To: dave.hansen@linux.intel.com, jarkko@kernel.org, linux-sgx@vger.kernel.org
Cc: haitao.huang@intel.com
Subject: [RFC PATCH 2/4] x86/sgx: Set dirty bit after modifying page contents
Date: Thu, 28 Apr 2022 13:11:25 -0700

Recent commit 08999b2489b4 ("x86/sgx: Free backing memory after faulting
the enclave page") expanded __sgx_encl_eldu() to clear an enclave page's
PCMD (Paging Crypto MetaData) from the PCMD page in the backing store
after the enclave page is restored to the enclave.

Since the PCMD page in the backing store is modified, the page should be
marked dirty when releasing the reference to ensure the modified data is
retained.
Fixes: 08999b2489b4 ("x86/sgx: Free backing memory after faulting the enclave page")
Signed-off-by: Reinette Chatre
---
 arch/x86/kernel/cpu/sgx/encl.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index e5d2661800ac..e03f124ce772 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -98,7 +98,7 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
 	kunmap_atomic(pcmd_page);
 	kunmap_atomic((void *)(unsigned long)pginfo.contents);

-	sgx_encl_put_backing(&b, false);
+	sgx_encl_put_backing(&b, true);

 	sgx_encl_truncate_backing_page(encl, page_index);

From patchwork Thu Apr 28 20:11:26 2022
X-Patchwork-Submitter: Reinette Chatre
X-Patchwork-Id: 12831122
From: Reinette Chatre
To: dave.hansen@linux.intel.com, jarkko@kernel.org, linux-sgx@vger.kernel.org
Cc: haitao.huang@intel.com
Subject: [RFC PATCH 3/4] x86/sgx: Obtain backing storage page with enclave mutex held
Date: Thu, 28 Apr 2022 13:11:26 -0700
Message-Id: <24fd9203331d11918b785c6a67f85d799d100be8.1651171455.git.reinette.chatre@intel.com>

The SGX backing storage is accessed on two paths: when there are
insufficient enclave pages in the EPC, the reclaimer moves enclave pages
to the backing storage, and when enclaves access pages that have been
moved to the backing storage, they are retrieved from there as part of
page fault handling.

An oversubscribed SGX system will often run the reclaimer and the page
fault handler concurrently and needs to ensure that the backing store is
accessed safely by both.
The scenarios to consider are: (a) faulting a page right after it was
reclaimed, and (b) faulting a page while reclaiming another page with
which it shares a PCMD page.

The reclaimer obtains pages from the backing storage without holding the
enclave mutex and thus risks accessing the backing storage concurrently
with the page fault handler, which does access the backing storage with
the enclave mutex held.

In the scenario below a page is written to the backing store by the
reclaimer and then immediately faulted back, before the reclaimer is
able to set the dirty bit of the page:

sgx_reclaim_pages() {                    sgx_vma_fault() {
  ...                                      ...
  /* write data to backing store */
  sgx_reclaimer_write();
                                           mutex_lock(&encl->lock);
                                           __sgx_encl_eldu() {
                                             ...
                                             /*
                                              * page not dirty -
                                              * contents may not be
                                              * up to date
                                              */
                                             sgx_encl_get_backing();
                                             ...
                                           }
  ...
  /* set page dirty */
  sgx_encl_put_backing();
                                           ...
                                           mutex_unlock(&encl->lock);
}                                        }

While it is not possible to concurrently reclaim and fault the same
enclave page, PCMD pages are shared between enclave pages resident in
the enclave and enclave pages in the backing store. In the scenario
below a PCMD page is truncated from the backing store after all of its
pages have been loaded into the enclave, at the same time that the PCMD
page is loaded from the backing store because one of its pages is being
reclaimed:

sgx_reclaim_pages() {                    sgx_vma_fault() {
  ...
                                           mutex_lock(&encl->lock);
  ...                                      __sgx_encl_eldu() {
                                             ...
                                             if (pcmd_page_empty) {
  /*                                           /*
   * EPC page being reclaimed                   * PCMD page truncated
   * shares a PCMD page with an                 * while requested from
   * enclave page that is being                 * reclaimer.
   * faulted in.                                */
   */
  sgx_encl_get_backing()  <---------->         sgx_encl_truncate_backing_page()
                                             }
                                           }
}                                        }

Protect the reclaimer's backing store access with the enclave's mutex to
ensure that it can safely run concurrently with the page fault handler.
Signed-off-by: Reinette Chatre
---
 arch/x86/kernel/cpu/sgx/main.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 0e8741a80cf3..ae79b8d6f645 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -252,6 +252,7 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page,
 	sgx_encl_ewb(epc_page, backing);
 	encl_page->epc_page = NULL;
 	encl->secs_child_cnt--;
+	sgx_encl_put_backing(backing, true);

 	if (!encl->secs_child_cnt && test_bit(SGX_ENCL_INITIALIZED, &encl->flags)) {
 		ret = sgx_encl_get_backing(encl, PFN_DOWN(encl->size),
@@ -323,11 +324,14 @@ static void sgx_reclaim_pages(void)
 			goto skip;

 		page_index = PFN_DOWN(encl_page->desc - encl_page->encl->base);
+
+		mutex_lock(&encl_page->encl->lock);
 		ret = sgx_encl_get_backing(encl_page->encl, page_index, &backing[i]);
-		if (ret)
+		if (ret) {
+			mutex_unlock(&encl_page->encl->lock);
 			goto skip;
+		}

-		mutex_lock(&encl_page->encl->lock);
 		encl_page->desc |= SGX_ENCL_PAGE_BEING_RECLAIMED;
 		mutex_unlock(&encl_page->encl->lock);
 		continue;
@@ -355,7 +359,6 @@ static void sgx_reclaim_pages(void)
 		encl_page = epc_page->owner;

 		sgx_reclaimer_write(epc_page, &backing[i]);
-		sgx_encl_put_backing(&backing[i], true);
 		kref_put(&encl_page->encl->refcount, sgx_encl_release);

 		epc_page->flags &= ~SGX_EPC_PAGE_RECLAIMER_TRACKED;

From patchwork Thu Apr 28 20:11:27 2022
X-Patchwork-Submitter: Reinette Chatre
X-Patchwork-Id: 12831119
From: Reinette Chatre
To: dave.hansen@linux.intel.com, jarkko@kernel.org, linux-sgx@vger.kernel.org
Cc: haitao.huang@intel.com
Subject: [RFC PATCH 4/4] x86/sgx: Do not allocate backing pages when loading from backing store
Date: Thu, 28 Apr 2022 13:11:27 -0700
Message-Id: <117862f7eb5bfef54d3b28f53746e6cf9e05508e.1651171455.git.reinette.chatre@intel.com>
Preface: This is intended as a debugging aid, forming part of the
investigation into ENCLS[ELDU] returning #GP. It is not intended for
inclusion.

Changelog:

The shmem backing store is used to (a) store encrypted enclave pages
when they are reclaimed from the enclave, and (b) load encrypted enclave
pages back into the enclave when they are accessed.

The same interface, sgx_encl_get_backing_page(), is used whether a new
backing page is needed to store an enclave page being reclaimed or
whether an existing backing page is loaded to restore an enclave page to
the EPC. This is because of this flow:

sgx_encl_get_backing_page()
  shmem_read_mapping_page_gfp()
    shmem_getpage_gfp(..., ..., ..., SGP_CACHE, ...)

With this interface the backing pages are retrieved with the SGP_CACHE
flag, which automatically allocates a backing page if it is not present.

In an effort to diagnose ENCLS[ELDU] returning #GP, the interface is
split to ensure that when a backing page is expected to exist it is only
looked up, not allocated. Replace sgx_encl_get_backing() with
sgx_encl_lookup_backing() and sgx_encl_alloc_backing() to distinguish
whether a backing page needs to be allocated or is expected to exist.
sgx_encl_alloc_backing() is used by the reclaimer during the ENCLS[EWB]
flow and sgx_encl_lookup_backing() is used in the ENCLS[ELDU] flow. An
IDA is used to keep track of PCMD page allocation and to ensure these
pages are allocated only once.

This patch revealed that there are scenarios where the backing store
does not contain a page that is expected to exist -
sgx_encl_lookup_backing() fails with -ENOENT. This would explain
ENCLS[ELDU] returning #GP, since previously such a missing page would be
allocated and would thus trigger a MAC verification failure.
Specifically, with the included debugging enabled, an oversubscription
stress test encounters the error:

sgx: sgx_encl_get_backing_page:847 fail 1 backing page with -2

The reason why the backing page is not present is not understood.

Signed-off-by: Reinette Chatre
---
 arch/x86/kernel/cpu/sgx/encl.c  | 83 ++++++++++++++++++++++++++++-----
 arch/x86/kernel/cpu/sgx/encl.h  |  8 +++-
 arch/x86/kernel/cpu/sgx/ioctl.c |  1 +
 arch/x86/kernel/cpu/sgx/main.c  |  6 +--
 4 files changed, 82 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index e03f124ce772..22ed886dc825 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -60,7 +60,7 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,

 	page_pcmd_off = sgx_encl_get_backing_page_pcmd_offset(encl, page_index);

-	ret = sgx_encl_get_backing(encl, page_index, &b);
+	ret = sgx_encl_lookup_backing(encl, page_index, &b);
 	if (ret)
 		return ret;

@@ -102,8 +102,10 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,

 	sgx_encl_truncate_backing_page(encl, page_index);

-	if (pcmd_page_empty)
+	if (pcmd_page_empty) {
+		ida_free(&encl->pcmd_in_backing, PFN_DOWN(page_pcmd_off));
 		sgx_encl_truncate_backing_page(encl, PFN_DOWN(page_pcmd_off));
+	}

 	return ret;
 }
@@ -617,6 +619,7 @@ void sgx_encl_release(struct kref *ref)
 	if (encl->backing)
 		fput(encl->backing);

+	ida_destroy(&encl->pcmd_in_backing);
 	cleanup_srcu_struct(&encl->srcu);

@@ -807,17 +810,39 @@ const cpumask_t *sgx_encl_cpumask(struct sgx_encl *encl)
 }

 static struct page *sgx_encl_get_backing_page(struct sgx_encl *encl,
-					      pgoff_t index)
+					      pgoff_t index, enum sgp_type sgp)
 {
 	struct inode *inode = encl->backing->f_path.dentry->d_inode;
-	struct address_space *mapping = inode->i_mapping;
-	gfp_t gfpmask = mapping_gfp_mask(mapping);
+	struct page *page = NULL;
+	int ret;
+
+	ret = shmem_getpage(inode, index, &page, sgp);
+	if (ret) {
+		pr_debug("%s:%d fail %d backing page with %d\n",
+			 __func__, __LINE__, sgp, ret);
+		return ERR_PTR(ret);
+	}
+
+	if (!page) {
+		pr_debug("%s:%d fail %d backing page with NULL page\n",
+			 __func__, __LINE__, sgp);
+		return ERR_PTR(-EFAULT);
+	}

-	return shmem_read_mapping_page_gfp(mapping, index, gfpmask);
+	if (PageHWPoison(page)) {
+		pr_debug("%s:%d fail %d backing page with poison page\n",
+			 __func__, __LINE__, sgp);
+		unlock_page(page);
+		put_page(page);
+		return ERR_PTR(-EIO);
+	}
+
+	unlock_page(page);
+	return page;
 }

 /**
- * sgx_encl_get_backing() - Pin the backing storage
+ * sgx_encl_alloc_backing() - Pin the backing storage
  * @encl:	an enclave pointer
  * @page_index:	enclave page index
  * @backing:	data for accessing backing storage for the page
@@ -829,18 +854,54 @@ static struct page *sgx_encl_get_backing_page(struct sgx_encl *encl,
  * 0 on success,
  * -errno otherwise.
  */
-int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
-			 struct sgx_backing *backing)
+int sgx_encl_alloc_backing(struct sgx_encl *encl, unsigned long page_index,
+			   struct sgx_backing *backing)
+{
+	pgoff_t page_pcmd_off = sgx_encl_get_backing_page_pcmd_offset(encl, page_index);
+	pgoff_t pcmd_index = PFN_DOWN(page_pcmd_off);
+	struct page *contents;
+	struct page *pcmd;
+	int ret;
+
+	contents = sgx_encl_get_backing_page(encl, page_index, SGP_CACHE);
+	if (IS_ERR(contents))
+		return PTR_ERR(contents);
+
+	ret = ida_alloc_range(&encl->pcmd_in_backing, pcmd_index,
+			      pcmd_index, GFP_KERNEL);
+	if (ret == -ENOSPC) {
+		/* pcmd_index backing page already created, just look it up */
+		pcmd = sgx_encl_get_backing_page(encl, pcmd_index, SGP_NOALLOC);
+	} else if (ret >= 0) {
+		pcmd = sgx_encl_get_backing_page(encl, pcmd_index, SGP_CACHE);
+	} else {
+		pcmd = ERR_PTR(ret);
+	}
+
+	if (IS_ERR(pcmd)) {
+		put_page(contents);
+		return PTR_ERR(pcmd);
+	}
+
+	backing->page_index = page_index;
+	backing->contents = contents;
+	backing->pcmd = pcmd;
+	backing->pcmd_offset = page_pcmd_off & (PAGE_SIZE - 1);
+
+	return 0;
+}
+
+int sgx_encl_lookup_backing(struct sgx_encl *encl, unsigned long page_index,
+			    struct sgx_backing *backing)
 {
 	pgoff_t page_pcmd_off = sgx_encl_get_backing_page_pcmd_offset(encl, page_index);
 	struct page *contents;
 	struct page *pcmd;

-	contents = sgx_encl_get_backing_page(encl, page_index);
+	contents = sgx_encl_get_backing_page(encl, page_index, SGP_NOALLOC);
 	if (IS_ERR(contents))
 		return PTR_ERR(contents);

-	pcmd = sgx_encl_get_backing_page(encl, PFN_DOWN(page_pcmd_off));
+	pcmd = sgx_encl_get_backing_page(encl, PFN_DOWN(page_pcmd_off), SGP_NOALLOC);
 	if (IS_ERR(pcmd)) {
 		put_page(contents);
 		return PTR_ERR(pcmd);
diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index 66adb8faec45..2a8d3bd3338f 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -8,6 +8,7 @@
 #define _X86_ENCL_H

 #include
+#include
 #include
 #include
 #include
@@ -62,6 +63,7 @@ struct sgx_encl {
 	cpumask_t cpumask;
 	struct file *backing;
+	struct ida pcmd_in_backing;
 	struct kref refcount;
 	struct list_head va_pages;
 	unsigned long mm_list_version;
@@ -107,8 +109,10 @@ int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
 void sgx_encl_release(struct kref *ref);
 int sgx_encl_mm_add(struct sgx_encl *encl, struct mm_struct *mm);
 const cpumask_t *sgx_encl_cpumask(struct sgx_encl *encl);
-int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
-			 struct sgx_backing *backing);
+int sgx_encl_alloc_backing(struct sgx_encl *encl, unsigned long page_index,
+			   struct sgx_backing *backing);
+int sgx_encl_lookup_backing(struct sgx_encl *encl, unsigned long page_index,
+			    struct sgx_backing *backing);
 void sgx_encl_put_backing(struct sgx_backing *backing, bool do_write);
 int sgx_encl_test_and_clear_young(struct mm_struct *mm,
 				  struct sgx_encl_page *page);
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 1c2f40b72551..94d3817b40ff 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -82,6 +82,7 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
 	}

 	encl->backing = backing;
+	ida_init(&encl->pcmd_in_backing);

 	secs_epc = sgx_alloc_epc_page(&encl->secs, true);
 	if (IS_ERR(secs_epc)) {
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index ae79b8d6f645..148ec695b1b3 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -255,8 +255,8 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page,
 	sgx_encl_put_backing(backing, true);

 	if (!encl->secs_child_cnt && test_bit(SGX_ENCL_INITIALIZED, &encl->flags)) {
-		ret = sgx_encl_get_backing(encl, PFN_DOWN(encl->size),
-					   &secs_backing);
+		ret = sgx_encl_alloc_backing(encl, PFN_DOWN(encl->size),
+					     &secs_backing);
 		if (ret)
 			goto out;

@@ -326,7 +326,7 @@ static void sgx_reclaim_pages(void)
 		page_index = PFN_DOWN(encl_page->desc - encl_page->encl->base);

 		mutex_lock(&encl_page->encl->lock);
-		ret = sgx_encl_get_backing(encl_page->encl, page_index, &backing[i]);
+		ret = sgx_encl_alloc_backing(encl_page->encl, page_index, &backing[i]);
 		if (ret) {
 			mutex_unlock(&encl_page->encl->lock);
 			goto skip;