From patchwork Thu Aug 25 08:08:02 2022
X-Patchwork-Submitter: Jarkko Sakkinen
X-Patchwork-Id: 12954328
From: Jarkko Sakkinen
To: linux-sgx@vger.kernel.org
Cc: Jarkko Sakkinen,
    Paul Menzel, Haitao Huang, Dave Hansen, Reinette Chatre,
    Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)),
    "H. Peter Anvin",
    linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT))
Subject: [PATCH v3] x86/sgx: Do not consider unsanitized pages an error
Date: Thu, 25 Aug 2022 11:08:02 +0300
Message-Id: <20220825080802.259528-1-jarkko@kernel.org>
X-Mailer: git-send-email 2.37.1
X-Mailing-List: linux-sgx@vger.kernel.org

If sgx_dirty_page_list ends up being non-empty, this currently triggers a
WARN_ON(), which produces a lot of noise and can even crash the kernel,
depending on the kernel command line. However, if the SGX subsystem
initialization is aborted mid-flight, the sanitization process can be
interrupted partway through, leaving sgx_dirty_page_list non-empty for
legitimate reasons.

Replace this faulty behavior with a more verbose version of
__sgx_sanitize_pages(), which can optionally print the EREMOVE error code
and the number of unsanitized pages.

Link: https://lore.kernel.org/linux-sgx/20220825051827.246698-1-jarkko@kernel.org/T/#u
Reported-by: Paul Menzel
Fixes: 51ab30eb2ad4 ("x86/sgx: Replace section->init_laundry_list with sgx_dirty_page_list")
Signed-off-by: Jarkko Sakkinen
Cc: Haitao Huang
Cc: Dave Hansen
Cc: Reinette Chatre
Reviewed-by: Haitao Huang
---
v3:
- Remove WARN_ON().
- Tuned comments and the commit message a bit.
v2:
- Replaced WARN_ON() with optional pr_info() inside __sgx_sanitize_pages().
- Rewrote the commit message.
- Added the fixes tag.
---
 arch/x86/kernel/cpu/sgx/main.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 515e2a5f25bb..d204520a5e26 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -50,16 +50,17 @@ static LIST_HEAD(sgx_dirty_page_list);
  * from the input list, and made available for the page allocator. SECS pages
  * prepending their children in the input list are left intact.
  */
-static void __sgx_sanitize_pages(struct list_head *dirty_page_list)
+static void __sgx_sanitize_pages(struct list_head *dirty_page_list, bool verbose)
 {
 	struct sgx_epc_page *page;
+	int dirty_count = 0;
 	LIST_HEAD(dirty);
 	int ret;
 
 	/* dirty_page_list is thread-local, no need for a lock: */
 	while (!list_empty(dirty_page_list)) {
 		if (kthread_should_stop())
-			return;
+			break;
 
 		page = list_first_entry(dirty_page_list, struct sgx_epc_page,
					list);
 
@@ -90,14 +91,22 @@ static void __sgx_sanitize_pages(struct list_head *dirty_page_list)
 			list_del(&page->list);
 			sgx_free_epc_page(page);
 		} else {
+			if (verbose)
+				pr_err_ratelimited(EREMOVE_ERROR_MESSAGE, ret, ret);
+
 			/* The page is not yet clean - move to the dirty list. */
 			list_move_tail(&page->list, &dirty);
+			dirty_count++;
 		}
 
 		cond_resched();
 	}
 
 	list_splice(&dirty, dirty_page_list);
+
+	/* Can happen, when the initialization is retracted: */
+	if (verbose && dirty_count > 0)
+		pr_info("%d unsanitized pages\n", dirty_count);
 }
 
 static bool sgx_reclaimer_age(struct sgx_epc_page *epc_page)
@@ -394,11 +403,8 @@ static int ksgxd(void *p)
 	 * Sanitize pages in order to recover from kexec(). The 2nd pass is
 	 * required for SECS pages, whose child pages blocked EREMOVE.
 	 */
-	__sgx_sanitize_pages(&sgx_dirty_page_list);
-	__sgx_sanitize_pages(&sgx_dirty_page_list);
-
-	/* sanity check: */
-	WARN_ON(!list_empty(&sgx_dirty_page_list));
+	__sgx_sanitize_pages(&sgx_dirty_page_list, false);
+	__sgx_sanitize_pages(&sgx_dirty_page_list, true);
 
 	while (!kthread_should_stop()) {
 		if (try_to_freeze())