From patchwork Mon Sep 5 23:54:14 2022
X-Patchwork-Submitter: Jarkko Sakkinen
X-Patchwork-Id: 12966627
From: Jarkko Sakkinen
To: linux-sgx@vger.kernel.org
Cc: Haitao Huang, Vijay Dhanraj, Reinette Chatre, Dave Hansen,
    Jarkko Sakkinen, stable@vger.kernel.org, Paul Menzel,
    Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)),
    "H. Peter Anvin",
    linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT))
Subject: [PATCH v2 1/2] x86/sgx: Do not fail on incomplete sanitization on premature stop of ksgxd
Date: Tue, 6 Sep 2022 02:54:14 +0300
Message-Id: <20220905235415.9519-2-jarkko@kernel.org>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20220905235415.9519-1-jarkko@kernel.org>
References: <20220905235415.9519-1-jarkko@kernel.org>
X-Mailing-List: linux-sgx@vger.kernel.org

Unsanitized pages trigger WARN_ON() unconditionally, which can panic the
whole computer if /proc/sys/kernel/panic_on_warn is set.

In sgx_init(), if misc_register() fails, or misc_register() succeeds but
neither sgx_drv_init() nor sgx_vepc_init() succeeds, then ksgxd will be
prematurely stopped. This may leave unsanitized pages, which will result
in a false warning.

Refine __sgx_sanitize_pages() to return:

1. Zero when the sanitization process is complete or ksgxd has been
   requested to stop.
2. The number of unsanitized pages otherwise.

Use the return value as the criterion for triggering output, and tone
down the output to pr_err() to prevent the whole system from being taken
down if for some reason the sanitization process does not complete.
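
After the change, the start-up path of ksgxd() effectively reduces to the
following simplified sketch (freezer setup and the reclaimer loop omitted;
see the diff below for the actual change):

	/* 1st pass: sanitize regular EPC pages; SECS pages with children fail EREMOVE. */
	__sgx_sanitize_pages(&sgx_dirty_page_list);

	/*
	 * 2nd pass: the children are gone, so the remaining SECS pages should
	 * sanitize cleanly. A non-zero return value means that pages were left
	 * unsanitized even though ksgxd was not asked to stop, which warrants
	 * a warning. A premature stop returns zero and stays silent.
	 */
	WARN_ON(__sgx_sanitize_pages(&sgx_dirty_page_list));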
Link: https://lore.kernel.org/linux-sgx/20220825051827.246698-1-jarkko@kernel.org/T/#u
Fixes: 51ab30eb2ad4 ("x86/sgx: Replace section->init_laundry_list with sgx_dirty_page_list")
Cc: stable@vger.kernel.org # v5.13+
Reported-by: Paul Menzel
Signed-off-by: Jarkko Sakkinen
---
v8:
- Discard changes that are not relevant for the stable fix. This does the
  absolute minimum to address the bug:
  https://lore.kernel.org/linux-sgx/a5fa56bdc57d6472a306bd8d795afc674b724538.camel@intel.com/
v7:
- Rewrote the commit message.
- Do not return -ECANCELED on premature stop. Instead, use zero for both
  premature stop and complete sanitization.
v6:
- Address Reinette's feedback:
  https://lore.kernel.org/linux-sgx/Yw6%2FiTzSdSw%2FY%2FVO@kernel.org/
v5:
- Add the klog dump and sysctl option to the commit message.
v4:
- Explain expectations for dirty_page_list in the function header, instead
  of an inline comment.
- Improve the commit message to explain the conditions better.
- Return the number of pages left dirty to ksgxd() and print a warning
  after the 2nd call, if there are any.
v3:
- Remove WARN_ON().
- Tuned comments and the commit message a bit.
v2:
- Replaced WARN_ON() with optional pr_info() inside __sgx_sanitize_pages().
- Rewrote the commit message.
- Added the fixes tag.
---
 arch/x86/kernel/cpu/sgx/main.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 515e2a5f25bb..2ec2d7b7da54 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -49,9 +49,13 @@ static LIST_HEAD(sgx_dirty_page_list);
  * Reset post-kexec EPC pages to the uninitialized state. The pages are removed
  * from the input list, and made available for the page allocator. SECS pages
  * prepending their children in the input list are left intact.
+ *
+ * Return 0 when sanitization was successful or kthread was stopped, and the
+ * number of unsanitized pages otherwise.
  */
-static void __sgx_sanitize_pages(struct list_head *dirty_page_list)
+static unsigned long __sgx_sanitize_pages(struct list_head *dirty_page_list)
 {
+	unsigned long left_dirty = 0;
 	struct sgx_epc_page *page;
 	LIST_HEAD(dirty);
 	int ret;
@@ -59,7 +63,7 @@ static void __sgx_sanitize_pages(struct list_head *dirty_page_list)
 	/* dirty_page_list is thread-local, no need for a lock: */
 	while (!list_empty(dirty_page_list)) {
 		if (kthread_should_stop())
-			return;
+			return 0;
 
 		page = list_first_entry(dirty_page_list, struct sgx_epc_page, list);
 
@@ -92,12 +96,14 @@ static void __sgx_sanitize_pages(struct list_head *dirty_page_list)
 		} else {
 			/* The page is not yet clean - move to the dirty list. */
 			list_move_tail(&page->list, &dirty);
+			left_dirty++;
 		}
 
 		cond_resched();
 	}
 
 	list_splice(&dirty, dirty_page_list);
+	return left_dirty;
 }
 
 static bool sgx_reclaimer_age(struct sgx_epc_page *epc_page)
@@ -388,6 +394,8 @@ void sgx_reclaim_direct(void)
 
 static int ksgxd(void *p)
 {
+	unsigned long left_dirty;
+
 	set_freezable();
 
 	/*
@@ -395,10 +403,7 @@ static int ksgxd(void *p)
 	 * required for SECS pages, whose child pages blocked EREMOVE.
 	 */
 	__sgx_sanitize_pages(&sgx_dirty_page_list);
-	__sgx_sanitize_pages(&sgx_dirty_page_list);
-
-	/* sanity check: */
-	WARN_ON(!list_empty(&sgx_dirty_page_list));
+	WARN_ON(__sgx_sanitize_pages(&sgx_dirty_page_list));
 
 	while (!kthread_should_stop()) {
 		if (try_to_freeze())

From patchwork Mon Sep 5 23:54:15 2022
X-Patchwork-Submitter: Jarkko Sakkinen
X-Patchwork-Id: 12966626
From: Jarkko Sakkinen
To: linux-sgx@vger.kernel.org
Cc: Haitao Huang, Vijay Dhanraj, Reinette Chatre, Dave Hansen,
    Jarkko Sakkinen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)),
    "H. Peter Anvin",
    linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT))
Subject: [PATCH v2 2/2] x86/sgx: Handle VA page allocation failure for EAUG on PF.
Date: Tue, 6 Sep 2022 02:54:15 +0300
Message-Id: <20220905235415.9519-3-jarkko@kernel.org>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20220905235415.9519-1-jarkko@kernel.org>
References: <20220905235415.9519-1-jarkko@kernel.org>
X-Mailing-List: linux-sgx@vger.kernel.org

From: Haitao Huang

VM_FAULT_NOPAGE is the expected behaviour for the -EBUSY failure path when
augmenting a page, as this means that the reclaimer thread has been
triggered, and the intention is just to round-trip in ring-3 and retry
with a new page fault.

Fixes: 5a90d2c3f5ef ("x86/sgx: Support adding of pages to an initialized enclave")
Signed-off-by: Haitao Huang
Tested-by: Vijay Dhanraj
Reviewed-by: Reinette Chatre
Signed-off-by: Jarkko Sakkinen
---
v4:
* Remove extra white space.
v3:
* Added Reinette's ack.
v2:
* Removed reviewed-by, no other changes.
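
For illustration, the relevant error path in sgx_encl_eaug_page() then
behaves roughly as in the following simplified sketch of the hunk below
(vmret is the fault handler's return value, which is otherwise left at its
error default):

	va_page = sgx_encl_grow(encl, false);
	if (IS_ERR(va_page)) {
		/*
		 * -EBUSY means the reclaimer has been triggered: return
		 * VM_FAULT_NOPAGE so that user space round-trips in ring-3
		 * and retries the access with a new page fault instead of
		 * seeing a hard failure.
		 */
		if (PTR_ERR(va_page) == -EBUSY)
			vmret = VM_FAULT_NOPAGE;
		goto err_out_epc;
	}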
---
 arch/x86/kernel/cpu/sgx/encl.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index f40d64206ded..9f13d724172e 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -347,8 +347,11 @@ static vm_fault_t sgx_encl_eaug_page(struct vm_area_struct *vma,
 	}
 
 	va_page = sgx_encl_grow(encl, false);
-	if (IS_ERR(va_page))
+	if (IS_ERR(va_page)) {
+		if (PTR_ERR(va_page) == -EBUSY)
+			vmret = VM_FAULT_NOPAGE;
 		goto err_out_epc;
+	}
 
 	if (va_page)
 		list_add(&va_page->list, &encl->va_pages);