From patchwork Sun Mar 19 00:15:10 2023
X-Patchwork-Submitter: "Edgecombe, Rick P"
X-Patchwork-Id: 13180143
From: Rick Edgecombe <rick.p.edgecombe@intel.com>
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
 linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
 linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org,
 Arnd Bergmann, Andy Lutomirski, Balbir Singh, Borislav Petkov,
 Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov, Florian Weimer,
 "H. J. Lu", Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz,
 Nadav Amit, Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
 Weijiang Yang, "Kirill A. Shutemov", John Allen, kcc@google.com,
 eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com,
 dethoma@microsoft.com, akpm@linux-foundation.org,
 Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com,
 debug@rivosinc.com, szabolcs.nagy@arm.com
Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu
Subject: [PATCH v8 15/40] x86/mm: Update ptep/pmdp_set_wrprotect() for _PAGE_SAVED_DIRTY
Date: Sat, 18 Mar 2023 17:15:10 -0700
Message-Id: <20230319001535.23210-16-rick.p.edgecombe@intel.com>
In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com>
References: <20230319001535.23210-1-rick.p.edgecombe@intel.com>
When shadow stack is in use, Write=0,Dirty=1 PTEs are preserved for
shadow stack. Copy-on-write PTEs then have Write=0,SavedDirty=1.

When a PTE goes from Write=1,Dirty=1 to Write=0,SavedDirty=1, it could
become a transient shadow stack PTE in two cases:

1. Some processors can start a write but end up seeing a Write=0 PTE by
   the time they get to the Dirty bit, creating a transient shadow stack
   PTE.
   However, this will not occur on processors supporting shadow stack,
   and a TLB flush is not necessary.

2. When _PAGE_DIRTY is replaced with _PAGE_SAVED_DIRTY non-atomically, a
   transient shadow stack PTE can be created as a result. Thus, prevent
   that with cmpxchg.

In the case of pmdp_set_wrprotect(), for nopmd configs the ->pmd operated
on does not exist and the logic would need to be different. Although the
extra functionality will normally be optimized out when user shadow
stacks are not configured, also exclude it in the preprocessor stage so
that it will still compile. User shadow stack is not supported there by
Linux anyway. Leave the cpu_feature_enabled() check so that the
functionality also gets disabled based on runtime detection of the
feature.

Similarly, compile it out in ptep_set_wrprotect() due to a clang warning
on i386. Like above, the code path should get optimized out on i386
since shadow stack is not supported on 32 bit kernels, but this makes
the compiler happy.

Dave Hansen, Jann Horn, Andy Lutomirski, and Peter Zijlstra provided
many insights to the issue. Jann Horn provided the cmpxchg solution.

Co-developed-by: Yu-cheng Yu
Signed-off-by: Yu-cheng Yu
Signed-off-by: Rick Edgecombe
Reviewed-by: Kees Cook
Acked-by: Mike Rapoport (IBM)
Tested-by: Pengfei Xu
Tested-by: John Allen
Tested-by: Kees Cook
---
v6:
 - Fix comment and log to update for _PAGE_COW being replaced with
   _PAGE_SAVED_DIRTY.
v5:
 - Commit log verbiage and formatting (Boris)
 - Remove capitalization on shadow stack (Boris)
 - Fix i386 warning on recent clang

v3:
 - Remove unnecessary #ifdef (Dave Hansen)

v2:
 - Compile out some code due to clang build error
 - Clarify commit log (dhansen)
 - Normalize PTE bit descriptions between patches (dhansen)
 - Update comment with text from (dhansen)
---
 arch/x86/include/asm/pgtable.h | 35 ++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 7360783f2140..349fcab0405a 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1192,6 +1192,23 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
 static inline void ptep_set_wrprotect(struct mm_struct *mm,
 				      unsigned long addr, pte_t *ptep)
 {
+#ifdef CONFIG_X86_USER_SHADOW_STACK
+	/*
+	 * Avoid accidentally creating shadow stack PTEs
+	 * (Write=0,Dirty=1). Use cmpxchg() to prevent races with
+	 * the hardware setting Dirty=1.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_USER_SHSTK)) {
+		pte_t old_pte, new_pte;
+
+		old_pte = READ_ONCE(*ptep);
+		do {
+			new_pte = pte_wrprotect(old_pte);
+		} while (!try_cmpxchg(&ptep->pte, &old_pte.pte, new_pte.pte));
+
+		return;
+	}
+#endif
 	clear_bit(_PAGE_BIT_RW, (unsigned long *)&ptep->pte);
 }

@@ -1244,6 +1261,24 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
 static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 				      unsigned long addr, pmd_t *pmdp)
 {
+#ifdef CONFIG_X86_USER_SHADOW_STACK
+	/*
+	 * Avoid accidentally creating shadow stack PTEs
+	 * (Write=0,Dirty=1). Use cmpxchg() to prevent races with
+	 * the hardware setting Dirty=1.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_USER_SHSTK)) {
+		pmd_t old_pmd, new_pmd;
+
+		old_pmd = READ_ONCE(*pmdp);
+		do {
+			new_pmd = pmd_wrprotect(old_pmd);
+		} while (!try_cmpxchg(&pmdp->pmd, &old_pmd.pmd, new_pmd.pmd));
+
+		return;
+	}
+#endif
+
 	clear_bit(_PAGE_BIT_RW, (unsigned long *)pmdp);
 }