From patchwork Thu Jan 19 21:22:50 2023
X-Patchwork-Submitter: "Edgecombe, Rick P"
X-Patchwork-Id: 13108790
From: Rick Edgecombe
To: x86@kernel.org, "H . Peter Anvin", Thomas Gleixner, Ingo Molnar,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-mm@kvack.org, linux-arch@vger.kernel.org,
	linux-api@vger.kernel.org, Arnd Bergmann, Andy Lutomirski,
	Balbir Singh, Borislav Petkov, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, "H . J . Lu", Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Weijiang Yang, "Kirill A . Shutemov", John Allen, kcc@google.com,
	eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com,
	dethoma@microsoft.com, akpm@linux-foundation.org,
	Andrew.Cooper3@citrix.com, christina.schimpe@intel.com
Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu
Subject: [PATCH v5 12/39] x86/mm: Update ptep_set_wrprotect() and
 pmdp_set_wrprotect() for transition from _PAGE_DIRTY to _PAGE_COW
Date: Thu, 19 Jan 2023 13:22:50 -0800
Message-Id: <20230119212317.8324-13-rick.p.edgecombe@intel.com>
In-Reply-To: <20230119212317.8324-1-rick.p.edgecombe@intel.com>
References: <20230119212317.8324-1-rick.p.edgecombe@intel.com>
From: Yu-cheng Yu

When shadow stack is in use, Write=0,Dirty=1 PTEs are preserved for
shadow stack. Copy-on-write PTEs then have Write=0,Cow=1.

When a PTE goes from Write=1,Dirty=1 to Write=0,Cow=1, it could become
a transient shadow stack PTE in two cases:

1. Some processors can start a write but end up seeing a Write=0 PTE by
   the time they get to the Dirty bit, creating a transient shadow stack
   PTE. However, this will not occur on processors supporting shadow
   stack, and a TLB flush is not necessary.

2. When _PAGE_DIRTY is replaced with _PAGE_COW non-atomically, a
   transient shadow stack PTE can be created as a result. Thus, prevent
   that with cmpxchg.

In the case of pmdp_set_wrprotect(), for nopmd configs the ->pmd
operated on does not exist and the logic would need to be different.
Although the extra functionality will normally be optimized out when
user shadow stacks are not configured, also exclude it in the
preprocessor stage so that it will still compile. User shadow stack is
not supported there by Linux anyway. Leave the cpu_feature_enabled()
check so that the functionality also gets disabled based on runtime
detection of the feature.

Similarly, compile it out in ptep_set_wrprotect() due to a clang warning
on i386. Like above, the code path should get optimized out on i386
since shadow stack is not supported on 32 bit kernels, but this makes
the compiler happy.

Dave Hansen, Jann Horn, Andy Lutomirski, and Peter Zijlstra provided
many insights to the issue. Jann Horn provided the cmpxchg solution.

Tested-by: Pengfei Xu
Tested-by: John Allen
Signed-off-by: Yu-cheng Yu
Co-developed-by: Rick Edgecombe
Signed-off-by: Rick Edgecombe
Reviewed-by: Kees Cook
---
v5:
 - Commit log verbiage and formatting (Boris)
 - Remove capitalization on shadow stack (Boris)
 - Fix i386 warning on recent clang

v3:
 - Remove unnecessary #ifdef (Dave Hansen)

v2:
 - Compile out some code due to clang build error
 - Clarify commit log (dhansen)
 - Normalize PTE bit descriptions between patches (dhansen)
 - Update comment with text from (dhansen)

Yu-cheng v30:
 - Replace (pmdval_t) cast with CONFIG_PGTABLE_LEVELS > 2 (Borislav
   Petkov).
 arch/x86/include/asm/pgtable.h | 37 ++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 7942eff2af50..c5047eb5f406 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1232,6 +1232,23 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
 static inline void ptep_set_wrprotect(struct mm_struct *mm,
				      unsigned long addr, pte_t *ptep)
 {
+#ifdef CONFIG_X86_USER_SHADOW_STACK
+	/*
+	 * Avoid accidentally creating shadow stack PTEs
+	 * (Write=0,Dirty=1). Use cmpxchg() to prevent races with
+	 * the hardware setting Dirty=1.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_USER_SHSTK)) {
+		pte_t old_pte, new_pte;
+
+		old_pte = READ_ONCE(*ptep);
+		do {
+			new_pte = pte_wrprotect(old_pte);
+		} while (!try_cmpxchg(&ptep->pte, &old_pte.pte, new_pte.pte));
+
+		return;
+	}
+#endif
 	clear_bit(_PAGE_BIT_RW, (unsigned long *)&ptep->pte);
 }

@@ -1284,6 +1301,26 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
 static inline void pmdp_set_wrprotect(struct mm_struct *mm,
				      unsigned long addr, pmd_t *pmdp)
 {
+#ifdef CONFIG_X86_USER_SHADOW_STACK
+	/*
+	 * If shadow stack is enabled, pmd_wrprotect() moves _PAGE_DIRTY
+	 * to _PAGE_COW (see comments at pmd_wrprotect()).
+	 * When a thread reads a RW=1, Dirty=0 PMD and before changing it
+	 * to RW=0, Dirty=0, another thread could have written to the page
+	 * and the PMD is RW=1, Dirty=1 now.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_USER_SHSTK)) {
+		pmd_t old_pmd, new_pmd;
+
+		old_pmd = READ_ONCE(*pmdp);
+		do {
+			new_pmd = pmd_wrprotect(old_pmd);
+		} while (!try_cmpxchg(&pmdp->pmd, &old_pmd.pmd, new_pmd.pmd));
+
+		return;
+	}
+#endif
+
 	clear_bit(_PAGE_BIT_RW, (unsigned long *)pmdp);
 }