From patchwork Sat Feb 18 21:14:01 2023
X-Patchwork-Submitter: "Edgecombe, Rick P"
X-Patchwork-Id: 13145649
From: Rick Edgecombe
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-mm@kvack.org, linux-arch@vger.kernel.org,
	linux-api@vger.kernel.org, Arnd Bergmann, Andy Lutomirski,
	Balbir Singh, Borislav Petkov, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, "H. J. Lu", Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Weijiang Yang, "Kirill A. Shutemov", John Allen, kcc@google.com,
	eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com,
	dethoma@microsoft.com, akpm@linux-foundation.org,
	Andrew.Cooper3@citrix.com, christina.schimpe@intel.com,
	david@redhat.com, debug@rivosinc.com
Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu, Christoph Hellwig
Subject: [PATCH v6 09/41] x86/mm: Remove _PAGE_DIRTY from kernel RO pages
Date: Sat, 18 Feb 2023 13:14:01 -0800
Message-Id: <20230218211433.26859-10-rick.p.edgecombe@intel.com>
In-Reply-To: <20230218211433.26859-1-rick.p.edgecombe@intel.com>
References: <20230218211433.26859-1-rick.p.edgecombe@intel.com>
From: Yu-cheng Yu

New processors that support Shadow Stack regard Write=0,Dirty=1 PTEs as
shadow stack pages.
In normal cases, it can be helpful to create Write=1 PTEs as also Dirty=1
if HW dirty tracking is not needed, because if the Dirty bit is not
already set the CPU has to set Dirty=1 when the memory gets written to.
This creates additional work for the CPU. So traditional wisdom was to
simply set the Dirty bit whenever you didn't care about it. However, it
was never really very helpful for read-only kernel memory.

When CR4.CET=1 and IA32_S_CET.SH_STK_EN=1, some instructions can write
to such supervisor memory. The kernel does not set IA32_S_CET.SH_STK_EN,
so avoiding kernel Write=0,Dirty=1 memory is not strictly needed for any
functional reason. But having Write=0,Dirty=1 kernel memory doesn't have
any functional benefit either, so to reduce ambiguity between shadow
stack and regular Write=0 pages, remove Dirty=1 from any kernel Write=0
PTEs.

Tested-by: Pengfei Xu
Tested-by: John Allen
Reviewed-by: Kees Cook
Signed-off-by: Yu-cheng Yu
Co-developed-by: Rick Edgecombe
Signed-off-by: Rick Edgecombe
Cc: "H. Peter Anvin"
Cc: Kees Cook
Cc: Thomas Gleixner
Cc: Dave Hansen
Cc: Christoph Hellwig
Cc: Andy Lutomirski
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Peter Zijlstra
---
v6:
 - Also remove dirty from newly added set_memory_rox()

v5:
 - Spelling and grammar in commit log (Boris)

v3:
 - Update commit log (Andrew Cooper, Peterz)

v2:
 - Normalize PTE bit descriptions between patches
---
 arch/x86/include/asm/pgtable_types.h | 6 +++---
 arch/x86/mm/pat/set_memory.c         | 4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 447d4bee25c4..0646ad00178b 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -192,10 +192,10 @@ enum page_cache_mode {
 #define _KERNPG_TABLE		 (__PP|__RW|   0|___A|   0|___D|   0|   0| _ENC)
 #define _PAGE_TABLE_NOENC	 (__PP|__RW|_USR|___A|   0|___D|   0|   0)
 #define _PAGE_TABLE		 (__PP|__RW|_USR|___A|   0|___D|   0|   0| _ENC)
-#define __PAGE_KERNEL_RO	 (__PP|   0|   0|___A|__NX|___D|   0|___G)
-#define __PAGE_KERNEL_ROX	 (__PP|   0|   0|___A|   0|___D|   0|___G)
+#define __PAGE_KERNEL_RO	 (__PP|   0|   0|___A|__NX|   0|   0|___G)
+#define __PAGE_KERNEL_ROX	 (__PP|   0|   0|___A|   0|   0|   0|___G)
 #define __PAGE_KERNEL_NOCACHE	 (__PP|__RW|   0|___A|__NX|___D|   0|___G| __NC)
-#define __PAGE_KERNEL_VVAR	 (__PP|   0|_USR|___A|__NX|___D|   0|___G)
+#define __PAGE_KERNEL_VVAR	 (__PP|   0|_USR|___A|__NX|   0|   0|___G)
 #define __PAGE_KERNEL_LARGE	 (__PP|__RW|   0|___A|__NX|___D|_PSE|___G)
 #define __PAGE_KERNEL_LARGE_EXEC (__PP|__RW|   0|___A|   0|___D|_PSE|___G)
 #define __PAGE_KERNEL_WP	 (__PP|__RW|   0|___A|__NX|___D|   0|___G| __WP)
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 356758b7d4b4..1b5c0dc9f32b 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2073,12 +2073,12 @@ int set_memory_nx(unsigned long addr, int numpages)
 
 int set_memory_ro(unsigned long addr, int numpages)
 {
-	return change_page_attr_clear(&addr, numpages, __pgprot(_PAGE_RW), 0);
+	return change_page_attr_clear(&addr, numpages, __pgprot(_PAGE_RW | _PAGE_DIRTY), 0);
 }
 
 int set_memory_rox(unsigned long addr, int numpages)
 {
-	pgprot_t clr = __pgprot(_PAGE_RW);
+	pgprot_t clr = __pgprot(_PAGE_RW | _PAGE_DIRTY);
 
 	if (__supported_pte_mask & _PAGE_NX)
 		clr.pgprot |= _PAGE_NX;