From patchwork Tue Feb 13 16:05:45 2024
X-Patchwork-Submitter: Maxwell Bland <mbland@motorola.com>
X-Patchwork-Id: 13555321
From: Maxwell Bland <mbland@motorola.com>
Date: Tue, 13 Feb 2024 10:05:45 -0600
Subject: [PATCH] arm64: allow post-init vmalloc PXNTable
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, catalin.marinas@arm.com,
 will@kernel.org, dennis@kernel.org, tj@kernel.org, cl@linux.com,
 akpm@linux-foundation.org, shikemeng@huaweicloud.com, david@redhat.com,
 rppt@kernel.org, anshuman.khandual@arm.com, willy@infradead.org,
 ryan.roberts@arm.com, rick.p.edgecombe@intel.com, pcc@google.com,
 mbland@motorola.com, mark.rutland@arm.com, rmk+kernel@armlinux.org.uk,
 tglx@linutronix.de, gshan@redhat.com, gregkh@linuxfoundation.org,
 Jonathan.Cameron@huawei.com, james.morse@arm.com, awheeler@motorola.com
Apologies if this is a duplicate mail, it will be the last one. Moto's
SMTP server sucks!!

Ensures that PXNTable can be set on all table descriptors allocated
through vmalloc. Normally, PXNTable is set only during the initial
memory mapping and does not apply thereafter, making it possible for
attackers to target post-init allocated writable PTEs as a staging
region for injection of their code into the kernel.
Presently it is not possible to efficiently prevent these attacks as
VMALLOC_END overlaps with _text, e.g.:

VMALLOC_START   ffff800080000000
VMALLOC_END     fffffbfff0000000
_text           ffffb6c0c1400000
_end            ffffb6c0c3e40000

Setting VMALLOC_END to _text in init would resolve this issue, with the
caveat of a sizeable reduction in the available vmalloc space due to the
randomness requirements of KASLR. However, there are circumstances where
this trade-off is necessary: in particular, hypervisor-level security
monitors where 1) the microarchitecture contains race conditions on
PTE-level updates, or 2) a per-PTE update verifier comes at a
significant performance cost.

Because the address of _text is KASLR-sensitive and this patch
associates that value with VMALLOC_END, we remove the use of VMALLOC_END
in a print statement in mm/percpu.c. However, only the format string is
updated in crash_core.c, since we are dead at that point regardless.
VMALLOC_END is updated in arch/arm64/kernel/setup.c to associate the
feature closely with KASLR and region allocation code.
Signed-off-by: Maxwell Bland <mbland@motorola.com>
---
 arch/arm64/Kconfig                   | 13 +++++++++++++
 arch/arm64/include/asm/pgtable.h     |  6 ++++++
 arch/arm64/include/asm/vmalloc-pxn.h | 10 ++++++++++
 arch/arm64/kernel/crash_core.c       |  2 +-
 arch/arm64/kernel/setup.c            |  9 +++++++++
 mm/percpu.c                          |  4 ++--
 6 files changed, 41 insertions(+), 3 deletions(-)
 create mode 100644 arch/arm64/include/asm/vmalloc-pxn.h

base-commit: 716f4aaa7b48a55c73d632d0657b35342b1fefd7

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index aa7c1d435139..5f1e75d70e14 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -2165,6 +2165,19 @@ config ARM64_DEBUG_PRIORITY_MASKING
 	  If unsure, say N
 endif # ARM64_PSEUDO_NMI
 
+config ARM64_VMALLOC_PXN
+	bool "Ensures table descriptors pointing to kernel data are PXNTable"
+	help
+	  Reduces the range of the kernel data vmalloc region to remove any
+	  overlap with kernel code, making it possible to enable the PXNTable
+	  bit on table descriptors allocated after the kernel's initial memory
+	  mapping.
+
+	  This increases the performance of security monitors which protect
+	  against malicious updates to page table entries.
+
+	  If unsure, say N.
+
 config RELOCATABLE
 	bool "Build a relocatable kernel image" if EXPERT
 	select ARCH_HAS_RELR

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 79ce70fbb751..49f64ea77c81 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -22,7 +22,9 @@
  * and fixed mappings
  */
 #define VMALLOC_START		(MODULES_END)
+#ifndef CONFIG_ARM64_VMALLOC_PXN
 #define VMALLOC_END		(VMEMMAP_START - SZ_256M)
+#endif
 
 #define vmemmap ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
 
@@ -35,6 +37,10 @@
 #include <linux/sched.h>
 #include <linux/page_table_check.h>
 
+#ifdef CONFIG_ARM64_VMALLOC_PXN
+#include <asm/vmalloc-pxn.h>
+#endif
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE

diff --git a/arch/arm64/include/asm/vmalloc-pxn.h b/arch/arm64/include/asm/vmalloc-pxn.h
new file mode 100644
index 000000000000..c8c4f878eb62
--- /dev/null
+++ b/arch/arm64/include/asm/vmalloc-pxn.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_ARM64_VMALLOC_PXN_H
+#define _ASM_ARM64_VMALLOC_PXN_H
+
+#ifdef CONFIG_ARM64_VMALLOC_PXN
+extern u64 __vmalloc_end __ro_after_init;
+#define VMALLOC_END	(__vmalloc_end)
+#endif /* CONFIG_ARM64_VMALLOC_PXN */
+
+#endif /* _ASM_ARM64_VMALLOC_PXN_H */

diff --git a/arch/arm64/kernel/crash_core.c b/arch/arm64/kernel/crash_core.c
index 66cde752cd74..39dccae11a40 100644
--- a/arch/arm64/kernel/crash_core.c
+++ b/arch/arm64/kernel/crash_core.c
@@ -24,7 +24,7 @@ void arch_crash_save_vmcoreinfo(void)
 	vmcoreinfo_append_str("NUMBER(MODULES_VADDR)=0x%lx\n", MODULES_VADDR);
 	vmcoreinfo_append_str("NUMBER(MODULES_END)=0x%lx\n", MODULES_END);
 	vmcoreinfo_append_str("NUMBER(VMALLOC_START)=0x%lx\n", VMALLOC_START);
-	vmcoreinfo_append_str("NUMBER(VMALLOC_END)=0x%lx\n", VMALLOC_END);
+	vmcoreinfo_append_str("NUMBER(VMALLOC_END)=0x%llx\n", VMALLOC_END);
 	vmcoreinfo_append_str("NUMBER(VMEMMAP_START)=0x%lx\n", VMEMMAP_START);
 	vmcoreinfo_append_str("NUMBER(VMEMMAP_END)=0x%lx\n", VMEMMAP_END);
 	vmcoreinfo_append_str("NUMBER(kimage_voffset)=0x%llx\n",

diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 42c690bb2d60..b7ccee672743 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -54,6 +54,11 @@
 #include <asm/xen/hypervisor.h>
 #include <asm/mmu_context.h>
 
+#ifdef CONFIG_ARM64_VMALLOC_PXN
+u64 __vmalloc_end __ro_after_init = VMEMMAP_START - SZ_256M;
+EXPORT_SYMBOL(__vmalloc_end);
+#endif /* CONFIG_ARM64_VMALLOC_PXN */
+
 static int num_standard_resources;
 static struct resource *standard_resources;
 
@@ -298,6 +303,10 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
 
 	kaslr_init();
 
+#ifdef CONFIG_ARM64_VMALLOC_PXN
+	__vmalloc_end = ALIGN_DOWN((u64) _text, PMD_SIZE);
+#endif
+
 	/*
 	 * If know now we are going to need KPTI then use non-global
 	 * mappings from the start, avoiding the cost of rewriting
diff --git a/mm/percpu.c b/mm/percpu.c
index 4e11fc1e6def..a902500ebfa0 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -3128,8 +3128,8 @@ int __init pcpu_embed_first_chunk(size_t reserved_size, size_t dyn_size,
 
 	/* warn if maximum distance is further than 75% of vmalloc space */
 	if (max_distance > VMALLOC_TOTAL * 3 / 4) {
-		pr_warn("max_distance=0x%lx too large for vmalloc space 0x%lx\n",
-			max_distance, VMALLOC_TOTAL);
+		pr_warn("max_distance=0x%lx too large for vmalloc space\n",
+			max_distance);
 #ifdef CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK
 		/* and fail if we have fallback */