From patchwork Wed Apr 3 21:08:15 2024
X-Patchwork-Submitter: Maxwell Bland
X-Patchwork-Id: 13632242
Message-Id: <20240416122254.868007168-3-mbland@motorola.com>
In-Reply-To: <20240416122254.868007168-1-mbland@motorola.com>
References: <20240416122254.868007168-1-mbland@motorola.com>
To: linux-arm-kernel@lists.infradead.org
Cc: Maxwell Bland, linux-kernel@vger.kernel.org, bpf@vger.kernel.org,
    Catalin Marinas, Will Deacon, Alexei Starovoitov, Daniel Borkmann,
    Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
    Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo,
    Jiri Olsa, Zi Shen Lim, Mark Rutland, Ard Biesheuvel, Kees Cook,
    Sami Tolvanen, Baoquan He, Jonathan Cameron, Greg Kroah-Hartman,
    Ryo Takakura, James Morse, Christophe Leroy
From: Maxwell Bland
Date: Wed, 3 Apr 2024 16:08:15 -0500
Subject: [PATCH 2/5] arm64: mm: code and data partitioning for aslr

Use hooks in the vmalloc infrastructure to prevent the interleaving of
code and data pages. This preserves the management assumptions made by
non-arch-specific code while making management of these regions more
precise and conformant, allowing, for example, PXNTable bits to be
maintained on dynamically allocated memory and certain page middle
directory and higher-level descriptors to be kept immutable.

Signed-off-by: Maxwell Bland
---
 arch/arm64/include/asm/module.h    | 12 +++++
 arch/arm64/include/asm/vmalloc.h   | 17 ++++++-
 arch/arm64/kernel/Makefile         |  2 +-
 arch/arm64/kernel/module.c         |  7 ++-
 arch/arm64/kernel/probes/kprobes.c |  7 +--
 arch/arm64/kernel/setup.c          |  4 ++
 arch/arm64/kernel/vmalloc.c        | 71 ++++++++++++++++++++++++++++++
 arch/arm64/mm/ptdump.c             |  4 +-
 arch/arm64/net/bpf_jit_comp.c      |  8 ++--
 9 files changed, 117 insertions(+), 15 deletions(-)
 create mode 100644 arch/arm64/kernel/vmalloc.c
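[Editor's illustration: to make the intended invariant concrete, the
following minimal standalone sketch models the skip test that the new
arch_skip_va() hook (added below) applies to every candidate vmap_area.
The window bounds here are hypothetical stand-ins for
MODULES_ASLR_START/END, not the kernel's values.]

	#include <stdbool.h>
	#include <stdio.h>

	/* Hypothetical module window; the kernel randomizes this at boot. */
	static const unsigned long win_start = 0xffff800080000000UL;
	static const unsigned long win_end   = 0xffff800088000000UL;

	/*
	 * A search that did not explicitly begin at the window start
	 * (i.e. a data allocation) must skip areas inside the window.
	 */
	static bool skip_va(unsigned long va_start, unsigned long va_end,
			    unsigned long vstart)
	{
		return vstart != win_start &&
		       va_start >= win_start && va_end <= win_end;
	}

	int main(void)
	{
		/* A data allocation falling inside the window is skipped (1). */
		printf("%d\n", skip_va(win_start + 0x1000, win_start + 0x2000, 0x0));
		/* A code allocation that began at the window start is kept (0). */
		printf("%d\n", skip_va(win_start + 0x1000, win_start + 0x2000, win_start));
		return 0;
	}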
diff --git a/arch/arm64/include/asm/module.h b/arch/arm64/include/asm/module.h
index 79550b22ba19..e50d7a240ad7 100644
--- a/arch/arm64/include/asm/module.h
+++ b/arch/arm64/include/asm/module.h
@@ -65,4 +65,16 @@ static inline const Elf_Shdr *find_section(const Elf_Ehdr *hdr,
 	return NULL;
 }
 
+extern u64 module_direct_base __ro_after_init;
+extern u64 module_plt_base __ro_after_init;
+
+int __init module_init_limits(void);
+
+#define MODULES_ASLR_START ((module_plt_base) ? module_plt_base : \
+			    module_direct_base)
+#define MODULES_ASLR_END ((module_plt_base) ? module_plt_base + SZ_2G : \
+			  module_direct_base + SZ_128M)
+
+void *module_alloc(unsigned long size);
+
 #endif /* __ASM_MODULE_H */
diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h
index 38fafffe699f..93f8f1e2b1ce 100644
--- a/arch/arm64/include/asm/vmalloc.h
+++ b/arch/arm64/include/asm/vmalloc.h
@@ -4,6 +4,9 @@
 #include
 #include
 
+struct vmap_area;
+struct kmem_cache;
+
 #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
 
 #define arch_vmap_pud_supported arch_vmap_pud_supported
@@ -23,7 +26,7 @@ static inline bool arch_vmap_pmd_supported(pgprot_t prot)
 	return !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS);
 }
 
-#endif
+#endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */
 
 #define arch_vmap_pgprot_tagged arch_vmap_pgprot_tagged
 static inline pgprot_t arch_vmap_pgprot_tagged(pgprot_t prot)
@@ -31,4 +34,16 @@ static inline pgprot_t arch_vmap_pgprot_tagged(pgprot_t prot)
 	return pgprot_tagged(prot);
 }
 
+#ifdef CONFIG_RANDOMIZE_BASE
+
+#define arch_skip_va arch_skip_va
+inline bool arch_skip_va(struct vmap_area *va, unsigned long vstart);
+
+#define arch_refine_vmap_space arch_refine_vmap_space
+inline void arch_refine_vmap_space(struct rb_root *root,
+				   struct list_head *head,
+				   struct kmem_cache *cachep);
+
+#endif /* CONFIG_RANDOMIZE_BASE */
+
 #endif /* _ASM_ARM64_VMALLOC_H */
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 763824963ed1..4298a2168544 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -56,7 +56,7 @@ obj-$(CONFIG_ACPI)			+= acpi.o
 obj-$(CONFIG_ACPI_NUMA)			+= acpi_numa.o
 obj-$(CONFIG_ARM64_ACPI_PARKING_PROTOCOL)	+= acpi_parking_protocol.o
 obj-$(CONFIG_PARAVIRT)			+= paravirt.o
-obj-$(CONFIG_RANDOMIZE_BASE)		+= kaslr.o
+obj-$(CONFIG_RANDOMIZE_BASE)		+= kaslr.o vmalloc.o
 obj-$(CONFIG_HIBERNATION)		+= hibernate.o hibernate-asm.o
 obj-$(CONFIG_ELF_CORE)			+= elfcore.o
 obj-$(CONFIG_KEXEC_CORE)		+= machine_kexec.o relocate_kernel.o \
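[Editor's illustration: the MODULES_ASLR_START/END macros added above
select between two window shapes, a 2 GiB window when module PLTs are
in use and a 128 MiB direct-branch window otherwise. A small sketch of
that selection logic, with hypothetical base addresses standing in for
the randomized values chosen by module_init_limits():]

	#include <stdio.h>

	#define SZ_128M	(128UL << 20)
	#define SZ_2G	(2UL << 30)

	int main(void)
	{
		/* Hypothetical bases; the kernel randomizes these at boot. */
		unsigned long module_direct_base = 0xffffb00000000000UL;
		unsigned long module_plt_base = 0;	/* 0: PLTs not needed */

		unsigned long start = module_plt_base ?
				module_plt_base : module_direct_base;
		unsigned long end = module_plt_base ?
				module_plt_base + SZ_2G :
				module_direct_base + SZ_128M;

		printf("window [%#lx, %#lx), %lu MiB\n",
		       start, end, (end - start) >> 20);
		return 0;
	}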
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 47e0be610bb6..58329b27624d 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -26,8 +26,8 @@
 #include
 #include
 
-static u64 module_direct_base __ro_after_init = 0;
-static u64 module_plt_base __ro_after_init = 0;
+u64 module_direct_base __ro_after_init;
+u64 module_plt_base __ro_after_init;
 
 /*
  * Choose a random page-aligned base address for a window of 'size' bytes which
@@ -66,7 +66,7 @@ static u64 __init random_bounding_box(u64 size, u64 start, u64 end)
  * we may fall back to PLTs where they could have been avoided, but this keeps
  * the logic significantly simpler.
  */
-static int __init module_init_limits(void)
+int __init module_init_limits(void)
 {
 	u64 kernel_end = (u64)_end;
 	u64 kernel_start = (u64)_text;
@@ -108,7 +108,6 @@ static int __init module_init_limits(void)
 
 	return 0;
 }
-subsys_initcall(module_init_limits);
 
 void *module_alloc(unsigned long size)
 {
diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
index 327855a11df2..89968f05177f 100644
--- a/arch/arm64/kernel/probes/kprobes.c
+++ b/arch/arm64/kernel/probes/kprobes.c
@@ -131,9 +131,10 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p)
 
 void *alloc_insn_page(void)
 {
-	return __vmalloc_node_range(PAGE_SIZE, 1, VMALLOC_START, VMALLOC_END,
-			GFP_KERNEL, PAGE_KERNEL_ROX, VM_FLUSH_RESET_PERMS,
-			NUMA_NO_NODE, __builtin_return_address(0));
+	return __vmalloc_node_range(PAGE_SIZE, 1, MODULES_ASLR_START,
+			MODULES_ASLR_END, GFP_KERNEL, PAGE_KERNEL_ROX,
+			VM_FLUSH_RESET_PERMS, NUMA_NO_NODE,
+			__builtin_return_address(0));
 }
 
 /* arm kprobe: install breakpoint in text */
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 65a052bf741f..908ee0ccc606 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -53,6 +53,7 @@
 #include
 #include
 #include
+#include
 
 static int num_standard_resources;
 static struct resource *standard_resources;
@@ -321,6 +322,7 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
 	arm64_memblock_init();
 
+	paging_init();
 
 	acpi_table_upgrade();
@@ -366,6 +368,8 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
 			"This indicates a broken bootloader or old kernel\n",
 			boot_args[1], boot_args[2], boot_args[3]);
 	}
+
+	module_init_limits();
 }
 
 static inline bool cpu_can_disable(unsigned int cpu)
diff --git a/arch/arm64/kernel/vmalloc.c b/arch/arm64/kernel/vmalloc.c
new file mode 100644
index 000000000000..00a463f3692f
--- /dev/null
+++ b/arch/arm64/kernel/vmalloc.c
@@ -0,0 +1,71 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * AArch64 vmap area management code
+ *
+ * Author: Maxwell Bland
+ */
+
+#include
+#include
+
+#include
+
+/*
+ * Prevents the allocation of new vmap_areas from dynamic code
+ * region if the virtual address requested is not explicitly the
+ * module region.
+ */
+inline bool arch_skip_va(struct vmap_area *va, unsigned long vstart)
+{
+	return (vstart != MODULES_ASLR_START &&
+		va->va_start >= MODULES_ASLR_START &&
+		va->va_end <= MODULES_ASLR_END);
+}
+
+/*
+ * Splits a vmap area in two and allocates a new area if needed
+ */
+inline struct vmap_area *
+try_split_alloc_vmap_area(struct rb_root *root,
+			  struct list_head *head,
+			  struct kmem_cache *vmap_area_cachep,
+			  unsigned long addr)
+{
+	struct vmap_area *va;
+	int ret;
+	struct vmap_area *lva = NULL;
+
+	va = __find_vmap_area(addr, root);
+	if (!va) {
+		pr_err("%s: could not find vmap\n", __func__);
+		return NULL;
+	}
+
+	lva = kmem_cache_alloc(vmap_area_cachep, GFP_NOWAIT);
+	if (!lva) {
+		pr_err("%s: unable to allocate va for range\n", __func__);
+		return NULL;
+	}
+	lva->va_start = addr;
+	lva->va_end = va->va_end;
+	ret = va_clip(root, head, va, addr, va->va_end - addr);
+	if (WARN_ON_ONCE(ret)) {
+		pr_err("%s: unable to clip code base region\n", __func__);
+		kmem_cache_free(vmap_area_cachep, lva);
+		return NULL;
+	}
+	insert_vmap_area_augment(lva, NULL, root, head);
+	return lva;
+}
+
+/*
+ * Run during vmalloc_init, ensures that there exist explicit rb tree
+ * node delineations between code and data
+ */
+inline void arch_refine_vmap_space(struct rb_root *root,
+				   struct list_head *head,
+				   struct kmem_cache *cachep)
+{
+	try_split_alloc_vmap_area(root, head, cachep, MODULES_ASLR_START);
+	try_split_alloc_vmap_area(root, head, cachep, MODULES_ASLR_END);
+}
diff --git a/arch/arm64/mm/ptdump.c b/arch/arm64/mm/ptdump.c
index 6986827e0d64..796231a4fd63 100644
--- a/arch/arm64/mm/ptdump.c
+++ b/arch/arm64/mm/ptdump.c
@@ -261,9 +261,7 @@ static void note_page(struct ptdump_state *pt_st, unsigned long addr, int level,
 		}
 		pt_dump_seq_printf(st->seq, "%9lu%c %s", delta, *unit,
 				   pg_level[st->level].name);
-		if (st->current_prot && pg_level[st->level].bits)
-			dump_prot(st, pg_level[st->level].bits,
-				  pg_level[st->level].num);
+		dump_prot(st, pg_level[st->level].bits, pg_level[st->level].num);
 		pt_dump_seq_puts(st->seq, "\n");
 
 		if (addr >= st->marker[1].start_address) {
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 122021f9bdfc..6ed6e00b8b4a 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -13,6 +13,8 @@
 #include
 #include
 #include
+#include
+#include
 
 #include
 #include
@@ -1790,18 +1792,18 @@ void *bpf_arch_text_copy(void *dst, void *src, size_t len)
 
 u64 bpf_jit_alloc_exec_limit(void)
 {
-	return VMALLOC_END - VMALLOC_START;
+	return MODULES_ASLR_END - MODULES_ASLR_START;
 }
 
 void *bpf_jit_alloc_exec(unsigned long size)
 {
 	/* Memory is intended to be executable, reset the pointer tag. */
-	return kasan_reset_tag(vmalloc(size));
+	return kasan_reset_tag(module_alloc(size));
 }
 
 void bpf_jit_free_exec(void *addr)
 {
-	return vfree(addr);
+	return module_memfree(addr);
 }
 
 /* Indicate the JIT backend supports mixing bpf2bpf and tailcalls. */
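[Editor's illustration: the effect of arch_refine_vmap_space() above is
easiest to see as an interval split. The free vmap space is cut at
MODULES_ASLR_START and MODULES_ASLR_END so the code window gets its own
rb-tree nodes. A standalone sketch of the split performed by
try_split_alloc_vmap_area(); va_clip() and the rb-tree insertion are
modelled here by plain struct updates:]

	#include <stdio.h>

	struct area { unsigned long start, end; };

	static void split(struct area *va, struct area *lva, unsigned long addr)
	{
		lva->start = addr;	/* new area covers the upper part */
		lva->end = va->end;
		va->end = addr;		/* stands in for va_clip() */
	}

	int main(void)
	{
		struct area va = { 0x1000, 0x9000 }, lva;

		split(&va, &lva, 0x4000);
		printf("low [%#lx, %#lx) high [%#lx, %#lx)\n",
		       va.start, va.end, lva.start, lva.end);
		return 0;
	}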
From patchwork Fri Apr 12 15:00:34 2024
X-Patchwork-Submitter: Maxwell Bland
X-Patchwork-Id: 13632240
Message-Id: <20240416122254.868007168-5-mbland@motorola.com>
In-Reply-To: <20240416122254.868007168-1-mbland@motorola.com>
References: <20240416122254.868007168-1-mbland@motorola.com>
To: linux-arm-kernel@lists.infradead.org
Cc: Maxwell Bland, Catalin Marinas, Will Deacon, Ard Biesheuvel,
    linux-kernel@vger.kernel.org
From: Maxwell Bland
Date: Fri, 12 Apr 2024 10:00:34 -0500
Subject: [PATCH 4/5] arm64: dynamic enforcement of PXNTable

PXNTable is enforced during the init process to ensure that regions of
user memory and kernel data cannot be executed from, preventing
attacks which write to writable kernel pages and then modify the
kernel's page tables to make that code executable. This patch
preserves the same protection for dynamically allocated pages and page
tables, making all PMDs populated outside of the module code region
PXNTable by default.

Signed-off-by: Maxwell Bland
---
 arch/arm64/include/asm/pgalloc.h | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index 5785272144e8..2376b4e7915c 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 
 #define __HAVE_ARCH_PGD_FREE
 #define __HAVE_ARCH_PUD_FREE
@@ -119,6 +120,12 @@ static inline void __pmd_populate(pmd_t *pmdp, phys_addr_t ptep,
 	set_pmd(pmdp, __pmd(__phys_to_pmd_val(ptep) | prot));
 }
 
+static inline bool vaddr_is_data(unsigned long vaddr)
+{
+	return ((vaddr + PMD_SIZE < MODULES_ASLR_START || vaddr >= MODULES_ASLR_END) &&
+		(vaddr + PMD_SIZE < (unsigned long)_text || vaddr >= (unsigned long)_etext));
+}
+
 /*
  * Populate the pmdp entry with a pointer to the pte.  This pmd is part
  * of the mm address space.
  */
@@ -127,8 +134,11 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp, pte_t *ptep, unsigned long vaddr) { + pmdval_t pmd = PMD_TYPE_TABLE | PMD_TABLE_UXN; VM_BUG_ON(mm && mm != &init_mm); - __pmd_populate(pmdp, __pa(ptep), PMD_TYPE_TABLE | PMD_TABLE_UXN); + if (vaddr_is_data(vaddr)) + pmd |= PMD_TABLE_PXN; + __pmd_populate(pmdp, __pa(ptep), pmd); } static inline void From patchwork Mon Apr 15 19:51:32 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxwell Bland X-Patchwork-Id: 13632255 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 118FFC4345F for ; Tue, 16 Apr 2024 17:26:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:MIME-Version:List-Subscribe:List-Help: List-Post:List-Archive:List-Unsubscribe:List-Id:Subject:Date:From:Cc:To: References:In-Reply-To:Message-Id:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=JxE6lom+S5qBPZRC9UJGY4PEa34oWAobuRdAvAem5Y8=; b=KRQu0dIB771K7X 7nc5pMrGlAjMU6fdJC31CqhYIA+U4RIVGWJeCgXJJzcIOBfvCO5C91xWVZq5zd1/KxnBWykQvTyOk gpbFNMO/FbsfDa5G/18W3F09sjZTSI898+iNNo5XWO7lJwPD5u4/FzsYwYBeQmntwQsryEktWkTKg IcS8I1OgXWvzEovdNO2/S6VYX8HWKuWs98k/qXam1suilRx2GOKSWmSQEQz2MglIuHHw2fo41p30t LxBrcSfvSIkV4lH81hMwxo+mQbi8XKGIZ3zH11J4UPuR7lYkZnOWBP51zkqJknSsW+f2VuemSqgAo GlZGYjO/poOETwjO5NNQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1rwmZ2-0000000DASi-2Fvx; Tue, 16 Apr 2024 17:25:48 +0000 Received: from mx0b-00823401.pphosted.com ([148.163.152.46]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1rwmYq-0000000DAKU-1qd2; Tue, 16 Apr 2024 17:25:39 +0000 Received: from pps.filterd (m0355091.ppops.net [127.0.0.1]) by mx0b-00823401.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 43GFQBHE026682; Tue, 16 Apr 2024 17:24:58 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=motorola.com; h= message-id:in-reply-to:references:to:cc:from:date:subject; s= DKIM202306; bh=Oc2SUwX6uzMkXCJf4ZSU5CLVXrP2N2CrXvm6EA3BRc0=; b=f VFZT85i9fG9UZIMquW7lOcKAq20DOeCPjOdPiSoX40Uy/gdhpas7agKPawUQhT7x jlu6AEhZt7ljLk2n20UpKTnolmUli9RA4S94pfqgo82LJmyfQGSWX+2ckqN3E3a6 jp+/jciyo6S9UM3hNfRiVNfOIKTGmn3Gd7Kqf7Fg+ODOygCLrVJKEFbUu3tVWAti fadgXso/3HtOEdqpW7Sq+tQ8RIqcbtaSSGSjhS7q9YsinUy0LnphQn2+GK2iDP1m aaqJyHM7uz4vWoEonBp43YZWPoTF4kyTlWY8hHQneojPAs3B2gfooI2z9rSq2qK+ 8Td0+HxT5kbdpLFNyLd1A== Received: from va32lpfpp04.lenovo.com ([104.232.228.24]) by mx0b-00823401.pphosted.com (PPS) with ESMTPS id 3xhjbek979-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 16 Apr 2024 17:24:57 +0000 (GMT) Received: from va32lmmrp01.lenovo.com (va32lmmrp01.mot.com [10.62.177.113]) (using TLSv1.2 with cipher ADH-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by va32lpfpp04.lenovo.com (Postfix) with ESMTPS id 4VJrWs1Wv3zj9hH; Tue, 16 Apr 2024 17:24:57 +0000 (UTC) Received: from ilclbld243.mot.com (ilclbld243.mot.com [100.64.22.29]) (using TLSv1.3 with cipher 
From patchwork Mon Apr 15 19:51:32 2024
X-Patchwork-Submitter: Maxwell Bland
X-Patchwork-Id: 13632255
Message-Id: <20240416122254.868007168-6-mbland@motorola.com>
In-Reply-To: <20240416122254.868007168-1-mbland@motorola.com>
References: <20240416122254.868007168-1-mbland@motorola.com>
To: linux-mm@kvack.org
Cc: Maxwell Bland, Catalin Marinas, Will Deacon, Michael Ellerman,
    Nicholas Piggin, Christophe Leroy, Aneesh Kumar K.V, Naveen N. Rao,
    Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexander Gordeev,
    Gerald Schaefer, Heiko Carstens, Vasily Gorbik, Christian Borntraeger,
    Sven Schnelle, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org,
    H. Peter Anvin, Andrew Morton, Ard Biesheuvel, Mark Rutland,
    Alexandre Ghiti, Yu Chien Peter Lin, Song Shuai,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
    linux-s390@vger.kernel.org
From: Maxwell Bland
Date: Mon, 15 Apr 2024 14:51:32 -0500
Subject: [PATCH 5/5] ptdump: add state parameter for non-leaf callback

ptdump can now note non-leaf descriptor entries, a useful addition for
debugging table descriptor permissions when working on related code.

Signed-off-by: Maxwell Bland
---
 arch/arm64/mm/ptdump.c          |  6 ++++--
 arch/powerpc/mm/ptdump/ptdump.c |  2 ++
 arch/riscv/mm/ptdump.c          |  6 ++++--
 arch/s390/mm/dump_pagetables.c  |  6 ++++--
 arch/x86/mm/dump_pagetables.c   |  3 ++-
 include/linux/ptdump.h          |  1 +
 mm/ptdump.c                     | 13 +++++++++++++
 7 files changed, 30 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/mm/ptdump.c b/arch/arm64/mm/ptdump.c
index 796231a4fd63..1a6f4a3513e5 100644
--- a/arch/arm64/mm/ptdump.c
+++ b/arch/arm64/mm/ptdump.c
@@ -299,7 +299,8 @@ void ptdump_walk(struct seq_file *s, struct ptdump_info *info)
 			.range = (struct ptdump_range[]){
 				{info->base_addr, end},
 				{0, 0}
-			}
+			},
+			.note_non_leaf = false
 		}
 	};
 
@@ -335,7 +336,8 @@ bool ptdump_check_wx(void)
 			.range = (struct ptdump_range[]) {
 				{_PAGE_OFFSET(vabits_actual), ~0UL},
 				{0, 0}
-			}
+			},
+			.note_non_leaf = false
 		}
 	};
diff --git a/arch/powerpc/mm/ptdump/ptdump.c b/arch/powerpc/mm/ptdump/ptdump.c
index 9dc239967b77..89e673f5fd3d 100644
--- a/arch/powerpc/mm/ptdump/ptdump.c
+++ b/arch/powerpc/mm/ptdump/ptdump.c
@@ -307,6 +307,7 @@ static int ptdump_show(struct seq_file *m, void *v)
 		.ptdump = {
 			.note_page = note_page,
 			.range = ptdump_range,
+			.note_non_leaf = false
 		}
 	};
 
@@ -340,6 +341,7 @@ bool ptdump_check_wx(void)
 		.ptdump = {
 			.note_page = note_page,
 			.range = ptdump_range,
+			.note_non_leaf = false
 		}
 	};
 
diff --git a/arch/riscv/mm/ptdump.c b/arch/riscv/mm/ptdump.c
index 1289cc6d3700..b355633afcaf 100644
--- a/arch/riscv/mm/ptdump.c
+++ b/arch/riscv/mm/ptdump.c
@@ -328,7 +328,8 @@ static void ptdump_walk(struct seq_file *s, struct ptd_mm_info *pinfo)
 		.range = (struct ptdump_range[]) {
 			{pinfo->base_addr, pinfo->end},
 			{0, 0}
-		}
+		},
+		.note_non_leaf = false
 	}
 };
 
@@ -350,7 +351,8 @@ bool ptdump_check_wx(void)
 		.range = (struct ptdump_range[]) {
 			{KERN_VIRT_START, ULONG_MAX},
 			{0, 0}
-		}
+		},
+		.note_non_leaf = false
 	}
 };
 
diff --git a/arch/s390/mm/dump_pagetables.c b/arch/s390/mm/dump_pagetables.c
index ffd07ed7b4af..6468cfd53e2a 100644
--- a/arch/s390/mm/dump_pagetables.c
+++ b/arch/s390/mm/dump_pagetables.c
@@ -200,7 +200,8 @@ bool ptdump_check_wx(void)
 			.range = (struct ptdump_range[]) {
 				{.start = 0, .end = max_addr},
 				{.start = 0, .end = 0},
-			}
+			},
+			.note_non_leaf = false
 		},
 		.seq = NULL,
 		.level = -1,
@@ -239,7 +240,8 @@ static int ptdump_show(struct seq_file *m, void *v)
 			.range = (struct ptdump_range[]) {
 				{.start = 0, .end = max_addr},
 				{.start = 0, .end = 0},
-			}
+			},
+			.note_non_leaf = false
 		},
 		.seq = m,
 		.level = -1,
diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
index 89079ea73e65..43f00dfb955f 100644
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -380,7 +380,8 @@ bool ptdump_walk_pgd_level_core(struct seq_file *m,
 		.ptdump = {
 			.note_page	= note_page,
 			.effective_prot = effective_prot,
-			.range		= ptdump_ranges
+			.range		= ptdump_ranges,
+			.note_non_leaf	= false
 		},
 		.level = -1,
 		.to_dmesg	= dmesg,
diff --git a/include/linux/ptdump.h b/include/linux/ptdump.h
index 8dbd51ea8626..b3e793a5c77f 100644
--- a/include/linux/ptdump.h
+++ b/include/linux/ptdump.h
@@ -16,6 +16,7 @@ struct ptdump_state {
 			  int level, u64 val);
 	void (*effective_prot)(struct ptdump_state *st, int level, u64 val);
 	const struct ptdump_range *range;
+	bool note_non_leaf;
 };
 
 bool ptdump_walk_pgd_level_core(struct seq_file *m,
diff --git a/mm/ptdump.c b/mm/ptdump.c
index 106e1d66e9f9..97da7a765b22 100644
--- a/mm/ptdump.c
+++ b/mm/ptdump.c
@@ -41,6 +41,9 @@ static int ptdump_pgd_entry(pgd_t *pgd, unsigned long addr,
 	if (st->effective_prot)
 		st->effective_prot(st, 0, pgd_val(val));
 
+	if (st->note_non_leaf && !pgd_leaf(val))
+		st->note_page(st, addr, 0, pgd_val(val));
+
 	if (pgd_leaf(val)) {
 		st->note_page(st, addr, 0, pgd_val(val));
 		walk->action = ACTION_CONTINUE;
@@ -64,6 +67,9 @@ static int ptdump_p4d_entry(p4d_t *p4d, unsigned long addr,
 	if (st->effective_prot)
 		st->effective_prot(st, 1, p4d_val(val));
 
+	if (st->note_non_leaf && !p4d_leaf(val))
+		st->note_page(st, addr, 1, p4d_val(val));
+
 	if (p4d_leaf(val)) {
 		st->note_page(st, addr, 1, p4d_val(val));
 		walk->action = ACTION_CONTINUE;
@@ -87,6 +93,9 @@ static int ptdump_pud_entry(pud_t *pud, unsigned long addr,
 	if (st->effective_prot)
 		st->effective_prot(st, 2, pud_val(val));
 
+	if (st->note_non_leaf && !pud_leaf(val))
+		st->note_page(st, addr, 2, pud_val(val));
+
 	if (pud_leaf(val)) {
 		st->note_page(st, addr, 2, pud_val(val));
 		walk->action = ACTION_CONTINUE;
@@ -108,6 +117,10 @@ static int ptdump_pmd_entry(pmd_t *pmd, unsigned long addr,
 	if (st->effective_prot)
 		st->effective_prot(st, 3, pmd_val(val));
+
+	if (st->note_non_leaf && !pmd_leaf(val))
+		st->note_page(st, addr, 3, pmd_val(val));
+
 	if (pmd_leaf(val)) {
 		st->note_page(st, addr, 3, pmd_val(val));
 		walk->action = ACTION_CONTINUE;
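[Editor's illustration: with the flag in place, an architecture that
wants table (non-leaf) descriptors reported only needs to flip it in
its ptdump_state initializer. A sketch based on the arm64 ptdump_walk()
shown above; every in-tree user in this patch keeps the flag false, so
behaviour is unchanged by default:]

	struct pg_state st = {
		.ptdump = {
			.note_page = note_page,
			.range = (struct ptdump_range[]) {
				{info->base_addr, end},
				{0, 0}
			},
			.note_non_leaf = true	/* also report table descriptors */
		}
	};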