From patchwork Mon Sep 18 07:29:43 2023
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13389027
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Cc: Andrew Morton, Björn Töpel, Catalin Marinas, Christophe Leroy, "David S. Miller", Dinh Nguyen, Heiko Carstens, Helge Deller, Huacai Chen, Kent Overstreet, Luis Chamberlain, Mark Rutland, Michael Ellerman, Mike Rapoport, Nadav Amit,
 "Naveen N. Rao", Palmer Dabbelt, Puranjay Mohan, Rick Edgecombe, Russell King, Song Liu, Steven Rostedt, Thomas Bogendoerfer, Thomas Gleixner, Will Deacon, bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org
Subject: [PATCH v3 01/13] nios2: define virtual address space for modules
Date: Mon, 18 Sep 2023 10:29:43 +0300
Message-Id: <20230918072955.2507221-2-rppt@kernel.org>
In-Reply-To: <20230918072955.2507221-1-rppt@kernel.org>
References: <20230918072955.2507221-1-rppt@kernel.org>

From: "Mike Rapoport (IBM)"

nios2 uses kmalloc() to implement module_alloc() because CALL26/PCREL26
relocations cannot reach all of the vmalloc address space.

Define the module space as 32MiB below the kernel base and switch nios2
to use vmalloc for module allocations.

Suggested-by: Thomas Gleixner
Acked-by: Dinh Nguyen
Acked-by: Song Liu
Signed-off-by: Mike Rapoport (IBM)
---
 arch/nios2/include/asm/pgtable.h |  5 ++++-
 arch/nios2/kernel/module.c       | 19 ++++---------------
 2 files changed, 8 insertions(+), 16 deletions(-)

diff --git a/arch/nios2/include/asm/pgtable.h b/arch/nios2/include/asm/pgtable.h
index 5144506dfa69..d2fb42fb6db8 100644
--- a/arch/nios2/include/asm/pgtable.h
+++ b/arch/nios2/include/asm/pgtable.h
@@ -25,7 +25,10 @@
 #include

 #define VMALLOC_START	CONFIG_NIOS2_KERNEL_MMU_REGION_BASE
-#define VMALLOC_END	(CONFIG_NIOS2_KERNEL_REGION_BASE - 1)
+#define VMALLOC_END	(CONFIG_NIOS2_KERNEL_REGION_BASE - SZ_32M - 1)
+
+#define MODULES_VADDR	(CONFIG_NIOS2_KERNEL_REGION_BASE - SZ_32M)
+#define MODULES_END	(CONFIG_NIOS2_KERNEL_REGION_BASE - 1)

 struct mm_struct;

diff --git a/arch/nios2/kernel/module.c b/arch/nios2/kernel/module.c
index 76e0a42d6e36..9c97b7513853 100644
--- a/arch/nios2/kernel/module.c
+++ b/arch/nios2/kernel/module.c
@@ -21,23 +21,12 @@
 #include

-/*
- * Modules should NOT be allocated with kmalloc for (obvious) reasons.
- * But we do it for now to avoid relocation issues. CALL26/PCREL26 cannot reach
- * from 0x80000000 (vmalloc area) to 0xc00000000 (kernel) (kmalloc returns
- * addresses in 0xc0000000)
- */
 void *module_alloc(unsigned long size)
 {
-	if (size == 0)
-		return NULL;
-	return kmalloc(size, GFP_KERNEL);
-}
-
-/* Free memory returned from module_alloc */
-void module_memfree(void *module_region)
-{
-	kfree(module_region);
+	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
+				    GFP_KERNEL, PAGE_KERNEL_EXEC,
+				    VM_FLUSH_RESET_PERMS, NUMA_NO_NODE,
+				    __builtin_return_address(0));
 }

 int apply_relocate_add(Elf32_Shdr *sechdrs, const char *strtab,

From patchwork Mon Sep 18 07:29:44 2023
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13389028
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Cc: Andrew Morton, Björn Töpel, Catalin Marinas, Christophe Leroy,
 "David S. Miller", Dinh Nguyen, Heiko Carstens, Helge Deller, Huacai Chen, Kent Overstreet, Luis Chamberlain, Mark Rutland, Michael Ellerman, Mike Rapoport, Nadav Amit, "Naveen N. Rao", Palmer Dabbelt, Puranjay Mohan, Rick Edgecombe, Russell King, Song Liu, Steven Rostedt, Thomas Bogendoerfer, Thomas Gleixner, Will Deacon, bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org
Subject: [PATCH v3 02/13] mm: introduce execmem_text_alloc() and execmem_free()
Date: Mon, 18 Sep 2023 10:29:44 +0300
Message-Id: <20230918072955.2507221-3-rppt@kernel.org>
In-Reply-To: <20230918072955.2507221-1-rppt@kernel.org>
References: <20230918072955.2507221-1-rppt@kernel.org>

From: "Mike Rapoport (IBM)"

module_alloc() is used everywhere as a means to allocate memory for code.

Besides being semantically wrong, this unnecessarily ties all subsystems
that need to allocate code, such as ftrace, kprobes and BPF, to modules
and puts the burden of code allocation on the modules code.

Several architectures override module_alloc() because of various
constraints on where executable memory can be located, and this creates
additional obstacles to improving code allocation.

Start splitting code allocation from modules by introducing the
execmem_text_alloc() and execmem_free() APIs.

Initially, execmem_text_alloc() is a wrapper for module_alloc() and
execmem_free() is a replacement for module_memfree(), so that all call
sites can be updated to use the new APIs.

Since architectures define different restrictions on placement,
permissions, alignment and other parameters for memory that can be used
by the subsystems that allocate executable memory, execmem_text_alloc()
takes a type argument that identifies the calling subsystem and allows
architectures to define parameters for ranges suitable for that
subsystem.

The name execmem_text_alloc() emphasizes that the allocated memory is
for executable code; allocations of the associated data, such as the
data sections of a module, will use the execmem_data_alloc() interface
that will be added later.
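To illustrate the intended usage, a minimal sketch of a converted caller
(modeled on the kprobes hunks in this patch; the set_memory_rox() call
that the real code performs afterwards is omitted):

	#include <linux/execmem.h>

	void *alloc_insn_page(void)
	{
		void *page;

		/*
		 * EXECMEM_KPROBES identifies the calling subsystem so that
		 * architectures can later attach their own range parameters
		 * to it; at this point it behaves exactly like module_alloc().
		 */
		page = execmem_text_alloc(EXECMEM_KPROBES, PAGE_SIZE);
		if (!page)
			return NULL;

		return page;
	}

	static void free_insn_page(void *page)
	{
		/* Like module_memfree(), must not be called from interrupt context. */
		execmem_free(page);
	}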
Signed-off-by: Mike Rapoport (IBM) --- arch/powerpc/kernel/kprobes.c | 4 +-- arch/s390/kernel/ftrace.c | 4 +-- arch/s390/kernel/kprobes.c | 4 +-- arch/s390/kernel/module.c | 5 +-- arch/sparc/net/bpf_jit_comp_32.c | 8 ++--- arch/x86/kernel/ftrace.c | 6 ++-- arch/x86/kernel/kprobes/core.c | 4 +-- include/linux/execmem.h | 56 ++++++++++++++++++++++++++++++++ include/linux/moduleloader.h | 3 -- kernel/bpf/core.c | 6 ++-- kernel/kprobes.c | 8 ++--- kernel/module/Kconfig | 1 + kernel/module/main.c | 25 +++++--------- mm/Kconfig | 3 ++ mm/Makefile | 1 + mm/execmem.c | 26 +++++++++++++++ 16 files changed, 120 insertions(+), 44 deletions(-) create mode 100644 include/linux/execmem.h create mode 100644 mm/execmem.c diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c index b20ee72e873a..62228c7072a2 100644 --- a/arch/powerpc/kernel/kprobes.c +++ b/arch/powerpc/kernel/kprobes.c @@ -19,8 +19,8 @@ #include #include #include -#include #include +#include #include #include #include @@ -130,7 +130,7 @@ void *alloc_insn_page(void) { void *page; - page = module_alloc(PAGE_SIZE); + page = execmem_text_alloc(EXECMEM_KPROBES, PAGE_SIZE); if (!page) return NULL; diff --git a/arch/s390/kernel/ftrace.c b/arch/s390/kernel/ftrace.c index c46381ea04ec..4052e10eb6a4 100644 --- a/arch/s390/kernel/ftrace.c +++ b/arch/s390/kernel/ftrace.c @@ -7,13 +7,13 @@ * Author(s): Martin Schwidefsky */ -#include #include #include #include #include #include #include +#include #include #include #include @@ -220,7 +220,7 @@ static int __init ftrace_plt_init(void) { const char *start, *end; - ftrace_plt = module_alloc(PAGE_SIZE); + ftrace_plt = execmem_text_alloc(EXECMEM_FTRACE, PAGE_SIZE); if (!ftrace_plt) panic("cannot allocate ftrace plt\n"); diff --git a/arch/s390/kernel/kprobes.c b/arch/s390/kernel/kprobes.c index d4b863ed0aa7..48928460dcb9 100644 --- a/arch/s390/kernel/kprobes.c +++ b/arch/s390/kernel/kprobes.c @@ -9,7 +9,6 @@ #define pr_fmt(fmt) "kprobes: " fmt -#include #include #include #include @@ -21,6 +20,7 @@ #include #include #include +#include #include #include #include @@ -38,7 +38,7 @@ void *alloc_insn_page(void) { void *page; - page = module_alloc(PAGE_SIZE); + page = execmem_text_alloc(EXECMEM_KPROBES, PAGE_SIZE); if (!page) return NULL; set_memory_rox((unsigned long)page, 1); diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c index 42215f9404af..db5561d0c233 100644 --- a/arch/s390/kernel/module.c +++ b/arch/s390/kernel/module.c @@ -21,6 +21,7 @@ #include #include #include +#include #include #include #include @@ -76,7 +77,7 @@ void *module_alloc(unsigned long size) #ifdef CONFIG_FUNCTION_TRACER void module_arch_cleanup(struct module *mod) { - module_memfree(mod->arch.trampolines_start); + execmem_free(mod->arch.trampolines_start); } #endif @@ -510,7 +511,7 @@ static int module_alloc_ftrace_hotpatch_trampolines(struct module *me, size = FTRACE_HOTPATCH_TRAMPOLINES_SIZE(s->sh_size); numpages = DIV_ROUND_UP(size, PAGE_SIZE); - start = module_alloc(numpages * PAGE_SIZE); + start = execmem_text_alloc(EXECMEM_FTRACE, numpages * PAGE_SIZE); if (!start) return -ENOMEM; set_memory_rox((unsigned long)start, numpages); diff --git a/arch/sparc/net/bpf_jit_comp_32.c b/arch/sparc/net/bpf_jit_comp_32.c index a74e5004c6c8..5fa9c45fba0a 100644 --- a/arch/sparc/net/bpf_jit_comp_32.c +++ b/arch/sparc/net/bpf_jit_comp_32.c @@ -1,10 +1,10 @@ // SPDX-License-Identifier: GPL-2.0 -#include #include #include #include #include #include +#include #include #include @@ -713,7 +713,7 @@ cond_branch: f_offset = 
addrs[i + filter[i].jf]; if (unlikely(proglen + ilen > oldproglen)) { pr_err("bpb_jit_compile fatal error\n"); kfree(addrs); - module_memfree(image); + execmem_free(image); return; } memcpy(image + proglen, temp, ilen); @@ -736,7 +736,7 @@ cond_branch: f_offset = addrs[i + filter[i].jf]; break; } if (proglen == oldproglen) { - image = module_alloc(proglen); + image = execmem_text_alloc(EXECMEM_BPF, proglen); if (!image) goto out; } @@ -758,7 +758,7 @@ cond_branch: f_offset = addrs[i + filter[i].jf]; void bpf_jit_free(struct bpf_prog *fp) { if (fp->jited) - module_memfree(fp->bpf_func); + execmem_free(fp->bpf_func); bpf_prog_unlock_free(fp); } diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c index 12df54ff0e81..ae56d79a6a74 100644 --- a/arch/x86/kernel/ftrace.c +++ b/arch/x86/kernel/ftrace.c @@ -25,6 +25,7 @@ #include #include #include +#include #include @@ -261,15 +262,14 @@ void arch_ftrace_update_code(int command) #ifdef CONFIG_X86_64 #ifdef CONFIG_MODULES -#include /* Module allocation simplifies allocating memory for code */ static inline void *alloc_tramp(unsigned long size) { - return module_alloc(size); + return execmem_text_alloc(EXECMEM_FTRACE, size); } static inline void tramp_free(void *tramp) { - module_memfree(tramp); + execmem_free(tramp); } #else /* Trampolines can only be created if modules are supported */ diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c index e8babebad7b8..c4f58e893efd 100644 --- a/arch/x86/kernel/kprobes/core.c +++ b/arch/x86/kernel/kprobes/core.c @@ -40,12 +40,12 @@ #include #include #include -#include #include #include #include #include #include +#include #include #include @@ -448,7 +448,7 @@ void *alloc_insn_page(void) { void *page; - page = module_alloc(PAGE_SIZE); + page = execmem_text_alloc(EXECMEM_KPROBES, PAGE_SIZE); if (!page) return NULL; diff --git a/include/linux/execmem.h b/include/linux/execmem.h new file mode 100644 index 000000000000..3491bf7e9714 --- /dev/null +++ b/include/linux/execmem.h @@ -0,0 +1,56 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _LINUX_EXECMEM_ALLOC_H +#define _LINUX_EXECMEM_ALLOC_H + +#include + +/** + * enum execmem_type - types of executable memory ranges + * + * There are several subsystems that allocate executable memory. + * Architectures define different restrictions on placement, + * permissions, alignment and other parameters for memory that can be used + * by these subsystems. + * Types in this enum identify subsystems that allocate executable memory + * and let architectures define parameters for ranges suitable for + * allocations by each subsystem. + * + * @EXECMEM_DEFAULT: default parameters that would be used for types that + * are not explcitly defined. + * @EXECMEM_MODULE_TEXT: parameters for module text sections + * @EXECMEM_KPROBES: parameters for kprobes + * @EXECMEM_FTRACE: parameters for ftrace + * @EXECMEM_BPF: parameters for BPF + * @EXECMEM_TYPE_MAX: + */ +enum execmem_type { + EXECMEM_DEFAULT, + EXECMEM_MODULE_TEXT = EXECMEM_DEFAULT, + EXECMEM_KPROBES, + EXECMEM_FTRACE, + EXECMEM_BPF, + EXECMEM_TYPE_MAX, +}; + +/** + * execmem_text_alloc - allocate executable memory + * @type: type of the allocation + * @size: how many bytes of memory are required + * + * Allocates memory that will contain executable code, either generated or + * loaded from kernel modules. + * + * The memory will have protections defined by architecture for executable + * region of the @type. 
+ * + * Return: a pointer to the allocated memory or %NULL + */ +void *execmem_text_alloc(enum execmem_type type, size_t size); + +/** + * execmem_free - free executable memory + * @ptr: pointer to the memory that should be freed + */ +void execmem_free(void *ptr); + +#endif /* _LINUX_EXECMEM_ALLOC_H */ diff --git a/include/linux/moduleloader.h b/include/linux/moduleloader.h index 001b2ce83832..a23718aa2f4d 100644 --- a/include/linux/moduleloader.h +++ b/include/linux/moduleloader.h @@ -29,9 +29,6 @@ unsigned int arch_mod_section_prepend(struct module *mod, unsigned int section); sections. Returns NULL on failure. */ void *module_alloc(unsigned long size); -/* Free memory returned from module_alloc. */ -void module_memfree(void *module_region); - /* Determines if the section name is an init section (that is only used during * module loading). */ diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 4e3ce0542e31..75249f2d9f77 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -22,7 +22,6 @@ #include #include #include -#include #include #include #include @@ -37,6 +36,7 @@ #include #include #include +#include #include #include @@ -1007,12 +1007,12 @@ void bpf_jit_uncharge_modmem(u32 size) void *__weak bpf_jit_alloc_exec(unsigned long size) { - return module_alloc(size); + return execmem_text_alloc(EXECMEM_BPF, size); } void __weak bpf_jit_free_exec(void *addr) { - module_memfree(addr); + execmem_free(addr); } struct bpf_binary_header * diff --git a/kernel/kprobes.c b/kernel/kprobes.c index 0c6185aefaef..0ccb4d2ec9a2 100644 --- a/kernel/kprobes.c +++ b/kernel/kprobes.c @@ -26,7 +26,6 @@ #include #include #include -#include #include #include #include @@ -39,6 +38,7 @@ #include #include #include +#include #include #include @@ -113,17 +113,17 @@ enum kprobe_slot_state { void __weak *alloc_insn_page(void) { /* - * Use module_alloc() so this page is within +/- 2GB of where the + * Use execmem_text_alloc() so this page is within +/- 2GB of where the * kernel image and loaded module images reside. This is required * for most of the architectures. * (e.g. x86-64 needs this to handle the %rip-relative fixups.) */ - return module_alloc(PAGE_SIZE); + return execmem_text_alloc(EXECMEM_KPROBES, PAGE_SIZE); } static void free_insn_page(void *page) { - module_memfree(page); + execmem_free(page); } struct kprobe_insn_cache kprobe_insn_slots = { diff --git a/kernel/module/Kconfig b/kernel/module/Kconfig index 33a2e991f608..813e116bdee6 100644 --- a/kernel/module/Kconfig +++ b/kernel/module/Kconfig @@ -2,6 +2,7 @@ menuconfig MODULES bool "Enable loadable module support" modules + select EXECMEM help Kernel modules are small pieces of compiled code which can be inserted in the running kernel, rather than being diff --git a/kernel/module/main.c b/kernel/module/main.c index 98fedfdb8db5..4ec982cc943c 100644 --- a/kernel/module/main.c +++ b/kernel/module/main.c @@ -57,6 +57,7 @@ #include #include #include +#include #include #include "internal.h" @@ -1179,16 +1180,6 @@ resolve_symbol_wait(struct module *mod, return ksym; } -void __weak module_memfree(void *module_region) -{ - /* - * This memory may be RO, and freeing RO memory in an interrupt is not - * supported by vmalloc. 
- */ - WARN_ON(in_interrupt()); - vfree(module_region); -} - void __weak module_arch_cleanup(struct module *mod) { } @@ -1207,7 +1198,7 @@ static void *module_memory_alloc(unsigned int size, enum mod_mem_type type) { if (mod_mem_use_vmalloc(type)) return vzalloc(size); - return module_alloc(size); + return execmem_text_alloc(EXECMEM_MODULE_TEXT, size); } static void module_memory_free(void *ptr, enum mod_mem_type type) @@ -1215,7 +1206,7 @@ static void module_memory_free(void *ptr, enum mod_mem_type type) if (mod_mem_use_vmalloc(type)) vfree(ptr); else - module_memfree(ptr); + execmem_free(ptr); } static void free_mod_mem(struct module *mod) @@ -2479,9 +2470,9 @@ static void do_free_init(struct work_struct *w) llist_for_each_safe(pos, n, list) { initfree = container_of(pos, struct mod_initfree, node); - module_memfree(initfree->init_text); - module_memfree(initfree->init_data); - module_memfree(initfree->init_rodata); + execmem_free(initfree->init_text); + execmem_free(initfree->init_data); + execmem_free(initfree->init_rodata); kfree(initfree); } } @@ -2584,10 +2575,10 @@ static noinline int do_init_module(struct module *mod) * We want to free module_init, but be aware that kallsyms may be * walking this with preempt disabled. In all the failure paths, we * call synchronize_rcu(), but we don't want to slow down the success - * path. module_memfree() cannot be called in an interrupt, so do the + * path. execmem_free() cannot be called in an interrupt, so do the * work and call synchronize_rcu() in a work queue. * - * Note that module_alloc() on most architectures creates W+X page + * Note that execmem_text_alloc() on most architectures creates W+X page * mappings which won't be cleaned up until do_free_init() runs. Any * code such as mark_rodata_ro() which depends on those mappings to * be cleaned up needs to sync with the queued work - ie diff --git a/mm/Kconfig b/mm/Kconfig index 264a2df5ecf5..fb12931238e8 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -1258,6 +1258,9 @@ config LOCK_MM_AND_FIND_VMA bool depends on !STACK_GROWSUP +config EXECMEM + bool + source "mm/damon/Kconfig" endmenu diff --git a/mm/Makefile b/mm/Makefile index ec65984e2ade..2e5fec94f09c 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -138,3 +138,4 @@ obj-$(CONFIG_IO_MAPPING) += io-mapping.o obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o obj-$(CONFIG_GENERIC_IOREMAP) += ioremap.o obj-$(CONFIG_SHRINKER_DEBUG) += shrinker_debug.o +obj-$(CONFIG_EXECMEM) += execmem.o diff --git a/mm/execmem.c b/mm/execmem.c new file mode 100644 index 000000000000..638dc2b26a81 --- /dev/null +++ b/mm/execmem.c @@ -0,0 +1,26 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include +#include +#include + +static void *execmem_alloc(size_t size) +{ + return module_alloc(size); +} + +void *execmem_text_alloc(enum execmem_type type, size_t size) +{ + return execmem_alloc(size); +} + +void execmem_free(void *ptr) +{ + /* + * This memory may be RO, and freeing RO memory in an interrupt is not + * supported by vmalloc. 
+ */ + WARN_ON(in_interrupt()); + vfree(ptr); +}

From patchwork Mon Sep 18 07:29:45 2023
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13389029
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Cc: Andrew Morton, Björn Töpel, Catalin Marinas, Christophe Leroy, "David S. Miller", Dinh Nguyen, Heiko Carstens, Helge Deller, Huacai Chen, Kent Overstreet, Luis Chamberlain, Mark Rutland, Michael Ellerman, Mike Rapoport, Nadav Amit,
 "Naveen N. Rao", Palmer Dabbelt, Puranjay Mohan, Rick Edgecombe, Russell King, Song Liu, Steven Rostedt, Thomas Bogendoerfer, Thomas Gleixner, Will Deacon, bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org
Subject: [PATCH v3 03/13] mm/execmem, arch: convert simple overrides of module_alloc to execmem
Date: Mon, 18 Sep 2023 10:29:45 +0300
Message-Id: <20230918072955.2507221-4-rppt@kernel.org>
In-Reply-To: <20230918072955.2507221-1-rppt@kernel.org>
References: <20230918072955.2507221-1-rppt@kernel.org>

From: "Mike Rapoport (IBM)"

Several architectures override module_alloc() only to define an address
range for code allocations that differs from the VMALLOC address space.

Provide a generic implementation in execmem that uses the parameters for
address space ranges, required alignment and page protections provided
by the architectures.

The architectures must fill the execmem_params structure and implement
execmem_arch_params() that returns a pointer to that structure. This way
the execmem initialization won't be called from every architecture, but
rather from a central place, namely the initialization of the core
memory management.

execmem provides the execmem_text_alloc() API that wraps
__vmalloc_node_range() with the parameters defined by the architectures.
If an architecture does not implement execmem_arch_params(),
execmem_text_alloc() will fall back to module_alloc().

The name execmem_text_alloc() emphasizes that the allocated memory is
for executable code; allocations of the associated data, such as the
data sections of a module, will use the execmem_data_alloc() interface
that will be added later.
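With this, a simple architecture override reduces to a parameter table
plus a hook, roughly as in the following sketch (patterned on the nios2
and loongarch hunks below; the MODULES_VADDR/MODULES_END window and the
PAGE_KERNEL_EXEC protection stand in for whatever the architecture
actually requires):

	#include <linux/execmem.h>

	static struct execmem_params execmem_params __ro_after_init = {
		.ranges = {
			[EXECMEM_DEFAULT] = {
				.pgprot    = PAGE_KERNEL_EXEC,
				.alignment = 1,
			},
		},
	};

	struct execmem_params __init *execmem_arch_params(void)
	{
		/* Values that are only known at runtime are filled in here. */
		execmem_params.ranges[EXECMEM_DEFAULT].start = MODULES_VADDR;
		execmem_params.ranges[EXECMEM_DEFAULT].end = MODULES_END;

		return &execmem_params;
	}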
Signed-off-by: Mike Rapoport (IBM) --- arch/loongarch/kernel/module.c | 18 ++++++++-- arch/mips/kernel/module.c | 19 +++++++--- arch/nios2/kernel/module.c | 19 +++++++--- arch/parisc/kernel/module.c | 23 +++++++----- arch/riscv/kernel/module.c | 20 ++++++++--- arch/sparc/kernel/module.c | 44 +++++++++++------------ include/linux/execmem.h | 44 +++++++++++++++++++++++ mm/execmem.c | 66 ++++++++++++++++++++++++++++++++-- mm/mm_init.c | 2 ++ 9 files changed, 203 insertions(+), 52 deletions(-) diff --git a/arch/loongarch/kernel/module.c b/arch/loongarch/kernel/module.c index b8b86088b2dd..a1d8fe9796fa 100644 --- a/arch/loongarch/kernel/module.c +++ b/arch/loongarch/kernel/module.c @@ -18,6 +18,7 @@ #include #include #include +#include #include #include @@ -469,10 +470,21 @@ int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab, return 0; } -void *module_alloc(unsigned long size) +static struct execmem_params execmem_params __ro_after_init = { + .ranges = { + [EXECMEM_DEFAULT] = { + .pgprot = PAGE_KERNEL, + .alignment = 1, + }, + }, +}; + +struct execmem_params __init *execmem_arch_params(void) { - return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, - GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE, __builtin_return_address(0)); + execmem_params.ranges[EXECMEM_DEFAULT].start = MODULES_VADDR; + execmem_params.ranges[EXECMEM_DEFAULT].end = MODULES_END; + + return &execmem_params; } static void module_init_ftrace_plt(const Elf_Ehdr *hdr, diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c index 0c936cbf20c5..1c959074b35f 100644 --- a/arch/mips/kernel/module.c +++ b/arch/mips/kernel/module.c @@ -20,6 +20,7 @@ #include #include #include +#include extern void jump_label_apply_nops(struct module *mod); @@ -33,11 +34,21 @@ static LIST_HEAD(dbe_list); static DEFINE_SPINLOCK(dbe_lock); #ifdef MODULE_START -void *module_alloc(unsigned long size) +static struct execmem_params execmem_params __ro_after_init = { + .ranges = { + [EXECMEM_DEFAULT] = { + .start = MODULE_START, + .end = MODULE_END, + .alignment = 1, + }, + }, +}; + +struct execmem_params __init *execmem_arch_params(void) { - return __vmalloc_node_range(size, 1, MODULE_START, MODULE_END, - GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE, - __builtin_return_address(0)); + execmem_params.ranges[EXECMEM_DEFAULT].pgprot = PAGE_KERNEL; + + return &execmem_params; } #endif diff --git a/arch/nios2/kernel/module.c b/arch/nios2/kernel/module.c index 9c97b7513853..5a8df4f9c04e 100644 --- a/arch/nios2/kernel/module.c +++ b/arch/nios2/kernel/module.c @@ -18,15 +18,24 @@ #include #include #include +#include #include -void *module_alloc(unsigned long size) +static struct execmem_params execmem_params __ro_after_init = { + .ranges = { + [EXECMEM_DEFAULT] = { + .start = MODULES_VADDR, + .end = MODULES_END, + .pgprot = PAGE_KERNEL_EXEC, + .alignment = 1, + }, + }, +}; + +struct execmem_params __init *execmem_arch_params(void) { - return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, - GFP_KERNEL, PAGE_KERNEL_EXEC, - VM_FLUSH_RESET_PERMS, NUMA_NO_NODE, - __builtin_return_address(0)); + return &execmem_params; } int apply_relocate_add(Elf32_Shdr *sechdrs, const char *strtab, diff --git a/arch/parisc/kernel/module.c b/arch/parisc/kernel/module.c index d214bbe3c2af..0c6dfd1daef3 100644 --- a/arch/parisc/kernel/module.c +++ b/arch/parisc/kernel/module.c @@ -49,6 +49,7 @@ #include #include #include +#include #include #include @@ -173,15 +174,21 @@ static inline int reassemble_22(int as22) ((as22 & 0x0003ff) << 3)); } -void 
*module_alloc(unsigned long size) +static struct execmem_params execmem_params __ro_after_init = { + .ranges = { + [EXECMEM_DEFAULT] = { + .pgprot = PAGE_KERNEL_RWX, + .alignment = 1, + }, + }, +}; + +struct execmem_params __init *execmem_arch_params(void) { - /* using RWX means less protection for modules, but it's - * easier than trying to map the text, data, init_text and - * init_data correctly */ - return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END, - GFP_KERNEL, - PAGE_KERNEL_RWX, 0, NUMA_NO_NODE, - __builtin_return_address(0)); + execmem_params.ranges[EXECMEM_DEFAULT].start = VMALLOC_START; + execmem_params.ranges[EXECMEM_DEFAULT].end = VMALLOC_END; + + return &execmem_params; } #ifndef CONFIG_64BIT diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c index 7c651d55fcbd..343a0edfb6dd 100644 --- a/arch/riscv/kernel/module.c +++ b/arch/riscv/kernel/module.c @@ -11,6 +11,7 @@ #include #include #include +#include #include #include @@ -436,12 +437,21 @@ int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab, } #if defined(CONFIG_MMU) && defined(CONFIG_64BIT) -void *module_alloc(unsigned long size) +static struct execmem_params execmem_params __ro_after_init = { + .ranges = { + [EXECMEM_DEFAULT] = { + .pgprot = PAGE_KERNEL, + .alignment = 1, + }, + }, +}; + +struct execmem_params __init *execmem_arch_params(void) { - return __vmalloc_node_range(size, 1, MODULES_VADDR, - MODULES_END, GFP_KERNEL, - PAGE_KERNEL, 0, NUMA_NO_NODE, - __builtin_return_address(0)); + execmem_params.ranges[EXECMEM_DEFAULT].start = MODULES_VADDR; + execmem_params.ranges[EXECMEM_DEFAULT].end = MODULES_END; + + return &execmem_params; } #endif diff --git a/arch/sparc/kernel/module.c b/arch/sparc/kernel/module.c index 66c45a2764bc..1d8d1fba95b9 100644 --- a/arch/sparc/kernel/module.c +++ b/arch/sparc/kernel/module.c @@ -14,6 +14,10 @@ #include #include #include +#include +#ifdef CONFIG_SPARC64 +#include +#endif #include #include @@ -21,34 +25,26 @@ #include "entry.h" +static struct execmem_params execmem_params __ro_after_init = { + .ranges = { + [EXECMEM_DEFAULT] = { #ifdef CONFIG_SPARC64 - -#include - -static void *module_map(unsigned long size) -{ - if (PAGE_ALIGN(size) > MODULES_LEN) - return NULL; - return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, - GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE, - __builtin_return_address(0)); -} + .start = MODULES_VADDR, + .end = MODULES_END, #else -static void *module_map(unsigned long size) -{ - return vmalloc(size); -} -#endif /* CONFIG_SPARC64 */ - -void *module_alloc(unsigned long size) + .start = VMALLOC_START, + .end = VMALLOC_END, +#endif + .alignment = 1, + }, + }, +}; + +struct execmem_params __init *execmem_arch_params(void) { - void *ret; - - ret = module_map(size); - if (ret) - memset(ret, 0, size); + execmem_params.ranges[EXECMEM_DEFAULT].pgprot = PAGE_KERNEL; - return ret; + return &execmem_params; } /* Make generic code ignore STT_REGISTER dummy undefined symbols. 
*/ diff --git a/include/linux/execmem.h b/include/linux/execmem.h index 3491bf7e9714..44e213625053 100644 --- a/include/linux/execmem.h +++ b/include/linux/execmem.h @@ -32,6 +32,44 @@ enum execmem_type { EXECMEM_TYPE_MAX, }; +/** + * struct execmem_range - definition of a memory range suitable for code and + * related data allocations + * @start: address space start + * @end: address space end (inclusive) + * @pgprot: permissions for memory in this address space + * @alignment: alignment required for text allocations + */ +struct execmem_range { + unsigned long start; + unsigned long end; + pgprot_t pgprot; + unsigned int alignment; +}; + +/** + * struct execmem_params - architecture parameters for code allocations + * @ranges: array of ranges defining architecture specific parameters for + * each type of executable memory allocations + */ +struct execmem_params { + struct execmem_range ranges[EXECMEM_TYPE_MAX]; +}; + +/** + * execmem_arch_params - supply parameters for allocations of executable memory + * + * A hook for architectures to define parameters for allocations of + * executable memory described by struct execmem_params + * + * For architectures that do not implement this method a default set of + * parameters will be used + * + * Return: a structure defining architecture parameters and restrictions + * for allocations of executable memory + */ +struct execmem_params *execmem_arch_params(void); + /** * execmem_text_alloc - allocate executable memory * @type: type of the allocation @@ -53,4 +91,10 @@ void *execmem_text_alloc(enum execmem_type type, size_t size); */ void execmem_free(void *ptr); +#ifdef CONFIG_EXECMEM +void execmem_init(void); +#else +static inline void execmem_init(void) {} +#endif + #endif /* _LINUX_EXECMEM_ALLOC_H */ diff --git a/mm/execmem.c b/mm/execmem.c index 638dc2b26a81..f25a5e064886 100644 --- a/mm/execmem.c +++ b/mm/execmem.c @@ -5,14 +5,26 @@ #include #include -static void *execmem_alloc(size_t size) +static struct execmem_params execmem_params; + +static void *execmem_alloc(size_t size, struct execmem_range *range) { - return module_alloc(size); + unsigned long start = range->start; + unsigned long end = range->end; + unsigned int align = range->alignment; + pgprot_t pgprot = range->pgprot; + + return __vmalloc_node_range(size, align, start, end, + GFP_KERNEL, pgprot, VM_FLUSH_RESET_PERMS, + NUMA_NO_NODE, __builtin_return_address(0)); } void *execmem_text_alloc(enum execmem_type type, size_t size) { - return execmem_alloc(size); + if (!execmem_params.ranges[type].start) + return module_alloc(size); + + return execmem_alloc(size, &execmem_params.ranges[type]); } void execmem_free(void *ptr) @@ -24,3 +36,51 @@ void execmem_free(void *ptr) WARN_ON(in_interrupt()); vfree(ptr); } + +struct execmem_params * __weak execmem_arch_params(void) +{ + return NULL; +} + +static bool execmem_validate_params(struct execmem_params *p) +{ + struct execmem_range *r = &p->ranges[EXECMEM_DEFAULT]; + + if (!r->alignment || !r->start || !r->end || !pgprot_val(r->pgprot)) { + pr_crit("Invalid parameters for execmem allocator, module loading will fail"); + return false; + } + + return true; +} + +static void execmem_init_missing(struct execmem_params *p) +{ + struct execmem_range *default_range = &p->ranges[EXECMEM_DEFAULT]; + + for (int i = EXECMEM_DEFAULT + 1; i < EXECMEM_TYPE_MAX; i++) { + struct execmem_range *r = &p->ranges[i]; + + if (!r->start) { + r->pgprot = default_range->pgprot; + r->alignment = default_range->alignment; + r->start = default_range->start; + r->end 
= default_range->end; + } + } +} + +void __init execmem_init(void) +{ + struct execmem_params *p = execmem_arch_params(); + + if (!p) + return; + + if (!execmem_validate_params(p)) + return; + + execmem_init_missing(p); + + execmem_params = *p; +} diff --git a/mm/mm_init.c b/mm/mm_init.c index 50f2f34745af..7c002b36da21 100644 --- a/mm/mm_init.c +++ b/mm/mm_init.c @@ -26,6 +26,7 @@ #include #include #include +#include #include "internal.h" #include "slab.h" #include "shuffle.h" @@ -2797,4 +2798,5 @@ void __init mm_core_init(void) pti_init(); kmsan_init_runtime(); mm_cache_init(); + execmem_init(); }

From patchwork Mon Sep 18 07:29:46 2023
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13389030
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Cc: Andrew Morton, Björn Töpel, Catalin Marinas, Christophe Leroy,
"David S. Miller" , Dinh Nguyen , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Luis Chamberlain , Mark Rutland , Michael Ellerman , Mike Rapoport , Nadav Amit , "Naveen N. Rao" , Palmer Dabbelt , Puranjay Mohan , Rick Edgecombe , Russell King , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH v3 04/13] mm/execmem, arch: convert remaining overrides of module_alloc to execmem Date: Mon, 18 Sep 2023 10:29:46 +0300 Message-Id: <20230918072955.2507221-5-rppt@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230918072955.2507221-1-rppt@kernel.org> References: <20230918072955.2507221-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230918_003055_560424_61CAE3A6 X-CRM114-Status: GOOD ( 27.78 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" Extend execmem parameters to accommodate more complex overrides of module_alloc() by architectures. This includes specification of a fallback range required by arm, arm64 and powerpc and support for allocation of KASAN shadow required by arm64, s390 and x86. The core implementation of execmem_alloc() takes care of suppressing warnings when the initial allocation fails but there is a fallback range defined. 
Signed-off-by: Mike Rapoport (IBM) --- arch/arm/kernel/module.c | 38 ++++++++++++--------- arch/arm64/kernel/module.c | 57 ++++++++++++++------------------ arch/powerpc/kernel/module.c | 52 ++++++++++++++--------------- arch/s390/kernel/module.c | 52 +++++++++++------------------ arch/x86/kernel/module.c | 64 +++++++++++------------------------- include/linux/execmem.h | 14 ++++++++ mm/execmem.c | 43 ++++++++++++++++++++++-- 7 files changed, 167 insertions(+), 153 deletions(-) diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c index e74d84f58b77..2c7651a2d84c 100644 --- a/arch/arm/kernel/module.c +++ b/arch/arm/kernel/module.c @@ -16,6 +16,7 @@ #include #include #include +#include #include #include @@ -34,23 +35,28 @@ #endif #ifdef CONFIG_MMU -void *module_alloc(unsigned long size) +static struct execmem_params execmem_params __ro_after_init = { + .ranges = { + [EXECMEM_DEFAULT] = { + .start = MODULES_VADDR, + .end = MODULES_END, + .alignment = 1, + }, + }, +}; + +struct execmem_params __init *execmem_arch_params(void) { - gfp_t gfp_mask = GFP_KERNEL; - void *p; - - /* Silence the initial allocation */ - if (IS_ENABLED(CONFIG_ARM_MODULE_PLTS)) - gfp_mask |= __GFP_NOWARN; - - p = __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, - gfp_mask, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE, - __builtin_return_address(0)); - if (!IS_ENABLED(CONFIG_ARM_MODULE_PLTS) || p) - return p; - return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END, - GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE, - __builtin_return_address(0)); + struct execmem_range *r = &execmem_params.ranges[EXECMEM_DEFAULT]; + + r->pgprot = PAGE_KERNEL_EXEC; + + if (IS_ENABLED(CONFIG_ARM_MODULE_PLTS)) { + r->fallback_start = VMALLOC_START; + r->fallback_end = VMALLOC_END; + } + + return &execmem_params; } #endif diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c index dd851297596e..cd6320de1c54 100644 --- a/arch/arm64/kernel/module.c +++ b/arch/arm64/kernel/module.c @@ -20,6 +20,7 @@ #include #include #include +#include #include #include @@ -108,46 +109,38 @@ static int __init module_init_limits(void) return 0; } -subsys_initcall(module_init_limits); -void *module_alloc(unsigned long size) +static struct execmem_params execmem_params __ro_after_init = { + .ranges = { + [EXECMEM_DEFAULT] = { + .flags = EXECMEM_KASAN_SHADOW, + .alignment = MODULE_ALIGN, + }, + }, +}; + +struct execmem_params __init *execmem_arch_params(void) { - void *p = NULL; + struct execmem_range *r = &execmem_params.ranges[EXECMEM_DEFAULT]; - /* - * Where possible, prefer to allocate within direct branch range of the - * kernel such that no PLTs are necessary. 
- */ - if (module_direct_base) { - p = __vmalloc_node_range(size, MODULE_ALIGN, - module_direct_base, - module_direct_base + SZ_128M, - GFP_KERNEL | __GFP_NOWARN, - PAGE_KERNEL, 0, NUMA_NO_NODE, - __builtin_return_address(0)); - } + module_init_limits(); - if (!p && module_plt_base) { - p = __vmalloc_node_range(size, MODULE_ALIGN, - module_plt_base, - module_plt_base + SZ_2G, - GFP_KERNEL | __GFP_NOWARN, - PAGE_KERNEL, 0, NUMA_NO_NODE, - __builtin_return_address(0)); - } + r->pgprot = PAGE_KERNEL; - if (!p) { - pr_warn_ratelimited("%s: unable to allocate memory\n", - __func__); - } + if (module_direct_base) { + r->start = module_direct_base; + r->end = module_direct_base + SZ_128M; - if (p && (kasan_alloc_module_shadow(p, size, GFP_KERNEL) < 0)) { - vfree(p); - return NULL; + if (module_plt_base) { + r->fallback_start = module_plt_base; + r->fallback_end = module_plt_base + SZ_2G; + } + } else if (module_plt_base) { + r->start = module_plt_base; + r->end = module_plt_base + SZ_2G; } - /* Memory is intended to be executable, reset the pointer tag. */ - return kasan_reset_tag(p); + return &execmem_params; } enum aarch64_reloc_op { diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c index f6d6ae0a1692..f4dd26f693a3 100644 --- a/arch/powerpc/kernel/module.c +++ b/arch/powerpc/kernel/module.c @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include @@ -89,39 +90,38 @@ int module_finalize(const Elf_Ehdr *hdr, return 0; } -static __always_inline void * -__module_alloc(unsigned long size, unsigned long start, unsigned long end, bool nowarn) +static struct execmem_params execmem_params __ro_after_init = { + .ranges = { + [EXECMEM_DEFAULT] = { + .alignment = 1, + }, + }, +}; + +struct execmem_params __init *execmem_arch_params(void) { pgprot_t prot = strict_module_rwx_enabled() ? PAGE_KERNEL : PAGE_KERNEL_EXEC; - gfp_t gfp = GFP_KERNEL | (nowarn ? __GFP_NOWARN : 0); - - /* - * Don't do huge page allocations for modules yet until more testing - * is done. STRICT_MODULE_RWX may require extra work to support this - * too. 
- */ - return __vmalloc_node_range(size, 1, start, end, gfp, prot, - VM_FLUSH_RESET_PERMS, - NUMA_NO_NODE, __builtin_return_address(0)); -} + struct execmem_range *range = &execmem_params.ranges[EXECMEM_DEFAULT]; -void *module_alloc(unsigned long size) -{ #ifdef MODULES_VADDR unsigned long limit = (unsigned long)_etext - SZ_32M; - void *ptr = NULL; - - BUILD_BUG_ON(TASK_SIZE > MODULES_VADDR); /* First try within 32M limit from _etext to avoid branch trampolines */ - if (MODULES_VADDR < PAGE_OFFSET && MODULES_END > limit) - ptr = __module_alloc(size, limit, MODULES_END, true); - - if (!ptr) - ptr = __module_alloc(size, MODULES_VADDR, MODULES_END, false); - - return ptr; + if (MODULES_VADDR < PAGE_OFFSET && MODULES_END > limit) { + range->start = limit; + range->end = MODULES_END; + range->fallback_start = MODULES_VADDR; + range->fallback_end = MODULES_END; + } else { + range->start = MODULES_VADDR; + range->end = MODULES_END; + } #else - return __module_alloc(size, VMALLOC_START, VMALLOC_END, false); + range->start = VMALLOC_START; + range->end = VMALLOC_END; #endif + + range->pgprot = prot; + + return &execmem_params; } diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c index db5561d0c233..538d5f24af66 100644 --- a/arch/s390/kernel/module.c +++ b/arch/s390/kernel/module.c @@ -37,41 +37,29 @@ #define PLT_ENTRY_SIZE 22 -static unsigned long get_module_load_offset(void) +static struct execmem_params execmem_params __ro_after_init = { + .ranges = { + [EXECMEM_DEFAULT] = { + .flags = EXECMEM_KASAN_SHADOW, + .alignment = MODULE_ALIGN, + .pgprot = PAGE_KERNEL, + }, + }, +}; + +struct execmem_params __init *execmem_arch_params(void) { - static DEFINE_MUTEX(module_kaslr_mutex); - static unsigned long module_load_offset; - - if (!kaslr_enabled()) - return 0; - /* - * Calculate the module_load_offset the first time this code - * is called. Once calculated it stays the same until reboot. - */ - mutex_lock(&module_kaslr_mutex); - if (!module_load_offset) + unsigned long module_load_offset = 0; + unsigned long start; + + if (kaslr_enabled()) module_load_offset = get_random_u32_inclusive(1, 1024) * PAGE_SIZE; - mutex_unlock(&module_kaslr_mutex); - return module_load_offset; -} -void *module_alloc(unsigned long size) -{ - gfp_t gfp_mask = GFP_KERNEL; - void *p; - - if (PAGE_ALIGN(size) > MODULES_LEN) - return NULL; - p = __vmalloc_node_range(size, MODULE_ALIGN, - MODULES_VADDR + get_module_load_offset(), - MODULES_END, gfp_mask, PAGE_KERNEL, - VM_FLUSH_RESET_PERMS | VM_DEFER_KMEMLEAK, - NUMA_NO_NODE, __builtin_return_address(0)); - if (p && (kasan_alloc_module_shadow(p, size, gfp_mask) < 0)) { - vfree(p); - return NULL; - } - return p; + start = MODULES_VADDR + module_load_offset; + execmem_params.ranges[EXECMEM_DEFAULT].start = start; + execmem_params.ranges[EXECMEM_DEFAULT].end = MODULES_END; + + return &execmem_params; } #ifdef CONFIG_FUNCTION_TRACER diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c index 5f71a0cf4399..9d37375e2f05 100644 --- a/arch/x86/kernel/module.c +++ b/arch/x86/kernel/module.c @@ -19,6 +19,7 @@ #include #include #include +#include #include #include @@ -36,55 +37,30 @@ do { \ } while (0) #endif -#ifdef CONFIG_RANDOMIZE_BASE -static unsigned long module_load_offset; +static struct execmem_params execmem_params __ro_after_init = { + .ranges = { + [EXECMEM_DEFAULT] = { + .flags = EXECMEM_KASAN_SHADOW, + .alignment = MODULE_ALIGN, + }, + }, +}; -/* Mutex protects the module_load_offset. 
*/ -static DEFINE_MUTEX(module_kaslr_mutex); - -static unsigned long int get_module_load_offset(void) -{ - if (kaslr_enabled()) { - mutex_lock(&module_kaslr_mutex); - /* - * Calculate the module_load_offset the first time this - * code is called. Once calculated it stays the same until - * reboot. - */ - if (module_load_offset == 0) - module_load_offset = - get_random_u32_inclusive(1, 1024) * PAGE_SIZE; - mutex_unlock(&module_kaslr_mutex); - } - return module_load_offset; -} -#else -static unsigned long int get_module_load_offset(void) -{ - return 0; -} -#endif - -void *module_alloc(unsigned long size) +struct execmem_params __init *execmem_arch_params(void) { - gfp_t gfp_mask = GFP_KERNEL; - void *p; - - if (PAGE_ALIGN(size) > MODULES_LEN) - return NULL; + unsigned long module_load_offset = 0; + unsigned long start; - p = __vmalloc_node_range(size, MODULE_ALIGN, - MODULES_VADDR + get_module_load_offset(), - MODULES_END, gfp_mask, PAGE_KERNEL, - VM_FLUSH_RESET_PERMS | VM_DEFER_KMEMLEAK, - NUMA_NO_NODE, __builtin_return_address(0)); + if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_enabled()) + module_load_offset = + get_random_u32_inclusive(1, 1024) * PAGE_SIZE; - if (p && (kasan_alloc_module_shadow(p, size, gfp_mask) < 0)) { - vfree(p); - return NULL; - } + start = MODULES_VADDR + module_load_offset; + execmem_params.ranges[EXECMEM_DEFAULT].start = start; + execmem_params.ranges[EXECMEM_DEFAULT].end = MODULES_END; + execmem_params.ranges[EXECMEM_DEFAULT].pgprot = PAGE_KERNEL; - return p; + return &execmem_params; } #ifdef CONFIG_X86_32 diff --git a/include/linux/execmem.h b/include/linux/execmem.h index 44e213625053..806ad1a0088d 100644 --- a/include/linux/execmem.h +++ b/include/linux/execmem.h @@ -32,19 +32,33 @@ enum execmem_type { EXECMEM_TYPE_MAX, }; +/** + * enum execmem_module_flags - options for executable memory allocations + * @EXECMEM_KASAN_SHADOW: allocate kasan shadow + */ +enum execmem_range_flags { + EXECMEM_KASAN_SHADOW = (1 << 0), +}; + /** * struct execmem_range - definition of a memory range suitable for code and * related data allocations * @start: address space start * @end: address space end (inclusive) + * @fallback_start: start of the range for fallback allocations + * @fallback_end: end of the range for fallback allocations (inclusive) * @pgprot: permissions for memory in this address space * @alignment: alignment required for text allocations + * @flags: options for memory allocations for this range */ struct execmem_range { unsigned long start; unsigned long end; + unsigned long fallback_start; + unsigned long fallback_end; pgprot_t pgprot; unsigned int alignment; + enum execmem_range_flags flags; }; /** diff --git a/mm/execmem.c b/mm/execmem.c index f25a5e064886..a8c2f44d0133 100644 --- a/mm/execmem.c +++ b/mm/execmem.c @@ -11,12 +11,46 @@ static void *execmem_alloc(size_t size, struct execmem_range *range) { unsigned long start = range->start; unsigned long end = range->end; + unsigned long fallback_start = range->fallback_start; + unsigned long fallback_end = range->fallback_end; unsigned int align = range->alignment; pgprot_t pgprot = range->pgprot; + bool kasan = range->flags & EXECMEM_KASAN_SHADOW; + unsigned long vm_flags = VM_FLUSH_RESET_PERMS; + bool fallback = !!fallback_start; + gfp_t gfp_flags = GFP_KERNEL; + void *p; - return __vmalloc_node_range(size, align, start, end, - GFP_KERNEL, pgprot, VM_FLUSH_RESET_PERMS, - NUMA_NO_NODE, __builtin_return_address(0)); + if (PAGE_ALIGN(size) > (end - start)) + return NULL; + + if (kasan) + vm_flags |= 
VM_DEFER_KMEMLEAK; + + if (fallback) + gfp_flags |= __GFP_NOWARN; + + p = __vmalloc_node_range(size, align, start, end, gfp_flags, + pgprot, vm_flags, NUMA_NO_NODE, + __builtin_return_address(0)); + + if (!p && fallback) { + start = fallback_start; + end = fallback_end; + gfp_flags = GFP_KERNEL; + + p = __vmalloc_node_range(size, align, start, end, gfp_flags, + pgprot, vm_flags, NUMA_NO_NODE, + __builtin_return_address(0)); + } + + if (p && kasan && + (kasan_alloc_module_shadow(p, size, GFP_KERNEL) < 0)) { + vfree(p); + return NULL; + } + + return kasan_reset_tag(p); } void *execmem_text_alloc(enum execmem_type type, size_t size) @@ -66,6 +100,9 @@ static void execmem_init_missing(struct execmem_params *p) r->alignment = default_range->alignment; r->start = default_range->start; r->end = default_range->end; + r->flags = default_range->flags; + r->fallback_start = default_range->fallback_start; + r->fallback_end = default_range->fallback_end; } } } From patchwork Mon Sep 18 07:29:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 13389031 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id BEBFBCD37B0 for ; Mon, 18 Sep 2023 07:31:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=KsIcmkeZB2HNgsb0xF884fTPjaQTZ0uENIjeN97aYl0=; b=hFpxb4wV6FSv97 6VkSgUnCuo4Vja8NxTFUSe9w40wgm72cpqCcIpmLhFsE/SmsSKTzQ4S+q3aQ8PSxJuJFcfqrNvMNW qEuM0pJT4/wkVimzPjfKMcaA8IjSI7OG7YHBNmsxQbg1Kh4etokQTwpHPLS4yQAs8VdIES8WkHumj L9AWs6EL23s6HjR95b0F+4vdCzTFetSAcnRoae6/VGBz5l7c3UnvCSTL6Dw8QvoCU16sYhhCvp7KE Gq9oy7b1zsRzE0iV4OT/gldJaHHaklS16p5LhH8nF0TGJfu9OjHrXgIApq6QwaHDQysQSZE5PFq6X P0QCMFw1DQxJTuDGy/Sg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qi8ip-00EhTd-22; Mon, 18 Sep 2023 07:31:07 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qi8im-00EhR0-2V; Mon, 18 Sep 2023 07:31:06 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 44E2A60E15; Mon, 18 Sep 2023 07:31:04 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 68ED7C433C8; Mon, 18 Sep 2023 07:30:54 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1695022263; bh=mbiiTE+Ax/Si/BZxcJVEdVXi15nThNmQlUqiyILb0zo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ojfgNgsup07S+iaZErkHdPn6pFXvUxpfd0rqcJ0g/fFjdgDTdp5RCwZChtRwqs+hU 358zdLhiRZXyw3aIFuKwPq3ciyzug59B+euqQg31hDJF2IYAsE+6D264chrRpy0JCv 
lIJDaEg0ezBhP+IOLDXi95uXUolbh37QjiMgyCp6vToq0/niAQX3TSUJ4+A3chXFuR Fz6pQnaNxbhFzVVMJL+/JI6h0Va38Wr9Y8Zzq0rWzkuGOl0+j0c7QpIR3g6g0WJ4vh Na6tomyrzTQLlYxetqXYeBzvBjSJ3J7TMO8PQeFVdg4iyOutejtYTgrDjMGzC7l/J1 0uZomhqve3rWw== From: Mike Rapoport To: linux-kernel@vger.kernel.org Cc: Andrew Morton , =?utf-8?b?QmrDtnJuIFTDtnBl?= =?utf-8?b?bA==?= , Catalin Marinas , Christophe Leroy , "David S. Miller" , Dinh Nguyen , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Luis Chamberlain , Mark Rutland , Michael Ellerman , Mike Rapoport , Nadav Amit , "Naveen N. Rao" , Palmer Dabbelt , Puranjay Mohan , Rick Edgecombe , Russell King , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH v3 05/13] modules, execmem: drop module_alloc Date: Mon, 18 Sep 2023 10:29:47 +0300 Message-Id: <20230918072955.2507221-6-rppt@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230918072955.2507221-1-rppt@kernel.org> References: <20230918072955.2507221-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230918_003104_933486_781FBF5A X-CRM114-Status: GOOD ( 18.32 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" Define default parameters for address range for code allocations using the current values in module_alloc() and make execmem_text_alloc() use these defaults when an architecture does not supply its specific parameters. With this, execmem_text_alloc() implements memory allocation in a way compatible with module_alloc() and can be used as a replacement for module_alloc(). Signed-off-by: Mike Rapoport (IBM) --- include/linux/execmem.h | 8 ++++++++ include/linux/moduleloader.h | 12 ------------ kernel/module/main.c | 7 ------- mm/execmem.c | 12 ++++++++---- 4 files changed, 16 insertions(+), 23 deletions(-) diff --git a/include/linux/execmem.h b/include/linux/execmem.h index 806ad1a0088d..519bdfdca595 100644 --- a/include/linux/execmem.h +++ b/include/linux/execmem.h @@ -4,6 +4,14 @@ #include +#if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \ + !defined(CONFIG_KASAN_VMALLOC) +#include +#define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT) +#else +#define MODULE_ALIGN PAGE_SIZE +#endif + /** * enum execmem_type - types of executable memory ranges * diff --git a/include/linux/moduleloader.h b/include/linux/moduleloader.h index a23718aa2f4d..8c81f389117d 100644 --- a/include/linux/moduleloader.h +++ b/include/linux/moduleloader.h @@ -25,10 +25,6 @@ int module_frob_arch_sections(Elf_Ehdr *hdr, /* Additional bytes needed by arch in front of individual sections */ unsigned int arch_mod_section_prepend(struct module *mod, unsigned int section); -/* Allocator used for allocating struct module, core sections and init - sections. Returns NULL on failure. 
*/ -void *module_alloc(unsigned long size); - /* Determines if the section name is an init section (that is only used during * module loading). */ @@ -118,12 +114,4 @@ void module_arch_cleanup(struct module *mod); /* Any cleanup before freeing mod->module_init */ void module_arch_freeing_init(struct module *mod); -#if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \ - !defined(CONFIG_KASAN_VMALLOC) -#include -#define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT) -#else -#define MODULE_ALIGN PAGE_SIZE -#endif - #endif diff --git a/kernel/module/main.c b/kernel/module/main.c index 4ec982cc943c..c4146bfcd0a7 100644 --- a/kernel/module/main.c +++ b/kernel/module/main.c @@ -1601,13 +1601,6 @@ static void free_modinfo(struct module *mod) } } -void * __weak module_alloc(unsigned long size) -{ - return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END, - GFP_KERNEL, PAGE_KERNEL_EXEC, VM_FLUSH_RESET_PERMS, - NUMA_NO_NODE, __builtin_return_address(0)); -} - bool __weak module_init_section(const char *name) { return strstarts(name, ".init"); diff --git a/mm/execmem.c b/mm/execmem.c index a8c2f44d0133..abcbd07e05ac 100644 --- a/mm/execmem.c +++ b/mm/execmem.c @@ -55,9 +55,6 @@ static void *execmem_alloc(size_t size, struct execmem_range *range) void *execmem_text_alloc(enum execmem_type type, size_t size) { - if (!execmem_params.ranges[type].start) - return module_alloc(size); - return execmem_alloc(size, &execmem_params.ranges[type]); } @@ -111,8 +108,15 @@ void __init execmem_init(void) { struct execmem_params *p = execmem_arch_params(); - if (!p) + if (!p) { + p = &execmem_params; + p->ranges[EXECMEM_DEFAULT].start = VMALLOC_START; + p->ranges[EXECMEM_DEFAULT].end = VMALLOC_END; + p->ranges[EXECMEM_DEFAULT].pgprot = PAGE_KERNEL_EXEC; + p->ranges[EXECMEM_DEFAULT].alignment = 1; + return; + } if (!execmem_validate_params(p)) return; From patchwork Mon Sep 18 07:29:48 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 13389032 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 34370CD13D1 for ; Mon, 18 Sep 2023 07:31:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=GDFCcaZoMXW7Hnpc36Y3DHbm9rXdzy+kjDeYJPU77GY=; b=vHSreyDlVG3qFx oPNUSSnvDIlbhHgfr1jzJ3qtVmRdkscwDHkoS1MGA9gLPpLsGku03l003FhIW+PQnXSBUIAtDgqbo QwgAn9Avq+oGMKRsI/UVIIBQNkt2ieMqmdMbkAQ6GM2h9N1t+TdKMpF6kVfuNDNoWqkQgqIbRgphc azcjeQxg6Yri0u5oH3gZNYajsgODcKSk2vdjcGWOFTzm6hs7p3OzUD/Ib/VQNoFs2gSZf2O171Baw PI9ZNDUC+OVoz7nuNSq15ozAA9LYDExTTqy5Ujq9sRWfB2Nl1l+gR5Ykybfja1dIFnSR1dUYSIRfU vyyGvSzpKPp9/aqFrI0g==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qi8j2-00EhfU-0y; Mon, 18 Sep 2023 07:31:20 +0000 Received: from sin.source.kernel.org 
([2604:1380:40e1:4800::1]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qi8iy-00Eha0-2v; Mon, 18 Sep 2023 07:31:18 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits)) (No client certificate requested) by sin.source.kernel.org (Postfix) with ESMTPS id 03A47CE0C47; Mon, 18 Sep 2023 07:31:15 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2285FC433AD; Mon, 18 Sep 2023 07:31:03 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1695022273; bh=GrzrJESS7vx7QyV0O3w+oX12p1W5MbqUcsXcqyR+rMA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=L26X5Mym5jbDCDCi9pBttJiCUmGDJK7QCrrwON0wsqLsaMRtQtziMfk+8vmq102wK apB9eMngz4FDoZOnyfOnpxsYAYV+s6z6uy/A7GUFANPsDrtCidzVD8sw9PZ9EeZeSS BBZbnyT3UkLAeKFTSJ+WQXJ44CEr3Yz7/npPDOeCiIyuKPQdJ6wFULwDSI42C+21i2 TtK4M26N760ml2Bd8ec+c+dtMjxHoi8aQYpVD0iQ48GL/MfHLj1xfBm+sXCRSzVgn2 IlOo26svfqfJVEKxvnTHvne8XyU+XB4i1UAyBnP2cTSXuXl9ONzyCPVFbHoWx7LKOg fqTz/6KsDW2hQ== From: Mike Rapoport To: linux-kernel@vger.kernel.org Cc: Andrew Morton , =?utf-8?b?QmrDtnJuIFTDtnBl?= =?utf-8?b?bA==?= , Catalin Marinas , Christophe Leroy , "David S. Miller" , Dinh Nguyen , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Luis Chamberlain , Mark Rutland , Michael Ellerman , Mike Rapoport , Nadav Amit , "Naveen N. Rao" , Palmer Dabbelt , Puranjay Mohan , Rick Edgecombe , Russell King , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH v3 06/13] mm/execmem: introduce execmem_data_alloc() Date: Mon, 18 Sep 2023 10:29:48 +0300 Message-Id: <20230918072955.2507221-7-rppt@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230918072955.2507221-1-rppt@kernel.org> References: <20230918072955.2507221-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230918_003117_326770_2C6F0FB1 X-CRM114-Status: GOOD ( 21.86 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" Data related to code allocations, such as module data section, need to comply with architecture constraints for its placement and its allocation right now was done using execmem_text_alloc(). Create a dedicated API for allocating data related to code allocations and allow architectures to define address ranges for data allocations. Since currently this is only relevant for powerpc variants that use the VMALLOC address space for module data allocations, automatically reuse address ranges defined for text unless address range for data is explicitly defined by an architecture. 
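As a rough illustration of what an architecture-side definition looks like (a sketch in the style of the powerpc hook changed below, not code introduced by this patch; execmem_params stands for that architecture's static instance):

struct execmem_params __init *execmem_arch_params(void)
{
	struct execmem_range *text = &execmem_params.ranges[EXECMEM_DEFAULT];
	struct execmem_range *data = &execmem_params.ranges[EXECMEM_MODULE_DATA];

	/* text must stay reachable by short branches */
	text->start = MODULES_VADDR;
	text->end = MODULES_END;
	text->pgprot = PAGE_KERNEL_EXEC;
	text->alignment = 1;

	/* data is never executed and may live anywhere in vmalloc space */
	data->start = VMALLOC_START;
	data->end = VMALLOC_END;
	data->pgprot = PAGE_KERNEL;
	data->alignment = 1;

	return &execmem_params;
}

Architectures that leave EXECMEM_MODULE_DATA empty keep using the EXECMEM_DEFAULT range, only with PAGE_KERNEL protections, via execmem_init_missing().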
With separation of code and data allocations, data sections of the modules are now mapped as PAGE_KERNEL rather than PAGE_KERNEL_EXEC which was a default on many architectures. Signed-off-by: Mike Rapoport (IBM) --- arch/powerpc/kernel/module.c | 12 ++++++++++++ include/linux/execmem.h | 19 +++++++++++++++++++ kernel/module/main.c | 15 +++------------ mm/execmem.c | 17 ++++++++++++++++- 4 files changed, 50 insertions(+), 13 deletions(-) diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c index f4dd26f693a3..824d9541a310 100644 --- a/arch/powerpc/kernel/module.c +++ b/arch/powerpc/kernel/module.c @@ -95,6 +95,9 @@ static struct execmem_params execmem_params __ro_after_init = { [EXECMEM_DEFAULT] = { .alignment = 1, }, + [EXECMEM_MODULE_DATA] = { + .alignment = 1, + }, }, }; @@ -103,7 +106,12 @@ struct execmem_params __init *execmem_arch_params(void) pgprot_t prot = strict_module_rwx_enabled() ? PAGE_KERNEL : PAGE_KERNEL_EXEC; struct execmem_range *range = &execmem_params.ranges[EXECMEM_DEFAULT]; + /* + * BOOK3S_32 and 8xx define MODULES_VADDR for text allocations and + * allow allocating data in the entire vmalloc space + */ #ifdef MODULES_VADDR + struct execmem_range *data = &execmem_params.ranges[EXECMEM_MODULE_DATA]; unsigned long limit = (unsigned long)_etext - SZ_32M; /* First try within 32M limit from _etext to avoid branch trampolines */ @@ -116,6 +124,10 @@ struct execmem_params __init *execmem_arch_params(void) range->start = MODULES_VADDR; range->end = MODULES_END; } + data->start = VMALLOC_START; + data->end = VMALLOC_END; + data->pgprot = PAGE_KERNEL; + data->alignment = 1; #else range->start = VMALLOC_START; range->end = VMALLOC_END; diff --git a/include/linux/execmem.h b/include/linux/execmem.h index 519bdfdca595..09d45ac786e9 100644 --- a/include/linux/execmem.h +++ b/include/linux/execmem.h @@ -29,6 +29,7 @@ * @EXECMEM_KPROBES: parameters for kprobes * @EXECMEM_FTRACE: parameters for ftrace * @EXECMEM_BPF: parameters for BPF + * @EXECMEM_MODULE_DATA: parameters for module data sections * @EXECMEM_TYPE_MAX: */ enum execmem_type { @@ -37,6 +38,7 @@ enum execmem_type { EXECMEM_KPROBES, EXECMEM_FTRACE, EXECMEM_BPF, + EXECMEM_MODULE_DATA, EXECMEM_TYPE_MAX, }; @@ -107,6 +109,23 @@ struct execmem_params *execmem_arch_params(void); */ void *execmem_text_alloc(enum execmem_type type, size_t size); +/** + * execmem_data_alloc - allocate memory for data coupled to code + * @type: type of the allocation + * @size: how many bytes of memory are required + * + * Allocates memory that will contain data coupled with executable code, + * like data sections in kernel modules. + * + * The memory will have protections defined by architecture. + * + * The allocated memory will reside in an area that does not impose + * restrictions on the addressing modes. 
+ * + * Return: a pointer to the allocated memory or %NULL + */ +void *execmem_data_alloc(enum execmem_type type, size_t size); + /** * execmem_free - free executable memory * @ptr: pointer to the memory that should be freed diff --git a/kernel/module/main.c b/kernel/module/main.c index c4146bfcd0a7..2ae83a6abf66 100644 --- a/kernel/module/main.c +++ b/kernel/module/main.c @@ -1188,25 +1188,16 @@ void __weak module_arch_freeing_init(struct module *mod) { } -static bool mod_mem_use_vmalloc(enum mod_mem_type type) -{ - return IS_ENABLED(CONFIG_ARCH_WANTS_MODULES_DATA_IN_VMALLOC) && - mod_mem_type_is_core_data(type); -} - static void *module_memory_alloc(unsigned int size, enum mod_mem_type type) { - if (mod_mem_use_vmalloc(type)) - return vzalloc(size); + if (mod_mem_type_is_data(type)) + return execmem_data_alloc(EXECMEM_MODULE_DATA, size); return execmem_text_alloc(EXECMEM_MODULE_TEXT, size); } static void module_memory_free(void *ptr, enum mod_mem_type type) { - if (mod_mem_use_vmalloc(type)) - vfree(ptr); - else - execmem_free(ptr); + execmem_free(ptr); } static void free_mod_mem(struct module *mod) diff --git a/mm/execmem.c b/mm/execmem.c index abcbd07e05ac..aeff85261360 100644 --- a/mm/execmem.c +++ b/mm/execmem.c @@ -53,11 +53,23 @@ static void *execmem_alloc(size_t size, struct execmem_range *range) return kasan_reset_tag(p); } +static inline bool execmem_range_is_data(enum execmem_type type) +{ + return type == EXECMEM_MODULE_DATA; +} + void *execmem_text_alloc(enum execmem_type type, size_t size) { return execmem_alloc(size, &execmem_params.ranges[type]); } +void *execmem_data_alloc(enum execmem_type type, size_t size) +{ + WARN_ON_ONCE(!execmem_range_is_data(type)); + + return execmem_alloc(size, &execmem_params.ranges[type]); +} + void execmem_free(void *ptr) { /* @@ -93,7 +105,10 @@ static void execmem_init_missing(struct execmem_params *p) struct execmem_range *r = &p->ranges[i]; if (!r->start) { - r->pgprot = default_range->pgprot; + if (execmem_range_is_data(i)) + r->pgprot = PAGE_KERNEL; + else + r->pgprot = default_range->pgprot; r->alignment = default_range->alignment; r->start = default_range->start; r->end = default_range->end; From patchwork Mon Sep 18 07:29:49 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 13389033 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 4C338CD3422 for ; Mon, 18 Sep 2023 07:31:55 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=Vx/W2DjQJOTlwMN9DUtMx5y4iySmrEFOkt62t7eiZ2U=; b=UkcsYtsxmL23WP Ao1nsjk74e5QheBl/I63oyOtVDhtL/yF43qfTvHsxXKeKaYCucW7fXFxKUtqNbaNuwjMCsfMO13a/ WRnlVAdsNjcpnsqAw5ZIwLCaw6UXX09QZYgnzCNsjTaChKG/yUbtb0rlRV1fMx/jlGOMyerk5baLm +UvxfXQlZa/w3IF46Jmg7UIeW3JiZQ3q1//bEh1NBUK5EZNqmB6w4AZddGEeqlWrGeCPB9OdtEnQn 
sSu4YbSJUpLAkfWAeBdVP8bkIaKvaVvdPeaz5SgnCG1AgtdfBZVwhzANdYw/mI0pJFOr3LHtBKsyO PD1myZztefYEOUQ/JDcg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qi8jC-00Ehnj-0a; Mon, 18 Sep 2023 07:31:30 +0000 Received: from sin.source.kernel.org ([145.40.73.55]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qi8j8-00EhkQ-2M; Mon, 18 Sep 2023 07:31:28 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits)) (No client certificate requested) by sin.source.kernel.org (Postfix) with ESMTPS id B566CCE0BDB; Mon, 18 Sep 2023 07:31:24 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id D3F8CC433BF; Mon, 18 Sep 2023 07:31:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1695022283; bh=uhUpg7LkgDj87OPKWv+2ZQQTqeXwsgqGQfVhMMOlf6I=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=QwbKIwCmsi/m+QIPzGWyDvn+Tk81CaEzaNlQaj1OrbfY6aH7VjNTOMfyqPjfbO3ZI IrekkL8ga776lr+J3x8Ygyh2wVMU8QELU6aweeJeBpGEbBfrtJxHf+rNjrvDsHVs7+ 6bm1TS2tPNGOkx/h7R28N2r8hennFfDFz6aE6FuiYGkO4GZPqz8JGrUJkS1iup7wLd Isg0JWVRjOAmJTjw/rUuKi838+cqznX9dO82PejgtCEzU2RQuKCDDPVyOOKasyX6kb 8fpNnivohYt0YSrYt8tzLAAtD+1zq8pRlzkl8JhZ7kRPGcY2/LRuqibiB20CPobXX4 xLjJAnEDRG4hQ== From: Mike Rapoport To: linux-kernel@vger.kernel.org Cc: Andrew Morton , =?utf-8?b?QmrDtnJuIFTDtnBl?= =?utf-8?b?bA==?= , Catalin Marinas , Christophe Leroy , "David S. Miller" , Dinh Nguyen , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Luis Chamberlain , Mark Rutland , Michael Ellerman , Mike Rapoport , Nadav Amit , "Naveen N. Rao" , Palmer Dabbelt , Puranjay Mohan , Rick Edgecombe , Russell King , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH v3 07/13] arm64, execmem: extend execmem_params for generated code allocations Date: Mon, 18 Sep 2023 10:29:49 +0300 Message-Id: <20230918072955.2507221-8-rppt@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230918072955.2507221-1-rppt@kernel.org> References: <20230918072955.2507221-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230918_003127_124752_D24334C2 X-CRM114-Status: GOOD ( 15.15 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" The memory allocations for kprobes and BPF on arm64 can be placed anywhere in vmalloc address space and currently this is implemented with overrides of alloc_insn_page() and bpf_jit_alloc_exec() in arm64. Define EXECMEM_KPROBES and EXECMEM_BPF ranges in arm64::execmem_params and drop overrides of alloc_insn_page() and bpf_jit_alloc_exec(). 
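Conceptually, the generic kprobes path that replaces the dropped override boils down to the sketch below (illustrative only; it assumes the generic alloc_insn_page() is backed by execmem, as the powerpc override removed later in this series already is):

void *alloc_insn_page(void)
{
	/*
	 * EXECMEM_KPROBES is PAGE_KERNEL_ROX on arm64, so the page is
	 * already read-only and executable when it is handed out.
	 */
	return execmem_text_alloc(EXECMEM_KPROBES, PAGE_SIZE);
}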
Signed-off-by: Mike Rapoport (IBM) Acked-by: Will Deacon --- arch/arm64/kernel/module.c | 13 +++++++++++++ arch/arm64/kernel/probes/kprobes.c | 7 ------- arch/arm64/net/bpf_jit_comp.c | 11 ----------- 3 files changed, 13 insertions(+), 18 deletions(-) diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c index cd6320de1c54..d27db168d2a2 100644 --- a/arch/arm64/kernel/module.c +++ b/arch/arm64/kernel/module.c @@ -116,6 +116,16 @@ static struct execmem_params execmem_params __ro_after_init = { .flags = EXECMEM_KASAN_SHADOW, .alignment = MODULE_ALIGN, }, + [EXECMEM_KPROBES] = { + .start = VMALLOC_START, + .end = VMALLOC_END, + .alignment = 1, + }, + [EXECMEM_BPF] = { + .start = VMALLOC_START, + .end = VMALLOC_END, + .alignment = 1, + }, }, }; @@ -140,6 +150,9 @@ struct execmem_params __init *execmem_arch_params(void) r->end = module_plt_base + SZ_2G; } + execmem_params.ranges[EXECMEM_KPROBES].pgprot = PAGE_KERNEL_ROX; + execmem_params.ranges[EXECMEM_BPF].pgprot = PAGE_KERNEL; + return &execmem_params; } diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c index 70b91a8c6bb3..6fccedd02b2a 100644 --- a/arch/arm64/kernel/probes/kprobes.c +++ b/arch/arm64/kernel/probes/kprobes.c @@ -129,13 +129,6 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p) return 0; } -void *alloc_insn_page(void) -{ - return __vmalloc_node_range(PAGE_SIZE, 1, VMALLOC_START, VMALLOC_END, - GFP_KERNEL, PAGE_KERNEL_ROX, VM_FLUSH_RESET_PERMS, - NUMA_NO_NODE, __builtin_return_address(0)); -} - /* arm kprobe: install breakpoint in text */ void __kprobes arch_arm_kprobe(struct kprobe *p) { diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c index 150d1c6543f7..3a7590f828d1 100644 --- a/arch/arm64/net/bpf_jit_comp.c +++ b/arch/arm64/net/bpf_jit_comp.c @@ -1687,17 +1687,6 @@ u64 bpf_jit_alloc_exec_limit(void) return VMALLOC_END - VMALLOC_START; } -void *bpf_jit_alloc_exec(unsigned long size) -{ - /* Memory is intended to be executable, reset the pointer tag. */ - return kasan_reset_tag(vmalloc(size)); -} - -void bpf_jit_free_exec(void *addr) -{ - return vfree(addr); -} - /* Indicate the JIT backend supports mixing bpf2bpf and tailcalls. 
*/ bool bpf_jit_supports_subprog_tailcalls(void) { From patchwork Mon Sep 18 07:29:50 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 13389034 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 6965DC46CA1 for ; Mon, 18 Sep 2023 07:32:08 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=E+NRcwuJsYwkqM4L+zhOMXIHGbTV3eFIGoBlOKW9UvQ=; b=uKki5OKBXQpb/t dcCB7z7ib3YLVjiXw6vdnreH77ZHtp7sYyfTE7TNz4/in4QTVF6pGFduycZotRzlBWE4f/QdPbtqt eVNgZ6FqsB0EnCdmx2h/emJGeW3azPTvUmAm0rGS0JpgH6VU9TTrOLTXHAzZV2Zyu9i/SCO8VKQKp MR1/+/Ln6vXSCBkUnxpVGHxib0oSFB/Nj7XPt7BTEL91JWcPKj/I+U4OHiHITXFWRu7QExDo2qXH9 rmp2YUX8HXcMBvkyclTRwdMKcDNjPSlg7Mx6ii69UrF6nAmJ0qvo+rN9U1u6VLnomAKTnOrbdTWKb 86Y1eQBfymn/hxmJCcAg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qi8jN-00Ehwz-09; Mon, 18 Sep 2023 07:31:41 +0000 Received: from sin.source.kernel.org ([2604:1380:40e1:4800::1]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qi8jI-00Ehs3-0M; Mon, 18 Sep 2023 07:31:38 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits)) (No client certificate requested) by sin.source.kernel.org (Postfix) with ESMTPS id 771DBCE0CE2; Mon, 18 Sep 2023 07:31:34 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 90D66C4160E; Mon, 18 Sep 2023 07:31:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1695022292; bh=Bxy7NNRI6l4akC8Uk0JGSyZsyMn1J934EmvkFA96qyo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=sUl4tcVd+CE1RIfDd+iHzBuo/TtqH+sjUX4AKoidjsBhiPE9N8u5sBFCrh0U2OnnW 02VNXIu/PasFZvJmqp4LQS/fRSyOWf9W0w4R1mThqbVlQHxfh9deE8Gen6PrybSKrA HQTfOkMmKWxzdY89y0l1tlT8/WpmYk4oOHqiKiyXMFacErx4oC2oV4nkojcJ6bpZun 9S9TLaBh1Kth+hdxlxcmXjxDVU8y/jq3PgPfSmOZxNeFoZYJdyBqlLfSCGOoTzYYQr s02+6pJi35T4N9EWcJQvWIbsw2teHNSr/sLV1vIIuBQWC8WuxxKTSKFPi24i20hJWy PMgDbwaP/8Fzw== From: Mike Rapoport To: linux-kernel@vger.kernel.org Cc: Andrew Morton , =?utf-8?b?QmrDtnJuIFTDtnBl?= =?utf-8?b?bA==?= , Catalin Marinas , Christophe Leroy , "David S. Miller" , Dinh Nguyen , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Luis Chamberlain , Mark Rutland , Michael Ellerman , Mike Rapoport , Nadav Amit , "Naveen N. 
Rao" , Palmer Dabbelt , Puranjay Mohan , Rick Edgecombe , Russell King , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH v3 08/13] riscv: extend execmem_params for generated code allocations Date: Mon, 18 Sep 2023 10:29:50 +0300 Message-Id: <20230918072955.2507221-9-rppt@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230918072955.2507221-1-rppt@kernel.org> References: <20230918072955.2507221-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230918_003136_496179_814F2F7C X-CRM114-Status: GOOD ( 14.05 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" The memory allocations for kprobes and BPF on RISC-V are not placed in the modules area and these custom allocations are implemented with overrides of alloc_insn_page() and bpf_jit_alloc_exec(). Slightly reorder execmem_params initialization to support both 32 and 64 bit variants, define EXECMEM_KPROBES and EXECMEM_BPF ranges in riscv::execmem_params and drop overrides of alloc_insn_page() and bpf_jit_alloc_exec(). 
Signed-off-by: Mike Rapoport (IBM) Reviewed-by: Alexandre Ghiti --- arch/riscv/kernel/module.c | 21 ++++++++++++++++++++- arch/riscv/kernel/probes/kprobes.c | 10 ---------- arch/riscv/net/bpf_jit_core.c | 13 ------------- 3 files changed, 20 insertions(+), 24 deletions(-) diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c index 343a0edfb6dd..31505ecb5c72 100644 --- a/arch/riscv/kernel/module.c +++ b/arch/riscv/kernel/module.c @@ -436,20 +436,39 @@ int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab, return 0; } -#if defined(CONFIG_MMU) && defined(CONFIG_64BIT) +#ifdef CONFIG_MMU static struct execmem_params execmem_params __ro_after_init = { .ranges = { [EXECMEM_DEFAULT] = { .pgprot = PAGE_KERNEL, .alignment = 1, }, + [EXECMEM_KPROBES] = { + .pgprot = PAGE_KERNEL_READ_EXEC, + .alignment = 1, + }, + [EXECMEM_BPF] = { + .pgprot = PAGE_KERNEL, + .alignment = 1, + }, }, }; struct execmem_params __init *execmem_arch_params(void) { +#ifdef CONFIG_64BIT execmem_params.ranges[EXECMEM_DEFAULT].start = MODULES_VADDR; execmem_params.ranges[EXECMEM_DEFAULT].end = MODULES_END; +#else + execmem_params.ranges[EXECMEM_DEFAULT].start = VMALLOC_START; + execmem_params.ranges[EXECMEM_DEFAULT].end = VMALLOC_END; +#endif + + execmem_params.ranges[EXECMEM_KPROBES].start = VMALLOC_START; + execmem_params.ranges[EXECMEM_KPROBES].end = VMALLOC_END; + + execmem_params.ranges[EXECMEM_BPF].start = BPF_JIT_REGION_START; + execmem_params.ranges[EXECMEM_BPF].end = BPF_JIT_REGION_END; return &execmem_params; } diff --git a/arch/riscv/kernel/probes/kprobes.c b/arch/riscv/kernel/probes/kprobes.c index 2f08c14a933d..e64f2f3064eb 100644 --- a/arch/riscv/kernel/probes/kprobes.c +++ b/arch/riscv/kernel/probes/kprobes.c @@ -104,16 +104,6 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p) return 0; } -#ifdef CONFIG_MMU -void *alloc_insn_page(void) -{ - return __vmalloc_node_range(PAGE_SIZE, 1, VMALLOC_START, VMALLOC_END, - GFP_KERNEL, PAGE_KERNEL_READ_EXEC, - VM_FLUSH_RESET_PERMS, NUMA_NO_NODE, - __builtin_return_address(0)); -} -#endif - /* install breakpoint in text */ void __kprobes arch_arm_kprobe(struct kprobe *p) { diff --git a/arch/riscv/net/bpf_jit_core.c b/arch/riscv/net/bpf_jit_core.c index 7b70ccb7fec3..c8a758f0882b 100644 --- a/arch/riscv/net/bpf_jit_core.c +++ b/arch/riscv/net/bpf_jit_core.c @@ -218,19 +218,6 @@ u64 bpf_jit_alloc_exec_limit(void) return BPF_JIT_REGION_SIZE; } -void *bpf_jit_alloc_exec(unsigned long size) -{ - return __vmalloc_node_range(size, PAGE_SIZE, BPF_JIT_REGION_START, - BPF_JIT_REGION_END, GFP_KERNEL, - PAGE_KERNEL, 0, NUMA_NO_NODE, - __builtin_return_address(0)); -} - -void bpf_jit_free_exec(void *addr) -{ - return vfree(addr); -} - void *bpf_arch_text_copy(void *dst, void *src, size_t len) { int ret; From patchwork Mon Sep 18 07:29:51 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 13389035 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 9CCA6CD37B0 for ; Mon, 18 Sep 2023 07:32:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: 
Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=8SAai/OIuYi7qx/0enE4+er9mt5AI4ZG6KKdMvJ5mL4=; b=rwOtY6iQwTiHVG folFoPivAF+VNAKwdmDrVpHO5hgnkMQCLBibnQXWZhk4BlLznymCm/OvY2pcIq0AwQu5jxPREmNel zBCZJfvtWfqTbAx5SMZ6RlY6AiJORuQ9lSW8C80vdm/mwklfNruRUqWRoARDUPvDFgMYRJWI1e5XM O97x7C6LqNhdOc/1gLZPKb5wkyt+2IizAkJbyBkFk4Yl6FgJjslr0dbQ9kYcnGTscteqQ4EooZuIc G22duM9ih1sCAkFCU0le1PxEO0g+KY2Nk19jOgJwrX2vC6s95tFw9p18pNJlTnB0sU7x4dKngbzTr SPyIer0OajQG2P9fi9Hg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qi8jT-00Ei2n-0j; Mon, 18 Sep 2023 07:31:47 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qi8jP-00Ehyj-1W; Mon, 18 Sep 2023 07:31:45 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id DE75660F8F; Mon, 18 Sep 2023 07:31:42 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5155AC433B7; Mon, 18 Sep 2023 07:31:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1695022302; bh=35ZlmvV7rKqwzQaOmbBijH380/EeVtPirZbGWiToTW8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=d2NNM60PB58jmuhS2dGN8NLbq1KB4ygYvRR0t6DZZJS7qBVZpROHao20O2uQV3bli 0QvXp0s5+eLFeJhc/w10Fn3KC9lksq3uhoYzEZC5CWp+6gjai+j/d0a89/bBUhyjJy FfXXBFOSn88hPGWgJbSxUtSr8ULAUympoyAWJKP5uJmt4lykdt9r5kJb2u9np9CG9l pAU2vChu/4/QGlbV5wO6V/EhzFjCpLIlsY1oLhcdgzQ+bitYphtILzsTup5ZLrX357 OmssgGZ5JmBAamSicGtU177ClYYdO2+YHkslBuFOroSFhEYDOcBWDp8NAr0VRv3fiX mkBw/0g11HpDQ== From: Mike Rapoport To: linux-kernel@vger.kernel.org Cc: Andrew Morton , =?utf-8?b?QmrDtnJuIFTDtnBl?= =?utf-8?b?bA==?= , Catalin Marinas , Christophe Leroy , "David S. Miller" , Dinh Nguyen , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Luis Chamberlain , Mark Rutland , Michael Ellerman , Mike Rapoport , Nadav Amit , "Naveen N. 
Rao" , Palmer Dabbelt , Puranjay Mohan , Rick Edgecombe , Russell King , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH v3 09/13] powerpc: extend execmem_params for kprobes allocations Date: Mon, 18 Sep 2023 10:29:51 +0300 Message-Id: <20230918072955.2507221-10-rppt@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230918072955.2507221-1-rppt@kernel.org> References: <20230918072955.2507221-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230918_003143_602466_DF78C6A9 X-CRM114-Status: GOOD ( 15.91 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" powerpc overrides kprobes::alloc_insn_page() to remove writable permissions when STRICT_MODULE_RWX is on. Add definition of EXECMEM_KRPOBES to execmem_params to allow using the generic kprobes::alloc_insn_page() with the desired permissions. As powerpc uses breakpoint instructions to inject kprobes, it does not need to constrain kprobe allocations to the modules area and can use the entire vmalloc address space. 
Signed-off-by: Mike Rapoport (IBM) --- arch/powerpc/kernel/kprobes.c | 14 -------------- arch/powerpc/kernel/module.c | 11 +++++++++++ 2 files changed, 11 insertions(+), 14 deletions(-) diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c index 62228c7072a2..14c5ddec3056 100644 --- a/arch/powerpc/kernel/kprobes.c +++ b/arch/powerpc/kernel/kprobes.c @@ -126,20 +126,6 @@ kprobe_opcode_t *arch_adjust_kprobe_addr(unsigned long addr, unsigned long offse return (kprobe_opcode_t *)(addr + offset); } -void *alloc_insn_page(void) -{ - void *page; - - page = execmem_text_alloc(EXECMEM_KPROBES, PAGE_SIZE); - if (!page) - return NULL; - - if (strict_module_rwx_enabled()) - set_memory_rox((unsigned long)page, 1); - - return page; -} - int arch_prepare_kprobe(struct kprobe *p) { int ret = 0; diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c index 824d9541a310..bf2c62aef628 100644 --- a/arch/powerpc/kernel/module.c +++ b/arch/powerpc/kernel/module.c @@ -95,6 +95,9 @@ static struct execmem_params execmem_params __ro_after_init = { [EXECMEM_DEFAULT] = { .alignment = 1, }, + [EXECMEM_KPROBES] = { + .alignment = 1, + }, [EXECMEM_MODULE_DATA] = { .alignment = 1, }, @@ -135,5 +138,13 @@ struct execmem_params __init *execmem_arch_params(void) range->pgprot = prot; + execmem_params.ranges[EXECMEM_KPROBES].start = VMALLOC_START; + execmem_params.ranges[EXECMEM_KPROBES].start = VMALLOC_END; + + if (strict_module_rwx_enabled()) + execmem_params.ranges[EXECMEM_KPROBES].pgprot = PAGE_KERNEL_ROX; + else + execmem_params.ranges[EXECMEM_KPROBES].pgprot = PAGE_KERNEL_EXEC; + return &execmem_params; } From patchwork Mon Sep 18 07:29:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 13389036 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D4F75C46CA1 for ; Mon, 18 Sep 2023 07:32:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=3rSC7l+1zOXv2NBkXl6uPNKavfYe8QW4K2QpbHVFmyg=; b=L0mBihGVDsnI4S xkJUJEgJrFHv3kaiQsoWNml9X3/Ugs3u8A//1/zol0a7PNh7yjrE9NWQT2Z2P2GCS6bmIekXs8qtL dixkv3He+K2uDlJ96DqXjp/yuSv+veuegqCE3VUBARn9Kl3d9g8hzwX/kXuAriRSy5Td1lj9NQbUh 2QFELUpHlY8Y2F/y5oWMNoDJmfIT531DCRJbbEeug1aLGIWfXKmb6icBiKyH+gsYa7IfnYExGP7A7 3oTZe1JdiA9quNmmpN9BaRYnOZGziVfmACWTSNaWPgpKVUg9trEXlEeZudgaBp4Trr2+frQfT6u2e Wy8VXzotaVDoP1MpjT8A==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qi8ji-00EiH6-2x; Mon, 18 Sep 2023 07:32:02 +0000 Received: from ams.source.kernel.org ([145.40.68.75]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qi8jb-00Ei9J-1C; Mon, 18 Sep 2023 07:32:01 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 
(256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id B7B3BB80BFF; Mon, 18 Sep 2023 07:31:53 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0D35BC433CD; Mon, 18 Sep 2023 07:31:42 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1695022312; bh=/lkcmGgfZ0h40rnIgrlGtkFY3fqKKa7LS9K2y40s078=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=BpYN3pG2mJV/rqYXUts/nj29qV9EPMjt0zx/SMPu/4jisp6o2O8xaHc0tuU0dozmC EkX/reooL7FObsZ0D9hJ8qVr3s0jxZwQDniBogZP502PMqTzmb9zf8IC8v48ADq9JJ vhZzaS3bIiYR4KpLC0gQ9XGiX8Sg8U3ZgIlfxPbUxlyBunRPZGtKxFV4rwutmG1U2Q xNiJqoyc2QgzMbsZ7fvm26WevVyonKdQpLbr248krCbEnUGRO38cNr6hKTgmskgkoC M+BAIpPXGnu21jR6jiTi0lpov13TwKfc4fKKAJMozA7dQLpxyNDE6MCIa6xWY0rBPk BD29RoWukxl2Q== From: Mike Rapoport To: linux-kernel@vger.kernel.org Cc: Andrew Morton , =?utf-8?b?QmrDtnJuIFTDtnBl?= =?utf-8?b?bA==?= , Catalin Marinas , Christophe Leroy , "David S. Miller" , Dinh Nguyen , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Luis Chamberlain , Mark Rutland , Michael Ellerman , Mike Rapoport , Nadav Amit , "Naveen N. Rao" , Palmer Dabbelt , Puranjay Mohan , Rick Edgecombe , Russell King , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH v3 10/13] arch: make execmem setup available regardless of CONFIG_MODULES Date: Mon, 18 Sep 2023 10:29:52 +0300 Message-Id: <20230918072955.2507221-11-rppt@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230918072955.2507221-1-rppt@kernel.org> References: <20230918072955.2507221-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230918_003155_727693_4900313F X-CRM114-Status: GOOD ( 30.85 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" execmem does not depend on modules, on the contrary modules use execmem. 
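Concretely, each architecture's execmem_params setup moves from arch/*/kernel/module.c into its mm init code and is guarded by CONFIG_EXECMEM rather than being built only with modules; the resulting shape, sketched here after the loongarch hunk below, is:

/* arch/<arch>/mm/init.c */
#ifdef CONFIG_EXECMEM
static struct execmem_params execmem_params __ro_after_init = {
	.ranges = {
		[EXECMEM_DEFAULT] = {
			.pgprot = PAGE_KERNEL,
			.alignment = 1,
		},
	},
};

struct execmem_params __init *execmem_arch_params(void)
{
	execmem_params.ranges[EXECMEM_DEFAULT].start = MODULES_VADDR;
	execmem_params.ranges[EXECMEM_DEFAULT].end = MODULES_END;

	return &execmem_params;
}
#endif /* CONFIG_EXECMEM */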
To make execmem available when CONFIG_MODULES=n, for instance for kprobes, split execmem_params initialization out from arch/kernel/module.c and compile it when CONFIG_EXECMEM=y Signed-off-by: Mike Rapoport (IBM) --- arch/arm/kernel/module.c | 38 ---------- arch/arm/mm/init.c | 38 ++++++++++ arch/arm64/kernel/module.c | 130 -------------------------------- arch/arm64/mm/init.c | 132 +++++++++++++++++++++++++++++++++ arch/loongarch/kernel/module.c | 18 ----- arch/loongarch/mm/init.c | 20 +++++ arch/mips/kernel/module.c | 19 ----- arch/mips/mm/init.c | 20 +++++ arch/parisc/kernel/module.c | 17 ----- arch/parisc/mm/init.c | 22 +++++- arch/powerpc/kernel/module.c | 60 --------------- arch/powerpc/mm/mem.c | 62 ++++++++++++++++ arch/riscv/kernel/module.c | 39 ---------- arch/riscv/mm/init.c | 39 ++++++++++ arch/s390/kernel/module.c | 25 ------- arch/s390/mm/init.c | 28 +++++++ arch/sparc/kernel/module.c | 23 ------ arch/sparc/mm/Makefile | 2 + arch/sparc/mm/execmem.c | 25 +++++++ arch/x86/kernel/module.c | 27 ------- arch/x86/mm/init.c | 29 ++++++++ 21 files changed, 416 insertions(+), 397 deletions(-) create mode 100644 arch/sparc/mm/execmem.c diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c index 2c7651a2d84c..3282f304f6b1 100644 --- a/arch/arm/kernel/module.c +++ b/arch/arm/kernel/module.c @@ -16,50 +16,12 @@ #include #include #include -#include #include #include #include #include -#ifdef CONFIG_XIP_KERNEL -/* - * The XIP kernel text is mapped in the module area for modules and - * some other stuff to work without any indirect relocations. - * MODULES_VADDR is redefined here and not in asm/memory.h to avoid - * recompiling the whole kernel when CONFIG_XIP_KERNEL is turned on/off. - */ -#undef MODULES_VADDR -#define MODULES_VADDR (((unsigned long)_exiprom + ~PMD_MASK) & PMD_MASK) -#endif - -#ifdef CONFIG_MMU -static struct execmem_params execmem_params __ro_after_init = { - .ranges = { - [EXECMEM_DEFAULT] = { - .start = MODULES_VADDR, - .end = MODULES_END, - .alignment = 1, - }, - }, -}; - -struct execmem_params __init *execmem_arch_params(void) -{ - struct execmem_range *r = &execmem_params.ranges[EXECMEM_DEFAULT]; - - r->pgprot = PAGE_KERNEL_EXEC; - - if (IS_ENABLED(CONFIG_ARM_MODULE_PLTS)) { - r->fallback_start = VMALLOC_START; - r->fallback_end = VMALLOC_END; - } - - return &execmem_params; -} -#endif - bool module_init_section(const char *name) { return strstarts(name, ".init") || diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c index a42e4cd11db2..c0b536e398b4 100644 --- a/arch/arm/mm/init.c +++ b/arch/arm/mm/init.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include @@ -486,3 +487,40 @@ void free_initrd_mem(unsigned long start, unsigned long end) free_reserved_area((void *)start, (void *)end, -1, "initrd"); } #endif + +#ifdef CONFIG_XIP_KERNEL +/* + * The XIP kernel text is mapped in the module area for modules and + * some other stuff to work without any indirect relocations. + * MODULES_VADDR is redefined here and not in asm/memory.h to avoid + * recompiling the whole kernel when CONFIG_XIP_KERNEL is turned on/off. 
+ */ +#undef MODULES_VADDR +#define MODULES_VADDR (((unsigned long)_exiprom + ~PMD_MASK) & PMD_MASK) +#endif + +#if defined(CONFIG_MMU) && defined(CONFIG_EXECMEM) +static struct execmem_params execmem_params __ro_after_init = { + .ranges = { + [EXECMEM_DEFAULT] = { + .start = MODULES_VADDR, + .end = MODULES_END, + .alignment = 1, + }, + }, +}; + +struct execmem_params __init *execmem_arch_params(void) +{ + struct execmem_range *r = &execmem_params.ranges[EXECMEM_DEFAULT]; + + r->pgprot = PAGE_KERNEL_EXEC; + + if (IS_ENABLED(CONFIG_ARM_MODULE_PLTS)) { + r->fallback_start = VMALLOC_START; + r->fallback_end = VMALLOC_END; + } + + return &execmem_params; +} +#endif diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c index d27db168d2a2..eb1505128b75 100644 --- a/arch/arm64/kernel/module.c +++ b/arch/arm64/kernel/module.c @@ -20,142 +20,12 @@ #include #include #include -#include #include #include #include #include -static u64 module_direct_base __ro_after_init = 0; -static u64 module_plt_base __ro_after_init = 0; - -/* - * Choose a random page-aligned base address for a window of 'size' bytes which - * entirely contains the interval [start, end - 1]. - */ -static u64 __init random_bounding_box(u64 size, u64 start, u64 end) -{ - u64 max_pgoff, pgoff; - - if ((end - start) >= size) - return 0; - - max_pgoff = (size - (end - start)) / PAGE_SIZE; - pgoff = get_random_u32_inclusive(0, max_pgoff); - - return start - pgoff * PAGE_SIZE; -} - -/* - * Modules may directly reference data and text anywhere within the kernel - * image and other modules. References using PREL32 relocations have a +/-2G - * range, and so we need to ensure that the entire kernel image and all modules - * fall within a 2G window such that these are always within range. - * - * Modules may directly branch to functions and code within the kernel text, - * and to functions and code within other modules. These branches will use - * CALL26/JUMP26 relocations with a +/-128M range. Without PLTs, we must ensure - * that the entire kernel text and all module text falls within a 128M window - * such that these are always within range. With PLTs, we can expand this to a - * 2G window. - * - * We chose the 128M region to surround the entire kernel image (rather than - * just the text) as using the same bounds for the 128M and 2G regions ensures - * by construction that we never select a 128M region that is not a subset of - * the 2G region. For very large and unusual kernel configurations this means - * we may fall back to PLTs where they could have been avoided, but this keeps - * the logic significantly simpler. - */ -static int __init module_init_limits(void) -{ - u64 kernel_end = (u64)_end; - u64 kernel_start = (u64)_text; - u64 kernel_size = kernel_end - kernel_start; - - /* - * The default modules region is placed immediately below the kernel - * image, and is large enough to use the full 2G relocation range. 
- */ - BUILD_BUG_ON(KIMAGE_VADDR != MODULES_END); - BUILD_BUG_ON(MODULES_VSIZE < SZ_2G); - - if (!kaslr_enabled()) { - if (kernel_size < SZ_128M) - module_direct_base = kernel_end - SZ_128M; - if (kernel_size < SZ_2G) - module_plt_base = kernel_end - SZ_2G; - } else { - u64 min = kernel_start; - u64 max = kernel_end; - - if (IS_ENABLED(CONFIG_RANDOMIZE_MODULE_REGION_FULL)) { - pr_info("2G module region forced by RANDOMIZE_MODULE_REGION_FULL\n"); - } else { - module_direct_base = random_bounding_box(SZ_128M, min, max); - if (module_direct_base) { - min = module_direct_base; - max = module_direct_base + SZ_128M; - } - } - - module_plt_base = random_bounding_box(SZ_2G, min, max); - } - - pr_info("%llu pages in range for non-PLT usage", - module_direct_base ? (SZ_128M - kernel_size) / PAGE_SIZE : 0); - pr_info("%llu pages in range for PLT usage", - module_plt_base ? (SZ_2G - kernel_size) / PAGE_SIZE : 0); - - return 0; -} - -static struct execmem_params execmem_params __ro_after_init = { - .ranges = { - [EXECMEM_DEFAULT] = { - .flags = EXECMEM_KASAN_SHADOW, - .alignment = MODULE_ALIGN, - }, - [EXECMEM_KPROBES] = { - .start = VMALLOC_START, - .end = VMALLOC_END, - .alignment = 1, - }, - [EXECMEM_BPF] = { - .start = VMALLOC_START, - .end = VMALLOC_END, - .alignment = 1, - }, - }, -}; - -struct execmem_params __init *execmem_arch_params(void) -{ - struct execmem_range *r = &execmem_params.ranges[EXECMEM_DEFAULT]; - - module_init_limits(); - - r->pgprot = PAGE_KERNEL; - - if (module_direct_base) { - r->start = module_direct_base; - r->end = module_direct_base + SZ_128M; - - if (module_plt_base) { - r->fallback_start = module_plt_base; - r->fallback_end = module_plt_base + SZ_2G; - } - } else if (module_plt_base) { - r->start = module_plt_base; - r->end = module_plt_base + SZ_2G; - } - - execmem_params.ranges[EXECMEM_KPROBES].pgprot = PAGE_KERNEL_ROX; - execmem_params.ranges[EXECMEM_BPF].pgprot = PAGE_KERNEL; - - return &execmem_params; -} - enum aarch64_reloc_op { RELOC_OP_NONE, RELOC_OP_ABS, diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index 8a0f8604348b..9b7716b4d84c 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -31,6 +31,7 @@ #include #include #include +#include #include #include @@ -547,3 +548,134 @@ void dump_mem_limit(void) pr_emerg("Memory Limit: none\n"); } } + +#ifdef CONFIG_EXECMEM +static u64 module_direct_base __ro_after_init = 0; +static u64 module_plt_base __ro_after_init = 0; + +/* + * Choose a random page-aligned base address for a window of 'size' bytes which + * entirely contains the interval [start, end - 1]. + */ +static u64 __init random_bounding_box(u64 size, u64 start, u64 end) +{ + u64 max_pgoff, pgoff; + + if ((end - start) >= size) + return 0; + + max_pgoff = (size - (end - start)) / PAGE_SIZE; + pgoff = get_random_u32_inclusive(0, max_pgoff); + + return start - pgoff * PAGE_SIZE; +} + +/* + * Modules may directly reference data and text anywhere within the kernel + * image and other modules. References using PREL32 relocations have a +/-2G + * range, and so we need to ensure that the entire kernel image and all modules + * fall within a 2G window such that these are always within range. + * + * Modules may directly branch to functions and code within the kernel text, + * and to functions and code within other modules. These branches will use + * CALL26/JUMP26 relocations with a +/-128M range. 
Without PLTs, we must ensure + * that the entire kernel text and all module text falls within a 128M window + * such that these are always within range. With PLTs, we can expand this to a + * 2G window. + * + * We chose the 128M region to surround the entire kernel image (rather than + * just the text) as using the same bounds for the 128M and 2G regions ensures + * by construction that we never select a 128M region that is not a subset of + * the 2G region. For very large and unusual kernel configurations this means + * we may fall back to PLTs where they could have been avoided, but this keeps + * the logic significantly simpler. + */ +static int __init module_init_limits(void) +{ + u64 kernel_end = (u64)_end; + u64 kernel_start = (u64)_text; + u64 kernel_size = kernel_end - kernel_start; + + /* + * The default modules region is placed immediately below the kernel + * image, and is large enough to use the full 2G relocation range. + */ + BUILD_BUG_ON(KIMAGE_VADDR != MODULES_END); + BUILD_BUG_ON(MODULES_VSIZE < SZ_2G); + + if (!kaslr_enabled()) { + if (kernel_size < SZ_128M) + module_direct_base = kernel_end - SZ_128M; + if (kernel_size < SZ_2G) + module_plt_base = kernel_end - SZ_2G; + } else { + u64 min = kernel_start; + u64 max = kernel_end; + + if (IS_ENABLED(CONFIG_RANDOMIZE_MODULE_REGION_FULL)) { + pr_info("2G module region forced by RANDOMIZE_MODULE_REGION_FULL\n"); + } else { + module_direct_base = random_bounding_box(SZ_128M, min, max); + if (module_direct_base) { + min = module_direct_base; + max = module_direct_base + SZ_128M; + } + } + + module_plt_base = random_bounding_box(SZ_2G, min, max); + } + + pr_info("%llu pages in range for non-PLT usage", + module_direct_base ? (SZ_128M - kernel_size) / PAGE_SIZE : 0); + pr_info("%llu pages in range for PLT usage", + module_plt_base ? 
(SZ_2G - kernel_size) / PAGE_SIZE : 0); + + return 0; +} + +static struct execmem_params execmem_params __ro_after_init = { + .ranges = { + [EXECMEM_DEFAULT] = { + .flags = EXECMEM_KASAN_SHADOW, + .alignment = MODULE_ALIGN, + }, + [EXECMEM_KPROBES] = { + .start = VMALLOC_START, + .end = VMALLOC_END, + .alignment = 1, + }, + [EXECMEM_BPF] = { + .start = VMALLOC_START, + .end = VMALLOC_END, + .alignment = 1, + }, + }, +}; + +struct execmem_params __init *execmem_arch_params(void) +{ + struct execmem_range *r = &execmem_params.ranges[EXECMEM_DEFAULT]; + + module_init_limits(); + + r->pgprot = PAGE_KERNEL; + + if (module_direct_base) { + r->start = module_direct_base; + r->end = module_direct_base + SZ_128M; + + if (module_plt_base) { + r->fallback_start = module_plt_base; + r->fallback_end = module_plt_base + SZ_2G; + } + } else if (module_plt_base) { + r->start = module_plt_base; + r->end = module_plt_base + SZ_2G; + } + + execmem_params.ranges[EXECMEM_KPROBES].pgprot = PAGE_KERNEL_ROX; + execmem_params.ranges[EXECMEM_BPF].pgprot = PAGE_KERNEL; + + return &execmem_params; +} +#endif diff --git a/arch/loongarch/kernel/module.c b/arch/loongarch/kernel/module.c index a1d8fe9796fa..181b5f8b09f1 100644 --- a/arch/loongarch/kernel/module.c +++ b/arch/loongarch/kernel/module.c @@ -18,7 +18,6 @@ #include #include #include -#include #include #include @@ -470,23 +469,6 @@ int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab, return 0; } -static struct execmem_params execmem_params __ro_after_init = { - .ranges = { - [EXECMEM_DEFAULT] = { - .pgprot = PAGE_KERNEL, - .alignment = 1, - }, - }, -}; - -struct execmem_params __init *execmem_arch_params(void) -{ - execmem_params.ranges[EXECMEM_DEFAULT].start = MODULES_VADDR; - execmem_params.ranges[EXECMEM_DEFAULT].end = MODULES_END; - - return &execmem_params; -} - static void module_init_ftrace_plt(const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs, struct module *mod) { diff --git a/arch/loongarch/mm/init.c b/arch/loongarch/mm/init.c index f3fe8c06ba4d..26b10a51309c 100644 --- a/arch/loongarch/mm/init.c +++ b/arch/loongarch/mm/init.c @@ -24,6 +24,7 @@ #include #include #include +#include #include #include @@ -247,3 +248,22 @@ EXPORT_SYMBOL(invalid_pmd_table); #endif pte_t invalid_pte_table[PTRS_PER_PTE] __page_aligned_bss; EXPORT_SYMBOL(invalid_pte_table); + +#ifdef CONFIG_EXECMEM +static struct execmem_params execmem_params __ro_after_init = { + .ranges = { + [EXECMEM_DEFAULT] = { + .pgprot = PAGE_KERNEL, + .alignment = 1, + }, + }, +}; + +struct execmem_params __init *execmem_arch_params(void) +{ + execmem_params.ranges[EXECMEM_DEFAULT].start = MODULES_VADDR; + execmem_params.ranges[EXECMEM_DEFAULT].end = MODULES_END; + + return &execmem_params; +} +#endif diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c index 1c959074b35f..ebf9496f5db0 100644 --- a/arch/mips/kernel/module.c +++ b/arch/mips/kernel/module.c @@ -33,25 +33,6 @@ struct mips_hi16 { static LIST_HEAD(dbe_list); static DEFINE_SPINLOCK(dbe_lock); -#ifdef MODULE_START -static struct execmem_params execmem_params __ro_after_init = { - .ranges = { - [EXECMEM_DEFAULT] = { - .start = MODULE_START, - .end = MODULE_END, - .alignment = 1, - }, - }, -}; - -struct execmem_params __init *execmem_arch_params(void) -{ - execmem_params.ranges[EXECMEM_DEFAULT].pgprot = PAGE_KERNEL; - - return &execmem_params; -} -#endif - static void apply_r_mips_32(u32 *location, u32 base, Elf_Addr v) { *location = base + v; diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c index 
5dcb525a8995..55e7869d03f2 100644 --- a/arch/mips/mm/init.c +++ b/arch/mips/mm/init.c @@ -31,6 +31,7 @@ #include #include #include +#include #include #include @@ -573,3 +574,22 @@ EXPORT_SYMBOL_GPL(invalid_pmd_table); #endif pte_t invalid_pte_table[PTRS_PER_PTE] __page_aligned_bss; EXPORT_SYMBOL(invalid_pte_table); + +#if defined(CONFIG_EXECMEM) && defined(MODULE_START) +static struct execmem_params execmem_params __ro_after_init = { + .ranges = { + [EXECMEM_DEFAULT] = { + .start = MODULE_START, + .end = MODULE_END, + .alignment = 1, + }, + }, +}; + +struct execmem_params __init *execmem_arch_params(void) +{ + execmem_params.ranges[EXECMEM_DEFAULT].pgprot = PAGE_KERNEL; + + return &execmem_params; +} +#endif diff --git a/arch/parisc/kernel/module.c b/arch/parisc/kernel/module.c index 0c6dfd1daef3..fecd2760b7a6 100644 --- a/arch/parisc/kernel/module.c +++ b/arch/parisc/kernel/module.c @@ -174,23 +174,6 @@ static inline int reassemble_22(int as22) ((as22 & 0x0003ff) << 3)); } -static struct execmem_params execmem_params __ro_after_init = { - .ranges = { - [EXECMEM_DEFAULT] = { - .pgprot = PAGE_KERNEL_RWX, - .alignment = 1, - }, - }, -}; - -struct execmem_params __init *execmem_arch_params(void) -{ - execmem_params.ranges[EXECMEM_DEFAULT].start = VMALLOC_START; - execmem_params.ranges[EXECMEM_DEFAULT].end = VMALLOC_END; - - return &execmem_params; -} - #ifndef CONFIG_64BIT static inline unsigned long count_gots(const Elf_Rela *rela, unsigned long n) { diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c index a088c243edea..c87fed38e38e 100644 --- a/arch/parisc/mm/init.c +++ b/arch/parisc/mm/init.c @@ -24,6 +24,7 @@ #include /* for node_online_map */ #include /* for release_pages */ #include +#include #include #include @@ -479,7 +480,7 @@ void free_initmem(void) /* finally dump all the instructions which were cached, since the * pages are no-longer executable */ flush_icache_range(init_begin, init_end); - + free_initmem_default(POISON_FREE_INITMEM); /* set up a new led state on systems shipped LED State panel */ @@ -919,3 +920,22 @@ static const pgprot_t protection_map[16] = { [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_RWX }; DECLARE_VM_GET_PAGE_PROT + +#ifdef CONFIG_EXECMEM +static struct execmem_params execmem_params __ro_after_init = { + .ranges = { + [EXECMEM_DEFAULT] = { + .pgprot = PAGE_KERNEL_RWX, + .alignment = 1, + }, + }, +}; + +struct execmem_params __init *execmem_arch_params(void) +{ + execmem_params.ranges[EXECMEM_DEFAULT].start = VMALLOC_START; + execmem_params.ranges[EXECMEM_DEFAULT].end = VMALLOC_END; + + return &execmem_params; +} +#endif diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c index bf2c62aef628..b30e00964a60 100644 --- a/arch/powerpc/kernel/module.c +++ b/arch/powerpc/kernel/module.c @@ -10,7 +10,6 @@ #include #include #include -#include #include #include #include @@ -89,62 +88,3 @@ int module_finalize(const Elf_Ehdr *hdr, return 0; } - -static struct execmem_params execmem_params __ro_after_init = { - .ranges = { - [EXECMEM_DEFAULT] = { - .alignment = 1, - }, - [EXECMEM_KPROBES] = { - .alignment = 1, - }, - [EXECMEM_MODULE_DATA] = { - .alignment = 1, - }, - }, -}; - -struct execmem_params __init *execmem_arch_params(void) -{ - pgprot_t prot = strict_module_rwx_enabled() ? 
PAGE_KERNEL : PAGE_KERNEL_EXEC; - struct execmem_range *range = &execmem_params.ranges[EXECMEM_DEFAULT]; - - /* - * BOOK3S_32 and 8xx define MODULES_VADDR for text allocations and - * allow allocating data in the entire vmalloc space - */ -#ifdef MODULES_VADDR - struct execmem_range *data = &execmem_params.ranges[EXECMEM_MODULE_DATA]; - unsigned long limit = (unsigned long)_etext - SZ_32M; - - /* First try within 32M limit from _etext to avoid branch trampolines */ - if (MODULES_VADDR < PAGE_OFFSET && MODULES_END > limit) { - range->start = limit; - range->end = MODULES_END; - range->fallback_start = MODULES_VADDR; - range->fallback_end = MODULES_END; - } else { - range->start = MODULES_VADDR; - range->end = MODULES_END; - } - data->start = VMALLOC_START; - data->end = VMALLOC_END; - data->pgprot = PAGE_KERNEL; - data->alignment = 1; -#else - range->start = VMALLOC_START; - range->end = VMALLOC_END; -#endif - - range->pgprot = prot; - - execmem_params.ranges[EXECMEM_KPROBES].start = VMALLOC_START; - execmem_params.ranges[EXECMEM_KPROBES].start = VMALLOC_END; - - if (strict_module_rwx_enabled()) - execmem_params.ranges[EXECMEM_KPROBES].pgprot = PAGE_KERNEL_ROX; - else - execmem_params.ranges[EXECMEM_KPROBES].pgprot = PAGE_KERNEL_EXEC; - - return &execmem_params; -} diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c index 8b121df7b08f..06f4bb6fb780 100644 --- a/arch/powerpc/mm/mem.c +++ b/arch/powerpc/mm/mem.c @@ -16,6 +16,7 @@ #include #include #include +#include #include #include @@ -406,3 +407,64 @@ int devmem_is_allowed(unsigned long pfn) * the EHEA driver. Drop this when drivers/net/ethernet/ibm/ehea is removed. */ EXPORT_SYMBOL_GPL(walk_system_ram_range); + +#ifdef CONFIG_EXECMEM +static struct execmem_params execmem_params __ro_after_init = { + .ranges = { + [EXECMEM_DEFAULT] = { + .alignment = 1, + }, + [EXECMEM_KPROBES] = { + .alignment = 1, + }, + [EXECMEM_MODULE_DATA] = { + .alignment = 1, + }, + }, +}; + +struct execmem_params __init *execmem_arch_params(void) +{ + pgprot_t prot = strict_module_rwx_enabled() ? 
PAGE_KERNEL : PAGE_KERNEL_EXEC; + struct execmem_range *range = &execmem_params.ranges[EXECMEM_DEFAULT]; + + /* + * BOOK3S_32 and 8xx define MODULES_VADDR for text allocations and + * allow allocating data in the entire vmalloc space + */ +#ifdef MODULES_VADDR + struct execmem_range *data = &execmem_params.ranges[EXECMEM_MODULE_DATA]; + unsigned long limit = (unsigned long)_etext - SZ_32M; + + /* First try within 32M limit from _etext to avoid branch trampolines */ + if (MODULES_VADDR < PAGE_OFFSET && MODULES_END > limit) { + range->start = limit; + range->end = MODULES_END; + range->fallback_start = MODULES_VADDR; + range->fallback_end = MODULES_END; + } else { + range->start = MODULES_VADDR; + range->end = MODULES_END; + } + data->start = VMALLOC_START; + data->end = VMALLOC_END; + data->pgprot = PAGE_KERNEL; + data->alignment = 1; +#else + range->start = VMALLOC_START; + range->end = VMALLOC_END; +#endif + + range->pgprot = prot; + + execmem_params.ranges[EXECMEM_KPROBES].start = VMALLOC_START; + execmem_params.ranges[EXECMEM_KPROBES].end = VMALLOC_END; + + if (strict_module_rwx_enabled()) + execmem_params.ranges[EXECMEM_KPROBES].pgprot = PAGE_KERNEL_ROX; + else + execmem_params.ranges[EXECMEM_KPROBES].pgprot = PAGE_KERNEL_EXEC; + + return &execmem_params; +} +#endif diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c index 31505ecb5c72..8af08d5449bf 100644 --- a/arch/riscv/kernel/module.c +++ b/arch/riscv/kernel/module.c @@ -11,7 +11,6 @@ #include #include #include -#include #include #include @@ -436,44 +435,6 @@ int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab, return 0; } -#ifdef CONFIG_MMU -static struct execmem_params execmem_params __ro_after_init = { - .ranges = { - [EXECMEM_DEFAULT] = { - .pgprot = PAGE_KERNEL, - .alignment = 1, - }, - [EXECMEM_KPROBES] = { - .pgprot = PAGE_KERNEL_READ_EXEC, - .alignment = 1, - }, - [EXECMEM_BPF] = { - .pgprot = PAGE_KERNEL, - .alignment = 1, - }, - }, -}; - -struct execmem_params __init *execmem_arch_params(void) -{ -#ifdef CONFIG_64BIT - execmem_params.ranges[EXECMEM_DEFAULT].start = MODULES_VADDR; - execmem_params.ranges[EXECMEM_DEFAULT].end = MODULES_END; -#else - execmem_params.ranges[EXECMEM_DEFAULT].start = VMALLOC_START; - execmem_params.ranges[EXECMEM_DEFAULT].end = VMALLOC_END; -#endif - - execmem_params.ranges[EXECMEM_KPROBES].start = VMALLOC_START; - execmem_params.ranges[EXECMEM_KPROBES].end = VMALLOC_END; - - execmem_params.ranges[EXECMEM_BPF].start = BPF_JIT_REGION_START; - execmem_params.ranges[EXECMEM_BPF].end = BPF_JIT_REGION_END; - - return &execmem_params; -} -#endif - int module_finalize(const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs, struct module *me) diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c index 0798bd861dcb..b0f7848f39e3 100644 --- a/arch/riscv/mm/init.c +++ b/arch/riscv/mm/init.c @@ -24,6 +24,7 @@ #include #endif #include +#include #include #include @@ -1564,3 +1565,41 @@ void __init pgtable_cache_init(void) preallocate_pgd_pages_range(MODULES_VADDR, MODULES_END, "bpf/modules"); } #endif + +#if defined(CONFIG_MMU) && defined(CONFIG_EXECMEM) +static struct execmem_params execmem_params __ro_after_init = { + .ranges = { + [EXECMEM_DEFAULT] = { + .pgprot = PAGE_KERNEL, + .alignment = 1, + }, + [EXECMEM_KPROBES] = { + .pgprot = PAGE_KERNEL_READ_EXEC, + .alignment = 1, + }, + [EXECMEM_BPF] = { + .pgprot = PAGE_KERNEL, + .alignment = 1, + }, + }, +}; + +struct execmem_params __init *execmem_arch_params(void) +{ +#ifdef CONFIG_64BIT + execmem_params.ranges[EXECMEM_DEFAULT].start =
MODULES_VADDR; + execmem_params.ranges[EXECMEM_DEFAULT].end = MODULES_END; +#else + execmem_params.ranges[EXECMEM_DEFAULT].start = VMALLOC_START; + execmem_params.ranges[EXECMEM_DEFAULT].end = VMALLOC_END; +#endif + + execmem_params.ranges[EXECMEM_KPROBES].start = VMALLOC_START; + execmem_params.ranges[EXECMEM_KPROBES].end = VMALLOC_END; + + execmem_params.ranges[EXECMEM_BPF].start = BPF_JIT_REGION_START; + execmem_params.ranges[EXECMEM_BPF].end = BPF_JIT_REGION_END; + + return &execmem_params; +} +#endif diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c index 538d5f24af66..81a8d92ca092 100644 --- a/arch/s390/kernel/module.c +++ b/arch/s390/kernel/module.c @@ -37,31 +37,6 @@ #define PLT_ENTRY_SIZE 22 -static struct execmem_params execmem_params __ro_after_init = { - .ranges = { - [EXECMEM_DEFAULT] = { - .flags = EXECMEM_KASAN_SHADOW, - .alignment = MODULE_ALIGN, - .pgprot = PAGE_KERNEL, - }, - }, -}; - -struct execmem_params __init *execmem_arch_params(void) -{ - unsigned long module_load_offset = 0; - unsigned long start; - - if (kaslr_enabled()) - module_load_offset = get_random_u32_inclusive(1, 1024) * PAGE_SIZE; - - start = MODULES_VADDR + module_load_offset; - execmem_params.ranges[EXECMEM_DEFAULT].start = start; - execmem_params.ranges[EXECMEM_DEFAULT].end = MODULES_END; - - return &execmem_params; -} - #ifdef CONFIG_FUNCTION_TRACER void module_arch_cleanup(struct module *mod) { diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c index 8b94d2212d33..2e6d6512fc5f 100644 --- a/arch/s390/mm/init.c +++ b/arch/s390/mm/init.c @@ -34,6 +34,7 @@ #include #include #include +#include #include #include #include @@ -311,3 +312,30 @@ void arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap) vmem_remove_mapping(start, size); } #endif /* CONFIG_MEMORY_HOTPLUG */ + +#ifdef CONFIG_EXECMEM +static struct execmem_params execmem_params __ro_after_init = { + .ranges = { + [EXECMEM_DEFAULT] = { + .flags = EXECMEM_KASAN_SHADOW, + .alignment = MODULE_ALIGN, + .pgprot = PAGE_KERNEL, + }, + }, +}; + +struct execmem_params __init *execmem_arch_params(void) +{ + unsigned long module_load_offset = 0; + unsigned long start; + + if (kaslr_enabled()) + module_load_offset = get_random_u32_inclusive(1, 1024) * PAGE_SIZE; + + start = MODULES_VADDR + module_load_offset; + execmem_params.ranges[EXECMEM_DEFAULT].start = start; + execmem_params.ranges[EXECMEM_DEFAULT].end = MODULES_END; + + return &execmem_params; +} +#endif diff --git a/arch/sparc/kernel/module.c b/arch/sparc/kernel/module.c index 1d8d1fba95b9..dff1d85ba202 100644 --- a/arch/sparc/kernel/module.c +++ b/arch/sparc/kernel/module.c @@ -14,7 +14,6 @@ #include #include #include -#include #ifdef CONFIG_SPARC64 #include #endif @@ -25,28 +24,6 @@ #include "entry.h" -static struct execmem_params execmem_params __ro_after_init = { - .ranges = { - [EXECMEM_DEFAULT] = { -#ifdef CONFIG_SPARC64 - .start = MODULES_VADDR, - .end = MODULES_END, -#else - .start = VMALLOC_START, - .end = VMALLOC_END, -#endif - .alignment = 1, - }, - }, -}; - -struct execmem_params __init *execmem_arch_params(void) -{ - execmem_params.ranges[EXECMEM_DEFAULT].pgprot = PAGE_KERNEL; - - return &execmem_params; -} - /* Make generic code ignore STT_REGISTER dummy undefined symbols. 
*/ int module_frob_arch_sections(Elf_Ehdr *hdr, Elf_Shdr *sechdrs, diff --git a/arch/sparc/mm/Makefile b/arch/sparc/mm/Makefile index 871354aa3c00..87e2cf7efb5b 100644 --- a/arch/sparc/mm/Makefile +++ b/arch/sparc/mm/Makefile @@ -15,3 +15,5 @@ obj-$(CONFIG_SPARC32) += leon_mm.o # Only used by sparc64 obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o + +obj-$(CONFIG_EXECMEM) += execmem.o diff --git a/arch/sparc/mm/execmem.c b/arch/sparc/mm/execmem.c new file mode 100644 index 000000000000..fb53a859869a --- /dev/null +++ b/arch/sparc/mm/execmem.c @@ -0,0 +1,25 @@ +// SPDX-License-Identifier: GPL-2.0 +#include +#include + +static struct execmem_params execmem_params __ro_after_init = { + .ranges = { + [EXECMEM_DEFAULT] = { +#ifdef CONFIG_SPARC64 + .start = MODULES_VADDR, + .end = MODULES_END, +#else + .start = VMALLOC_START, + .end = VMALLOC_END, +#endif + .alignment = 1, + }, + }, +}; + +struct execmem_params __init *execmem_arch_params(void) +{ + execmem_params.ranges[EXECMEM_DEFAULT].pgprot = PAGE_KERNEL; + + return &execmem_params; +} diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c index 9d37375e2f05..c52d591c0f3f 100644 --- a/arch/x86/kernel/module.c +++ b/arch/x86/kernel/module.c @@ -19,7 +19,6 @@ #include #include #include -#include #include #include @@ -37,32 +36,6 @@ do { \ } while (0) #endif -static struct execmem_params execmem_params __ro_after_init = { - .ranges = { - [EXECMEM_DEFAULT] = { - .flags = EXECMEM_KASAN_SHADOW, - .alignment = MODULE_ALIGN, - }, - }, -}; - -struct execmem_params __init *execmem_arch_params(void) -{ - unsigned long module_load_offset = 0; - unsigned long start; - - if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_enabled()) - module_load_offset = - get_random_u32_inclusive(1, 1024) * PAGE_SIZE; - - start = MODULES_VADDR + module_load_offset; - execmem_params.ranges[EXECMEM_DEFAULT].start = start; - execmem_params.ranges[EXECMEM_DEFAULT].end = MODULES_END; - execmem_params.ranges[EXECMEM_DEFAULT].pgprot = PAGE_KERNEL; - - return &execmem_params; -} - #ifdef CONFIG_X86_32 int apply_relocate(Elf32_Shdr *sechdrs, const char *strtab, diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c index 679893ea5e68..022af7ab50f9 100644 --- a/arch/x86/mm/init.c +++ b/arch/x86/mm/init.c @@ -7,6 +7,7 @@ #include #include #include +#include #include #include @@ -1099,3 +1100,31 @@ unsigned long arch_max_swapfile_size(void) return pages; } #endif + +#ifdef CONFIG_EXECMEM +static struct execmem_params execmem_params __ro_after_init = { + .ranges = { + [EXECMEM_DEFAULT] = { + .flags = EXECMEM_KASAN_SHADOW, + .alignment = MODULE_ALIGN, + }, + }, +}; + +struct execmem_params __init *execmem_arch_params(void) +{ + unsigned long module_load_offset = 0; + unsigned long start; + + if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_enabled()) + module_load_offset = + get_random_u32_inclusive(1, 1024) * PAGE_SIZE; + + start = MODULES_VADDR + module_load_offset; + execmem_params.ranges[EXECMEM_DEFAULT].start = start; + execmem_params.ranges[EXECMEM_DEFAULT].end = MODULES_END; + execmem_params.ranges[EXECMEM_DEFAULT].pgprot = PAGE_KERNEL; + + return &execmem_params; +} +#endif /* CONFIG_EXECMEM */ From patchwork Mon Sep 18 07:29:53 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 13389037 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) 
(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 958B0C46CA1 for ; Mon, 18 Sep 2023 07:32:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=kmRl+Ispcfaal/6f9UKb4Tx4XefxHvQZ/I9c1wgMT2o=; b=CqWJup6GE1JVW1 0WYznDOiuqt7EDRkQOO2bwFt4ZX8UEOAcILhRFS+rulw0NkmXFnT08JSeplkkSCldBuX1vrsmeNXa jKIJTqg+MAClwFoBXF6GohuxgezTCNYT0ZoWikU3++RtXTEb0xDzYTqTr+sBX3M8rdauohhm+WCSG Bk2v1AskqeMllGjaCoIQqgNJyBnT2DtFW7OM4FC1ZQoVbkQOVt1HSVu4418XWhKEwdQh3Bt59cUzn l5oiStP/ib6krVE6kVtluNCoatLULn4mHwMbFxAUZ4uAIvE0kQC5ndyR2GpOhhDUEPHJZxNPJCRX8 RmoxAEUE7Wy3p6OquxiA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qi8jp-00EiNj-1x; Mon, 18 Sep 2023 07:32:09 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qi8ji-00EiGv-39; Mon, 18 Sep 2023 07:32:05 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 6804C60FA1; Mon, 18 Sep 2023 07:32:02 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id EE792C433BC; Mon, 18 Sep 2023 07:31:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1695022322; bh=1UL8tl9eANdNOFG3ZTSYgBTww7VCSn8pNWCEuJfnTK4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=jOnWnURBWcrt92iQ1E5Vr3mXWyQ6RAJ4i9B8tsY7FXOUVwWaNtFLbRVsEzVfJ0Nfg zcF2T76bilI3SPkaUjpMzEfs1sQN/JzTRMR83DnXh+Ov+MJ2AuKwZwF8Fnu/AObC0k QAdcrio7/1JZevOAFEj9aMF9cUGQUD6UCX9K1G2jVF2I2HWuIYbnj0wy/Rm4EXVNom HeAH3TaXGuMZDe4Y3BnUwv/DnuIZbNW8G+v45crhACvScxYPJb49nAjf6MNnqIIZFe chFFMgGweSzKG7hh/3g3keI9zbSw1HmwgbPcUexA3rwgKV2tInhKeo5IsMuopaavQ2 VqOvvHTW8VtZw== From: Mike Rapoport To: linux-kernel@vger.kernel.org Cc: Andrew Morton , =?utf-8?b?QmrDtnJuIFTDtnBl?= =?utf-8?b?bA==?= , Catalin Marinas , Christophe Leroy , "David S. Miller" , Dinh Nguyen , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Luis Chamberlain , Mark Rutland , Michael Ellerman , Mike Rapoport , Nadav Amit , "Naveen N. 
Rao" , Palmer Dabbelt , Puranjay Mohan , Rick Edgecombe , Russell King , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH v3 11/13] x86/ftrace: enable dynamic ftrace without CONFIG_MODULES Date: Mon, 18 Sep 2023 10:29:53 +0300 Message-Id: <20230918072955.2507221-12-rppt@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230918072955.2507221-1-rppt@kernel.org> References: <20230918072955.2507221-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230918_003203_130281_8082FB7B X-CRM114-Status: GOOD ( 13.59 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" Dynamic ftrace must allocate memory for code and this was impossible without CONFIG_MODULES. With execmem separated from the modules code, execmem_text_alloc() is available regardless of CONFIG_MODULES. Remove dependency of dynamic ftrace on CONFIG_MODULES and make CONFIG_DYNAMIC_FTRACE select CONFIG_EXECMEM in Kconfig. Signed-off-by: Mike Rapoport (IBM) --- arch/x86/Kconfig | 1 + arch/x86/kernel/ftrace.c | 10 ---------- 2 files changed, 1 insertion(+), 10 deletions(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 982b777eadc7..cc7c4a0a8c16 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -35,6 +35,7 @@ config X86_64 select SWIOTLB select ARCH_HAS_ELFCORE_COMPAT select ZONE_DMA32 + select EXECMEM if DYNAMIC_FTRACE config FORCE_DYNAMIC_FTRACE def_bool y diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c index ae56d79a6a74..7ed7e8297ba3 100644 --- a/arch/x86/kernel/ftrace.c +++ b/arch/x86/kernel/ftrace.c @@ -261,8 +261,6 @@ void arch_ftrace_update_code(int command) /* Currently only x86_64 supports dynamic trampolines */ #ifdef CONFIG_X86_64 -#ifdef CONFIG_MODULES -/* Module allocation simplifies allocating memory for code */ static inline void *alloc_tramp(unsigned long size) { return execmem_text_alloc(EXECMEM_FTRACE, size); @@ -271,14 +269,6 @@ static inline void tramp_free(void *tramp) { execmem_free(tramp); } -#else -/* Trampolines can only be created if modules are supported */ -static inline void *alloc_tramp(unsigned long size) -{ - return NULL; -} -static inline void tramp_free(void *tramp) { } -#endif /* Defined as markers to the end of the ftrace default trampolines */ extern void ftrace_regs_caller_end(void); From patchwork Mon Sep 18 07:29:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 13389038 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by 
smtp.lore.kernel.org (Postfix) with ESMTPS id C631FCD37B0 for ; Mon, 18 Sep 2023 07:32:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=w8axNuZf88igPvPHQ7cX++4iWY1TaqJ/Jbi+unSB5yw=; b=LM5i8jR3Fi8lDO iMZiSaJkZd6G9S7Ln9gapw6XFCDDJtNo2gViezHD4XTZsJgdIn/dkaNd/soMS2SoRm7dvNYZB5mfe 3/ILadRk+rT0tssHeiPr8Av4dUoueN+6kG0XSBH+4UCD0VJzIUNvOtMdmscrCAxgb80XsHsLehYOR rBEe1Cuz/v3DyN/6e8JWmZjQw+bVwyt8w4wQtjjJVQjzNm5856qafkRn9WPgD0cwjILlkmKIa0k9E eWg83sbHFR/n1Y9BTl0ynzy6qRut0CWasX27aP3glEWg50lqvhOqwO9hze3/3kBkQ3E1+Us1lYkAn XzqhKKOe8/l2m6iYrI/A==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qi8jw-00EiTl-0v; Mon, 18 Sep 2023 07:32:16 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qi8jt-00EiQb-11; Mon, 18 Sep 2023 07:32:15 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id B68D660FA0; Mon, 18 Sep 2023 07:32:12 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id B212CC433A9; Mon, 18 Sep 2023 07:32:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1695022331; bh=g2ysQzPrupxvZBEk8R4nz6nGqIwxox2mdTADaK2KSBo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=kT3NctsEzR46ZMcvQmZYEzpOY6ldmIA+UpjJflzhcN6xMD3eDfrH5TshpxPSzqYKZ 4RtqZhzZqqMGjQFg0TiMJ4pIA+4pgayZMlfPQIIGjFZSqPjnRRF0/SACLSULT/WzJX e5w4J3wzseysRp3HjruUqlEklzGHP8/6ICjr9417gwdOyxSuUT0n9ca5FaDmjgMb4N wh+TlKoXNolf6tuh2rWFnkZ3l88wMsKsT34tZyIMrsdUBBtjJC/AKV44QLF3klepDN XUU8NCM4DCe8D1KxoJHEl8Op3szJMcdX1IE3cH+lAoAp79l+6uCu78MbUgn4aNR/c+ j8TJxIqTZhi6w== From: Mike Rapoport To: linux-kernel@vger.kernel.org Cc: Andrew Morton , =?utf-8?b?QmrDtnJuIFTDtnBl?= =?utf-8?b?bA==?= , Catalin Marinas , Christophe Leroy , "David S. Miller" , Dinh Nguyen , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Luis Chamberlain , Mark Rutland , Michael Ellerman , Mike Rapoport , Nadav Amit , "Naveen N. 
Rao" , Palmer Dabbelt , Puranjay Mohan , Rick Edgecombe , Russell King , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH v3 12/13] kprobes: remove dependency on CONFIG_MODULES Date: Mon, 18 Sep 2023 10:29:54 +0300 Message-Id: <20230918072955.2507221-13-rppt@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230918072955.2507221-1-rppt@kernel.org> References: <20230918072955.2507221-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230918_003213_461250_50317B02 X-CRM114-Status: GOOD ( 18.68 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" kprobes depended on CONFIG_MODULES because it has to allocate memory for code. Since code allocations are now implemented with execmem, kprobes can be enabled in non-modular kernels. Add #ifdef CONFIG_MODULE guards for the code dealing with kprobes inside modules, make CONFIG_KPROBES select CONFIG_EXECMEM and drop the dependency of CONFIG_KPROBES on CONFIG_MODULES. Signed-off-by: Mike Rapoport (IBM) --- arch/Kconfig | 2 +- kernel/kprobes.c | 43 +++++++++++++++++++++---------------- kernel/trace/trace_kprobe.c | 11 ++++++++++ 3 files changed, 37 insertions(+), 19 deletions(-) diff --git a/arch/Kconfig b/arch/Kconfig index 12d51495caec..c52a600b63ca 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -52,9 +52,9 @@ config GENERIC_ENTRY config KPROBES bool "Kprobes" - depends on MODULES depends on HAVE_KPROBES select KALLSYMS + select EXECMEM select TASKS_RCU if PREEMPTION help Kprobes allows you to trap at almost any kernel address and diff --git a/kernel/kprobes.c b/kernel/kprobes.c index 0ccb4d2ec9a2..c95d0088f966 100644 --- a/kernel/kprobes.c +++ b/kernel/kprobes.c @@ -1580,6 +1580,7 @@ static int check_kprobe_address_safe(struct kprobe *p, goto out; } +#ifdef CONFIG_MODULES /* Check if 'p' is probing a module. 
*/ *probed_mod = __module_text_address((unsigned long) p->addr); if (*probed_mod) { @@ -1603,6 +1604,8 @@ static int check_kprobe_address_safe(struct kprobe *p, ret = -ENOENT; } } +#endif + out: preempt_enable(); jump_label_unlock(); @@ -2495,24 +2498,6 @@ int kprobe_add_area_blacklist(unsigned long start, unsigned long end) return 0; } -/* Remove all symbols in given area from kprobe blacklist */ -static void kprobe_remove_area_blacklist(unsigned long start, unsigned long end) -{ - struct kprobe_blacklist_entry *ent, *n; - - list_for_each_entry_safe(ent, n, &kprobe_blacklist, list) { - if (ent->start_addr < start || ent->start_addr >= end) - continue; - list_del(&ent->list); - kfree(ent); - } -} - -static void kprobe_remove_ksym_blacklist(unsigned long entry) -{ - kprobe_remove_area_blacklist(entry, entry + 1); -} - int __weak arch_kprobe_get_kallsym(unsigned int *symnum, unsigned long *value, char *type, char *sym) { @@ -2577,6 +2562,25 @@ static int __init populate_kprobe_blacklist(unsigned long *start, return ret ? : arch_populate_kprobe_blacklist(); } +#ifdef CONFIG_MODULES +/* Remove all symbols in given area from kprobe blacklist */ +static void kprobe_remove_area_blacklist(unsigned long start, unsigned long end) +{ + struct kprobe_blacklist_entry *ent, *n; + + list_for_each_entry_safe(ent, n, &kprobe_blacklist, list) { + if (ent->start_addr < start || ent->start_addr >= end) + continue; + list_del(&ent->list); + kfree(ent); + } +} + +static void kprobe_remove_ksym_blacklist(unsigned long entry) +{ + kprobe_remove_area_blacklist(entry, entry + 1); +} + static void add_module_kprobe_blacklist(struct module *mod) { unsigned long start, end; @@ -2678,6 +2682,7 @@ static struct notifier_block kprobe_module_nb = { .notifier_call = kprobes_module_callback, .priority = 0 }; +#endif void kprobe_free_init_mem(void) { @@ -2737,8 +2742,10 @@ static int __init init_kprobes(void) err = arch_init_kprobes(); if (!err) err = register_die_notifier(&kprobe_exceptions_nb); +#ifdef CONFIG_MODULES if (!err) err = register_module_notifier(&kprobe_module_nb); +#endif kprobes_initialized = (err == 0); kprobe_sysctls_init(); diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c index 3d7a180a8427..25a5293a80c0 100644 --- a/kernel/trace/trace_kprobe.c +++ b/kernel/trace/trace_kprobe.c @@ -111,6 +111,7 @@ static nokprobe_inline bool trace_kprobe_within_module(struct trace_kprobe *tk, return strncmp(module_name(mod), name, len) == 0 && name[len] == ':'; } +#ifdef CONFIG_MODULES static nokprobe_inline bool trace_kprobe_module_exist(struct trace_kprobe *tk) { char *p; @@ -129,6 +130,12 @@ static nokprobe_inline bool trace_kprobe_module_exist(struct trace_kprobe *tk) return ret; } +#else +static inline bool trace_kprobe_module_exist(struct trace_kprobe *tk) +{ + return false; +} +#endif static bool trace_kprobe_is_busy(struct dyn_event *ev) { @@ -670,6 +677,7 @@ static int register_trace_kprobe(struct trace_kprobe *tk) return ret; } +#ifdef CONFIG_MODULES /* Module notifier call back, checking event on the module */ static int trace_kprobe_module_callback(struct notifier_block *nb, unsigned long val, void *data) @@ -704,6 +712,7 @@ static struct notifier_block trace_kprobe_module_nb = { .notifier_call = trace_kprobe_module_callback, .priority = 1 /* Invoked after kprobe module callback */ }; +#endif static int __trace_kprobe_create(int argc, const char *argv[]) { @@ -1810,8 +1819,10 @@ static __init int init_kprobe_trace_early(void) if (ret) return ret; +#ifdef CONFIG_MODULES if 
(register_module_notifier(&trace_kprobe_module_nb)) return -EINVAL; +#endif return 0; } From patchwork Mon Sep 18 07:29:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 13389039 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 994D9CD37B0 for ; Mon, 18 Sep 2023 07:32:54 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=/yp6Yp+l7GmYEoYvLBvpcAoCJ4JCeo9+WizF6hnQ19E=; b=wHScUGFvWKTEH+ r/+9kMhuwps/POSg2HEw2gaw3eCSYE0z8nmbf+c6eojxEwdZLWaRYIet+qInAFRThYGQe+7MiWzL5 H0RC/s3YQmbzZ+A1I4j/4mnI8v4couH61AjOxanZINZ1wsFHKJNTTvmoLG0FX1YELM1kKvgZlWBQ7 qkHvoWFv5JdYUaCZVbL/xTizhrTccp9oFSpSUzZhSMPyQUYbFuJctaKTg5h4lOJ/19XAaOfXeJ6tq 6/iVEtOmAmwBtyMbEpN0yY7hIlACBmYW6nQDoaK4fyJuM45nGVC8mkvHyFZXNC9BLo2QfHpQEE4Ea Wsy6aMMfBpJTHIeHJ24A==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qi8k9-00EiiB-0B; Mon, 18 Sep 2023 07:32:29 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qi8k2-00EiZQ-2t; Mon, 18 Sep 2023 07:32:26 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 5D75D60F9A; Mon, 18 Sep 2023 07:32:22 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6BC40C433B9; Mon, 18 Sep 2023 07:32:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1695022341; bh=07Jp3LphXzl4EkHja0JdIUA/xjHpNGiZuguIXloPztM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Vnbzf3QhQuVmwYQAf2Mt0WSZBefd5lhLSFtkXhwMC+n+X7N4ki+6T0boDwsgMr0XN AAdXg1yYsIa/lFW8JGjF+xmiLYY63eHqaMwytJIlEVwOkPrMrojuYJZhQpE797YZju GZcg8J0n+fEArc0GhUtiTV4LpXaeVrImgv2qDiS2pZ01DSA9wJwPYqPfL7KPbaX6bE QYQH9HSRdQuGBz3raOjQKyLrYnfzs7cZoMWOYQ1LzctZYuLHDNxMU/K31vWL6RTmKL zi7xoAjilllhk4DMDBlwvUkOVzwZl6SwuVwOf6rk+tbCRJ2s6/D0PIr6IMDdrgSqtE mxvGlqxn371yQ== From: Mike Rapoport To: linux-kernel@vger.kernel.org Cc: Andrew Morton , =?utf-8?b?QmrDtnJuIFTDtnBl?= =?utf-8?b?bA==?= , Catalin Marinas , Christophe Leroy , "David S. Miller" , Dinh Nguyen , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Luis Chamberlain , Mark Rutland , Michael Ellerman , Mike Rapoport , Nadav Amit , "Naveen N. 
Rao" , Palmer Dabbelt , Puranjay Mohan , Rick Edgecombe , Russell King , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH v3 13/13] bpf: remove CONFIG_BPF_JIT dependency on CONFIG_MODULES of Date: Mon, 18 Sep 2023 10:29:55 +0300 Message-Id: <20230918072955.2507221-14-rppt@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230918072955.2507221-1-rppt@kernel.org> References: <20230918072955.2507221-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230918_003222_983895_7E32C527 X-CRM114-Status: GOOD ( 15.21 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" BPF just-in-time compiler depended on CONFIG_MODULES because it used module_alloc() to allocate memory for the generated code. Since code allocations are now implemented with execmem, drop dependency of CONFIG_BPF_JIT on CONFIG_MODULES and make it select CONFIG_EXECMEM. Suggested-by: Björn Töpel Signed-off-by: Mike Rapoport (IBM) --- kernel/bpf/Kconfig | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/bpf/Kconfig b/kernel/bpf/Kconfig index 6a906ff93006..5be11c906f93 100644 --- a/kernel/bpf/Kconfig +++ b/kernel/bpf/Kconfig @@ -42,7 +42,7 @@ config BPF_JIT bool "Enable BPF Just In Time compiler" depends on BPF depends on HAVE_CBPF_JIT || HAVE_EBPF_JIT - depends on MODULES + select EXECMEM help BPF programs are normally handled by a BPF interpreter. This option allows the kernel to generate native code when a program is loaded