From patchwork Mon Sep 18 07:29:49 2023
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13388994
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Cc: Andrew Morton, Björn Töpel, Catalin Marinas, Christophe Leroy,
 "David S. Miller", Dinh Nguyen, Heiko Carstens, Helge Deller,
 Huacai Chen, Kent Overstreet, Luis Chamberlain, Mark Rutland,
 Michael Ellerman, Mike Rapoport, Nadav Amit, "Naveen N. Rao",
 Palmer Dabbelt, Puranjay Mohan, Rick Edgecombe, Russell King,
 Song Liu, Steven Rostedt, Thomas Bogendoerfer, Thomas Gleixner,
 Will Deacon, bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-mips@vger.kernel.org, linux-mm@kvack.org,
 linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org,
 linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 loongarch@lists.linux.dev, netdev@vger.kernel.org,
 sparclinux@vger.kernel.org, x86@kernel.org
Subject: [PATCH v3 07/13] arm64, execmem: extend execmem_params for generated
 code allocations
Date: Mon, 18 Sep 2023 10:29:49 +0300
Message-Id: <20230918072955.2507221-8-rppt@kernel.org>
In-Reply-To: <20230918072955.2507221-1-rppt@kernel.org>
References: <20230918072955.2507221-1-rppt@kernel.org>

From: "Mike Rapoport (IBM)"

The memory allocations for kprobes and BPF on arm64 can be placed
anywhere in the vmalloc address space, and currently this is implemented
with arm64-specific overrides of alloc_insn_page() and
bpf_jit_alloc_exec().

Define EXECMEM_KPROBES and EXECMEM_BPF ranges in arm64::execmem_params
and drop the overrides of alloc_insn_page() and bpf_jit_alloc_exec().

[ An illustrative sketch of how a generic allocator is expected to
  consume these ranges follows after the diff. ]
Signed-off-by: Mike Rapoport (IBM)
Acked-by: Will Deacon
---
 arch/arm64/kernel/module.c         | 13 +++++++++++++
 arch/arm64/kernel/probes/kprobes.c |  7 -------
 arch/arm64/net/bpf_jit_comp.c      | 11 -----------
 3 files changed, 13 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index cd6320de1c54..d27db168d2a2 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -116,6 +116,16 @@ static struct execmem_params execmem_params __ro_after_init = {
 			.flags = EXECMEM_KASAN_SHADOW,
 			.alignment = MODULE_ALIGN,
 		},
+		[EXECMEM_KPROBES] = {
+			.start = VMALLOC_START,
+			.end = VMALLOC_END,
+			.alignment = 1,
+		},
+		[EXECMEM_BPF] = {
+			.start = VMALLOC_START,
+			.end = VMALLOC_END,
+			.alignment = 1,
+		},
 	},
 };
 
@@ -140,6 +150,9 @@ struct execmem_params __init *execmem_arch_params(void)
 		r->end = module_plt_base + SZ_2G;
 	}
 
+	execmem_params.ranges[EXECMEM_KPROBES].pgprot = PAGE_KERNEL_ROX;
+	execmem_params.ranges[EXECMEM_BPF].pgprot = PAGE_KERNEL;
+
 	return &execmem_params;
 }
 
diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
index 70b91a8c6bb3..6fccedd02b2a 100644
--- a/arch/arm64/kernel/probes/kprobes.c
+++ b/arch/arm64/kernel/probes/kprobes.c
@@ -129,13 +129,6 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p)
 	return 0;
 }
 
-void *alloc_insn_page(void)
-{
-	return __vmalloc_node_range(PAGE_SIZE, 1, VMALLOC_START, VMALLOC_END,
-			GFP_KERNEL, PAGE_KERNEL_ROX, VM_FLUSH_RESET_PERMS,
-			NUMA_NO_NODE, __builtin_return_address(0));
-}
-
 /* arm kprobe: install breakpoint in text */
 void __kprobes arch_arm_kprobe(struct kprobe *p)
 {
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 150d1c6543f7..3a7590f828d1 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -1687,17 +1687,6 @@ u64 bpf_jit_alloc_exec_limit(void)
 	return VMALLOC_END - VMALLOC_START;
 }
 
-void *bpf_jit_alloc_exec(unsigned long size)
-{
-	/* Memory is intended to be executable, reset the pointer tag. */
-	return kasan_reset_tag(vmalloc(size));
-}
-
-void bpf_jit_free_exec(void *addr)
-{
-	return vfree(addr);
-}
-
 /* Indicate the JIT backend supports mixing bpf2bpf and tailcalls. */
 bool bpf_jit_supports_subprog_tailcalls(void)
 {
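
For context only, not part of the patch: below is a minimal sketch of how
a generic execmem allocator is expected to consume the per-type ranges
declared above, so that the default alloc_insn_page() and
bpf_jit_alloc_exec() keep the behaviour of the removed arm64 overrides.
The struct and helper names here (struct execmem_range,
execmem_alloc_from_range()) are illustrative assumptions based on the
fields visible in this diff, not the exact API of the series.

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/numa.h>
#include <linux/vmalloc.h>

/* Assumed shape (subset) of one entry of execmem_params.ranges[]. */
struct execmem_range {
	unsigned long	start;
	unsigned long	end;
	pgprot_t	pgprot;
	unsigned int	alignment;
};

static void *execmem_alloc_from_range(struct execmem_range *r, size_t size)
{
	/*
	 * Allocate inside the window the architecture declared for this
	 * type, with the permissions it declared, mirroring what the
	 * removed arm64 alloc_insn_page() did for kprobes.
	 */
	return __vmalloc_node_range(size, r->alignment, r->start, r->end,
				    GFP_KERNEL, r->pgprot,
				    VM_FLUSH_RESET_PERMS, NUMA_NO_NODE,
				    __builtin_return_address(0));
}

With the pgprot values set in execmem_arch_params(), kprobe instruction
pages stay PAGE_KERNEL_ROX as before, while BPF JIT images are allocated
PAGE_KERNEL and, as before, get their final permissions from the JIT core
once the code has been emitted.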