From patchwork Sun May 5 16:06:13 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 13654533 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B278EC4345F for ; Sun, 5 May 2024 16:07:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=ETA3vspOVT+/VB9v2tjXXqy1IG7cq8BEG03uD8FfWYI=; b=aCvJK0rV5KA93F OHEyfvFxznpQ8+uhpU0ltT3Ba9Y0m0+UMbXOr6kSx7nGjx4ZdQHacKp4Ar6aGxjSQHgYCZLMk1mHA 7VnpuvvMifOol1F966P/axbI4XZSexP5jCi7eFASi2NRxxFJATWI/H54eTTkkK7iELtGVWyUGpRK6 gCT8sR69VGKioDK1E0t0W6Yn/YTRispHx9sso+pvG7OGI6i1vxY6Q83dsJ0JLBBf+eh/3m/JrUvdd mRppkP0ggygwT29nFT1/KY08Z/gdZ2fRTSUIkEdcoLmyNovQ48mEA0bQCUfxtyhdkjmr8nOOuNFs6 MD6xosAHG8YmTz+0oYWA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1s3eOH-00000004glP-3KJJ; Sun, 05 May 2024 16:07:05 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1s3eOD-00000004ghu-2lpq; Sun, 05 May 2024 16:07:03 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id 8666060C90; Sun, 5 May 2024 16:07:00 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7F4F3C4DDE3; Sun, 5 May 2024 16:06:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1714925220; bh=M6QQiy27Yx7cZTOC0617KB9Ln6pKKwikPWIcgenY/bM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=qq9xkaB6DJwbvT6OTP2sGk7b6G6bmDN5GyK5sBXSm32BVbedH8jHvu58xQEar8ba4 uKzUigAgbBAtsz8r5l3mt2L2za5eSKU/+BjQNHwkb8bkk1dSzjGw0MuhLVplgL5ZP8 9+4Xjjkzq/fb5aRZ15hEIYgAO+SLQ3ZdibwPKIo+aOqxO1Wde5HY2qBWwQ/K0irIeT ivaaNIlis9trjR8Xspz86sAH3Biv9Y1hr0lxZc08RADzGXQ7JtgHd1tAb126rA5WHB b8dLhnVtioffkmnpyWBCyjSpVlTprGY0DDKi7/kPoX4BYqxHmygNlAT6hSyOu4Q7Tl 8KyBh8i4EO9TA== From: Mike Rapoport To: linux-kernel@vger.kernel.org Cc: Alexandre Ghiti , Andrew Morton , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , Catalin Marinas , Christophe Leroy , "David S. 
Miller" , Dinh Nguyen , Donald Dutile , Eric Chanudet , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Liviu Dudau , Luis Chamberlain , Mark Rutland , Masami Hiramatsu , Michael Ellerman , Mike Rapoport , Nadav Amit , Palmer Dabbelt , Peter Zijlstra , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?q?=C3=A9?= , Rick Edgecombe , Russell King , Sam Ravnborg , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH RESEND v8 01/16] arm64: module: remove unneeded call to kasan_alloc_module_shadow() Date: Sun, 5 May 2024 19:06:13 +0300 Message-ID: <20240505160628.2323363-2-rppt@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240505160628.2323363-1-rppt@kernel.org> References: <20240505160628.2323363-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240505_090701_820478_BB2E6437 X-CRM114-Status: GOOD ( 11.75 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" Since commit f6f37d9320a1 ("arm64: select KASAN_VMALLOC for SW/HW_TAGS modes") KASAN_VMALLOC is always enabled when KASAN is on. This means that allocations in module_alloc() will be tracked by KASAN protection for vmalloc() and that kasan_alloc_module_shadow() will be always an empty inline and there is no point in calling it. Drop meaningless call to kasan_alloc_module_shadow() from module_alloc(). Signed-off-by: Mike Rapoport (IBM) --- arch/arm64/kernel/module.c | 5 ----- 1 file changed, 5 deletions(-) diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c index 47e0be610bb6..e92da4da1b2a 100644 --- a/arch/arm64/kernel/module.c +++ b/arch/arm64/kernel/module.c @@ -141,11 +141,6 @@ void *module_alloc(unsigned long size) __func__); } - if (p && (kasan_alloc_module_shadow(p, size, GFP_KERNEL) < 0)) { - vfree(p); - return NULL; - } - /* Memory is intended to be executable, reset the pointer tag. 
*/ return kasan_reset_tag(p); } From patchwork Sun May 5 16:06:14 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 13654534 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 049D7C25B10 for ; Sun, 5 May 2024 16:07:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=+9+oQoKkH4E+S3pDQD3bI0SLM+L5M9zB34elmO8FmxA=; b=eg2UM9pGNHEsJO 7Ch6zVFeorp7ZlKuWqzG83/3/CngAfQYey4YPtwvMJYDAYC8RZzmtUbgE7hSLC1dFTiv6id9cO/eQ CUyLWRGJiAkxEGNXK+ctsrPJ030wqPLGykZO4IvdJ3YLSvvVKsalp1R0++jJ/eukT5RTzaUk2xDKI KCvqf3F4AEJojBiZNE2+kMCVtYWeG5ImcNHVM9IPfTkkJC6CCbGWWBqtX2ltpqC4mMb+FhNsza/0y L8Y+gDL+U4x72IB5xKpAgSthWq8BLpYxXRMHyVeFDOLT5ex147FZsVbFCTA/h8JcJuO0XWQW+kCtw N20UAj9Ixt79sRq0juYQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1s3eOW-00000004gxc-1BKW; Sun, 05 May 2024 16:07:20 +0000 Received: from sin.source.kernel.org ([2604:1380:40e1:4800::1]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1s3eOQ-00000004gsl-3czH; Sun, 05 May 2024 16:07:18 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sin.source.kernel.org (Postfix) with ESMTP id B697CCE0AB2; Sun, 5 May 2024 16:07:12 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id BA2B9C4AF66; Sun, 5 May 2024 16:07:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1714925232; bh=DCxnhfiOCuZ1vbHkz6N1kzg5vD2Mqvj8bCN8VHj/6O4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=QWa5fX+V89TTOP2GlOwvt4twcfdvHMMVPziSmieuPgSBPhWUfbisumxwPB1HxUlJC SghRc/O2IWhq+5uAIL19kBNECjN8Eixj0O+M//1MOzKPjdGEP8/j+8gjwWBR7/8Vtv i7T8kC3S32Mwv38l3PPjKsuGj9NwQptUrFC7qD4hlKFuj2gsb7BzQBL9P1Qxlab7zw t8bDt0pLdKBWzzFTEBo2OuKTmEqM7tos601oUB0a7RFNS6UWZeZpUR1839V3e6YHkl zJMtmY6L2ynGW1NwTp+wFQaP7cjkH15KVcoDbS6MIsORkgjuPaM+IhnyDgczaJyRYq dq0jDZ+sJ6Etw== From: Mike Rapoport To: linux-kernel@vger.kernel.org Cc: Alexandre Ghiti , Andrew Morton , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , Catalin Marinas , Christophe Leroy , "David S. 
Miller" , Dinh Nguyen , Donald Dutile , Eric Chanudet , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Liviu Dudau , Luis Chamberlain , Mark Rutland , Masami Hiramatsu , Michael Ellerman , Mike Rapoport , Nadav Amit , Palmer Dabbelt , Peter Zijlstra , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?q?=C3=A9?= , Rick Edgecombe , Russell King , Sam Ravnborg , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH RESEND v8 02/16] mips: module: rename MODULE_START to MODULES_VADDR Date: Sun, 5 May 2024 19:06:14 +0300 Message-ID: <20240505160628.2323363-3-rppt@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240505160628.2323363-1-rppt@kernel.org> References: <20240505160628.2323363-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240505_090715_660995_09E6123A X-CRM114-Status: GOOD ( 13.74 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" and MODULE_END to MODULES_END to match other architectures that define custom address space for modules. Signed-off-by: Mike Rapoport (IBM) --- arch/mips/include/asm/pgtable-64.h | 4 ++-- arch/mips/kernel/module.c | 4 ++-- arch/mips/mm/fault.c | 4 ++-- 3 files changed, 6 insertions(+), 6 deletions(-) diff --git a/arch/mips/include/asm/pgtable-64.h b/arch/mips/include/asm/pgtable-64.h index 20ca48c1b606..c0109aff223b 100644 --- a/arch/mips/include/asm/pgtable-64.h +++ b/arch/mips/include/asm/pgtable-64.h @@ -147,8 +147,8 @@ #if defined(CONFIG_MODULES) && defined(KBUILD_64BIT_SYM32) && \ VMALLOC_START != CKSSEG /* Load modules into 32bit-compatible segment. 
*/ -#define MODULE_START CKSSEG -#define MODULE_END (FIXADDR_START-2*PAGE_SIZE) +#define MODULES_VADDR CKSSEG +#define MODULES_END (FIXADDR_START-2*PAGE_SIZE) #endif #define pte_ERROR(e) \ diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c index 7b2fbaa9cac5..9a6c96014904 100644 --- a/arch/mips/kernel/module.c +++ b/arch/mips/kernel/module.c @@ -31,10 +31,10 @@ struct mips_hi16 { static LIST_HEAD(dbe_list); static DEFINE_SPINLOCK(dbe_lock); -#ifdef MODULE_START +#ifdef MODULES_VADDR void *module_alloc(unsigned long size) { - return __vmalloc_node_range(size, 1, MODULE_START, MODULE_END, + return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE, __builtin_return_address(0)); } diff --git a/arch/mips/mm/fault.c b/arch/mips/mm/fault.c index aaa9a242ebba..37fedeaca2e9 100644 --- a/arch/mips/mm/fault.c +++ b/arch/mips/mm/fault.c @@ -83,8 +83,8 @@ static void __do_page_fault(struct pt_regs *regs, unsigned long write, if (unlikely(address >= VMALLOC_START && address <= VMALLOC_END)) goto VMALLOC_FAULT_TARGET; -#ifdef MODULE_START - if (unlikely(address >= MODULE_START && address < MODULE_END)) +#ifdef MODULES_VADDR + if (unlikely(address >= MODULES_VADDR && address < MODULES_END)) goto VMALLOC_FAULT_TARGET; #endif From patchwork Sun May 5 16:06:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 13654535 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id C1C62C4345F for ; Sun, 5 May 2024 16:07:45 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=fZMp1unhlW6IdZ699IKTU3QkCO7RZP5sm1+wfwGLwis=; b=t2O8QTYv1pmewT EAI7QWUq78DRM7w4B2Tv1n/6mK3h/4/zm06l2l4MTU3Xw2lHf9n+2VE/MgJ7ZFIOFJCvH2pYpkeM9 Tn0wV2eG5IK4/QiLqOb7v790Bm/7ZO8LQjH8skZT/2CzjnTymMWYAHQOdusGApNSNO6dfQbpzW97v V7mVxCfdFnPw+w9j12GfAhaqbgG4xrEp7NnqyzAbsOEdBL1ymRGnxKN2yjoXosEfWvV2LvDm7af9f 31+M94ohwo8JBP23ilQtTXVeCb4zNVGnISCp1IiqBOl9LKPOnLmyMkbUKO/PEG/fCSdK36A80VFDo vyr+aGOUbk3PIJF/aifg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1s3eOi-00000004hAg-2eEv; Sun, 05 May 2024 16:07:33 +0000 Received: from sin.source.kernel.org ([145.40.73.55]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1s3eOc-00000004h3h-2MOo; Sun, 05 May 2024 16:07:29 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sin.source.kernel.org (Postfix) with ESMTP id 85129CE0AB4; Sun, 5 May 2024 16:07:24 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 88E70C113CC; Sun, 5 May 2024 16:07:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1714925243; bh=Q5aCDGAMhe9LWjmBOe0KfqvsLthVcLCeaQ3ItO7ykso=; 
h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=P4CuvetVGByVCBaaVxVGGOtA74RDHwFQjjwfPUauzLmoD3wdUW1jd7WI/FNUi9YXr ecSyHliyfMJi2cUpp51ECWceN7DeyR/FktquNEV76UWeLDSX8u5ntplCqdrTZGIyar D+P1a3P3j90UWgY0yEm1lcThQZQ68xgGlJ5P9pQsu+MdqtVgnD2sIkvJCtbHuzGhJY LIv/xEQp3DYvT0pZyyVNtIOhtztKpIuOB81GgZH8ewWLqk/Czr7GNGIJ8tGHZNrj1W kaDVaYV4Oj0AIJ4U/WkvPQlCnuijDIIbENO5+lgHxoBaL30p+JcIdJmJV59oRMq0WW 1ZePgAmim3Bkg== From: Mike Rapoport To: linux-kernel@vger.kernel.org Cc: Alexandre Ghiti , Andrew Morton , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , Catalin Marinas , Christophe Leroy , "David S. Miller" , Dinh Nguyen , Donald Dutile , Eric Chanudet , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Liviu Dudau , Luis Chamberlain , Mark Rutland , Masami Hiramatsu , Michael Ellerman , Mike Rapoport , Nadav Amit , Palmer Dabbelt , Peter Zijlstra , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?q?=C3=A9?= , Rick Edgecombe , Russell King , Sam Ravnborg , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH RESEND v8 03/16] nios2: define virtual address space for modules Date: Sun, 5 May 2024 19:06:15 +0300 Message-ID: <20240505160628.2323363-4-rppt@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240505160628.2323363-1-rppt@kernel.org> References: <20240505160628.2323363-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240505_090727_049742_957E8529 X-CRM114-Status: GOOD ( 13.10 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" nios2 uses kmalloc() to implement module_alloc() because CALL26/PCREL26 cannot reach all of vmalloc address space. Define module space as 32MiB below the kernel base and switch nios2 to use vmalloc for module allocations. 
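For illustration (not part of the patch): with the 0xc0000000 kernel region base cited in the comment this patch removes, the new defines work out to the following hypothetical values:

/*
 * Worked example, assuming CONFIG_NIOS2_KERNEL_REGION_BASE == 0xc0000000
 * and SZ_32M == 0x02000000 (both values are assumptions for illustration).
 */
#define VMALLOC_END	(0xc0000000 - 0x02000000 - 1)	/* 0xbdffffff */
#define MODULES_VADDR	(0xc0000000 - 0x02000000)	/* 0xbe000000 */
#define MODULES_END	(0xc0000000 - 1)		/* 0xbfffffff */

That places modules in a 32MiB window directly below the kernel, close enough for CALL26/PCREL26 relocations against kernel code, while the vmalloc area gives up the same 32MiB.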
Suggested-by: Thomas Gleixner Acked-by: Dinh Nguyen Acked-by: Song Liu Signed-off-by: Mike Rapoport (IBM) --- arch/nios2/include/asm/pgtable.h | 5 ++++- arch/nios2/kernel/module.c | 19 ++++--------------- 2 files changed, 8 insertions(+), 16 deletions(-) diff --git a/arch/nios2/include/asm/pgtable.h b/arch/nios2/include/asm/pgtable.h index d052dfcbe8d3..eab87c6beacb 100644 --- a/arch/nios2/include/asm/pgtable.h +++ b/arch/nios2/include/asm/pgtable.h @@ -25,7 +25,10 @@ #include #define VMALLOC_START CONFIG_NIOS2_KERNEL_MMU_REGION_BASE -#define VMALLOC_END (CONFIG_NIOS2_KERNEL_REGION_BASE - 1) +#define VMALLOC_END (CONFIG_NIOS2_KERNEL_REGION_BASE - SZ_32M - 1) + +#define MODULES_VADDR (CONFIG_NIOS2_KERNEL_REGION_BASE - SZ_32M) +#define MODULES_END (CONFIG_NIOS2_KERNEL_REGION_BASE - 1) struct mm_struct; diff --git a/arch/nios2/kernel/module.c b/arch/nios2/kernel/module.c index 76e0a42d6e36..9c97b7513853 100644 --- a/arch/nios2/kernel/module.c +++ b/arch/nios2/kernel/module.c @@ -21,23 +21,12 @@ #include -/* - * Modules should NOT be allocated with kmalloc for (obvious) reasons. - * But we do it for now to avoid relocation issues. CALL26/PCREL26 cannot reach - * from 0x80000000 (vmalloc area) to 0xc00000000 (kernel) (kmalloc returns - * addresses in 0xc0000000) - */ void *module_alloc(unsigned long size) { - if (size == 0) - return NULL; - return kmalloc(size, GFP_KERNEL); -} - -/* Free memory returned from module_alloc */ -void module_memfree(void *module_region) -{ - kfree(module_region); + return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, + GFP_KERNEL, PAGE_KERNEL_EXEC, + VM_FLUSH_RESET_PERMS, NUMA_NO_NODE, + __builtin_return_address(0)); } int apply_relocate_add(Elf32_Shdr *sechdrs, const char *strtab, From patchwork Sun May 5 16:06:16 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 13654536 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 98789C4345F for ; Sun, 5 May 2024 16:07:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=6FaVSGPwDLmsh0+kW1DQsWSWdzhA138gvQsuPBI7UGY=; b=FQsasx9dY0Jvab X/7BDCeMg+N25SWisAhkR5Ra+TTpqTOY1Z1on9jJarbPr8zzVB61mbg9BtEbsnv5qkEsovqiPf4h9 YRnc56wHuoXWUTW1JTZcaRtTcCxADYWTW2VQWNNoYBZJLU+cKz9P8QkRRkHhtl/Kow2IIgzGblSHn 991oJZYe66KspuRSUx9FJ4jH7ee4IzDllPUZucpLczWs8v+WurZooo7iSr6cAGzAdonnQNbaq0LxA Bv8OYs9oSX67OFImNpwsSBOD2ocHuvU8PcQ/RiVupCtwQPdGP0ggXExHkS6DiHMrI+FeI0dVomi46 GXJcEN6/Buc5M+CYdEtw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1s3eOx-00000004hOm-0tMU; Sun, 05 May 2024 16:07:47 +0000 Received: from sin.source.kernel.org ([2604:1380:40e1:4800::1]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1s3eOo-00000004hEG-1YOj; Sun, 05 May 2024 
16:07:44 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sin.source.kernel.org (Postfix) with ESMTP id 50247CE0AB6; Sun, 5 May 2024 16:07:36 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 556B2C4DDE6; Sun, 5 May 2024 16:07:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1714925255; bh=S2WO9rrdnIu5PceUjVXVwYLrUCttkXy1oVefr4+OvH4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=mnzdUSuFwZFr/+Ip2l7zLEpPo7HXw0QyaxcHIUqM8tYDbmvTYVi7whZRj6QYhAI5c u+OLcnSpi7V7V0jzzVPB9rWa+rJA3dD3nlH+8wp77vvEuH9IEFWLpzyrsWjJPj/jKs BDJmr/0582lWHgGlperk2Rs+BsENvYpmK5JvRFZoFvCPINOMHITRzkh8LcMznIdb8y hOOVaEDM0UZeQGuU9MlrUVaRg389IJ8S7gihJymsFxUbKAYtkt7Avfm7rb5zhV1CaN R6G5Pk6nhS/74iVs2XgCvdWj3a1mXB9GimpVhAzA7uDys6bqCeiKfazaVLcWk7io+d ZIn5pR9SP9ZcA== From: Mike Rapoport To: linux-kernel@vger.kernel.org Cc: Alexandre Ghiti , Andrew Morton , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , Catalin Marinas , Christophe Leroy , "David S. Miller" , Dinh Nguyen , Donald Dutile , Eric Chanudet , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Liviu Dudau , Luis Chamberlain , Mark Rutland , Masami Hiramatsu , Michael Ellerman , Mike Rapoport , Nadav Amit , Palmer Dabbelt , Peter Zijlstra , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?q?=C3=A9?= , Rick Edgecombe , Russell King , Sam Ravnborg , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH RESEND v8 04/16] sparc: simplify module_alloc() Date: Sun, 5 May 2024 19:06:16 +0300 Message-ID: <20240505160628.2323363-5-rppt@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240505160628.2323363-1-rppt@kernel.org> References: <20240505160628.2323363-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240505_090739_170725_FE23F8D7 X-CRM114-Status: GOOD ( 12.19 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" Define MODULES_VADDR and MODULES_END as VMALLOC_START and VMALLOC_END for 32-bit and reduce module_alloc() to __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, ...) as with the new defines the allocations becomes identical for both 32 and 64 bits. 
While on it, drop unused include of Suggested-by: Sam Ravnborg Signed-off-by: Mike Rapoport (IBM) Reviewed-by: Sam Ravnborg --- arch/sparc/include/asm/pgtable_32.h | 2 ++ arch/sparc/kernel/module.c | 25 +------------------------ 2 files changed, 3 insertions(+), 24 deletions(-) diff --git a/arch/sparc/include/asm/pgtable_32.h b/arch/sparc/include/asm/pgtable_32.h index 9e85d57ac3f2..62bcafe38b1f 100644 --- a/arch/sparc/include/asm/pgtable_32.h +++ b/arch/sparc/include/asm/pgtable_32.h @@ -432,6 +432,8 @@ static inline int io_remap_pfn_range(struct vm_area_struct *vma, #define VMALLOC_START _AC(0xfe600000,UL) #define VMALLOC_END _AC(0xffc00000,UL) +#define MODULES_VADDR VMALLOC_START +#define MODULES_END VMALLOC_END /* We provide our own get_unmapped_area to cope with VA holes for userland */ #define HAVE_ARCH_UNMAPPED_AREA diff --git a/arch/sparc/kernel/module.c b/arch/sparc/kernel/module.c index 66c45a2764bc..d37adb2a0b54 100644 --- a/arch/sparc/kernel/module.c +++ b/arch/sparc/kernel/module.c @@ -21,35 +21,12 @@ #include "entry.h" -#ifdef CONFIG_SPARC64 - -#include - -static void *module_map(unsigned long size) +void *module_alloc(unsigned long size) { - if (PAGE_ALIGN(size) > MODULES_LEN) - return NULL; return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE, __builtin_return_address(0)); } -#else -static void *module_map(unsigned long size) -{ - return vmalloc(size); -} -#endif /* CONFIG_SPARC64 */ - -void *module_alloc(unsigned long size) -{ - void *ret; - - ret = module_map(size); - if (ret) - memset(ret, 0, size); - - return ret; -} /* Make generic code ignore STT_REGISTER dummy undefined symbols. */ int module_frob_arch_sections(Elf_Ehdr *hdr, From patchwork Sun May 5 16:06:17 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 13654537 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id A8358C25B10 for ; Sun, 5 May 2024 16:08:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=MCcG9/GARd4Jg/eqJAC6EzkdutrZ3JENZ4GAjXV8ie0=; b=wLxO734AgRas+w Y3dz2MaX28txIz0yiHV5i/scJ+zdUsksIt6rOmify3JsKoUESLvpxzpJq1OzE41GreP9/Vihe3mVc qJJ70TWuGX2oRDFUvk3KMwbWjCSBbCLGnf148XTUw1GH9oGWdMsgDRpFjxr9ibUATIshMsyH3AaD3 dPhUsRDdL/hSrqD9Zk3klViV6wOn55lzA2oC1R3GrM11nH92qhqIwEKfeWBVTfXa/H7PX4FNJvUPQ Ac6pUP3EmQNPQMa4tQ5WGRNbdX7kmE0B5rud87SZixye0iMet3+wZg5DsRUNTGYTdNtKmyot8BXu+ tn2Blsww6UL7JjIb6I2w==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1s3eP9-00000004hb3-31v6; Sun, 05 May 2024 16:07:59 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1s3eOy-00000004hPu-2hhG; Sun, 05 May 2024 16:07:56 +0000 Received: 
from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id B1F8F60CBB; Sun, 5 May 2024 16:07:47 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 21FD0C4DDE7; Sun, 5 May 2024 16:07:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1714925267; bh=vbQmVIViLlnHnV/HmsoDNHPWB+6N7x9rENWps4lsXos=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=aAa5MSgJQJM45G3HokYLiiJiOZDEeE2ajV8iw/yf1DX/fQ3Kmw6A3USHH0Ot5m3Op TVZYg1AWKghmbLAQnkVMS+GBWsm8aShhkYMk2oUWFTCwZ221sTIuktvc3sHgn5Di67 0I35GiFLFgXviW5AdRB+u7RdDHkgSpW6HhkZwMFM/gPwvOTBBTyPfyI73Civ8D9YvS rNWePZxZHMorQMyuKc+VCAWVVq3t2idHccNWlW0YiN3gXtGTvkETcE/v5FdYXJ2KJf LyxaKhxqABmwc9SmSM4tJ4eoIiC7UL7HsecM0w4e+PWxQYptDGBGAdqaW44NkUBYgt DP/ubOUkGv4RQ== From: Mike Rapoport To: linux-kernel@vger.kernel.org Cc: Alexandre Ghiti , Andrew Morton , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , Catalin Marinas , Christophe Leroy , "David S. Miller" , Dinh Nguyen , Donald Dutile , Eric Chanudet , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Liviu Dudau , Luis Chamberlain , Mark Rutland , Masami Hiramatsu , Michael Ellerman , Mike Rapoport , Nadav Amit , Palmer Dabbelt , Peter Zijlstra , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?q?=C3=A9?= , Rick Edgecombe , Russell King , Sam Ravnborg , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH RESEND v8 05/16] module: make module_memory_{alloc,free} more self-contained Date: Sun, 5 May 2024 19:06:17 +0300 Message-ID: <20240505160628.2323363-6-rppt@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240505160628.2323363-1-rppt@kernel.org> References: <20240505160628.2323363-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240505_090748_933368_03B4CA27 X-CRM114-Status: GOOD ( 18.64 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" Move the logic related to the memory allocation and freeing into module_memory_alloc() and module_memory_free(). 
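With this change the allocation loop in move_module() reduces to the sketch below (simplified; the surrounding for_each_mod_mem_type() iterator is an assumption, since the loop header is not visible in the hunk):

	for_each_mod_mem_type(type) {
		if (!mod->mem[type].size) {
			mod->mem[type].base = NULL;
			continue;
		}
		/* page-aligns the size, allocates, zeroes, sets mod->mem[type].base */
		ret = module_memory_alloc(mod, type);
		if (ret) {
			t = type;
			goto out_enomem;
		}
	}

module_memory_alloc() now owns the PAGE_ALIGN() of the size, the vmalloc()/module_alloc() choice, the kmemleak annotation and the zeroing, so the caller only checks the return value, and module_memory_free() symmetrically takes the module and type instead of a bare pointer.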
Signed-off-by: Mike Rapoport (IBM) Reviewed-by: Philippe Mathieu-Daudé Reviewed-by: Masami Hiramatsu (Google) --- kernel/module/main.c | 64 +++++++++++++++++++++++++++----------------- 1 file changed, 39 insertions(+), 25 deletions(-) diff --git a/kernel/module/main.c b/kernel/module/main.c index e1e8a7a9d6c1..5b82b069e0d3 100644 --- a/kernel/module/main.c +++ b/kernel/module/main.c @@ -1203,15 +1203,44 @@ static bool mod_mem_use_vmalloc(enum mod_mem_type type) mod_mem_type_is_core_data(type); } -static void *module_memory_alloc(unsigned int size, enum mod_mem_type type) +static int module_memory_alloc(struct module *mod, enum mod_mem_type type) { + unsigned int size = PAGE_ALIGN(mod->mem[type].size); + void *ptr; + + mod->mem[type].size = size; + if (mod_mem_use_vmalloc(type)) - return vzalloc(size); - return module_alloc(size); + ptr = vmalloc(size); + else + ptr = module_alloc(size); + + if (!ptr) + return -ENOMEM; + + /* + * The pointer to these blocks of memory are stored on the module + * structure and we keep that around so long as the module is + * around. We only free that memory when we unload the module. + * Just mark them as not being a leak then. The .init* ELF + * sections *do* get freed after boot so we *could* treat them + * slightly differently with kmemleak_ignore() and only grey + * them out as they work as typical memory allocations which + * *do* eventually get freed, but let's just keep things simple + * and avoid *any* false positives. + */ + kmemleak_not_leak(ptr); + + memset(ptr, 0, size); + mod->mem[type].base = ptr; + + return 0; } -static void module_memory_free(void *ptr, enum mod_mem_type type) +static void module_memory_free(struct module *mod, enum mod_mem_type type) { + void *ptr = mod->mem[type].base; + if (mod_mem_use_vmalloc(type)) vfree(ptr); else @@ -1229,12 +1258,12 @@ static void free_mod_mem(struct module *mod) /* Free lock-classes; relies on the preceding sync_rcu(). */ lockdep_free_key_range(mod_mem->base, mod_mem->size); if (mod_mem->size) - module_memory_free(mod_mem->base, type); + module_memory_free(mod, type); } /* MOD_DATA hosts mod, so free it at last */ lockdep_free_key_range(mod->mem[MOD_DATA].base, mod->mem[MOD_DATA].size); - module_memory_free(mod->mem[MOD_DATA].base, MOD_DATA); + module_memory_free(mod, MOD_DATA); } /* Free a module, remove from lists, etc. */ @@ -2225,7 +2254,6 @@ static int find_module_sections(struct module *mod, struct load_info *info) static int move_module(struct module *mod, struct load_info *info) { int i; - void *ptr; enum mod_mem_type t = 0; int ret = -ENOMEM; @@ -2234,26 +2262,12 @@ static int move_module(struct module *mod, struct load_info *info) mod->mem[type].base = NULL; continue; } - mod->mem[type].size = PAGE_ALIGN(mod->mem[type].size); - ptr = module_memory_alloc(mod->mem[type].size, type); - /* - * The pointer to these blocks of memory are stored on the module - * structure and we keep that around so long as the module is - * around. We only free that memory when we unload the module. - * Just mark them as not being a leak then. The .init* ELF - * sections *do* get freed after boot so we *could* treat them - * slightly differently with kmemleak_ignore() and only grey - * them out as they work as typical memory allocations which - * *do* eventually get freed, but let's just keep things simple - * and avoid *any* false positives. 
- */ - kmemleak_not_leak(ptr); - if (!ptr) { + + ret = module_memory_alloc(mod, type); + if (ret) { t = type; goto out_enomem; } - memset(ptr, 0, mod->mem[type].size); - mod->mem[type].base = ptr; } /* Transfer each section which specifies SHF_ALLOC */ @@ -2296,7 +2310,7 @@ static int move_module(struct module *mod, struct load_info *info) return 0; out_enomem: for (t--; t >= 0; t--) - module_memory_free(mod->mem[t].base, t); + module_memory_free(mod, t); return ret; } From patchwork Sun May 5 16:06:18 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 13654539 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 5A028C4345F for ; Sun, 5 May 2024 16:08:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=Haq91K6agbqznxxYD6G5rS+T28FmKfBjIB+IXrcawz0=; b=K+3egHdECtF9LY L+Zbvo6pxoPBP4TRl70udMKTLn99A4Vvw9En02CIZ1rqSDOwyf06NXuz7ih2IOIFYwUJZ+wuO2sp9 eD2D8RG0Pf1p6ehMaDcAOXlL7rknM2NKPfObYzYWOsYotR7o2ue6ZavZC6tMDSGJlxbjh/AECkp+R po7i14BYUq/EMXPz7XlScvprEDMSPXJpvC04oFijbdKD2I3XBNI1hQmONeWS4hNDTwfkj23+5LC3F iwGb0pv9MXMCKX6aUsMw4TmzcmOzjiUmgyHp1oxyh0MAX+lVKBiNFXP/yZDHkL4oVXrGy3RsrFqVL v/JFAhs4rOUEoK4YqX4g==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1s3ePg-00000004i40-31Sk; Sun, 05 May 2024 16:08:32 +0000 Received: from sin.source.kernel.org ([2604:1380:40e1:4800::1]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1s3ePC-00000004hcB-0z7q; Sun, 05 May 2024 16:08:21 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sin.source.kernel.org (Postfix) with ESMTP id E4AB7CE0AB6; Sun, 5 May 2024 16:07:59 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id E49CEC113CC; Sun, 5 May 2024 16:07:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1714925279; bh=O80Ih29W+vz+LMoOHSIificIIW/tpeoH6wwn4K1HsxQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=j/RMbwKE6mjFyW0y3wIziuIjzeC+9ZmAt144YV04fnGNWSBiGjLBMZHtE44jgK0W3 dh0wbz6RNWLysGAfKIn7jBeKdkPe6YXSXZZHn95U/tt6k+FCrHbSWOqI9AKirDXzF6 qkSy7bhY/SQn5hIed8cS3y1WKCNVA8iRzLyM1gd0GveDq8EqL+nCo9lV9g4P+5grw4 H//VKlmNGTrC/z40q6br7YMMnYpHbsCMOheUN6HLnhR9Uw8HE3CQE/Db4kWj6gQV48 SAK9eRMuDschdeyT3MhqzcctCW0fEadnMaMouCgVEGMFFhKEaY3HnsmatOyc4K2IXX UU49ZWoeL17gA== From: Mike Rapoport To: linux-kernel@vger.kernel.org Cc: Alexandre Ghiti , Andrew Morton , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , Catalin Marinas , Christophe Leroy , "David S. 
Miller" , Dinh Nguyen , Donald Dutile , Eric Chanudet , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Liviu Dudau , Luis Chamberlain , Mark Rutland , Masami Hiramatsu , Michael Ellerman , Mike Rapoport , Nadav Amit , Palmer Dabbelt , Peter Zijlstra , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?q?=C3=A9?= , Rick Edgecombe , Russell King , Sam Ravnborg , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH RESEND v8 06/16] mm: introduce execmem_alloc() and execmem_free() Date: Sun, 5 May 2024 19:06:18 +0300 Message-ID: <20240505160628.2323363-7-rppt@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240505160628.2323363-1-rppt@kernel.org> References: <20240505160628.2323363-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240505_090802_955272_86A2FB86 X-CRM114-Status: GOOD ( 30.19 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" module_alloc() is used everywhere as a mean to allocate memory for code. Beside being semantically wrong, this unnecessarily ties all subsystems that need to allocate code, such as ftrace, kprobes and BPF to modules and puts the burden of code allocation to the modules code. Several architectures override module_alloc() because of various constraints where the executable memory can be located and this causes additional obstacles for improvements of code allocation. Start splitting code allocation from modules by introducing execmem_alloc() and execmem_free() APIs. Initially, execmem_alloc() is a wrapper for module_alloc() and execmem_free() is a replacement of module_memfree() to allow updating all call sites to use the new APIs. Since architectures define different restrictions on placement, permissions, alignment and other parameters for memory that can be used by different subsystems that allocate executable memory, execmem_alloc() takes a type argument, that will be used to identify the calling subsystem and to allow architectures define parameters for ranges suitable for that subsystem. No functional changes. 
Signed-off-by: Mike Rapoport (IBM) Acked-by: Masami Hiramatsu (Google) Acked-by: Song Liu --- arch/powerpc/kernel/kprobes.c | 6 ++-- arch/s390/kernel/ftrace.c | 4 +-- arch/s390/kernel/kprobes.c | 4 +-- arch/s390/kernel/module.c | 5 +-- arch/sparc/net/bpf_jit_comp_32.c | 8 ++--- arch/x86/kernel/ftrace.c | 6 ++-- arch/x86/kernel/kprobes/core.c | 4 +-- include/linux/execmem.h | 57 ++++++++++++++++++++++++++++++++ include/linux/moduleloader.h | 3 -- kernel/bpf/core.c | 6 ++-- kernel/kprobes.c | 8 ++--- kernel/module/Kconfig | 1 + kernel/module/main.c | 25 +++++--------- mm/Kconfig | 3 ++ mm/Makefile | 1 + mm/execmem.c | 32 ++++++++++++++++++ 16 files changed, 128 insertions(+), 45 deletions(-) create mode 100644 include/linux/execmem.h create mode 100644 mm/execmem.c diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c index bbca90a5e2ec..9fcd01bb2ce6 100644 --- a/arch/powerpc/kernel/kprobes.c +++ b/arch/powerpc/kernel/kprobes.c @@ -19,8 +19,8 @@ #include #include #include -#include #include +#include #include #include #include @@ -130,7 +130,7 @@ void *alloc_insn_page(void) { void *page; - page = module_alloc(PAGE_SIZE); + page = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE); if (!page) return NULL; @@ -142,7 +142,7 @@ void *alloc_insn_page(void) } return page; error: - module_memfree(page); + execmem_free(page); return NULL; } diff --git a/arch/s390/kernel/ftrace.c b/arch/s390/kernel/ftrace.c index c46381ea04ec..798249ef5646 100644 --- a/arch/s390/kernel/ftrace.c +++ b/arch/s390/kernel/ftrace.c @@ -7,13 +7,13 @@ * Author(s): Martin Schwidefsky */ -#include #include #include #include #include #include #include +#include #include #include #include @@ -220,7 +220,7 @@ static int __init ftrace_plt_init(void) { const char *start, *end; - ftrace_plt = module_alloc(PAGE_SIZE); + ftrace_plt = execmem_alloc(EXECMEM_FTRACE, PAGE_SIZE); if (!ftrace_plt) panic("cannot allocate ftrace plt\n"); diff --git a/arch/s390/kernel/kprobes.c b/arch/s390/kernel/kprobes.c index f0cf20d4b3c5..3c1b1be744de 100644 --- a/arch/s390/kernel/kprobes.c +++ b/arch/s390/kernel/kprobes.c @@ -9,7 +9,6 @@ #define pr_fmt(fmt) "kprobes: " fmt -#include #include #include #include @@ -21,6 +20,7 @@ #include #include #include +#include #include #include #include @@ -38,7 +38,7 @@ void *alloc_insn_page(void) { void *page; - page = module_alloc(PAGE_SIZE); + page = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE); if (!page) return NULL; set_memory_rox((unsigned long)page, 1); diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c index 42215f9404af..ac97a905e8cd 100644 --- a/arch/s390/kernel/module.c +++ b/arch/s390/kernel/module.c @@ -21,6 +21,7 @@ #include #include #include +#include #include #include #include @@ -76,7 +77,7 @@ void *module_alloc(unsigned long size) #ifdef CONFIG_FUNCTION_TRACER void module_arch_cleanup(struct module *mod) { - module_memfree(mod->arch.trampolines_start); + execmem_free(mod->arch.trampolines_start); } #endif @@ -510,7 +511,7 @@ static int module_alloc_ftrace_hotpatch_trampolines(struct module *me, size = FTRACE_HOTPATCH_TRAMPOLINES_SIZE(s->sh_size); numpages = DIV_ROUND_UP(size, PAGE_SIZE); - start = module_alloc(numpages * PAGE_SIZE); + start = execmem_alloc(EXECMEM_FTRACE, numpages * PAGE_SIZE); if (!start) return -ENOMEM; set_memory_rox((unsigned long)start, numpages); diff --git a/arch/sparc/net/bpf_jit_comp_32.c b/arch/sparc/net/bpf_jit_comp_32.c index da2df1e84ed4..bda2dbd3f4c5 100644 --- a/arch/sparc/net/bpf_jit_comp_32.c +++ b/arch/sparc/net/bpf_jit_comp_32.c @@ -1,10 
+1,10 @@ // SPDX-License-Identifier: GPL-2.0 -#include #include #include #include #include #include +#include #include #include @@ -713,7 +713,7 @@ cond_branch: f_offset = addrs[i + filter[i].jf]; if (unlikely(proglen + ilen > oldproglen)) { pr_err("bpb_jit_compile fatal error\n"); kfree(addrs); - module_memfree(image); + execmem_free(image); return; } memcpy(image + proglen, temp, ilen); @@ -736,7 +736,7 @@ cond_branch: f_offset = addrs[i + filter[i].jf]; break; } if (proglen == oldproglen) { - image = module_alloc(proglen); + image = execmem_alloc(EXECMEM_BPF, proglen); if (!image) goto out; } @@ -758,7 +758,7 @@ cond_branch: f_offset = addrs[i + filter[i].jf]; void bpf_jit_free(struct bpf_prog *fp) { if (fp->jited) - module_memfree(fp->bpf_func); + execmem_free(fp->bpf_func); bpf_prog_unlock_free(fp); } diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c index 70139d9d2e01..c8ddb7abda7c 100644 --- a/arch/x86/kernel/ftrace.c +++ b/arch/x86/kernel/ftrace.c @@ -25,6 +25,7 @@ #include #include #include +#include #include @@ -261,15 +262,14 @@ void arch_ftrace_update_code(int command) #ifdef CONFIG_X86_64 #ifdef CONFIG_MODULES -#include /* Module allocation simplifies allocating memory for code */ static inline void *alloc_tramp(unsigned long size) { - return module_alloc(size); + return execmem_alloc(EXECMEM_FTRACE, size); } static inline void tramp_free(void *tramp) { - module_memfree(tramp); + execmem_free(tramp); } #else /* Trampolines can only be created if modules are supported */ diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c index d0e49bd7c6f3..72e6a45e7ec2 100644 --- a/arch/x86/kernel/kprobes/core.c +++ b/arch/x86/kernel/kprobes/core.c @@ -40,12 +40,12 @@ #include #include #include -#include #include #include #include #include #include +#include #include #include @@ -495,7 +495,7 @@ void *alloc_insn_page(void) { void *page; - page = module_alloc(PAGE_SIZE); + page = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE); if (!page) return NULL; diff --git a/include/linux/execmem.h b/include/linux/execmem.h new file mode 100644 index 000000000000..8eebc8ef66e7 --- /dev/null +++ b/include/linux/execmem.h @@ -0,0 +1,57 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _LINUX_EXECMEM_ALLOC_H +#define _LINUX_EXECMEM_ALLOC_H + +#include +#include + +/** + * enum execmem_type - types of executable memory ranges + * + * There are several subsystems that allocate executable memory. + * Architectures define different restrictions on placement, + * permissions, alignment and other parameters for memory that can be used + * by these subsystems. + * Types in this enum identify subsystems that allocate executable memory + * and let architectures define parameters for ranges suitable for + * allocations by each subsystem. + * + * @EXECMEM_DEFAULT: default parameters that would be used for types that + * are not explicitly defined. + * @EXECMEM_MODULE_TEXT: parameters for module text sections + * @EXECMEM_KPROBES: parameters for kprobes + * @EXECMEM_FTRACE: parameters for ftrace + * @EXECMEM_BPF: parameters for BPF + * @EXECMEM_TYPE_MAX: + */ +enum execmem_type { + EXECMEM_DEFAULT, + EXECMEM_MODULE_TEXT = EXECMEM_DEFAULT, + EXECMEM_KPROBES, + EXECMEM_FTRACE, + EXECMEM_BPF, + EXECMEM_TYPE_MAX, +}; + +/** + * execmem_alloc - allocate executable memory + * @type: type of the allocation + * @size: how many bytes of memory are required + * + * Allocates memory that will contain executable code, either generated or + * loaded from kernel modules. 
+ * + * The memory will have protections defined by architecture for executable + * region of the @type. + * + * Return: a pointer to the allocated memory or %NULL + */ +void *execmem_alloc(enum execmem_type type, size_t size); + +/** + * execmem_free - free executable memory + * @ptr: pointer to the memory that should be freed + */ +void execmem_free(void *ptr); + +#endif /* _LINUX_EXECMEM_ALLOC_H */ diff --git a/include/linux/moduleloader.h b/include/linux/moduleloader.h index 89b1e0ed9811..a3b8caee9405 100644 --- a/include/linux/moduleloader.h +++ b/include/linux/moduleloader.h @@ -29,9 +29,6 @@ unsigned int arch_mod_section_prepend(struct module *mod, unsigned int section); sections. Returns NULL on failure. */ void *module_alloc(unsigned long size); -/* Free memory returned from module_alloc. */ -void module_memfree(void *module_region); - /* Determines if the section name is an init section (that is only used during * module loading). */ diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 696bc55de8e8..75a54024e2f4 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -22,7 +22,6 @@ #include #include #include -#include #include #include #include @@ -37,6 +36,7 @@ #include #include #include +#include #include #include @@ -1050,12 +1050,12 @@ void bpf_jit_uncharge_modmem(u32 size) void *__weak bpf_jit_alloc_exec(unsigned long size) { - return module_alloc(size); + return execmem_alloc(EXECMEM_BPF, size); } void __weak bpf_jit_free_exec(void *addr) { - module_memfree(addr); + execmem_free(addr); } struct bpf_binary_header * diff --git a/kernel/kprobes.c b/kernel/kprobes.c index 65adc815fc6e..ddd7cdc16edf 100644 --- a/kernel/kprobes.c +++ b/kernel/kprobes.c @@ -26,7 +26,6 @@ #include #include #include -#include #include #include #include @@ -39,6 +38,7 @@ #include #include #include +#include #include #include @@ -113,17 +113,17 @@ enum kprobe_slot_state { void __weak *alloc_insn_page(void) { /* - * Use module_alloc() so this page is within +/- 2GB of where the + * Use execmem_alloc() so this page is within +/- 2GB of where the * kernel image and loaded module images reside. This is required * for most of the architectures. * (e.g. x86-64 needs this to handle the %rip-relative fixups.) */ - return module_alloc(PAGE_SIZE); + return execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE); } static void free_insn_page(void *page) { - module_memfree(page); + execmem_free(page); } struct kprobe_insn_cache kprobe_insn_slots = { diff --git a/kernel/module/Kconfig b/kernel/module/Kconfig index f3e0329337f6..744383c1eed1 100644 --- a/kernel/module/Kconfig +++ b/kernel/module/Kconfig @@ -2,6 +2,7 @@ menuconfig MODULES bool "Enable loadable module support" modules + select EXECMEM help Kernel modules are small pieces of compiled code which can be inserted in the running kernel, rather than being diff --git a/kernel/module/main.c b/kernel/module/main.c index 5b82b069e0d3..d56b7df0cbb6 100644 --- a/kernel/module/main.c +++ b/kernel/module/main.c @@ -57,6 +57,7 @@ #include #include #include +#include #include #include "internal.h" @@ -1179,16 +1180,6 @@ resolve_symbol_wait(struct module *mod, return ksym; } -void __weak module_memfree(void *module_region) -{ - /* - * This memory may be RO, and freeing RO memory in an interrupt is not - * supported by vmalloc. 
- */ - WARN_ON(in_interrupt()); - vfree(module_region); -} - void __weak module_arch_cleanup(struct module *mod) { } @@ -1213,7 +1204,7 @@ static int module_memory_alloc(struct module *mod, enum mod_mem_type type) if (mod_mem_use_vmalloc(type)) ptr = vmalloc(size); else - ptr = module_alloc(size); + ptr = execmem_alloc(EXECMEM_MODULE_TEXT, size); if (!ptr) return -ENOMEM; @@ -1244,7 +1235,7 @@ static void module_memory_free(struct module *mod, enum mod_mem_type type) if (mod_mem_use_vmalloc(type)) vfree(ptr); else - module_memfree(ptr); + execmem_free(ptr); } static void free_mod_mem(struct module *mod) @@ -2496,9 +2487,9 @@ static void do_free_init(struct work_struct *w) llist_for_each_safe(pos, n, list) { initfree = container_of(pos, struct mod_initfree, node); - module_memfree(initfree->init_text); - module_memfree(initfree->init_data); - module_memfree(initfree->init_rodata); + execmem_free(initfree->init_text); + execmem_free(initfree->init_data); + execmem_free(initfree->init_rodata); kfree(initfree); } } @@ -2608,10 +2599,10 @@ static noinline int do_init_module(struct module *mod) * We want to free module_init, but be aware that kallsyms may be * walking this with preempt disabled. In all the failure paths, we * call synchronize_rcu(), but we don't want to slow down the success - * path. module_memfree() cannot be called in an interrupt, so do the + * path. execmem_free() cannot be called in an interrupt, so do the * work and call synchronize_rcu() in a work queue. * - * Note that module_alloc() on most architectures creates W+X page + * Note that execmem_alloc() on most architectures creates W+X page * mappings which won't be cleaned up until do_free_init() runs. Any * code such as mark_rodata_ro() which depends on those mappings to * be cleaned up needs to sync with the queued work by invoking diff --git a/mm/Kconfig b/mm/Kconfig index b1448aa81e15..f08a216d4793 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -1241,6 +1241,9 @@ config LOCK_MM_AND_FIND_VMA config IOMMU_MM_DATA bool +config EXECMEM + bool + source "mm/damon/Kconfig" endmenu diff --git a/mm/Makefile b/mm/Makefile index 4abb40b911ec..001336c91864 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -133,3 +133,4 @@ obj-$(CONFIG_IO_MAPPING) += io-mapping.o obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o obj-$(CONFIG_GENERIC_IOREMAP) += ioremap.o obj-$(CONFIG_SHRINKER_DEBUG) += shrinker_debug.o +obj-$(CONFIG_EXECMEM) += execmem.o diff --git a/mm/execmem.c b/mm/execmem.c new file mode 100644 index 000000000000..480adc69b20d --- /dev/null +++ b/mm/execmem.c @@ -0,0 +1,32 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2002 Richard Henderson + * Copyright (C) 2001 Rusty Russell, 2002, 2010 Rusty Russell IBM. + * Copyright (C) 2023 Luis Chamberlain + * Copyright (C) 2024 Mike Rapoport IBM. + */ + +#include +#include +#include +#include + +static void *__execmem_alloc(size_t size) +{ + return module_alloc(size); +} + +void *execmem_alloc(enum execmem_type type, size_t size) +{ + return __execmem_alloc(size); +} + +void execmem_free(void *ptr) +{ + /* + * This memory may be RO, and freeing RO memory in an interrupt is not + * supported by vmalloc. 
+ */ + WARN_ON(in_interrupt()); + vfree(ptr); +} From patchwork Sun May 5 16:06:19 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 13654538 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 5DBA1C4345F for ; Sun, 5 May 2024 16:08:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=Muc1cqEa4ck2GsMbZ3gVmX9XWje+nv5B3E9xwjlVGSA=; b=rWyPFewcg7roJF so48LUWETVKPjQpf50aUbHeqs2kX7/wW069F0Fm1quscC+bE+MEXwVKIvSfPuAKSx1rblO76ym3Ag 8Xl5wgSs1b2p9wgakuwoyX/eAOCwZyFKKn310I+L+z0C5QVCgw868Ozh3nu85dBCQjs+rwD2l7LRV X7B+zD4rPv1Rw8lMB4m/7UDsH6yLgZHluRSRUmCaKWJs2a7vWqQn4ZN054t0McjDL0Ok7XSAzE98l dii2GA0FR9kC27xYAOBtaQQHl7AbYJAvn7QVAIDNW4yl7BvrmTfDakq2hN483L9/35z3k0+/TnY3E XQwyJekvXbPipRJdH9LA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1s3ePU-00000004hta-1ZRo; Sun, 05 May 2024 16:08:20 +0000 Received: from desiato.infradead.org ([2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1s3ePR-00000004hqD-2vbj; Sun, 05 May 2024 16:08:17 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Transfer-Encoding:MIME-Version :References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=P8adOcY5bgQffQMY+FSoXypp+yh7TEtbKuUsGAS8Fco=; b=NSPamcYd9MGhba5zRy1r6PiuAe +dBQf8Avi8mZQ/NC2oo7EvHWFQiB1omnyLauk68mm8aQ6AKD6P0XzetUjXVNYzBE11daObPM1Im4p 1FeR8R9QDtbEzTRgoKsRFKLabGct35JFJbRHs5NM4Rd4x0Y7p6cXjf3LhNqV9DOKhwZSPz4CwddXO NFzc5b6jM5Qm3tVFEQI9sRqFSHdgplVfegz6POIbC02hz8m/zrbT2+X9XwjD3tE1w0BHiO6J78X2H vzTmopJFzTnztNd93Sw7PoqYYDAa/QvS/CcLPq5JBVL1qbXRdE3ChbXL6E0joOzyUSfnovnY8srYr Fvk1JzwQ==; Received: from dfw.source.kernel.org ([139.178.84.217]) by desiato.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1s3ePN-00000001O5N-0hCB; Sun, 05 May 2024 16:08:15 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id 533C260C90; Sun, 5 May 2024 16:08:11 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id B6E61C4DDE3; Sun, 5 May 2024 16:07:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1714925291; bh=H1Yrs9oS3D7w/IH5Du6/u6WvPsm3z4WLJALiS8Y8DQ8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=qy7fbZ3sR5SpWYo9BfaPnLREtW6tcVlwYjkUThI241bz+XFpPVmWviyMy6XqTm6aQ dORU/UekgwSj27ajsbinG1OwPjL9K+SIsvs4fsUK8b3eFCh2fUpSfqvbtanznko+qy 73Ro0XMWqbMJ9rPGK0W5cArd1g69QQW/8ZPzU5+NVnxrXnjRYbEu6WUmAwrx5eqBvc LJWmIu7MXQJ2RAZ1BmHzdr7cHlO+FJ89P3S6tCmUfcCvgysRPuyItr44uH0fIMQcJK zfjzHgHS0WwDn5Hh0Muj2jWVN1o261zIWg73Iv07eey1MVZIFFfYZnZF8PJhFxxWCc 
vezWRt98tYJRw== From: Mike Rapoport To: linux-kernel@vger.kernel.org Cc: Alexandre Ghiti , Andrew Morton , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , Catalin Marinas , Christophe Leroy , "David S. Miller" , Dinh Nguyen , Donald Dutile , Eric Chanudet , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Liviu Dudau , Luis Chamberlain , Mark Rutland , Masami Hiramatsu , Michael Ellerman , Mike Rapoport , Nadav Amit , Palmer Dabbelt , Peter Zijlstra , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?q?=C3=A9?= , Rick Edgecombe , Russell King , Sam Ravnborg , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH RESEND v8 07/16] mm/execmem, arch: convert simple overrides of module_alloc to execmem Date: Sun, 5 May 2024 19:06:19 +0300 Message-ID: <20240505160628.2323363-8-rppt@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240505160628.2323363-1-rppt@kernel.org> References: <20240505160628.2323363-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240505_170813_991446_0D4C9C55 X-CRM114-Status: GOOD ( 21.51 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" Several architectures override module_alloc() only to define address range for code allocations different than VMALLOC address space. Provide a generic implementation in execmem that uses the parameters for address space ranges, required alignment and page protections provided by architectures. The architectures must fill execmem_info structure and implement execmem_arch_setup() that returns a pointer to that structure. This way the execmem initialization won't be called from every architecture, but rather from a central place, namely a core_initcall() in execmem. The execmem provides execmem_alloc() API that wraps __vmalloc_node_range() with the parameters defined by the architectures. If an architecture does not implement execmem_arch_setup(), execmem_alloc() will fall back to module_alloc(). 
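Concretely, the generic allocator is then expected to apply the architecture-supplied range roughly as in the sketch below (an assumption based on the description above; the helper name is made up and the actual mm/execmem.c change is only partially visible in this excerpt):

static void *execmem_alloc_from_range(struct execmem_range *range, size_t size)
{
	return __vmalloc_node_range(size, range->alignment,
				    range->start, range->end,
				    GFP_KERNEL, range->pgprot,
				    0 /* vm_flags */, NUMA_NO_NODE,
				    __builtin_return_address(0));
}

Architectures that do not provide execmem_arch_setup() keep going through the module_alloc() fallback, as described above.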
Signed-off-by: Mike Rapoport (IBM) Acked-by: Song Liu Reviewed-by: Masami Hiramatsu (Google) --- arch/loongarch/kernel/module.c | 19 ++++++++-- arch/mips/kernel/module.c | 20 ++++++++-- arch/nios2/kernel/module.c | 21 ++++++++--- arch/parisc/kernel/module.c | 24 ++++++++---- arch/riscv/kernel/module.c | 24 ++++++++---- arch/sparc/kernel/module.c | 20 ++++++++-- include/linux/execmem.h | 47 ++++++++++++++++++++++++ mm/execmem.c | 67 ++++++++++++++++++++++++++++++++-- mm/mm_init.c | 2 + 9 files changed, 210 insertions(+), 34 deletions(-) diff --git a/arch/loongarch/kernel/module.c b/arch/loongarch/kernel/module.c index c7d0338d12c1..ca6dd7ea1610 100644 --- a/arch/loongarch/kernel/module.c +++ b/arch/loongarch/kernel/module.c @@ -18,6 +18,7 @@ #include #include #include +#include #include #include #include @@ -490,10 +491,22 @@ int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab, return 0; } -void *module_alloc(unsigned long size) +static struct execmem_info execmem_info __ro_after_init; + +struct execmem_info __init *execmem_arch_setup(void) { - return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, - GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE, __builtin_return_address(0)); + execmem_info = (struct execmem_info){ + .ranges = { + [EXECMEM_DEFAULT] = { + .start = MODULES_VADDR, + .end = MODULES_END, + .pgprot = PAGE_KERNEL, + .alignment = 1, + }, + }, + }; + + return &execmem_info; } static void module_init_ftrace_plt(const Elf_Ehdr *hdr, diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c index 9a6c96014904..59225a3cf918 100644 --- a/arch/mips/kernel/module.c +++ b/arch/mips/kernel/module.c @@ -20,6 +20,7 @@ #include #include #include +#include #include struct mips_hi16 { @@ -32,11 +33,22 @@ static LIST_HEAD(dbe_list); static DEFINE_SPINLOCK(dbe_lock); #ifdef MODULES_VADDR -void *module_alloc(unsigned long size) +static struct execmem_info execmem_info __ro_after_init; + +struct execmem_info __init *execmem_arch_setup(void) { - return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, - GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE, - __builtin_return_address(0)); + execmem_info = (struct execmem_info){ + .ranges = { + [EXECMEM_DEFAULT] = { + .start = MODULES_VADDR, + .end = MODULES_END, + .pgprot = PAGE_KERNEL, + .alignment = 1, + }, + }, + }; + + return &execmem_info; } #endif diff --git a/arch/nios2/kernel/module.c b/arch/nios2/kernel/module.c index 9c97b7513853..0d1ee86631fc 100644 --- a/arch/nios2/kernel/module.c +++ b/arch/nios2/kernel/module.c @@ -18,15 +18,26 @@ #include #include #include +#include #include -void *module_alloc(unsigned long size) +static struct execmem_info execmem_info __ro_after_init; + +struct execmem_info __init *execmem_arch_setup(void) { - return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, - GFP_KERNEL, PAGE_KERNEL_EXEC, - VM_FLUSH_RESET_PERMS, NUMA_NO_NODE, - __builtin_return_address(0)); + execmem_info = (struct execmem_info){ + .ranges = { + [EXECMEM_DEFAULT] = { + .start = MODULES_VADDR, + .end = MODULES_END, + .pgprot = PAGE_KERNEL_EXEC, + .alignment = 1, + }, + }, + }; + + return &execmem_info; } int apply_relocate_add(Elf32_Shdr *sechdrs, const char *strtab, diff --git a/arch/parisc/kernel/module.c b/arch/parisc/kernel/module.c index d214bbe3c2af..bdfa85e10c1b 100644 --- a/arch/parisc/kernel/module.c +++ b/arch/parisc/kernel/module.c @@ -49,6 +49,7 @@ #include #include #include +#include #include #include @@ -173,15 +174,22 @@ static inline int reassemble_22(int as22) ((as22 & 0x0003ff) << 3)); } -void 
*module_alloc(unsigned long size) +static struct execmem_info execmem_info __ro_after_init; + +struct execmem_info __init *execmem_arch_setup(void) { - /* using RWX means less protection for modules, but it's - * easier than trying to map the text, data, init_text and - * init_data correctly */ - return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END, - GFP_KERNEL, - PAGE_KERNEL_RWX, 0, NUMA_NO_NODE, - __builtin_return_address(0)); + execmem_info = (struct execmem_info){ + .ranges = { + [EXECMEM_DEFAULT] = { + .start = VMALLOC_START, + .end = VMALLOC_END, + .pgprot = PAGE_KERNEL_RWX, + .alignment = 1, + }, + }, + }; + + return &execmem_info; } #ifndef CONFIG_64BIT diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c index 5e5a82644451..182904127ba0 100644 --- a/arch/riscv/kernel/module.c +++ b/arch/riscv/kernel/module.c @@ -14,6 +14,7 @@ #include #include #include +#include #include #include @@ -906,13 +907,22 @@ int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab, } #if defined(CONFIG_MMU) && defined(CONFIG_64BIT) -void *module_alloc(unsigned long size) -{ - return __vmalloc_node_range(size, 1, MODULES_VADDR, - MODULES_END, GFP_KERNEL, - PAGE_KERNEL, VM_FLUSH_RESET_PERMS, - NUMA_NO_NODE, - __builtin_return_address(0)); +static struct execmem_info execmem_info __ro_after_init; + +struct execmem_info __init *execmem_arch_setup(void) +{ + execmem_info = (struct execmem_info){ + .ranges = { + [EXECMEM_DEFAULT] = { + .start = MODULES_VADDR, + .end = MODULES_END, + .pgprot = PAGE_KERNEL, + .alignment = 1, + }, + }, + }; + + return &execmem_info; } #endif diff --git a/arch/sparc/kernel/module.c b/arch/sparc/kernel/module.c index d37adb2a0b54..8b7ee45defc3 100644 --- a/arch/sparc/kernel/module.c +++ b/arch/sparc/kernel/module.c @@ -14,6 +14,7 @@ #include #include #include +#include #include #include @@ -21,11 +22,22 @@ #include "entry.h" -void *module_alloc(unsigned long size) +static struct execmem_info execmem_info __ro_after_init; + +struct execmem_info __init *execmem_arch_setup(void) { - return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, - GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE, - __builtin_return_address(0)); + execmem_info = (struct execmem_info){ + .ranges = { + [EXECMEM_DEFAULT] = { + .start = MODULES_VADDR, + .end = MODULES_END, + .pgprot = PAGE_KERNEL, + .alignment = 1, + }, + }, + }; + + return &execmem_info; } /* Make generic code ignore STT_REGISTER dummy undefined symbols. */ diff --git a/include/linux/execmem.h b/include/linux/execmem.h index 8eebc8ef66e7..96fc59258467 100644 --- a/include/linux/execmem.h +++ b/include/linux/execmem.h @@ -33,6 +33,47 @@ enum execmem_type { EXECMEM_TYPE_MAX, }; +/** + * struct execmem_range - definition of an address space suitable for code and + * related data allocations + * @start: address space start + * @end: address space end (inclusive) + * @pgprot: permissions for memory in this address space + * @alignment: alignment required for text allocations + */ +struct execmem_range { + unsigned long start; + unsigned long end; + pgprot_t pgprot; + unsigned int alignment; +}; + +/** + * struct execmem_info - architecture parameters for code allocations + * @ranges: array of parameter sets defining architecture specific + * parameters for executable memory allocations. The ranges that are not + * explicitly initialized by an architecture use parameters defined for + * @EXECMEM_DEFAULT. 
+ */ +struct execmem_info { + struct execmem_range ranges[EXECMEM_TYPE_MAX]; +}; + +/** + * execmem_arch_setup - define parameters for allocations of executable memory + * + * A hook for architectures to define parameters for allocations of + * executable memory. These parameters should be filled into the + * @execmem_info structure. + * + * For architectures that do not implement this method a default set of + * parameters will be used + * + * Return: a structure defining architecture parameters and restrictions + * for allocations of executable memory + */ +struct execmem_info *execmem_arch_setup(void); + /** * execmem_alloc - allocate executable memory * @type: type of the allocation @@ -54,4 +95,10 @@ void *execmem_alloc(enum execmem_type type, size_t size); */ void execmem_free(void *ptr); +#ifdef CONFIG_EXECMEM +void execmem_init(void); +#else +static inline void execmem_init(void) {} +#endif + #endif /* _LINUX_EXECMEM_ALLOC_H */ diff --git a/mm/execmem.c b/mm/execmem.c index 480adc69b20d..80e61c1e7319 100644 --- a/mm/execmem.c +++ b/mm/execmem.c @@ -11,14 +11,30 @@ #include #include -static void *__execmem_alloc(size_t size) +static struct execmem_info *execmem_info __ro_after_init; + +static void *__execmem_alloc(struct execmem_range *range, size_t size) { - return module_alloc(size); + unsigned long start = range->start; + unsigned long end = range->end; + unsigned int align = range->alignment; + pgprot_t pgprot = range->pgprot; + + return __vmalloc_node_range(size, align, start, end, + GFP_KERNEL, pgprot, VM_FLUSH_RESET_PERMS, + NUMA_NO_NODE, __builtin_return_address(0)); } void *execmem_alloc(enum execmem_type type, size_t size) { - return __execmem_alloc(size); + struct execmem_range *range; + + if (!execmem_info) + return module_alloc(size); + + range = &execmem_info->ranges[type]; + + return __execmem_alloc(range, size); } void execmem_free(void *ptr) @@ -30,3 +46,48 @@ void execmem_free(void *ptr) WARN_ON(in_interrupt()); vfree(ptr); } + +static bool execmem_validate(struct execmem_info *info) +{ + struct execmem_range *r = &info->ranges[EXECMEM_DEFAULT]; + + if (!r->alignment || !r->start || !r->end || !pgprot_val(r->pgprot)) { + pr_crit("Invalid parameters for execmem allocator, module loading will fail"); + return false; + } + + return true; +} + +static void execmem_init_missing(struct execmem_info *info) +{ + struct execmem_range *default_range = &info->ranges[EXECMEM_DEFAULT]; + + for (int i = EXECMEM_DEFAULT + 1; i < EXECMEM_TYPE_MAX; i++) { + struct execmem_range *r = &info->ranges[i]; + + if (!r->start) { + r->pgprot = default_range->pgprot; + r->alignment = default_range->alignment; + r->start = default_range->start; + r->end = default_range->end; + } + } +} + +struct execmem_info * __weak execmem_arch_setup(void) +{ + return NULL; +} + +void __init execmem_init(void) +{ + struct execmem_info *info = execmem_arch_setup(); + + if (!info || !execmem_validate(info)) + return; + + execmem_init_missing(info); + + execmem_info = info; +} diff --git a/mm/mm_init.c b/mm/mm_init.c index 549e76af8f82..b6a1fcf6e13a 100644 --- a/mm/mm_init.c +++ b/mm/mm_init.c @@ -27,6 +27,7 @@ #include #include #include +#include #include "internal.h" #include "slab.h" #include "shuffle.h" @@ -2793,4 +2794,5 @@ void __init mm_core_init(void) pti_init(); kmsan_init_runtime(); mm_cache_init(); + execmem_init(); } From patchwork Sun May 5 16:06:20 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 
13654540
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Miller" , Dinh Nguyen , Donald Dutile , Eric Chanudet , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Liviu Dudau , Luis Chamberlain , Mark Rutland , Masami Hiramatsu , Michael Ellerman , Mike Rapoport , Nadav Amit , Palmer Dabbelt , Peter Zijlstra , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?q?=C3=A9?= , Rick Edgecombe , Russell King , Sam Ravnborg , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH RESEND v8 08/16] mm/execmem, arch: convert remaining overrides of module_alloc to execmem Date: Sun, 5 May 2024 19:06:20 +0300 Message-ID: <20240505160628.2323363-9-rppt@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240505160628.2323363-1-rppt@kernel.org> References: <20240505160628.2323363-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240505_170828_879796_195CF4CC X-CRM114-Status: GOOD ( 25.64 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" Extend execmem parameters to accommodate more complex overrides of module_alloc() by architectures. This includes specification of a fallback range required by arm, arm64 and powerpc, EXECMEM_MODULE_DATA type required by powerpc, support for allocation of KASAN shadow required by s390 and x86 and support for late initialization of execmem required by arm64. The core implementation of execmem_alloc() takes care of suppressing warnings when the initial allocation fails but there is a fallback range defined. Signed-off-by: Mike Rapoport (IBM) Acked-by: Will Deacon Acked-by: Song Liu Tested-by: Liviu Dudau --- arch/Kconfig | 8 ++++ arch/arm/kernel/module.c | 41 ++++++++++++-------- arch/arm64/Kconfig | 1 + arch/arm64/kernel/module.c | 55 +++++++++++++++------------ arch/powerpc/kernel/module.c | 60 +++++++++++++++++++---------- arch/s390/kernel/module.c | 54 +++++++++++--------------- arch/x86/kernel/module.c | 70 +++++++++++----------------------- include/linux/execmem.h | 30 ++++++++++++++- include/linux/moduleloader.h | 12 ------ kernel/module/main.c | 26 +++---------- mm/execmem.c | 74 ++++++++++++++++++++++++++++++------ 11 files changed, 246 insertions(+), 185 deletions(-) diff --git a/arch/Kconfig b/arch/Kconfig index 65afb1de48b3..4fd0daa54e6c 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -960,6 +960,14 @@ config ARCH_WANTS_MODULES_DATA_IN_VMALLOC For architectures like powerpc/32 which have constraints on module allocation and need to allocate module data outside of module area. +config ARCH_WANTS_EXECMEM_LATE + bool + help + For architectures that do not allocate executable memory early on + boot, but rather require its initialization late when there is + enough entropy for module space randomization, for instance + arm64. 
+ config HAVE_IRQ_EXIT_ON_IRQ_STACK bool help diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c index e74d84f58b77..a98fdf6ff26c 100644 --- a/arch/arm/kernel/module.c +++ b/arch/arm/kernel/module.c @@ -16,6 +16,7 @@ #include #include #include +#include #include #include @@ -34,23 +35,31 @@ #endif #ifdef CONFIG_MMU -void *module_alloc(unsigned long size) +static struct execmem_info execmem_info __ro_after_init; + +struct execmem_info __init *execmem_arch_setup(void) { - gfp_t gfp_mask = GFP_KERNEL; - void *p; - - /* Silence the initial allocation */ - if (IS_ENABLED(CONFIG_ARM_MODULE_PLTS)) - gfp_mask |= __GFP_NOWARN; - - p = __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, - gfp_mask, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE, - __builtin_return_address(0)); - if (!IS_ENABLED(CONFIG_ARM_MODULE_PLTS) || p) - return p; - return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END, - GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE, - __builtin_return_address(0)); + unsigned long fallback_start = 0, fallback_end = 0; + + if (IS_ENABLED(CONFIG_ARM_MODULE_PLTS)) { + fallback_start = VMALLOC_START; + fallback_end = VMALLOC_END; + } + + execmem_info = (struct execmem_info){ + .ranges = { + [EXECMEM_DEFAULT] = { + .start = MODULES_VADDR, + .end = MODULES_END, + .pgprot = PAGE_KERNEL_EXEC, + .alignment = 1, + .fallback_start = fallback_start, + .fallback_end = fallback_end, + }, + }, + }; + + return &execmem_info; } #endif diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 7b11c98b3e84..74b34a78b7ac 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -105,6 +105,7 @@ config ARM64 select ARCH_WANT_FRAME_POINTERS select ARCH_WANT_HUGE_PMD_SHARE if ARM64_4K_PAGES || (ARM64_16K_PAGES && !ARM64_VA_BITS_36) select ARCH_WANT_LD_ORPHAN_WARN + select ARCH_WANTS_EXECMEM_LATE if EXECMEM select ARCH_WANTS_NO_INSTR select ARCH_WANTS_THP_SWAP if ARM64_4K_PAGES select ARCH_HAS_UBSAN diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c index e92da4da1b2a..b7a7a23f9f8f 100644 --- a/arch/arm64/kernel/module.c +++ b/arch/arm64/kernel/module.c @@ -20,6 +20,7 @@ #include #include #include +#include #include #include @@ -108,41 +109,47 @@ static int __init module_init_limits(void) return 0; } -subsys_initcall(module_init_limits); -void *module_alloc(unsigned long size) +static struct execmem_info execmem_info __ro_after_init; + +struct execmem_info __init *execmem_arch_setup(void) { - void *p = NULL; + unsigned long fallback_start = 0, fallback_end = 0; + unsigned long start = 0, end = 0; + + module_init_limits(); /* * Where possible, prefer to allocate within direct branch range of the * kernel such that no PLTs are necessary. 
*/ if (module_direct_base) { - p = __vmalloc_node_range(size, MODULE_ALIGN, - module_direct_base, - module_direct_base + SZ_128M, - GFP_KERNEL | __GFP_NOWARN, - PAGE_KERNEL, 0, NUMA_NO_NODE, - __builtin_return_address(0)); - } + start = module_direct_base; + end = module_direct_base + SZ_128M; - if (!p && module_plt_base) { - p = __vmalloc_node_range(size, MODULE_ALIGN, - module_plt_base, - module_plt_base + SZ_2G, - GFP_KERNEL | __GFP_NOWARN, - PAGE_KERNEL, 0, NUMA_NO_NODE, - __builtin_return_address(0)); - } - - if (!p) { - pr_warn_ratelimited("%s: unable to allocate memory\n", - __func__); + if (module_plt_base) { + fallback_start = module_plt_base; + fallback_end = module_plt_base + SZ_2G; + } + } else if (module_plt_base) { + start = module_plt_base; + end = module_plt_base + SZ_2G; } - /* Memory is intended to be executable, reset the pointer tag. */ - return kasan_reset_tag(p); + execmem_info = (struct execmem_info){ + .ranges = { + [EXECMEM_DEFAULT] = { + .start = start, + .end = end, + .pgprot = PAGE_KERNEL, + .alignment = 1, + .fallback_start = fallback_start, + .fallback_end = fallback_end, + }, + }, + }; + + return &execmem_info; } enum aarch64_reloc_op { diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c index f6d6ae0a1692..ac80559015a3 100644 --- a/arch/powerpc/kernel/module.c +++ b/arch/powerpc/kernel/module.c @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include @@ -89,39 +90,56 @@ int module_finalize(const Elf_Ehdr *hdr, return 0; } -static __always_inline void * -__module_alloc(unsigned long size, unsigned long start, unsigned long end, bool nowarn) +static struct execmem_info execmem_info __ro_after_init; + +struct execmem_info __init *execmem_arch_setup(void) { pgprot_t prot = strict_module_rwx_enabled() ? PAGE_KERNEL : PAGE_KERNEL_EXEC; - gfp_t gfp = GFP_KERNEL | (nowarn ? __GFP_NOWARN : 0); + unsigned long fallback_start = 0, fallback_end = 0; + unsigned long start, end; /* - * Don't do huge page allocations for modules yet until more testing - * is done. STRICT_MODULE_RWX may require extra work to support this - * too. 
+ * BOOK3S_32 and 8xx define MODULES_VADDR for text allocations and + * allow allocating data in the entire vmalloc space */ - return __vmalloc_node_range(size, 1, start, end, gfp, prot, - VM_FLUSH_RESET_PERMS, - NUMA_NO_NODE, __builtin_return_address(0)); -} - -void *module_alloc(unsigned long size) -{ #ifdef MODULES_VADDR unsigned long limit = (unsigned long)_etext - SZ_32M; - void *ptr = NULL; BUILD_BUG_ON(TASK_SIZE > MODULES_VADDR); /* First try within 32M limit from _etext to avoid branch trampolines */ - if (MODULES_VADDR < PAGE_OFFSET && MODULES_END > limit) - ptr = __module_alloc(size, limit, MODULES_END, true); - - if (!ptr) - ptr = __module_alloc(size, MODULES_VADDR, MODULES_END, false); + if (MODULES_VADDR < PAGE_OFFSET && MODULES_END > limit) { + start = limit; + fallback_start = MODULES_VADDR; + fallback_end = MODULES_END; + } else { + start = MODULES_VADDR; + } - return ptr; + end = MODULES_END; #else - return __module_alloc(size, VMALLOC_START, VMALLOC_END, false); + start = VMALLOC_START; + end = VMALLOC_END; #endif + + execmem_info = (struct execmem_info){ + .ranges = { + [EXECMEM_DEFAULT] = { + .start = start, + .end = end, + .pgprot = prot, + .alignment = 1, + .fallback_start = fallback_start, + .fallback_end = fallback_end, + }, + [EXECMEM_MODULE_DATA] = { + .start = VMALLOC_START, + .end = VMALLOC_END, + .pgprot = PAGE_KERNEL, + .alignment = 1, + }, + }, + }; + + return &execmem_info; } diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c index ac97a905e8cd..7fee64fdc1bb 100644 --- a/arch/s390/kernel/module.c +++ b/arch/s390/kernel/module.c @@ -37,41 +37,31 @@ #define PLT_ENTRY_SIZE 22 -static unsigned long get_module_load_offset(void) +static struct execmem_info execmem_info __ro_after_init; + +struct execmem_info __init *execmem_arch_setup(void) { - static DEFINE_MUTEX(module_kaslr_mutex); - static unsigned long module_load_offset; - - if (!kaslr_enabled()) - return 0; - /* - * Calculate the module_load_offset the first time this code - * is called. Once calculated it stays the same until reboot. 
- */ - mutex_lock(&module_kaslr_mutex); - if (!module_load_offset) + unsigned long module_load_offset = 0; + unsigned long start; + + if (kaslr_enabled()) module_load_offset = get_random_u32_inclusive(1, 1024) * PAGE_SIZE; - mutex_unlock(&module_kaslr_mutex); - return module_load_offset; -} -void *module_alloc(unsigned long size) -{ - gfp_t gfp_mask = GFP_KERNEL; - void *p; - - if (PAGE_ALIGN(size) > MODULES_LEN) - return NULL; - p = __vmalloc_node_range(size, MODULE_ALIGN, - MODULES_VADDR + get_module_load_offset(), - MODULES_END, gfp_mask, PAGE_KERNEL, - VM_FLUSH_RESET_PERMS | VM_DEFER_KMEMLEAK, - NUMA_NO_NODE, __builtin_return_address(0)); - if (p && (kasan_alloc_module_shadow(p, size, gfp_mask) < 0)) { - vfree(p); - return NULL; - } - return p; + start = MODULES_VADDR + module_load_offset; + + execmem_info = (struct execmem_info){ + .ranges = { + [EXECMEM_DEFAULT] = { + .flags = EXECMEM_KASAN_SHADOW, + .start = start, + .end = MODULES_END, + .pgprot = PAGE_KERNEL, + .alignment = MODULE_ALIGN, + }, + }, + }; + + return &execmem_info; } #ifdef CONFIG_FUNCTION_TRACER diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c index e18914c0e38a..45b1a7c03379 100644 --- a/arch/x86/kernel/module.c +++ b/arch/x86/kernel/module.c @@ -19,6 +19,7 @@ #include #include #include +#include #include #include @@ -36,55 +37,30 @@ do { \ } while (0) #endif -#ifdef CONFIG_RANDOMIZE_BASE -static unsigned long module_load_offset; +static struct execmem_info execmem_info __ro_after_init; -/* Mutex protects the module_load_offset. */ -static DEFINE_MUTEX(module_kaslr_mutex); - -static unsigned long int get_module_load_offset(void) -{ - if (kaslr_enabled()) { - mutex_lock(&module_kaslr_mutex); - /* - * Calculate the module_load_offset the first time this - * code is called. Once calculated it stays the same until - * reboot. 
- */ - if (module_load_offset == 0) - module_load_offset = - get_random_u32_inclusive(1, 1024) * PAGE_SIZE; - mutex_unlock(&module_kaslr_mutex); - } - return module_load_offset; -} -#else -static unsigned long int get_module_load_offset(void) +struct execmem_info __init *execmem_arch_setup(void) { - return 0; -} -#endif - -void *module_alloc(unsigned long size) -{ - gfp_t gfp_mask = GFP_KERNEL; - void *p; - - if (PAGE_ALIGN(size) > MODULES_LEN) - return NULL; - - p = __vmalloc_node_range(size, MODULE_ALIGN, - MODULES_VADDR + get_module_load_offset(), - MODULES_END, gfp_mask, PAGE_KERNEL, - VM_FLUSH_RESET_PERMS | VM_DEFER_KMEMLEAK, - NUMA_NO_NODE, __builtin_return_address(0)); - - if (p && (kasan_alloc_module_shadow(p, size, gfp_mask) < 0)) { - vfree(p); - return NULL; - } - - return p; + unsigned long start, offset = 0; + + if (kaslr_enabled()) + offset = get_random_u32_inclusive(1, 1024) * PAGE_SIZE; + + start = MODULES_VADDR + offset; + + execmem_info = (struct execmem_info){ + .ranges = { + [EXECMEM_DEFAULT] = { + .flags = EXECMEM_KASAN_SHADOW, + .start = start, + .end = MODULES_END, + .pgprot = PAGE_KERNEL, + .alignment = MODULE_ALIGN, + }, + }, + }; + + return &execmem_info; } #ifdef CONFIG_X86_32 diff --git a/include/linux/execmem.h b/include/linux/execmem.h index 96fc59258467..32cef1144117 100644 --- a/include/linux/execmem.h +++ b/include/linux/execmem.h @@ -5,6 +5,14 @@ #include #include +#if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \ + !defined(CONFIG_KASAN_VMALLOC) +#include +#define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT) +#else +#define MODULE_ALIGN PAGE_SIZE +#endif + /** * enum execmem_type - types of executable memory ranges * @@ -22,6 +30,7 @@ * @EXECMEM_KPROBES: parameters for kprobes * @EXECMEM_FTRACE: parameters for ftrace * @EXECMEM_BPF: parameters for BPF + * @EXECMEM_MODULE_DATA: parameters for module data sections * @EXECMEM_TYPE_MAX: */ enum execmem_type { @@ -30,22 +39,38 @@ enum execmem_type { EXECMEM_KPROBES, EXECMEM_FTRACE, EXECMEM_BPF, + EXECMEM_MODULE_DATA, EXECMEM_TYPE_MAX, }; +/** + * enum execmem_range_flags - options for executable memory allocations + * @EXECMEM_KASAN_SHADOW: allocate kasan shadow + */ +enum execmem_range_flags { + EXECMEM_KASAN_SHADOW = (1 << 0), +}; + /** * struct execmem_range - definition of an address space suitable for code and * related data allocations * @start: address space start * @end: address space end (inclusive) + * @fallback_start: start of the secondary address space range for fallback + * allocations on architectures that require it + * @fallback_end: start of the secondary address space (inclusive) * @pgprot: permissions for memory in this address space * @alignment: alignment required for text allocations + * @flags: options for memory allocations for this range */ struct execmem_range { unsigned long start; unsigned long end; + unsigned long fallback_start; + unsigned long fallback_end; pgprot_t pgprot; unsigned int alignment; + enum execmem_range_flags flags; }; /** @@ -82,6 +107,9 @@ struct execmem_info *execmem_arch_setup(void); * Allocates memory that will contain executable code, either generated or * loaded from kernel modules. * + * Allocates memory that will contain data coupled with executable code, + * like data sections in kernel modules. + * * The memory will have protections defined by architecture for executable * region of the @type. 
* @@ -95,7 +123,7 @@ void *execmem_alloc(enum execmem_type type, size_t size); */ void execmem_free(void *ptr); -#ifdef CONFIG_EXECMEM +#if defined(CONFIG_EXECMEM) && !defined(CONFIG_ARCH_WANTS_EXECMEM_LATE) void execmem_init(void); #else static inline void execmem_init(void) {} diff --git a/include/linux/moduleloader.h b/include/linux/moduleloader.h index a3b8caee9405..e395461d59e5 100644 --- a/include/linux/moduleloader.h +++ b/include/linux/moduleloader.h @@ -25,10 +25,6 @@ int module_frob_arch_sections(Elf_Ehdr *hdr, /* Additional bytes needed by arch in front of individual sections */ unsigned int arch_mod_section_prepend(struct module *mod, unsigned int section); -/* Allocator used for allocating struct module, core sections and init - sections. Returns NULL on failure. */ -void *module_alloc(unsigned long size); - /* Determines if the section name is an init section (that is only used during * module loading). */ @@ -126,12 +122,4 @@ void module_arch_cleanup(struct module *mod); /* Any cleanup before freeing mod->module_init */ void module_arch_freeing_init(struct module *mod); -#if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \ - !defined(CONFIG_KASAN_VMALLOC) -#include -#define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT) -#else -#define MODULE_ALIGN PAGE_SIZE -#endif - #endif diff --git a/kernel/module/main.c b/kernel/module/main.c index d56b7df0cbb6..91e185607d4b 100644 --- a/kernel/module/main.c +++ b/kernel/module/main.c @@ -1188,24 +1188,20 @@ void __weak module_arch_freeing_init(struct module *mod) { } -static bool mod_mem_use_vmalloc(enum mod_mem_type type) -{ - return IS_ENABLED(CONFIG_ARCH_WANTS_MODULES_DATA_IN_VMALLOC) && - mod_mem_type_is_core_data(type); -} - static int module_memory_alloc(struct module *mod, enum mod_mem_type type) { unsigned int size = PAGE_ALIGN(mod->mem[type].size); + enum execmem_type execmem_type; void *ptr; mod->mem[type].size = size; - if (mod_mem_use_vmalloc(type)) - ptr = vmalloc(size); + if (mod_mem_type_is_data(type)) + execmem_type = EXECMEM_MODULE_DATA; else - ptr = execmem_alloc(EXECMEM_MODULE_TEXT, size); + execmem_type = EXECMEM_MODULE_TEXT; + ptr = execmem_alloc(execmem_type, size); if (!ptr) return -ENOMEM; @@ -1232,10 +1228,7 @@ static void module_memory_free(struct module *mod, enum mod_mem_type type) { void *ptr = mod->mem[type].base; - if (mod_mem_use_vmalloc(type)) - vfree(ptr); - else - execmem_free(ptr); + execmem_free(ptr); } static void free_mod_mem(struct module *mod) @@ -1630,13 +1623,6 @@ static void free_modinfo(struct module *mod) } } -void * __weak module_alloc(unsigned long size) -{ - return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END, - GFP_KERNEL, PAGE_KERNEL_EXEC, VM_FLUSH_RESET_PERMS, - NUMA_NO_NODE, __builtin_return_address(0)); -} - bool __weak module_init_section(const char *name) { return strstarts(name, ".init"); diff --git a/mm/execmem.c b/mm/execmem.c index 80e61c1e7319..0c4b36bc6d10 100644 --- a/mm/execmem.c +++ b/mm/execmem.c @@ -12,27 +12,49 @@ #include static struct execmem_info *execmem_info __ro_after_init; +static struct execmem_info default_execmem_info __ro_after_init; static void *__execmem_alloc(struct execmem_range *range, size_t size) { + bool kasan = range->flags & EXECMEM_KASAN_SHADOW; + unsigned long vm_flags = VM_FLUSH_RESET_PERMS; + gfp_t gfp_flags = GFP_KERNEL | __GFP_NOWARN; unsigned long start = range->start; unsigned long end = range->end; unsigned int align = range->alignment; pgprot_t pgprot = range->pgprot; + void *p; + + if (kasan) + 
vm_flags |= VM_DEFER_KMEMLEAK; + + p = __vmalloc_node_range(size, align, start, end, gfp_flags, + pgprot, vm_flags, NUMA_NO_NODE, + __builtin_return_address(0)); + if (!p && range->fallback_start) { + start = range->fallback_start; + end = range->fallback_end; + p = __vmalloc_node_range(size, align, start, end, gfp_flags, + pgprot, vm_flags, NUMA_NO_NODE, + __builtin_return_address(0)); + } + + if (!p) { + pr_warn_ratelimited("execmem: unable to allocate memory\n"); + return NULL; + } + + if (kasan && (kasan_alloc_module_shadow(p, size, GFP_KERNEL) < 0)) { + vfree(p); + return NULL; + } - return __vmalloc_node_range(size, align, start, end, - GFP_KERNEL, pgprot, VM_FLUSH_RESET_PERMS, - NUMA_NO_NODE, __builtin_return_address(0)); + return kasan_reset_tag(p); } void *execmem_alloc(enum execmem_type type, size_t size) { - struct execmem_range *range; - - if (!execmem_info) - return module_alloc(size); - - range = &execmem_info->ranges[type]; + struct execmem_range *range = &execmem_info->ranges[type]; return __execmem_alloc(range, size); } @@ -67,10 +89,16 @@ static void execmem_init_missing(struct execmem_info *info) struct execmem_range *r = &info->ranges[i]; if (!r->start) { - r->pgprot = default_range->pgprot; + if (i == EXECMEM_MODULE_DATA) + r->pgprot = PAGE_KERNEL; + else + r->pgprot = default_range->pgprot; r->alignment = default_range->alignment; r->start = default_range->start; r->end = default_range->end; + r->flags = default_range->flags; + r->fallback_start = default_range->fallback_start; + r->fallback_end = default_range->fallback_end; } } } @@ -80,14 +108,36 @@ struct execmem_info * __weak execmem_arch_setup(void) return NULL; } -void __init execmem_init(void) +static void __init __execmem_init(void) { struct execmem_info *info = execmem_arch_setup(); - if (!info || !execmem_validate(info)) + if (!info) { + info = execmem_info = &default_execmem_info; + info->ranges[EXECMEM_DEFAULT].start = VMALLOC_START; + info->ranges[EXECMEM_DEFAULT].end = VMALLOC_END; + info->ranges[EXECMEM_DEFAULT].pgprot = PAGE_KERNEL_EXEC; + info->ranges[EXECMEM_DEFAULT].alignment = 1; + } + + if (!execmem_validate(info)) return; execmem_init_missing(info); execmem_info = info; } + +#ifdef CONFIG_ARCH_WANTS_EXECMEM_LATE +static int __init execmem_late_init(void) +{ + __execmem_init(); + return 0; +} +core_initcall(execmem_late_init); +#else +void __init execmem_init(void) +{ + __execmem_init(); +} +#endif From patchwork Sun May 5 16:06:21 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 13654541 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 72369C41513 for ; Sun, 5 May 2024 16:09:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=5TxCX/TAUJT2lexTBQ01fpO2yk6rl1eYna5dGV2TUuY=; b=YOyAjQZg0j/xal 
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Miller" , Dinh Nguyen , Donald Dutile , Eric Chanudet , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Liviu Dudau , Luis Chamberlain , Mark Rutland , Masami Hiramatsu , Michael Ellerman , Mike Rapoport , Nadav Amit , Palmer Dabbelt , Peter Zijlstra , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?q?=C3=A9?= , Rick Edgecombe , Russell King , Sam Ravnborg , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH RESEND v8 09/16] riscv: extend execmem_params for generated code allocations Date: Sun, 5 May 2024 19:06:21 +0300 Message-ID: <20240505160628.2323363-10-rppt@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240505160628.2323363-1-rppt@kernel.org> References: <20240505160628.2323363-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240505_170837_429296_9DDA7B50 X-CRM114-Status: GOOD ( 13.31 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" The memory allocations for kprobes and BPF on RISC-V are not placed in the modules area and these custom allocations are implemented with overrides of alloc_insn_page() and bpf_jit_alloc_exec(). Define MODULES_VADDR and MODULES_END as VMALLOC_START and VMALLOC_END for 32 bit and slightly reorder execmem_params initialization to support both 32 and 64 bit variants, define EXECMEM_KPROBES and EXECMEM_BPF ranges in riscv::execmem_params and drop overrides of alloc_insn_page() and bpf_jit_alloc_exec(). 
Signed-off-by: Mike Rapoport (IBM) Reviewed-by: Alexandre Ghiti --- arch/riscv/include/asm/pgtable.h | 3 +++ arch/riscv/kernel/module.c | 14 +++++++++++++- arch/riscv/kernel/probes/kprobes.c | 10 ---------- arch/riscv/net/bpf_jit_core.c | 13 ------------- 4 files changed, 16 insertions(+), 24 deletions(-) diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h index 9f8ea0e33eb1..5f21814e438e 100644 --- a/arch/riscv/include/asm/pgtable.h +++ b/arch/riscv/include/asm/pgtable.h @@ -55,6 +55,9 @@ #define MODULES_LOWEST_VADDR (KERNEL_LINK_ADDR - SZ_2G) #define MODULES_VADDR (PFN_ALIGN((unsigned long)&_end) - SZ_2G) #define MODULES_END (PFN_ALIGN((unsigned long)&_start)) +#else +#define MODULES_VADDR VMALLOC_START +#define MODULES_END VMALLOC_END #endif /* diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c index 182904127ba0..0e6415f00fca 100644 --- a/arch/riscv/kernel/module.c +++ b/arch/riscv/kernel/module.c @@ -906,7 +906,7 @@ int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab, return 0; } -#if defined(CONFIG_MMU) && defined(CONFIG_64BIT) +#ifdef CONFIG_MMU static struct execmem_info execmem_info __ro_after_init; struct execmem_info __init *execmem_arch_setup(void) @@ -919,6 +919,18 @@ struct execmem_info __init *execmem_arch_setup(void) .pgprot = PAGE_KERNEL, .alignment = 1, }, + [EXECMEM_KPROBES] = { + .start = VMALLOC_START, + .end = VMALLOC_END, + .pgprot = PAGE_KERNEL_READ_EXEC, + .alignment = 1, + }, + [EXECMEM_BPF] = { + .start = BPF_JIT_REGION_START, + .end = BPF_JIT_REGION_END, + .pgprot = PAGE_KERNEL, + .alignment = PAGE_SIZE, + }, }, }; diff --git a/arch/riscv/kernel/probes/kprobes.c b/arch/riscv/kernel/probes/kprobes.c index 2f08c14a933d..e64f2f3064eb 100644 --- a/arch/riscv/kernel/probes/kprobes.c +++ b/arch/riscv/kernel/probes/kprobes.c @@ -104,16 +104,6 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p) return 0; } -#ifdef CONFIG_MMU -void *alloc_insn_page(void) -{ - return __vmalloc_node_range(PAGE_SIZE, 1, VMALLOC_START, VMALLOC_END, - GFP_KERNEL, PAGE_KERNEL_READ_EXEC, - VM_FLUSH_RESET_PERMS, NUMA_NO_NODE, - __builtin_return_address(0)); -} -#endif - /* install breakpoint in text */ void __kprobes arch_arm_kprobe(struct kprobe *p) { diff --git a/arch/riscv/net/bpf_jit_core.c b/arch/riscv/net/bpf_jit_core.c index 6b3acac30c06..e238fdbd5dbc 100644 --- a/arch/riscv/net/bpf_jit_core.c +++ b/arch/riscv/net/bpf_jit_core.c @@ -219,19 +219,6 @@ u64 bpf_jit_alloc_exec_limit(void) return BPF_JIT_REGION_SIZE; } -void *bpf_jit_alloc_exec(unsigned long size) -{ - return __vmalloc_node_range(size, PAGE_SIZE, BPF_JIT_REGION_START, - BPF_JIT_REGION_END, GFP_KERNEL, - PAGE_KERNEL, 0, NUMA_NO_NODE, - __builtin_return_address(0)); -} - -void bpf_jit_free_exec(void *addr) -{ - return vfree(addr); -} - void *bpf_arch_text_copy(void *dst, void *src, size_t len) { int ret; From patchwork Sun May 5 16:06:22 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 13654583 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id AD8A9C25B5C for ; Sun, 5 May 2024 17:15:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; 
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Miller" , Dinh Nguyen , Donald Dutile , Eric Chanudet , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Liviu Dudau , Luis Chamberlain , Mark Rutland , Masami Hiramatsu , Michael Ellerman , Mike Rapoport , Nadav Amit , Palmer Dabbelt , Peter Zijlstra , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?q?=C3=A9?= , Rick Edgecombe , Russell King , Sam Ravnborg , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH RESEND v8 10/16] arm64: extend execmem_info for generated code allocations Date: Sun, 5 May 2024 19:06:22 +0300 Message-ID: <20240505160628.2323363-11-rppt@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240505160628.2323363-1-rppt@kernel.org> References: <20240505160628.2323363-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240505_090848_346618_DBBC3C43 X-CRM114-Status: GOOD ( 12.21 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" The memory allocations for kprobes and BPF on arm64 can be placed anywhere in vmalloc address space and currently this is implemented with overrides of alloc_insn_page() and bpf_jit_alloc_exec() in arm64. Define EXECMEM_KPROBES and EXECMEM_BPF ranges in arm64::execmem_info and drop overrides of alloc_insn_page() and bpf_jit_alloc_exec(). 
Signed-off-by: Mike Rapoport (IBM) Acked-by: Will Deacon --- arch/arm64/kernel/module.c | 12 ++++++++++++ arch/arm64/kernel/probes/kprobes.c | 7 ------- arch/arm64/net/bpf_jit_comp.c | 11 ----------- 3 files changed, 12 insertions(+), 18 deletions(-) diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c index b7a7a23f9f8f..a52240ea084b 100644 --- a/arch/arm64/kernel/module.c +++ b/arch/arm64/kernel/module.c @@ -146,6 +146,18 @@ struct execmem_info __init *execmem_arch_setup(void) .fallback_start = fallback_start, .fallback_end = fallback_end, }, + [EXECMEM_KPROBES] = { + .start = VMALLOC_START, + .end = VMALLOC_END, + .pgprot = PAGE_KERNEL_ROX, + .alignment = 1, + }, + [EXECMEM_BPF] = { + .start = VMALLOC_START, + .end = VMALLOC_END, + .pgprot = PAGE_KERNEL, + .alignment = 1, + }, }, }; diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c index 327855a11df2..4268678d0e86 100644 --- a/arch/arm64/kernel/probes/kprobes.c +++ b/arch/arm64/kernel/probes/kprobes.c @@ -129,13 +129,6 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p) return 0; } -void *alloc_insn_page(void) -{ - return __vmalloc_node_range(PAGE_SIZE, 1, VMALLOC_START, VMALLOC_END, - GFP_KERNEL, PAGE_KERNEL_ROX, VM_FLUSH_RESET_PERMS, - NUMA_NO_NODE, __builtin_return_address(0)); -} - /* arm kprobe: install breakpoint in text */ void __kprobes arch_arm_kprobe(struct kprobe *p) { diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c index 122021f9bdfc..456f5af239fc 100644 --- a/arch/arm64/net/bpf_jit_comp.c +++ b/arch/arm64/net/bpf_jit_comp.c @@ -1793,17 +1793,6 @@ u64 bpf_jit_alloc_exec_limit(void) return VMALLOC_END - VMALLOC_START; } -void *bpf_jit_alloc_exec(unsigned long size) -{ - /* Memory is intended to be executable, reset the pointer tag. */ - return kasan_reset_tag(vmalloc(size)); -} - -void bpf_jit_free_exec(void *addr) -{ - return vfree(addr); -} - /* Indicate the JIT backend supports mixing bpf2bpf and tailcalls. 
 */
 bool bpf_jit_supports_subprog_tailcalls(void)
 {

From patchwork Sun May  5 16:06:23 2024
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13654584
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Cc: Alexandre Ghiti , Andrew Morton , Björn Töpel , Catalin Marinas ,
 Christophe Leroy , "David S. 
Miller" , Dinh Nguyen , Donald Dutile , Eric Chanudet , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Liviu Dudau , Luis Chamberlain , Mark Rutland , Masami Hiramatsu , Michael Ellerman , Mike Rapoport , Nadav Amit , Palmer Dabbelt , Peter Zijlstra , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?q?=C3=A9?= , Rick Edgecombe , Russell King , Sam Ravnborg , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH RESEND v8 11/16] powerpc: extend execmem_params for kprobes allocations Date: Sun, 5 May 2024 19:06:23 +0300 Message-ID: <20240505160628.2323363-12-rppt@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240505160628.2323363-1-rppt@kernel.org> References: <20240505160628.2323363-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240505_090901_997032_23935CFC X-CRM114-Status: GOOD ( 14.24 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" powerpc overrides kprobes::alloc_insn_page() to remove writable permissions when STRICT_MODULE_RWX is on. Add definition of EXECMEM_KRPOBES to execmem_params to allow using the generic kprobes::alloc_insn_page() with the desired permissions. As powerpc uses breakpoint instructions to inject kprobes, it does not need to constrain kprobe allocations to the modules area and can use the entire vmalloc address space. Signed-off-by: Mike Rapoport (IBM) --- arch/powerpc/kernel/kprobes.c | 20 -------------------- arch/powerpc/kernel/module.c | 7 +++++++ 2 files changed, 7 insertions(+), 20 deletions(-) diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c index 9fcd01bb2ce6..14c5ddec3056 100644 --- a/arch/powerpc/kernel/kprobes.c +++ b/arch/powerpc/kernel/kprobes.c @@ -126,26 +126,6 @@ kprobe_opcode_t *arch_adjust_kprobe_addr(unsigned long addr, unsigned long offse return (kprobe_opcode_t *)(addr + offset); } -void *alloc_insn_page(void) -{ - void *page; - - page = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE); - if (!page) - return NULL; - - if (strict_module_rwx_enabled()) { - int err = set_memory_rox((unsigned long)page, 1); - - if (err) - goto error; - } - return page; -error: - execmem_free(page); - return NULL; -} - int arch_prepare_kprobe(struct kprobe *p) { int ret = 0; diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c index ac80559015a3..2a23cf7e141b 100644 --- a/arch/powerpc/kernel/module.c +++ b/arch/powerpc/kernel/module.c @@ -94,6 +94,7 @@ static struct execmem_info execmem_info __ro_after_init; struct execmem_info __init *execmem_arch_setup(void) { + pgprot_t kprobes_prot = strict_module_rwx_enabled() ? PAGE_KERNEL_ROX : PAGE_KERNEL_EXEC; pgprot_t prot = strict_module_rwx_enabled() ? 
PAGE_KERNEL : PAGE_KERNEL_EXEC; unsigned long fallback_start = 0, fallback_end = 0; unsigned long start, end; @@ -132,6 +133,12 @@ struct execmem_info __init *execmem_arch_setup(void) .fallback_start = fallback_start, .fallback_end = fallback_end, }, + [EXECMEM_KPROBES] = { + .start = VMALLOC_START, + .end = VMALLOC_END, + .pgprot = kprobes_prot, + .alignment = 1, + }, [EXECMEM_MODULE_DATA] = { .start = VMALLOC_START, .end = VMALLOC_END, From patchwork Sun May 5 16:06:24 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 13654542 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 7C45BC4345F for ; Sun, 5 May 2024 16:09:55 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=YWmuO/3LMhFwhmb27Lnu7y3/AYo/uLRU82AuOOsw6MY=; b=Tu25I0u3+YKCX1 mKt726/KtxPo2FnVKG+wsAF3rFazwZYdXOXr/LanX9BTUjdt0UEoHwZG6L4TyXafXCWt4fWnuyI8b y3pY0Ir6rhdeh0DQQ9GI/GGBqAOCPVnnwJrUwP4c+FH1956dY9Zu72lLwpxWX37du6gjtAt07b/OR dzKhrc8lhdu3uW4eFNLctOg1/W1LPp8//QhstVrRwbmasQpGzyug28q7LFr8xaAWG+NU+0xCci5S+ yGXKDXRa9RQkqauKVEGyzB2vbrY8rD1L1pl3PhPa/YvY9JL+cs49Sb5EwaJpq8HFR8+dRQFnEG0it a+nMx1FWZtsLKGuREbVw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1s3eQl-00000004j16-3kCF; Sun, 05 May 2024 16:09:39 +0000 Received: from desiato.infradead.org ([2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1s3eQR-00000004ikY-2nh6; Sun, 05 May 2024 16:09:20 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Transfer-Encoding:Content-Type :MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From: Sender:Reply-To:Content-ID:Content-Description; bh=T7xAcO5/CoULMfPQgJR7+ovdKspvbrejEkNN0k7yqQ4=; b=Iwa8ZLQZcgxUnY68l/4KnOgZIH zBZNMkZjt5l08RO6klzx46zYmGZMstd88RpUjb0uI6v+IQwzavDs8HUONEB4EsXCsOjVzFT+AOLK7 EhCL2VNpvjGalrS40O3zmADDUFv4zChcC0QlGDU75ffHbl8M9JzEhBMtHuwRBIAiafT8n+z+hKX6D +T9dctGHNQiB4GCfQaELFhDOeb9eAthoMz9bia7qOZskUsydhXfzPkQULcf/lkgtlanoaooAeX4JE jz/+ibftCTSwQofYMNHKL1eYrjkHLfT0qq60hmRuFaCvO1VBevDF8Jzg/UgnzVrou64pSTNHpgEvx zjk9i5yQ==; Received: from sin.source.kernel.org ([2604:1380:40e1:4800::1]) by desiato.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1s3eQN-00000001OHB-15lE; Sun, 05 May 2024 16:09:18 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sin.source.kernel.org (Postfix) with ESMTP id EB8C9CE0AB2; Sun, 5 May 2024 16:09:10 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id ED18EC4DDE2; Sun, 5 May 2024 16:08:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1714925350; 
bh=5OMBgTFrr36VXyNsljkCHRabEmmDGidsMIUvtOQa2/E=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=HUOjkTY0MY4Avcwze7CfRrRzvZ8uJCMrkjJMVefuOUwtMJ5EdhxU7JNfJcc54972f 3oOhpCvwmoQ9Q1GPkwpinF5evj2fIyiPqFqm2A6rXSrs6J+b5saRST0hEptRzBNchm Auf1aZnNL10BrlRGGKBPvzfY9zUsb8WDijER56YCQ/T3mYlDNtjhQrdcTBiDC4+1sT 39YY7luhKaiRxcdo24wF1OZY2OK73xFTQ7yliDwpr/XtTQ0GXkhPn658Wk2etTrXSA AvAmAW4UhT0vEPNgqOYrKQUZ1Ak+cHmYwEhJlWambSDSLpHUwgD8LRPOLI6LdEFSyk +vzxUot7u+I/w== From: Mike Rapoport To: linux-kernel@vger.kernel.org Cc: Alexandre Ghiti , Andrew Morton , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , Catalin Marinas , Christophe Leroy , "David S. Miller" , Dinh Nguyen , Donald Dutile , Eric Chanudet , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Liviu Dudau , Luis Chamberlain , Mark Rutland , Masami Hiramatsu , Michael Ellerman , Mike Rapoport , Nadav Amit , Palmer Dabbelt , Peter Zijlstra , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?q?=C3=A9?= , Rick Edgecombe , Russell King , Sam Ravnborg , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH RESEND v8 12/16] arch: make execmem setup available regardless of CONFIG_MODULES Date: Sun, 5 May 2024 19:06:24 +0300 Message-ID: <20240505160628.2323363-13-rppt@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240505160628.2323363-1-rppt@kernel.org> References: <20240505160628.2323363-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240505_170916_417538_0FB5EB41 X-CRM114-Status: GOOD ( 30.20 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" execmem does not depend on modules, on the contrary modules use execmem. 
To make execmem available when CONFIG_MODULES=n, for instance for kprobes, split execmem_params initialization out from arch/*/kernel/module.c and compile it when CONFIG_EXECMEM=y Signed-off-by: Mike Rapoport (IBM) Reviewed-by: Philippe Mathieu-Daudé --- arch/arm/kernel/module.c | 43 ---------- arch/arm/mm/init.c | 45 +++++++++++ arch/arm64/kernel/module.c | 140 --------------------------------- arch/arm64/mm/init.c | 140 +++++++++++++++++++++++++++++++++ arch/loongarch/kernel/module.c | 19 ----- arch/loongarch/mm/init.c | 21 +++++ arch/mips/kernel/module.c | 22 ------ arch/mips/mm/init.c | 23 ++++++ arch/nios2/kernel/module.c | 20 ----- arch/nios2/mm/init.c | 21 +++++ arch/parisc/kernel/module.c | 20 ----- arch/parisc/mm/init.c | 23 +++++- arch/powerpc/kernel/module.c | 63 --------------- arch/powerpc/mm/mem.c | 64 +++++++++++++++ arch/riscv/kernel/module.c | 34 -------- arch/riscv/mm/init.c | 35 +++++++++ arch/s390/kernel/module.c | 27 ------- arch/s390/mm/init.c | 30 +++++++ arch/sparc/kernel/module.c | 19 ----- arch/sparc/mm/Makefile | 2 + arch/sparc/mm/execmem.c | 21 +++++ arch/x86/kernel/module.c | 27 ------- arch/x86/mm/init.c | 29 +++++++ 23 files changed, 453 insertions(+), 435 deletions(-) create mode 100644 arch/sparc/mm/execmem.c diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c index a98fdf6ff26c..677f218f7e84 100644 --- a/arch/arm/kernel/module.c +++ b/arch/arm/kernel/module.c @@ -12,57 +12,14 @@ #include #include #include -#include #include #include -#include -#include #include #include #include #include -#ifdef CONFIG_XIP_KERNEL -/* - * The XIP kernel text is mapped in the module area for modules and - * some other stuff to work without any indirect relocations. - * MODULES_VADDR is redefined here and not in asm/memory.h to avoid - * recompiling the whole kernel when CONFIG_XIP_KERNEL is turned on/off. - */ -#undef MODULES_VADDR -#define MODULES_VADDR (((unsigned long)_exiprom + ~PMD_MASK) & PMD_MASK) -#endif - -#ifdef CONFIG_MMU -static struct execmem_info execmem_info __ro_after_init; - -struct execmem_info __init *execmem_arch_setup(void) -{ - unsigned long fallback_start = 0, fallback_end = 0; - - if (IS_ENABLED(CONFIG_ARM_MODULE_PLTS)) { - fallback_start = VMALLOC_START; - fallback_end = VMALLOC_END; - } - - execmem_info = (struct execmem_info){ - .ranges = { - [EXECMEM_DEFAULT] = { - .start = MODULES_VADDR, - .end = MODULES_END, - .pgprot = PAGE_KERNEL_EXEC, - .alignment = 1, - .fallback_start = fallback_start, - .fallback_end = fallback_end, - }, - }, - }; - - return &execmem_info; -} -#endif - bool module_init_section(const char *name) { return strstarts(name, ".init") || diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c index e8c6f4be0ce1..5345d218899a 100644 --- a/arch/arm/mm/init.c +++ b/arch/arm/mm/init.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include @@ -486,3 +487,47 @@ void free_initrd_mem(unsigned long start, unsigned long end) free_reserved_area((void *)start, (void *)end, -1, "initrd"); } #endif + +#ifdef CONFIG_EXECMEM + +#ifdef CONFIG_XIP_KERNEL +/* + * The XIP kernel text is mapped in the module area for modules and + * some other stuff to work without any indirect relocations. + * MODULES_VADDR is redefined here and not in asm/memory.h to avoid + * recompiling the whole kernel when CONFIG_XIP_KERNEL is turned on/off. 
+ */ +#undef MODULES_VADDR +#define MODULES_VADDR (((unsigned long)_exiprom + ~PMD_MASK) & PMD_MASK) +#endif + +#ifdef CONFIG_MMU +static struct execmem_info execmem_info __ro_after_init; + +struct execmem_info __init *execmem_arch_setup(void) +{ + unsigned long fallback_start = 0, fallback_end = 0; + + if (IS_ENABLED(CONFIG_ARM_MODULE_PLTS)) { + fallback_start = VMALLOC_START; + fallback_end = VMALLOC_END; + } + + execmem_info = (struct execmem_info){ + .ranges = { + [EXECMEM_DEFAULT] = { + .start = MODULES_VADDR, + .end = MODULES_END, + .pgprot = PAGE_KERNEL_EXEC, + .alignment = 1, + .fallback_start = fallback_start, + .fallback_end = fallback_end, + }, + }, + }; + + return &execmem_info; +} +#endif /* CONFIG_MMU */ + +#endif /* CONFIG_EXECMEM */ diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c index a52240ea084b..36b25af56324 100644 --- a/arch/arm64/kernel/module.c +++ b/arch/arm64/kernel/module.c @@ -12,158 +12,18 @@ #include #include #include -#include #include #include #include #include #include #include -#include -#include #include #include #include #include -static u64 module_direct_base __ro_after_init = 0; -static u64 module_plt_base __ro_after_init = 0; - -/* - * Choose a random page-aligned base address for a window of 'size' bytes which - * entirely contains the interval [start, end - 1]. - */ -static u64 __init random_bounding_box(u64 size, u64 start, u64 end) -{ - u64 max_pgoff, pgoff; - - if ((end - start) >= size) - return 0; - - max_pgoff = (size - (end - start)) / PAGE_SIZE; - pgoff = get_random_u32_inclusive(0, max_pgoff); - - return start - pgoff * PAGE_SIZE; -} - -/* - * Modules may directly reference data and text anywhere within the kernel - * image and other modules. References using PREL32 relocations have a +/-2G - * range, and so we need to ensure that the entire kernel image and all modules - * fall within a 2G window such that these are always within range. - * - * Modules may directly branch to functions and code within the kernel text, - * and to functions and code within other modules. These branches will use - * CALL26/JUMP26 relocations with a +/-128M range. Without PLTs, we must ensure - * that the entire kernel text and all module text falls within a 128M window - * such that these are always within range. With PLTs, we can expand this to a - * 2G window. - * - * We chose the 128M region to surround the entire kernel image (rather than - * just the text) as using the same bounds for the 128M and 2G regions ensures - * by construction that we never select a 128M region that is not a subset of - * the 2G region. For very large and unusual kernel configurations this means - * we may fall back to PLTs where they could have been avoided, but this keeps - * the logic significantly simpler. - */ -static int __init module_init_limits(void) -{ - u64 kernel_end = (u64)_end; - u64 kernel_start = (u64)_text; - u64 kernel_size = kernel_end - kernel_start; - - /* - * The default modules region is placed immediately below the kernel - * image, and is large enough to use the full 2G relocation range. 
- */ - BUILD_BUG_ON(KIMAGE_VADDR != MODULES_END); - BUILD_BUG_ON(MODULES_VSIZE < SZ_2G); - - if (!kaslr_enabled()) { - if (kernel_size < SZ_128M) - module_direct_base = kernel_end - SZ_128M; - if (kernel_size < SZ_2G) - module_plt_base = kernel_end - SZ_2G; - } else { - u64 min = kernel_start; - u64 max = kernel_end; - - if (IS_ENABLED(CONFIG_RANDOMIZE_MODULE_REGION_FULL)) { - pr_info("2G module region forced by RANDOMIZE_MODULE_REGION_FULL\n"); - } else { - module_direct_base = random_bounding_box(SZ_128M, min, max); - if (module_direct_base) { - min = module_direct_base; - max = module_direct_base + SZ_128M; - } - } - - module_plt_base = random_bounding_box(SZ_2G, min, max); - } - - pr_info("%llu pages in range for non-PLT usage", - module_direct_base ? (SZ_128M - kernel_size) / PAGE_SIZE : 0); - pr_info("%llu pages in range for PLT usage", - module_plt_base ? (SZ_2G - kernel_size) / PAGE_SIZE : 0); - - return 0; -} - -static struct execmem_info execmem_info __ro_after_init; - -struct execmem_info __init *execmem_arch_setup(void) -{ - unsigned long fallback_start = 0, fallback_end = 0; - unsigned long start = 0, end = 0; - - module_init_limits(); - - /* - * Where possible, prefer to allocate within direct branch range of the - * kernel such that no PLTs are necessary. - */ - if (module_direct_base) { - start = module_direct_base; - end = module_direct_base + SZ_128M; - - if (module_plt_base) { - fallback_start = module_plt_base; - fallback_end = module_plt_base + SZ_2G; - } - } else if (module_plt_base) { - start = module_plt_base; - end = module_plt_base + SZ_2G; - } - - execmem_info = (struct execmem_info){ - .ranges = { - [EXECMEM_DEFAULT] = { - .start = start, - .end = end, - .pgprot = PAGE_KERNEL, - .alignment = 1, - .fallback_start = fallback_start, - .fallback_end = fallback_end, - }, - [EXECMEM_KPROBES] = { - .start = VMALLOC_START, - .end = VMALLOC_END, - .pgprot = PAGE_KERNEL_ROX, - .alignment = 1, - }, - [EXECMEM_BPF] = { - .start = VMALLOC_START, - .end = VMALLOC_END, - .pgprot = PAGE_KERNEL, - .alignment = 1, - }, - }, - }; - - return &execmem_info; -} - enum aarch64_reloc_op { RELOC_OP_NONE, RELOC_OP_ABS, diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index 03efd86dce0a..9b5ab6818f7f 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -32,6 +32,7 @@ #include #include #include +#include #include #include @@ -432,3 +433,142 @@ void dump_mem_limit(void) pr_emerg("Memory Limit: none\n"); } } + +#ifdef CONFIG_EXECMEM +static u64 module_direct_base __ro_after_init = 0; +static u64 module_plt_base __ro_after_init = 0; + +/* + * Choose a random page-aligned base address for a window of 'size' bytes which + * entirely contains the interval [start, end - 1]. + */ +static u64 __init random_bounding_box(u64 size, u64 start, u64 end) +{ + u64 max_pgoff, pgoff; + + if ((end - start) >= size) + return 0; + + max_pgoff = (size - (end - start)) / PAGE_SIZE; + pgoff = get_random_u32_inclusive(0, max_pgoff); + + return start - pgoff * PAGE_SIZE; +} + +/* + * Modules may directly reference data and text anywhere within the kernel + * image and other modules. References using PREL32 relocations have a +/-2G + * range, and so we need to ensure that the entire kernel image and all modules + * fall within a 2G window such that these are always within range. + * + * Modules may directly branch to functions and code within the kernel text, + * and to functions and code within other modules. These branches will use + * CALL26/JUMP26 relocations with a +/-128M range. 
Without PLTs, we must ensure + * that the entire kernel text and all module text falls within a 128M window + * such that these are always within range. With PLTs, we can expand this to a + * 2G window. + * + * We chose the 128M region to surround the entire kernel image (rather than + * just the text) as using the same bounds for the 128M and 2G regions ensures + * by construction that we never select a 128M region that is not a subset of + * the 2G region. For very large and unusual kernel configurations this means + * we may fall back to PLTs where they could have been avoided, but this keeps + * the logic significantly simpler. + */ +static int __init module_init_limits(void) +{ + u64 kernel_end = (u64)_end; + u64 kernel_start = (u64)_text; + u64 kernel_size = kernel_end - kernel_start; + + /* + * The default modules region is placed immediately below the kernel + * image, and is large enough to use the full 2G relocation range. + */ + BUILD_BUG_ON(KIMAGE_VADDR != MODULES_END); + BUILD_BUG_ON(MODULES_VSIZE < SZ_2G); + + if (!kaslr_enabled()) { + if (kernel_size < SZ_128M) + module_direct_base = kernel_end - SZ_128M; + if (kernel_size < SZ_2G) + module_plt_base = kernel_end - SZ_2G; + } else { + u64 min = kernel_start; + u64 max = kernel_end; + + if (IS_ENABLED(CONFIG_RANDOMIZE_MODULE_REGION_FULL)) { + pr_info("2G module region forced by RANDOMIZE_MODULE_REGION_FULL\n"); + } else { + module_direct_base = random_bounding_box(SZ_128M, min, max); + if (module_direct_base) { + min = module_direct_base; + max = module_direct_base + SZ_128M; + } + } + + module_plt_base = random_bounding_box(SZ_2G, min, max); + } + + pr_info("%llu pages in range for non-PLT usage", + module_direct_base ? (SZ_128M - kernel_size) / PAGE_SIZE : 0); + pr_info("%llu pages in range for PLT usage", + module_plt_base ? (SZ_2G - kernel_size) / PAGE_SIZE : 0); + + return 0; +} + +static struct execmem_info execmem_info __ro_after_init; + +struct execmem_info __init *execmem_arch_setup(void) +{ + unsigned long fallback_start = 0, fallback_end = 0; + unsigned long start = 0, end = 0; + + module_init_limits(); + + /* + * Where possible, prefer to allocate within direct branch range of the + * kernel such that no PLTs are necessary. 
+ */ + if (module_direct_base) { + start = module_direct_base; + end = module_direct_base + SZ_128M; + + if (module_plt_base) { + fallback_start = module_plt_base; + fallback_end = module_plt_base + SZ_2G; + } + } else if (module_plt_base) { + start = module_plt_base; + end = module_plt_base + SZ_2G; + } + + execmem_info = (struct execmem_info){ + .ranges = { + [EXECMEM_DEFAULT] = { + .start = start, + .end = end, + .pgprot = PAGE_KERNEL, + .alignment = 1, + .fallback_start = fallback_start, + .fallback_end = fallback_end, + }, + [EXECMEM_KPROBES] = { + .start = VMALLOC_START, + .end = VMALLOC_END, + .pgprot = PAGE_KERNEL_ROX, + .alignment = 1, + }, + [EXECMEM_BPF] = { + .start = VMALLOC_START, + .end = VMALLOC_END, + .pgprot = PAGE_KERNEL, + .alignment = 1, + }, + }, + }; + + return &execmem_info; +} +#endif /* CONFIG_EXECMEM */ diff --git a/arch/loongarch/kernel/module.c b/arch/loongarch/kernel/module.c index ca6dd7ea1610..36d6d9eeb7c7 100644 --- a/arch/loongarch/kernel/module.c +++ b/arch/loongarch/kernel/module.c @@ -18,7 +18,6 @@ #include #include #include -#include #include #include #include @@ -491,24 +490,6 @@ int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab, return 0; } -static struct execmem_info execmem_info __ro_after_init; - -struct execmem_info __init *execmem_arch_setup(void) -{ - execmem_info = (struct execmem_info){ - .ranges = { - [EXECMEM_DEFAULT] = { - .start = MODULES_VADDR, - .end = MODULES_END, - .pgprot = PAGE_KERNEL, - .alignment = 1, - }, - }, - }; - - return &execmem_info; -} - static void module_init_ftrace_plt(const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs, struct module *mod) { diff --git a/arch/loongarch/mm/init.c b/arch/loongarch/mm/init.c index 4dd53427f657..bf789d114c2d 100644 --- a/arch/loongarch/mm/init.c +++ b/arch/loongarch/mm/init.c @@ -24,6 +24,7 @@ #include #include #include +#include #include #include @@ -248,3 +249,23 @@ EXPORT_SYMBOL(invalid_pmd_table); #endif pte_t invalid_pte_table[PTRS_PER_PTE] __page_aligned_bss; EXPORT_SYMBOL(invalid_pte_table); + +#ifdef CONFIG_EXECMEM +static struct execmem_info execmem_info __ro_after_init; + +struct execmem_info __init *execmem_arch_setup(void) +{ + execmem_info = (struct execmem_info){ + .ranges = { + [EXECMEM_DEFAULT] = { + .start = MODULES_VADDR, + .end = MODULES_END, + .pgprot = PAGE_KERNEL, + .alignment = 1, + }, + }, + }; + + return &execmem_info; +} +#endif /* CONFIG_EXECMEM */ diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c index 59225a3cf918..ba0f62d8eff5 100644 --- a/arch/mips/kernel/module.c +++ b/arch/mips/kernel/module.c @@ -13,14 +13,12 @@ #include #include #include -#include #include #include #include #include #include #include -#include #include struct mips_hi16 { @@ -32,26 +30,6 @@ struct mips_hi16 { static LIST_HEAD(dbe_list); static DEFINE_SPINLOCK(dbe_lock); -#ifdef MODULES_VADDR -static struct execmem_info execmem_info __ro_after_init; - -struct execmem_info __init *execmem_arch_setup(void) -{ - execmem_info = (struct execmem_info){ - .ranges = { - [EXECMEM_DEFAULT] = { - .start = MODULES_VADDR, - .end = MODULES_END, - .pgprot = PAGE_KERNEL, - .alignment = 1, - }, - }, - }; - - return &execmem_info; -} -#endif - static void apply_r_mips_32(u32 *location, u32 base, Elf_Addr v) { *location = base + v; diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c index 39f129205b0c..4583d1a2a73e 100644 --- a/arch/mips/mm/init.c +++ b/arch/mips/mm/init.c @@ -31,6 +31,7 @@ #include #include #include +#include #include #include @@ -576,3 +577,25 @@ 
EXPORT_SYMBOL_GPL(invalid_pmd_table); #endif pte_t invalid_pte_table[PTRS_PER_PTE] __page_aligned_bss; EXPORT_SYMBOL(invalid_pte_table); + +#ifdef CONFIG_EXECMEM +#ifdef MODULES_VADDR +static struct execmem_info execmem_info __ro_after_init; + +struct execmem_info __init *execmem_arch_setup(void) +{ + execmem_info = (struct execmem_info){ + .ranges = { + [EXECMEM_DEFAULT] = { + .start = MODULES_VADDR, + .end = MODULES_END, + .pgprot = PAGE_KERNEL, + .alignment = 1, + }, + }, + }; + + return &execmem_info; +} +#endif +#endif /* CONFIG_EXECMEM */ diff --git a/arch/nios2/kernel/module.c b/arch/nios2/kernel/module.c index 0d1ee86631fc..f4483243578d 100644 --- a/arch/nios2/kernel/module.c +++ b/arch/nios2/kernel/module.c @@ -13,33 +13,13 @@ #include #include #include -#include #include #include #include #include -#include #include -static struct execmem_info execmem_info __ro_after_init; - -struct execmem_info __init *execmem_arch_setup(void) -{ - execmem_info = (struct execmem_info){ - .ranges = { - [EXECMEM_DEFAULT] = { - .start = MODULES_VADDR, - .end = MODULES_END, - .pgprot = PAGE_KERNEL_EXEC, - .alignment = 1, - }, - }, - }; - - return &execmem_info; -} - int apply_relocate_add(Elf32_Shdr *sechdrs, const char *strtab, unsigned int symindex, unsigned int relsec, struct module *mod) diff --git a/arch/nios2/mm/init.c b/arch/nios2/mm/init.c index 7bc82ee889c9..3459df28afee 100644 --- a/arch/nios2/mm/init.c +++ b/arch/nios2/mm/init.c @@ -26,6 +26,7 @@ #include #include #include +#include #include #include @@ -143,3 +144,23 @@ static const pgprot_t protection_map[16] = { [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = MKP(1, 1, 1) }; DECLARE_VM_GET_PAGE_PROT + +#ifdef CONFIG_EXECMEM +static struct execmem_info execmem_info __ro_after_init; + +struct execmem_info __init *execmem_arch_setup(void) +{ + execmem_info = (struct execmem_info){ + .ranges = { + [EXECMEM_DEFAULT] = { + .start = MODULES_VADDR, + .end = MODULES_END, + .pgprot = PAGE_KERNEL_EXEC, + .alignment = 1, + }, + }, + }; + + return &execmem_info; +} +#endif /* CONFIG_EXECMEM */ diff --git a/arch/parisc/kernel/module.c b/arch/parisc/kernel/module.c index bdfa85e10c1b..4e5d991b2b65 100644 --- a/arch/parisc/kernel/module.c +++ b/arch/parisc/kernel/module.c @@ -41,7 +41,6 @@ #include #include -#include #include #include #include @@ -49,7 +48,6 @@ #include #include #include -#include #include #include @@ -174,24 +172,6 @@ static inline int reassemble_22(int as22) ((as22 & 0x0003ff) << 3)); } -static struct execmem_info execmem_info __ro_after_init; - -struct execmem_info __init *execmem_arch_setup(void) -{ - execmem_info = (struct execmem_info){ - .ranges = { - [EXECMEM_DEFAULT] = { - .start = VMALLOC_START, - .end = VMALLOC_END, - .pgprot = PAGE_KERNEL_RWX, - .alignment = 1, - }, - }, - }; - - return &execmem_info; -} - #ifndef CONFIG_64BIT static inline unsigned long count_gots(const Elf_Rela *rela, unsigned long n) { diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c index f876af56e13f..34d91cb8b259 100644 --- a/arch/parisc/mm/init.c +++ b/arch/parisc/mm/init.c @@ -24,6 +24,7 @@ #include /* for node_online_map */ #include /* for release_pages */ #include +#include #include #include @@ -481,7 +482,7 @@ void free_initmem(void) /* finally dump all the instructions which were cached, since the * pages are no-longer executable */ flush_icache_range(init_begin, init_end); - + free_initmem_default(POISON_FREE_INITMEM); /* set up a new led state on systems shipped LED State panel */ @@ -992,3 +993,23 @@ static const pgprot_t 
protection_map[16] = { [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_RWX }; DECLARE_VM_GET_PAGE_PROT + +#ifdef CONFIG_EXECMEM +static struct execmem_info execmem_info __ro_after_init; + +struct execmem_info __init *execmem_arch_setup(void) +{ + execmem_info = (struct execmem_info){ + .ranges = { + [EXECMEM_DEFAULT] = { + .start = VMALLOC_START, + .end = VMALLOC_END, + .pgprot = PAGE_KERNEL_RWX, + .alignment = 1, + }, + }, + }; + + return &execmem_info; +} +#endif /* CONFIG_EXECMEM */ diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c index 2a23cf7e141b..77ea82e9dc5f 100644 --- a/arch/powerpc/kernel/module.c +++ b/arch/powerpc/kernel/module.c @@ -7,10 +7,8 @@ #include #include #include -#include #include #include -#include #include #include #include @@ -89,64 +87,3 @@ int module_finalize(const Elf_Ehdr *hdr, return 0; } - -static struct execmem_info execmem_info __ro_after_init; - -struct execmem_info __init *execmem_arch_setup(void) -{ - pgprot_t kprobes_prot = strict_module_rwx_enabled() ? PAGE_KERNEL_ROX : PAGE_KERNEL_EXEC; - pgprot_t prot = strict_module_rwx_enabled() ? PAGE_KERNEL : PAGE_KERNEL_EXEC; - unsigned long fallback_start = 0, fallback_end = 0; - unsigned long start, end; - - /* - * BOOK3S_32 and 8xx define MODULES_VADDR for text allocations and - * allow allocating data in the entire vmalloc space - */ -#ifdef MODULES_VADDR - unsigned long limit = (unsigned long)_etext - SZ_32M; - - BUILD_BUG_ON(TASK_SIZE > MODULES_VADDR); - - /* First try within 32M limit from _etext to avoid branch trampolines */ - if (MODULES_VADDR < PAGE_OFFSET && MODULES_END > limit) { - start = limit; - fallback_start = MODULES_VADDR; - fallback_end = MODULES_END; - } else { - start = MODULES_VADDR; - } - - end = MODULES_END; -#else - start = VMALLOC_START; - end = VMALLOC_END; -#endif - - execmem_info = (struct execmem_info){ - .ranges = { - [EXECMEM_DEFAULT] = { - .start = start, - .end = end, - .pgprot = prot, - .alignment = 1, - .fallback_start = fallback_start, - .fallback_end = fallback_end, - }, - [EXECMEM_KPROBES] = { - .start = VMALLOC_START, - .end = VMALLOC_END, - .pgprot = kprobes_prot, - .alignment = 1, - }, - [EXECMEM_MODULE_DATA] = { - .start = VMALLOC_START, - .end = VMALLOC_END, - .pgprot = PAGE_KERNEL, - .alignment = 1, - }, - }, - }; - - return &execmem_info; -} diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c index 3a440004b97d..5de62a3c1d4b 100644 --- a/arch/powerpc/mm/mem.c +++ b/arch/powerpc/mm/mem.c @@ -16,6 +16,7 @@ #include #include #include +#include #include #include @@ -406,3 +407,66 @@ int devmem_is_allowed(unsigned long pfn) * the EHEA driver. Drop this when drivers/net/ethernet/ibm/ehea is removed. */ EXPORT_SYMBOL_GPL(walk_system_ram_range); + +#ifdef CONFIG_EXECMEM +static struct execmem_info execmem_info __ro_after_init; + +struct execmem_info __init *execmem_arch_setup(void) +{ + pgprot_t kprobes_prot = strict_module_rwx_enabled() ? PAGE_KERNEL_ROX : PAGE_KERNEL_EXEC; + pgprot_t prot = strict_module_rwx_enabled() ? 
PAGE_KERNEL : PAGE_KERNEL_EXEC; + unsigned long fallback_start = 0, fallback_end = 0; + unsigned long start, end; + + /* + * BOOK3S_32 and 8xx define MODULES_VADDR for text allocations and + * allow allocating data in the entire vmalloc space + */ +#ifdef MODULES_VADDR + unsigned long limit = (unsigned long)_etext - SZ_32M; + + BUILD_BUG_ON(TASK_SIZE > MODULES_VADDR); + + /* First try within 32M limit from _etext to avoid branch trampolines */ + if (MODULES_VADDR < PAGE_OFFSET && MODULES_END > limit) { + start = limit; + fallback_start = MODULES_VADDR; + fallback_end = MODULES_END; + } else { + start = MODULES_VADDR; + } + + end = MODULES_END; +#else + start = VMALLOC_START; + end = VMALLOC_END; +#endif + + execmem_info = (struct execmem_info){ + .ranges = { + [EXECMEM_DEFAULT] = { + .start = start, + .end = end, + .pgprot = prot, + .alignment = 1, + .fallback_start = fallback_start, + .fallback_end = fallback_end, + }, + [EXECMEM_KPROBES] = { + .start = VMALLOC_START, + .end = VMALLOC_END, + .pgprot = kprobes_prot, + .alignment = 1, + }, + [EXECMEM_MODULE_DATA] = { + .start = VMALLOC_START, + .end = VMALLOC_END, + .pgprot = PAGE_KERNEL, + .alignment = 1, + }, + }, + }; + + return &execmem_info; +} +#endif /* CONFIG_EXECMEM */ diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c index 0e6415f00fca..906f9a3a5d65 100644 --- a/arch/riscv/kernel/module.c +++ b/arch/riscv/kernel/module.c @@ -11,10 +11,8 @@ #include #include #include -#include #include #include -#include #include #include @@ -906,38 +904,6 @@ int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab, return 0; } -#ifdef CONFIG_MMU -static struct execmem_info execmem_info __ro_after_init; - -struct execmem_info __init *execmem_arch_setup(void) -{ - execmem_info = (struct execmem_info){ - .ranges = { - [EXECMEM_DEFAULT] = { - .start = MODULES_VADDR, - .end = MODULES_END, - .pgprot = PAGE_KERNEL, - .alignment = 1, - }, - [EXECMEM_KPROBES] = { - .start = VMALLOC_START, - .end = VMALLOC_END, - .pgprot = PAGE_KERNEL_READ_EXEC, - .alignment = 1, - }, - [EXECMEM_BPF] = { - .start = BPF_JIT_REGION_START, - .end = BPF_JIT_REGION_END, - .pgprot = PAGE_KERNEL, - .alignment = PAGE_SIZE, - }, - }, - }; - - return &execmem_info; -} -#endif - int module_finalize(const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs, struct module *me) diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c index fe8e159394d8..a8290609554f 100644 --- a/arch/riscv/mm/init.c +++ b/arch/riscv/mm/init.c @@ -24,6 +24,7 @@ #include #endif #include +#include #include #include @@ -1481,3 +1482,37 @@ void __init pgtable_cache_init(void) preallocate_pgd_pages_range(MODULES_VADDR, MODULES_END, "bpf/modules"); } #endif + +#ifdef CONFIG_EXECMEM +#ifdef CONFIG_MMU +static struct execmem_info execmem_info __ro_after_init; + +struct execmem_info __init *execmem_arch_setup(void) +{ + execmem_info = (struct execmem_info){ + .ranges = { + [EXECMEM_DEFAULT] = { + .start = MODULES_VADDR, + .end = MODULES_END, + .pgprot = PAGE_KERNEL, + .alignment = 1, + }, + [EXECMEM_KPROBES] = { + .start = VMALLOC_START, + .end = VMALLOC_END, + .pgprot = PAGE_KERNEL_READ_EXEC, + .alignment = 1, + }, + [EXECMEM_BPF] = { + .start = BPF_JIT_REGION_START, + .end = BPF_JIT_REGION_END, + .pgprot = PAGE_KERNEL, + .alignment = PAGE_SIZE, + }, + }, + }; + + return &execmem_info; +} +#endif /* CONFIG_MMU */ +#endif /* CONFIG_EXECMEM */ diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c index 7fee64fdc1bb..91e207b50394 100644 --- a/arch/s390/kernel/module.c +++ 
b/arch/s390/kernel/module.c @@ -37,33 +37,6 @@ #define PLT_ENTRY_SIZE 22 -static struct execmem_info execmem_info __ro_after_init; - -struct execmem_info __init *execmem_arch_setup(void) -{ - unsigned long module_load_offset = 0; - unsigned long start; - - if (kaslr_enabled()) - module_load_offset = get_random_u32_inclusive(1, 1024) * PAGE_SIZE; - - start = MODULES_VADDR + module_load_offset; - - execmem_info = (struct execmem_info){ - .ranges = { - [EXECMEM_DEFAULT] = { - .flags = EXECMEM_KASAN_SHADOW, - .start = start, - .end = MODULES_END, - .pgprot = PAGE_KERNEL, - .alignment = MODULE_ALIGN, - }, - }, - }; - - return &execmem_info; -} - #ifdef CONFIG_FUNCTION_TRACER void module_arch_cleanup(struct module *mod) { diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c index f6391442c0c2..e769d2726f4e 100644 --- a/arch/s390/mm/init.c +++ b/arch/s390/mm/init.c @@ -49,6 +49,7 @@ #include #include #include +#include pgd_t swapper_pg_dir[PTRS_PER_PGD] __section(".bss..swapper_pg_dir"); pgd_t invalid_pg_dir[PTRS_PER_PGD] __section(".bss..invalid_pg_dir"); @@ -302,3 +303,32 @@ void arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap) vmem_remove_mapping(start, size); } #endif /* CONFIG_MEMORY_HOTPLUG */ + +#ifdef CONFIG_EXECMEM +static struct execmem_info execmem_info __ro_after_init; + +struct execmem_info __init *execmem_arch_setup(void) +{ + unsigned long module_load_offset = 0; + unsigned long start; + + if (kaslr_enabled()) + module_load_offset = get_random_u32_inclusive(1, 1024) * PAGE_SIZE; + + start = MODULES_VADDR + module_load_offset; + + execmem_info = (struct execmem_info){ + .ranges = { + [EXECMEM_DEFAULT] = { + .flags = EXECMEM_KASAN_SHADOW, + .start = start, + .end = MODULES_END, + .pgprot = PAGE_KERNEL, + .alignment = MODULE_ALIGN, + }, + }, + }; + + return &execmem_info; +} +#endif /* CONFIG_EXECMEM */ diff --git a/arch/sparc/kernel/module.c b/arch/sparc/kernel/module.c index 8b7ee45defc3..b8c51cc23d96 100644 --- a/arch/sparc/kernel/module.c +++ b/arch/sparc/kernel/module.c @@ -14,7 +14,6 @@ #include #include #include -#include #include #include @@ -22,24 +21,6 @@ #include "entry.h" -static struct execmem_info execmem_info __ro_after_init; - -struct execmem_info __init *execmem_arch_setup(void) -{ - execmem_info = (struct execmem_info){ - .ranges = { - [EXECMEM_DEFAULT] = { - .start = MODULES_VADDR, - .end = MODULES_END, - .pgprot = PAGE_KERNEL, - .alignment = 1, - }, - }, - }; - - return &execmem_info; -} - /* Make generic code ignore STT_REGISTER dummy undefined symbols. 
*/ int module_frob_arch_sections(Elf_Ehdr *hdr, Elf_Shdr *sechdrs, diff --git a/arch/sparc/mm/Makefile b/arch/sparc/mm/Makefile index 809d993f6d88..2d1752108d77 100644 --- a/arch/sparc/mm/Makefile +++ b/arch/sparc/mm/Makefile @@ -14,3 +14,5 @@ obj-$(CONFIG_SPARC32) += leon_mm.o # Only used by sparc64 obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o + +obj-$(CONFIG_EXECMEM) += execmem.o diff --git a/arch/sparc/mm/execmem.c b/arch/sparc/mm/execmem.c new file mode 100644 index 000000000000..0fac97dd5728 --- /dev/null +++ b/arch/sparc/mm/execmem.c @@ -0,0 +1,21 @@ +// SPDX-License-Identifier: GPL-2.0 +#include +#include + +static struct execmem_info execmem_info __ro_after_init; + +struct execmem_info __init *execmem_arch_setup(void) +{ + execmem_info = (struct execmem_info){ + .ranges = { + [EXECMEM_DEFAULT] = { + .start = MODULES_VADDR, + .end = MODULES_END, + .pgprot = PAGE_KERNEL, + .alignment = 1, + }, + }, + }; + + return &execmem_info; +} diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c index 45b1a7c03379..837450b6e882 100644 --- a/arch/x86/kernel/module.c +++ b/arch/x86/kernel/module.c @@ -19,7 +19,6 @@ #include #include #include -#include #include #include @@ -37,32 +36,6 @@ do { \ } while (0) #endif -static struct execmem_info execmem_info __ro_after_init; - -struct execmem_info __init *execmem_arch_setup(void) -{ - unsigned long start, offset = 0; - - if (kaslr_enabled()) - offset = get_random_u32_inclusive(1, 1024) * PAGE_SIZE; - - start = MODULES_VADDR + offset; - - execmem_info = (struct execmem_info){ - .ranges = { - [EXECMEM_DEFAULT] = { - .flags = EXECMEM_KASAN_SHADOW, - .start = start, - .end = MODULES_END, - .pgprot = PAGE_KERNEL, - .alignment = MODULE_ALIGN, - }, - }, - }; - - return &execmem_info; -} - #ifdef CONFIG_X86_32 int apply_relocate(Elf32_Shdr *sechdrs, const char *strtab, diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c index 679893ea5e68..be4fee17b717 100644 --- a/arch/x86/mm/init.c +++ b/arch/x86/mm/init.c @@ -7,6 +7,7 @@ #include #include #include +#include #include #include @@ -1099,3 +1100,31 @@ unsigned long arch_max_swapfile_size(void) return pages; } #endif + +#ifdef CONFIG_EXECMEM +static struct execmem_info execmem_info __ro_after_init; + +struct execmem_info __init *execmem_arch_setup(void) +{ + unsigned long start, offset = 0; + + if (kaslr_enabled()) + offset = get_random_u32_inclusive(1, 1024) * PAGE_SIZE; + + start = MODULES_VADDR + offset; + + execmem_info = (struct execmem_info){ + .ranges = { + [EXECMEM_DEFAULT] = { + .flags = EXECMEM_KASAN_SHADOW, + .start = start, + .end = MODULES_END, + .pgprot = PAGE_KERNEL, + .alignment = MODULE_ALIGN, + }, + }, + }; + + return &execmem_info; +} +#endif /* CONFIG_EXECMEM */ From patchwork Sun May 5 16:06:25 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 13654585 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 82CB8C04FFE for ; Sun, 5 May 2024 17:15:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: 
List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=soTKBGSQwDkv8tEOPTfNZ4f1DBJoDGrjWG5h8WInwFA=; b=Pvi0a9OCdVLJ7D o1kCBMnHXJWk2DsRgomoBitzwNzMHzHP6gyXfYlJ4f0Hj56+w3yaeXVHvBwZYmzy+//yIslxHuDMi bVNhmDsd8LFjDZC718PNIvRXpr9aQ9TkAaNNQVdaaXSvm6spD8wVElpHZ5HKG2skRJuDTxK3jM0n5 hV/paL4Gv5kSJnKELZd0FAa9XZRtAKhNv+xeXP+khv+vbBRuGibfI8Eu0yBtiXWUXqGI9UhzwsTTy acn3DoMXTGpnB1ZSJAmcoM5y/w7fieWP9WH/44db8nTaQ11G4EUyxWS98F0mPBhQw69AxxA5bcjO0 Uq0Rin99rSynjKvLDKQA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1s3fSQ-00000004tip-0iSL; Sun, 05 May 2024 17:15:26 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1s3eQV-00000004inr-2JWT; Sun, 05 May 2024 16:09:40 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id 5F1AC60CEE; Sun, 5 May 2024 16:09:22 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id BC4ABC113CC; Sun, 5 May 2024 16:09:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1714925362; bh=IjQRyjekukMjpd5h0+fXfebHcomYxPC0ei8vi/yCB1s=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=T7Y2tIoI+ouDiNcQpQ86aQxIzvrgfLXieAAtowk4KtTkMUCc8jxPAWO18JJ7SD9Xt UGr6EXLLow4ZiiZ9f1+fF0t/CKI95SkQ5wNiCan6darR1D58vauIEK88aQiX0uY+yK fFic9b/6Dr9IMMoO9aumzLI/hH0HrlsLDUsj4bpN4eBOrAuFq5d4d7SBei5FXxjNvb Qcolv2fZ6kgJVW4Pl5ylISFS6jnJLMVovlleGEduKhjiJeQ2hBhy43GrygVJ8E5Xuh cAZFS/Jh3HvrD6rXC8n01vsku8OgU0v1fZ9t/8pvT0jjsUQjs6FrZnrM7QlqVnOESq LuLV4gmANK50Q== From: Mike Rapoport To: linux-kernel@vger.kernel.org Cc: Alexandre Ghiti , Andrew Morton , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , Catalin Marinas , Christophe Leroy , "David S. 
Miller" , Dinh Nguyen , Donald Dutile , Eric Chanudet , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Liviu Dudau , Luis Chamberlain , Mark Rutland , Masami Hiramatsu , Michael Ellerman , Mike Rapoport , Nadav Amit , Palmer Dabbelt , Peter Zijlstra , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?q?=C3=A9?= , Rick Edgecombe , Russell King , Sam Ravnborg , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH RESEND v8 13/16] x86/ftrace: enable dynamic ftrace without CONFIG_MODULES Date: Sun, 5 May 2024 19:06:25 +0300 Message-ID: <20240505160628.2323363-14-rppt@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240505160628.2323363-1-rppt@kernel.org> References: <20240505160628.2323363-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240505_090924_589689_3642231D X-CRM114-Status: GOOD ( 12.30 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" Dynamic ftrace must allocate memory for code and this was impossible without CONFIG_MODULES. With execmem separated from the modules code, execmem_text_alloc() is available regardless of CONFIG_MODULES. Remove dependency of dynamic ftrace on CONFIG_MODULES and make CONFIG_DYNAMIC_FTRACE select CONFIG_EXECMEM in Kconfig. 
Signed-off-by: Mike Rapoport (IBM) --- arch/x86/Kconfig | 1 + arch/x86/kernel/ftrace.c | 10 ---------- 2 files changed, 1 insertion(+), 10 deletions(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 4474bf32d0a4..f2917ccf4fb4 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -34,6 +34,7 @@ config X86_64 select SWIOTLB select ARCH_HAS_ELFCORE_COMPAT select ZONE_DMA32 + select EXECMEM if DYNAMIC_FTRACE config FORCE_DYNAMIC_FTRACE def_bool y diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c index c8ddb7abda7c..8da0e66ca22d 100644 --- a/arch/x86/kernel/ftrace.c +++ b/arch/x86/kernel/ftrace.c @@ -261,8 +261,6 @@ void arch_ftrace_update_code(int command) /* Currently only x86_64 supports dynamic trampolines */ #ifdef CONFIG_X86_64 -#ifdef CONFIG_MODULES -/* Module allocation simplifies allocating memory for code */ static inline void *alloc_tramp(unsigned long size) { return execmem_alloc(EXECMEM_FTRACE, size); @@ -271,14 +269,6 @@ static inline void tramp_free(void *tramp) { execmem_free(tramp); } -#else -/* Trampolines can only be created if modules are supported */ -static inline void *alloc_tramp(unsigned long size) -{ - return NULL; -} -static inline void tramp_free(void *tramp) { } -#endif /* Defined as markers to the end of the ftrace default trampolines */ extern void ftrace_regs_caller_end(void); From patchwork Sun May 5 16:06:26 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 13654586 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 9A76EC4345F for ; Sun, 5 May 2024 17:15:38 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=MPI3mcy/a2PAMEAxP8rhhugVID+a0hw1Awc6NlFxvCk=; b=BKuWg8WBDMRT49 yYi/4SYDVdd2vqOmDLTU2uEE5GbNZ3TARFBq0bxLiXzbXugxoi96mswx69lRl1dIWcDXAEC7hfFwz 7h5UkGqZUGSbNVaNkgGTqs9fyNrP1ZdI2G8yBc+l1SJAf5DOjAliG5dcXu2fp9s5EWgGj/iVsoAjO 78MoBzuSIsh3n/ktFBBsPrpQfLztUi6KLm6RGKJmEn7hK/JMsMfDY0oQZk6fRHoqFo000XZyADJuT paH4F4lWypbn7Gw2Q20m8GNYI7PTmpAcrOPukybSayh8zz6K4Tit77QabAxt0YcfcP+SUleUe3vXt uQBoyujS1zx2u6J4WJ1A==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1s3fSR-00000004tju-2gNj; Sun, 05 May 2024 17:15:27 +0000 Received: from sin.source.kernel.org ([145.40.73.55]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1s3eQj-00000004iyB-0Mkp; Sun, 05 May 2024 16:09:53 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sin.source.kernel.org (Postfix) with ESMTP id 8EEB5CE0AB3; Sun, 5 May 2024 16:09:34 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8A6A0C4DDE1; Sun, 5 May 2024 16:09:22 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1714925373; 
bh=dj8Jeem56mTOW4r4fQ6hbW3YTYti2N5hSv58QgNFPjs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=iGgFisiAEFSOr/0NCt1B6/ofJq6Y8eCTFiY7pWI9j4hTDzXZ6ABNwSpG76WCIUhcT th66rOSuWjs7gukW+VtRqy7le5COM2JbDmDXcMGGGmHwf7ZvSC66EEsfujIm7AOXyg k38kTuIo+MCs9Grb4mPndP5TB041EVmntdGbWC2MuJwyToWGtfpujdCPayDRUPDIJN HdVZ99JXyxdKwXN46hvq/r3eCJ7bQf/vEAL0OSAzMW+PjCBy8Dj/4Cb/+uB5v1F+1G KX2YObmfWD4r1+JmPC5Sblz7icAwe6/z7uxqhxFVXupHuqAjz2nIKTh3WUW+H0pvnN trj9hNJzYmw5A== From: Mike Rapoport To: linux-kernel@vger.kernel.org Cc: Alexandre Ghiti , Andrew Morton , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , Catalin Marinas , Christophe Leroy , "David S. Miller" , Dinh Nguyen , Donald Dutile , Eric Chanudet , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Liviu Dudau , Luis Chamberlain , Mark Rutland , Masami Hiramatsu , Michael Ellerman , Mike Rapoport , Nadav Amit , Palmer Dabbelt , Peter Zijlstra , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?q?=C3=A9?= , Rick Edgecombe , Russell King , Sam Ravnborg , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH RESEND v8 14/16] powerpc: use CONFIG_EXECMEM instead of CONFIG_MODULES where appropriate Date: Sun, 5 May 2024 19:06:26 +0300 Message-ID: <20240505160628.2323363-15-rppt@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240505160628.2323363-1-rppt@kernel.org> References: <20240505160628.2323363-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240505_090937_605780_E8468155 X-CRM114-Status: GOOD ( 16.05 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" There are places where CONFIG_MODULES guards the code that depends on memory allocation being done with module_alloc(). Replace CONFIG_MODULES with CONFIG_EXECMEM in such places. 
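
The replacement is mechanical; for example the code-patching helper, taken from the hunk later in this patch, now keys off CONFIG_EXECMEM, because what matters is whether the address may come from a vmalloc-backed execmem allocation, not whether module loading is built in.

static unsigned long get_patch_pfn(void *addr)
{
	/* execmem (and module) text is vmalloc-backed */
	if (IS_ENABLED(CONFIG_EXECMEM) && is_vmalloc_or_module_addr(addr))
		return vmalloc_to_pfn(addr);
	else
		return __pa_symbol(addr) >> PAGE_SHIFT;
}
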
Signed-off-by: Mike Rapoport (IBM) --- arch/powerpc/Kconfig | 2 +- arch/powerpc/include/asm/kasan.h | 2 +- arch/powerpc/kernel/head_8xx.S | 4 ++-- arch/powerpc/kernel/head_book3s_32.S | 6 +++--- arch/powerpc/lib/code-patching.c | 2 +- arch/powerpc/mm/book3s32/mmu.c | 2 +- 6 files changed, 9 insertions(+), 9 deletions(-) diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig index 1c4be3373686..2e586733a464 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -285,7 +285,7 @@ config PPC select IOMMU_HELPER if PPC64 select IRQ_DOMAIN select IRQ_FORCED_THREADING - select KASAN_VMALLOC if KASAN && MODULES + select KASAN_VMALLOC if KASAN && EXECMEM select LOCK_MM_AND_FIND_VMA select MMU_GATHER_PAGE_SIZE select MMU_GATHER_RCU_TABLE_FREE diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h index 365d2720097c..b5bbb94c51f6 100644 --- a/arch/powerpc/include/asm/kasan.h +++ b/arch/powerpc/include/asm/kasan.h @@ -19,7 +19,7 @@ #define KASAN_SHADOW_SCALE_SHIFT 3 -#if defined(CONFIG_MODULES) && defined(CONFIG_PPC32) +#if defined(CONFIG_EXECMEM) && defined(CONFIG_PPC32) #define KASAN_KERN_START ALIGN_DOWN(PAGE_OFFSET - SZ_256M, SZ_256M) #else #define KASAN_KERN_START PAGE_OFFSET diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S index 647b0b445e89..edc479a7c2bc 100644 --- a/arch/powerpc/kernel/head_8xx.S +++ b/arch/powerpc/kernel/head_8xx.S @@ -199,12 +199,12 @@ instruction_counter: mfspr r10, SPRN_SRR0 /* Get effective address of fault */ INVALIDATE_ADJACENT_PAGES_CPU15(r10, r11) mtspr SPRN_MD_EPN, r10 -#ifdef CONFIG_MODULES +#ifdef CONFIG_EXECMEM mfcr r11 compare_to_kernel_boundary r10, r10 #endif mfspr r10, SPRN_M_TWB /* Get level 1 table */ -#ifdef CONFIG_MODULES +#ifdef CONFIG_EXECMEM blt+ 3f rlwinm r10, r10, 0, 20, 31 oris r10, r10, (swapper_pg_dir - PAGE_OFFSET)@ha diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S index c1d89764dd22..57196883a00e 100644 --- a/arch/powerpc/kernel/head_book3s_32.S +++ b/arch/powerpc/kernel/head_book3s_32.S @@ -419,14 +419,14 @@ InstructionTLBMiss: */ /* Get PTE (linux-style) and check access */ mfspr r3,SPRN_IMISS -#ifdef CONFIG_MODULES +#ifdef CONFIG_EXECMEM lis r1, TASK_SIZE@h /* check if kernel address */ cmplw 0,r1,r3 #endif mfspr r2, SPRN_SDR1 li r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC rlwinm r2, r2, 28, 0xfffff000 -#ifdef CONFIG_MODULES +#ifdef CONFIG_EXECMEM li r0, 3 bgt- 112f lis r2, (swapper_pg_dir - PAGE_OFFSET)@ha /* if kernel address, use */ @@ -442,7 +442,7 @@ InstructionTLBMiss: andc. r1,r1,r2 /* check access & ~permission */ bne- InstructionAddressInvalid /* return if access not permitted */ /* Convert linux-style PTE to low word of PPC-style PTE */ -#ifdef CONFIG_MODULES +#ifdef CONFIG_EXECMEM rlwimi r2, r0, 0, 31, 31 /* userspace ? 
-> PP lsb */ #endif ori r1, r1, 0xe06 /* clear out reserved bits */ diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c index c6ab46156cda..7af791446ddf 100644 --- a/arch/powerpc/lib/code-patching.c +++ b/arch/powerpc/lib/code-patching.c @@ -225,7 +225,7 @@ void __init poking_init(void) static unsigned long get_patch_pfn(void *addr) { - if (IS_ENABLED(CONFIG_MODULES) && is_vmalloc_or_module_addr(addr)) + if (IS_ENABLED(CONFIG_EXECMEM) && is_vmalloc_or_module_addr(addr)) return vmalloc_to_pfn(addr); else return __pa_symbol(addr) >> PAGE_SHIFT; diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c index 100f999871bc..625fe7d08e06 100644 --- a/arch/powerpc/mm/book3s32/mmu.c +++ b/arch/powerpc/mm/book3s32/mmu.c @@ -184,7 +184,7 @@ unsigned long __init mmu_mapin_ram(unsigned long base, unsigned long top) static bool is_module_segment(unsigned long addr) { - if (!IS_ENABLED(CONFIG_MODULES)) + if (!IS_ENABLED(CONFIG_EXECMEM)) return false; if (addr < ALIGN_DOWN(MODULES_VADDR, SZ_256M)) return false; From patchwork Sun May 5 16:06:27 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 13654587 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 7516FC04FFE for ; Sun, 5 May 2024 17:15:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=L5TSMxcWfXn2V8m/GcJMT0oj/KDTOQuWzNXEnPZE33o=; b=Hlq+ItJ3ofaD4k GX0zlCFLUE4IKUe4/x28eEaL6ETMxztAP61BRR757mHJ4igABt4qhBiNzdMMC3P3kfbjW2PJgUR9o 1bAtc6EFf9SkmtzYxPCVL1E/XXG2cxj40co5h6CbQo22hmACZuDhetMZFHwhl0cDY0a7L9LoAuq+m oWWXSLABmOzqm2iyKqY7oiBChmpCcNOYFqE/L5v52snH2w40PM0H4tAhK+IWQv/sqO5UVMQIPBmD0 fx5LawrQ1WTiHzwvKK4iqUC3CvBz6aY1OdYD6E8Vo+SjDhUHdGsLFvwKYGxttZl7S2gJFuU1sW0x/ XHgNersiafPAsd0vkc5A==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1s3fSS-00000004tkh-3cCF; Sun, 05 May 2024 17:15:28 +0000 Received: from sin.source.kernel.org ([145.40.73.55]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1s3eQu-00000004j7A-2ssc; Sun, 05 May 2024 16:10:03 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sin.source.kernel.org (Postfix) with ESMTP id 5E337CE0AB6; Sun, 5 May 2024 16:09:46 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5EA5FC113CC; Sun, 5 May 2024 16:09:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1714925385; bh=n6mgI0H/39mvE4gYkoCj3q2CT30/D9BQcnyPUkYmzdE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=QCCxshquMQZIYmLR9BC2goOIG2WvDcQZSg4DMYmv4seH5u50FAyr5sSSu2cZTrq1k 1aq+DM7MtvC7uYy0L8lK7PUeKUGNcYzI26AhgX3k+r02BvoRIp85vYxuQwIqVSaqj2 hLeSFjrZkntdHxjj1YiwU+8/c+M2CdO9FTQ83Tem8TFxYi+7kYUCfmvU8yl0uDpd2x 
m78cxpLmGVeNZ571v7raPKrnKA0wMo/A7REebj23qa4T/Fhgzr+SGtEZsj8TQEZkpT W1OzDnAhQ8QtznLGNpOdi4R9Un20Sv+DQV1jy2eugEuIIQ+7oEtAwZhBh0k+TDvo2C r8YK/n2WwOQHw== From: Mike Rapoport To: linux-kernel@vger.kernel.org Cc: Alexandre Ghiti , Andrew Morton , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , Catalin Marinas , Christophe Leroy , "David S. Miller" , Dinh Nguyen , Donald Dutile , Eric Chanudet , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Liviu Dudau , Luis Chamberlain , Mark Rutland , Masami Hiramatsu , Michael Ellerman , Mike Rapoport , Nadav Amit , Palmer Dabbelt , Peter Zijlstra , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?q?=C3=A9?= , Rick Edgecombe , Russell King , Sam Ravnborg , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH RESEND v8 15/16] kprobes: remove dependency on CONFIG_MODULES Date: Sun, 5 May 2024 19:06:27 +0300 Message-ID: <20240505160628.2323363-16-rppt@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240505160628.2323363-1-rppt@kernel.org> References: <20240505160628.2323363-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240505_090949_324595_9DC38C08 X-CRM114-Status: GOOD ( 21.37 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" kprobes depended on CONFIG_MODULES because it has to allocate memory for code. Since code allocations are now implemented with execmem, kprobes can be enabled in non-modular kernels. Add #ifdef CONFIG_MODULE guards for the code dealing with kprobes inside modules, make CONFIG_KPROBES select CONFIG_EXECMEM and drop the dependency of CONFIG_KPROBES on CONFIG_MODULES. 
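
The shape of the change is a stub pattern, condensed here from the hunks below: module-aware pieces keep their real implementation under CONFIG_MODULES and gain a trivial fallback otherwise, so the core kprobes code can keep calling them unconditionally.

#ifdef CONFIG_MODULES
static int kprobe_register_module_notifier(void)
{
	return register_module_notifier(&kprobe_module_nb);
}
#else
static int kprobe_register_module_notifier(void)
{
	return 0;
}
#endif /* CONFIG_MODULES */
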
Signed-off-by: Mike Rapoport (IBM) Acked-by: Masami Hiramatsu (Google) --- arch/Kconfig | 2 +- include/linux/module.h | 9 ++++++ kernel/kprobes.c | 55 +++++++++++++++++++++++-------------- kernel/trace/trace_kprobe.c | 20 +++++++++++++- 4 files changed, 63 insertions(+), 23 deletions(-) diff --git a/arch/Kconfig b/arch/Kconfig index 4fd0daa54e6c..caa459964f09 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -52,9 +52,9 @@ config GENERIC_ENTRY config KPROBES bool "Kprobes" - depends on MODULES depends on HAVE_KPROBES select KALLSYMS + select EXECMEM select TASKS_RCU if PREEMPTION help Kprobes allows you to trap at almost any kernel address and diff --git a/include/linux/module.h b/include/linux/module.h index 1153b0d99a80..ffa1c603163c 100644 --- a/include/linux/module.h +++ b/include/linux/module.h @@ -605,6 +605,11 @@ static inline bool module_is_live(struct module *mod) return mod->state != MODULE_STATE_GOING; } +static inline bool module_is_coming(struct module *mod) +{ + return mod->state == MODULE_STATE_COMING; +} + struct module *__module_text_address(unsigned long addr); struct module *__module_address(unsigned long addr); bool is_module_address(unsigned long addr); @@ -857,6 +862,10 @@ void *dereference_module_function_descriptor(struct module *mod, void *ptr) return ptr; } +static inline bool module_is_coming(struct module *mod) +{ + return false; +} #endif /* CONFIG_MODULES */ #ifdef CONFIG_SYSFS diff --git a/kernel/kprobes.c b/kernel/kprobes.c index ddd7cdc16edf..ca2c6cbd42d2 100644 --- a/kernel/kprobes.c +++ b/kernel/kprobes.c @@ -1588,7 +1588,7 @@ static int check_kprobe_address_safe(struct kprobe *p, } /* Get module refcount and reject __init functions for loaded modules. */ - if (*probed_mod) { + if (IS_ENABLED(CONFIG_MODULES) && *probed_mod) { /* * We must hold a refcount of the probed module while updating * its code to prohibit unexpected unloading. @@ -1603,12 +1603,13 @@ static int check_kprobe_address_safe(struct kprobe *p, * kprobes in there. */ if (within_module_init((unsigned long)p->addr, *probed_mod) && - (*probed_mod)->state != MODULE_STATE_COMING) { + !module_is_coming(*probed_mod)) { module_put(*probed_mod); *probed_mod = NULL; ret = -ENOENT; } } + out: preempt_enable(); jump_label_unlock(); @@ -2488,24 +2489,6 @@ int kprobe_add_area_blacklist(unsigned long start, unsigned long end) return 0; } -/* Remove all symbols in given area from kprobe blacklist */ -static void kprobe_remove_area_blacklist(unsigned long start, unsigned long end) -{ - struct kprobe_blacklist_entry *ent, *n; - - list_for_each_entry_safe(ent, n, &kprobe_blacklist, list) { - if (ent->start_addr < start || ent->start_addr >= end) - continue; - list_del(&ent->list); - kfree(ent); - } -} - -static void kprobe_remove_ksym_blacklist(unsigned long entry) -{ - kprobe_remove_area_blacklist(entry, entry + 1); -} - int __weak arch_kprobe_get_kallsym(unsigned int *symnum, unsigned long *value, char *type, char *sym) { @@ -2570,6 +2553,25 @@ static int __init populate_kprobe_blacklist(unsigned long *start, return ret ? 
@@ -2570,6 +2553,25 @@ static int __init populate_kprobe_blacklist(unsigned long *start,
 	return ret ? : arch_populate_kprobe_blacklist();
 }
 
+#ifdef CONFIG_MODULES
+/* Remove all symbols in given area from kprobe blacklist */
+static void kprobe_remove_area_blacklist(unsigned long start, unsigned long end)
+{
+	struct kprobe_blacklist_entry *ent, *n;
+
+	list_for_each_entry_safe(ent, n, &kprobe_blacklist, list) {
+		if (ent->start_addr < start || ent->start_addr >= end)
+			continue;
+		list_del(&ent->list);
+		kfree(ent);
+	}
+}
+
+static void kprobe_remove_ksym_blacklist(unsigned long entry)
+{
+	kprobe_remove_area_blacklist(entry, entry + 1);
+}
+
 static void add_module_kprobe_blacklist(struct module *mod)
 {
 	unsigned long start, end;
@@ -2672,6 +2674,17 @@ static struct notifier_block kprobe_module_nb = {
 	.priority = 0
 };
 
+static int kprobe_register_module_notifier(void)
+{
+	return register_module_notifier(&kprobe_module_nb);
+}
+#else
+static int kprobe_register_module_notifier(void)
+{
+	return 0;
+}
+#endif /* CONFIG_MODULES */
+
 void kprobe_free_init_mem(void)
 {
 	void *start = (void *)(&__init_begin);
@@ -2731,7 +2744,7 @@ static int __init init_kprobes(void)
 	if (!err)
 		err = register_die_notifier(&kprobe_exceptions_nb);
 	if (!err)
-		err = register_module_notifier(&kprobe_module_nb);
+		err = kprobe_register_module_notifier();
 
 	kprobes_initialized = (err == 0);
 	kprobe_sysctls_init();
diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index 14099cc17fc9..2cb2a3951b4f 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -111,6 +111,7 @@ static nokprobe_inline bool trace_kprobe_within_module(struct trace_kprobe *tk,
 	return strncmp(module_name(mod), name, len) == 0 && name[len] == ':';
 }
 
+#ifdef CONFIG_MODULES
 static nokprobe_inline bool trace_kprobe_module_exist(struct trace_kprobe *tk)
 {
 	char *p;
@@ -129,6 +130,12 @@ static nokprobe_inline bool trace_kprobe_module_exist(struct trace_kprobe *tk)
 
 	return ret;
 }
+#else
+static inline bool trace_kprobe_module_exist(struct trace_kprobe *tk)
+{
+	return false;
+}
+#endif
 
 static bool trace_kprobe_is_busy(struct dyn_event *ev)
 {
@@ -670,6 +677,7 @@ static int register_trace_kprobe(struct trace_kprobe *tk)
 	return ret;
 }
 
+#ifdef CONFIG_MODULES
 /* Module notifier call back, checking event on the module */
 static int trace_kprobe_module_callback(struct notifier_block *nb,
 				       unsigned long val, void *data)
@@ -704,6 +712,16 @@ static struct notifier_block trace_kprobe_module_nb = {
 	.notifier_call = trace_kprobe_module_callback,
 	.priority = 1	/* Invoked after kprobe module callback */
 };
+static int trace_kprobe_register_module_notifier(void)
+{
+	return register_module_notifier(&trace_kprobe_module_nb);
+}
+#else
+static int trace_kprobe_register_module_notifier(void)
+{
+	return 0;
+}
+#endif /* CONFIG_MODULES */
 
 static int count_symbols(void *data, unsigned long unused)
 {
@@ -1933,7 +1951,7 @@ static __init int init_kprobe_trace_early(void)
 	if (ret)
 		return ret;
 
-	if (register_module_notifier(&trace_kprobe_module_nb))
+	if (trace_kprobe_register_module_notifier())
 		return -EINVAL;
 
 	return 0;

From patchwork Sun May 5 16:06:28 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13654543
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Cc: Alexandre Ghiti, Andrew Morton, Björn Töpel, Catalin Marinas, Christophe Leroy,
 "David S. Miller", Dinh Nguyen, Donald Dutile, Eric Chanudet, Heiko Carstens,
 Helge Deller, Huacai Chen, Kent Overstreet, Liviu Dudau, Luis Chamberlain,
 Mark Rutland, Masami Hiramatsu, Michael Ellerman, Mike Rapoport, Nadav Amit,
 Palmer Dabbelt, Peter Zijlstra, Philippe Mathieu-Daudé, Rick Edgecombe,
 Russell King, Sam Ravnborg, Song Liu, Steven Rostedt, Thomas Bogendoerfer,
 Thomas Gleixner, Will Deacon, bpf@vger.kernel.org, linux-arch@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
 linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org,
 linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org,
 x86@kernel.org
Subject: [PATCH RESEND v8 16/16] bpf: remove CONFIG_BPF_JIT dependency on CONFIG_MODULES
Date: Sun, 5 May 2024 19:06:28 +0300
Message-ID: <20240505160628.2323363-17-rppt@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240505160628.2323363-1-rppt@kernel.org>
References: <20240505160628.2323363-1-rppt@kernel.org>
MIME-Version: 1.0

From: "Mike Rapoport (IBM)"

The BPF just-in-time compiler depended on CONFIG_MODULES because it used
module_alloc() to allocate memory for the generated code. Since code
allocations are now implemented with execmem, drop the dependency of
CONFIG_BPF_JIT on CONFIG_MODULES and make it select CONFIG_EXECMEM.

Suggested-by: Björn Töpel
Signed-off-by: Mike Rapoport (IBM)
Tested-by: Klara Modin
---
 kernel/bpf/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/bpf/Kconfig b/kernel/bpf/Kconfig
index bc25f5098a25..f999e4e0b344 100644
--- a/kernel/bpf/Kconfig
+++ b/kernel/bpf/Kconfig
@@ -43,7 +43,7 @@ config BPF_JIT
 	bool "Enable BPF Just In Time compiler"
 	depends on BPF
 	depends on HAVE_CBPF_JIT || HAVE_EBPF_JIT
-	depends on MODULES
+	select EXECMEM
 	help
 	  BPF programs are normally handled by a BPF interpreter. This option
 	  allows the kernel to generate native code when a program is loaded