From patchwork Sun May 5 16:06:17 2024
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13654537
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Cc: Alexandre Ghiti, Andrew Morton, Björn Töpel, Catalin Marinas,
	Christophe Leroy, "David S. Miller",
Miller" , Dinh Nguyen , Donald Dutile , Eric Chanudet , Heiko Carstens , Helge Deller , Huacai Chen , Kent Overstreet , Liviu Dudau , Luis Chamberlain , Mark Rutland , Masami Hiramatsu , Michael Ellerman , Mike Rapoport , Nadav Amit , Palmer Dabbelt , Peter Zijlstra , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?q?=C3=A9?= , Rick Edgecombe , Russell King , Sam Ravnborg , Song Liu , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Will Deacon , bpf@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH RESEND v8 05/16] module: make module_memory_{alloc,free} more self-contained Date: Sun, 5 May 2024 19:06:17 +0300 Message-ID: <20240505160628.2323363-6-rppt@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240505160628.2323363-1-rppt@kernel.org> References: <20240505160628.2323363-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240505_090748_933368_03B4CA27 X-CRM114-Status: GOOD ( 18.64 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (IBM)" Move the logic related to the memory allocation and freeing into module_memory_alloc() and module_memory_free(). Signed-off-by: Mike Rapoport (IBM) Reviewed-by: Philippe Mathieu-Daudé Reviewed-by: Masami Hiramatsu (Google) --- kernel/module/main.c | 64 +++++++++++++++++++++++++++----------------- 1 file changed, 39 insertions(+), 25 deletions(-) diff --git a/kernel/module/main.c b/kernel/module/main.c index e1e8a7a9d6c1..5b82b069e0d3 100644 --- a/kernel/module/main.c +++ b/kernel/module/main.c @@ -1203,15 +1203,44 @@ static bool mod_mem_use_vmalloc(enum mod_mem_type type) mod_mem_type_is_core_data(type); } -static void *module_memory_alloc(unsigned int size, enum mod_mem_type type) +static int module_memory_alloc(struct module *mod, enum mod_mem_type type) { + unsigned int size = PAGE_ALIGN(mod->mem[type].size); + void *ptr; + + mod->mem[type].size = size; + if (mod_mem_use_vmalloc(type)) - return vzalloc(size); - return module_alloc(size); + ptr = vmalloc(size); + else + ptr = module_alloc(size); + + if (!ptr) + return -ENOMEM; + + /* + * The pointer to these blocks of memory are stored on the module + * structure and we keep that around so long as the module is + * around. We only free that memory when we unload the module. + * Just mark them as not being a leak then. The .init* ELF + * sections *do* get freed after boot so we *could* treat them + * slightly differently with kmemleak_ignore() and only grey + * them out as they work as typical memory allocations which + * *do* eventually get freed, but let's just keep things simple + * and avoid *any* false positives. 
+	 */
+	kmemleak_not_leak(ptr);
+
+	memset(ptr, 0, size);
+	mod->mem[type].base = ptr;
+
+	return 0;
 }
 
-static void module_memory_free(void *ptr, enum mod_mem_type type)
+static void module_memory_free(struct module *mod, enum mod_mem_type type)
 {
+	void *ptr = mod->mem[type].base;
+
 	if (mod_mem_use_vmalloc(type))
 		vfree(ptr);
 	else
@@ -1229,12 +1258,12 @@ static void free_mod_mem(struct module *mod)
 		/* Free lock-classes; relies on the preceding sync_rcu(). */
 		lockdep_free_key_range(mod_mem->base, mod_mem->size);
 		if (mod_mem->size)
-			module_memory_free(mod_mem->base, type);
+			module_memory_free(mod, type);
 	}
 
 	/* MOD_DATA hosts mod, so free it at last */
 	lockdep_free_key_range(mod->mem[MOD_DATA].base, mod->mem[MOD_DATA].size);
-	module_memory_free(mod->mem[MOD_DATA].base, MOD_DATA);
+	module_memory_free(mod, MOD_DATA);
 }
 
 /* Free a module, remove from lists, etc. */
@@ -2225,7 +2254,6 @@ static int find_module_sections(struct module *mod, struct load_info *info)
 static int move_module(struct module *mod, struct load_info *info)
 {
 	int i;
-	void *ptr;
 	enum mod_mem_type t = 0;
 	int ret = -ENOMEM;
 
@@ -2234,26 +2262,12 @@ static int move_module(struct module *mod, struct load_info *info)
 			mod->mem[type].base = NULL;
 			continue;
 		}
-		mod->mem[type].size = PAGE_ALIGN(mod->mem[type].size);
-		ptr = module_memory_alloc(mod->mem[type].size, type);
-		/*
-		 * The pointer to these blocks of memory are stored on the module
-		 * structure and we keep that around so long as the module is
-		 * around. We only free that memory when we unload the module.
-		 * Just mark them as not being a leak then. The .init* ELF
-		 * sections *do* get freed after boot so we *could* treat them
-		 * slightly differently with kmemleak_ignore() and only grey
-		 * them out as they work as typical memory allocations which
-		 * *do* eventually get freed, but let's just keep things simple
-		 * and avoid *any* false positives.
-		 */
-		kmemleak_not_leak(ptr);
-		if (!ptr) {
+
+		ret = module_memory_alloc(mod, type);
+		if (ret) {
 			t = type;
 			goto out_enomem;
 		}
-		memset(ptr, 0, mod->mem[type].size);
-		mod->mem[type].base = ptr;
 	}
 
 	/* Transfer each section which specifies SHF_ALLOC */
@@ -2296,7 +2310,7 @@ static int move_module(struct module *mod, struct load_info *info)
 	return 0;
 out_enomem:
 	for (t--; t >= 0; t--)
-		module_memory_free(mod->mem[t].base, t);
+		module_memory_free(mod, t);
 	return ret;
 }
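
[Editorial illustration, not part of the patch] For readers skimming the diff, the sketch below is a minimal, self-contained userspace analogue of the calling convention this patch introduces: the alloc helper owns sizing, zeroing and storing the base pointer, the free helper looks the pointer up itself, and callers only check an error code. struct module and struct mod_mem here are simplified stand-ins, and plain malloc()/free() replace the kernel's module_alloc()/vmalloc()/vfree(); none of it is the kernel's real code.

/*
 * Simplified userspace sketch of the new module_memory_{alloc,free}
 * convention.  Types and allocators are illustrative stand-ins only.
 */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct mod_mem { void *base; size_t size; };
struct module  { struct mod_mem mem[1]; };	/* one memory type, for brevity */

static int module_memory_alloc(struct module *mod, int type)
{
	size_t size = mod->mem[type].size;	/* the kernel would PAGE_ALIGN() this */
	void *ptr = malloc(size);		/* stand-in for vmalloc()/module_alloc() */

	if (!ptr)
		return -ENOMEM;

	memset(ptr, 0, size);			/* zero the block, as the patch does */
	mod->mem[type].base = ptr;		/* the helper stores the base itself */
	return 0;
}

static void module_memory_free(struct module *mod, int type)
{
	free(mod->mem[type].base);		/* the helper looks the pointer up itself */
	mod->mem[type].base = NULL;
}

int main(void)
{
	struct module mod = { .mem = { { .base = NULL, .size = 64 } } };

	if (module_memory_alloc(&mod, 0))	/* the caller only checks the error code */
		return 1;

	printf("allocated %zu zeroed bytes at %p\n", mod.mem[0].size, mod.mem[0].base);
	module_memory_free(&mod, 0);
	return 0;
}

The point of the convention is visible in the diff's move_module() hunk: the per-type allocation loop shrinks to an error check, and the ENOMEM unwind path can free by (mod, type) without the caller tracking raw pointers.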