From patchwork Sun Jan 26 07:47:30 2025
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13950607
From: Mike Rapoport <rppt@kernel.org>
To: x86@kernel.org
Cc: Andrew Morton, Andy Lutomirski, Anton Ivanov, Borislav Petkov,
    Brendan Higgins, Daniel Gomez, Daniel Thompson, Dave Hansen, David Gow,
    Douglas Anderson, Ingo Molnar, Jason Wessel, Jiri Kosina, Joe Lawrence,
    Johannes Berg, Josh Poimboeuf, "Kirill A. Shutemov", Lorenzo Stoakes,
    Luis Chamberlain, Mark Rutland, Masami Hiramatsu, Mike Rapoport,
    Miroslav Benes, "H. Peter Anvin", Peter Zijlstra, Petr Mladek,
    Petr Pavlu, Rae Moar, Richard Weinberger, Sami Tolvanen, Shuah Khan,
    Song Liu, Steven Rostedt, Thomas Gleixner,
    kgdb-bugreport@lists.sourceforge.net, kunit-dev@googlegroups.com,
    linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-mm@kvack.org, linux-modules@vger.kernel.org,
    linux-trace-kernel@vger.kernel.org, linux-um@lists.infradead.org,
    live-patching@vger.kernel.org
Subject: [PATCH v3 6/9] module: switch to execmem API for remapping as RW and restoring ROX
Date: Sun, 26 Jan 2025 09:47:30 +0200
Message-ID: <20250126074733.1384926-7-rppt@kernel.org>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20250126074733.1384926-1-rppt@kernel.org>
References: <20250126074733.1384926-1-rppt@kernel.org>

From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>

Instead of using a writable copy for module text sections, temporarily
remap the memory allocated from execmem's ROX cache as writable and
restore its ROX permissions after the module is formed.

This will allow removing the nasty games with the writable copy in
alternatives patching on x86.
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
 include/linux/module.h       |  8 +---
 include/linux/moduleloader.h |  4 --
 kernel/module/main.c         | 78 ++++++++++--------------------------
 kernel/module/strict_rwx.c   |  9 +++--
 4 files changed, 27 insertions(+), 72 deletions(-)

diff --git a/include/linux/module.h b/include/linux/module.h
index b3a643435357..6a24e9395cb2 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -367,7 +367,6 @@ enum mod_mem_type {
 
 struct module_memory {
 	void *base;
-	void *rw_copy;
 	bool is_rox;
 	unsigned int size;
 
@@ -769,14 +768,9 @@ static inline bool is_livepatch_module(struct module *mod)
 
 void set_module_sig_enforced(void);
 
-void *__module_writable_address(struct module *mod, void *loc);
-
 static inline void *module_writable_address(struct module *mod, void *loc)
 {
-	if (!IS_ENABLED(CONFIG_ARCH_HAS_EXECMEM_ROX) || !mod ||
-	    mod->state != MODULE_STATE_UNFORMED)
-		return loc;
-	return __module_writable_address(mod, loc);
+	return loc;
 }
 
 #else /* !CONFIG_MODULES... */
diff --git a/include/linux/moduleloader.h b/include/linux/moduleloader.h
index 1f5507ba5a12..e395461d59e5 100644
--- a/include/linux/moduleloader.h
+++ b/include/linux/moduleloader.h
@@ -108,10 +108,6 @@ int module_finalize(const Elf_Ehdr *hdr,
 		    const Elf_Shdr *sechdrs,
 		    struct module *mod);
 
-int module_post_finalize(const Elf_Ehdr *hdr,
-			 const Elf_Shdr *sechdrs,
-			 struct module *mod);
-
 #ifdef CONFIG_MODULES
 void flush_module_init_free_work(void);
 #else
diff --git a/kernel/module/main.c b/kernel/module/main.c
index 5399c182b3cb..4a02503836d7 100644
--- a/kernel/module/main.c
+++ b/kernel/module/main.c
@@ -1221,18 +1221,6 @@ void __weak module_arch_freeing_init(struct module *mod)
 {
 }
 
-void *__module_writable_address(struct module *mod, void *loc)
-{
-	for_class_mod_mem_type(type, text) {
-		struct module_memory *mem = &mod->mem[type];
-
-		if (loc >= mem->base && loc < mem->base + mem->size)
-			return loc + (mem->rw_copy - mem->base);
-	}
-
-	return loc;
-}
-
 static int module_memory_alloc(struct module *mod, enum mod_mem_type type)
 {
 	unsigned int size = PAGE_ALIGN(mod->mem[type].size);
@@ -1250,21 +1238,15 @@ static int module_memory_alloc(struct module *mod, enum mod_mem_type type)
 	if (!ptr)
 		return -ENOMEM;
 
-	mod->mem[type].base = ptr;
-
 	if (execmem_is_rox(execmem_type)) {
-		ptr = vzalloc(size);
+		int err = execmem_make_temp_rw(ptr, size);
 
-		if (!ptr) {
-			execmem_free(mod->mem[type].base);
+		if (err) {
+			execmem_free(ptr);
 			return -ENOMEM;
 		}
 
-		mod->mem[type].rw_copy = ptr;
 		mod->mem[type].is_rox = true;
-	} else {
-		mod->mem[type].rw_copy = mod->mem[type].base;
-		memset(mod->mem[type].base, 0, size);
 	}
 
 	/*
@@ -1280,16 +1262,26 @@ static int module_memory_alloc(struct module *mod, enum mod_mem_type type)
 	 */
 	kmemleak_not_leak(ptr);
 
+	memset(ptr, 0, size);
+	mod->mem[type].base = ptr;
+
 	return 0;
 }
 
+static void module_memory_restore_rox(struct module *mod)
+{
+	for_class_mod_mem_type(type, text) {
+		struct module_memory *mem = &mod->mem[type];
+
+		if (mem->is_rox)
+			execmem_restore_rox(mem->base, mem->size);
+	}
+}
+
 static void module_memory_free(struct module *mod, enum mod_mem_type type)
 {
 	struct module_memory *mem = &mod->mem[type];
 
-	if (mem->is_rox)
-		vfree(mem->rw_copy);
-
 	execmem_free(mem->base);
 }
 
@@ -2561,7 +2553,6 @@ static int move_module(struct module *mod, struct load_info *info)
 	for_each_mod_mem_type(type) {
 		if (!mod->mem[type].size) {
 			mod->mem[type].base = NULL;
-			mod->mem[type].rw_copy = NULL;
 			continue;
 		}
 
@@ -2578,7 +2569,6 @@ static int move_module(struct module *mod, struct load_info *info)
 		void *dest;
 		Elf_Shdr *shdr = &info->sechdrs[i];
 		const char *sname;
-		unsigned long addr;
 
 		if (!(shdr->sh_flags & SHF_ALLOC))
 			continue;
@@ -2599,14 +2589,12 @@ static int move_module(struct module *mod, struct load_info *info)
 				ret = PTR_ERR(dest);
 				goto out_err;
 			}
-			addr = (unsigned long)dest;
 			codetag_section_found = true;
 		} else {
 			enum mod_mem_type type = shdr->sh_entsize >> SH_ENTSIZE_TYPE_SHIFT;
 			unsigned long offset = shdr->sh_entsize & SH_ENTSIZE_OFFSET_MASK;
 
-			addr = (unsigned long)mod->mem[type].base + offset;
-			dest = mod->mem[type].rw_copy + offset;
+			dest = mod->mem[type].base + offset;
 		}
 
 		if (shdr->sh_type != SHT_NOBITS) {
@@ -2629,13 +2617,14 @@ static int move_module(struct module *mod, struct load_info *info)
 		 * users of info can keep taking advantage and using the newly
 		 * minted official memory area.
 		 */
-		shdr->sh_addr = addr;
+		shdr->sh_addr = (unsigned long)dest;
 		pr_debug("\t0x%lx 0x%.8lx %s\n", (long)shdr->sh_addr,
 			 (long)shdr->sh_size, info->secstrings + shdr->sh_name);
 	}
 
 	return 0;
 out_err:
+	module_memory_restore_rox(mod);
 	for (t--; t >= 0; t--)
 		module_memory_free(mod, t);
 	if (codetag_section_found)
@@ -2782,17 +2771,8 @@ int __weak module_finalize(const Elf_Ehdr *hdr,
 	return 0;
 }
 
-int __weak module_post_finalize(const Elf_Ehdr *hdr,
-				const Elf_Shdr *sechdrs,
-				struct module *me)
-{
-	return 0;
-}
-
 static int post_relocation(struct module *mod, const struct load_info *info)
 {
-	int ret;
-
 	/* Sort exception table now relocations are done. */
 	sort_extable(mod->extable, mod->extable + mod->num_exentries);
 
@@ -2804,24 +2784,7 @@ static int post_relocation(struct module *mod, const struct load_info *info)
 	add_kallsyms(mod, info);
 
 	/* Arch-specific module finalizing. */
-	ret = module_finalize(info->hdr, info->sechdrs, mod);
-	if (ret)
-		return ret;
-
-	for_each_mod_mem_type(type) {
-		struct module_memory *mem = &mod->mem[type];
-
-		if (mem->is_rox) {
-			if (!execmem_update_copy(mem->base, mem->rw_copy,
-						 mem->size))
-				return -ENOMEM;
-
-			vfree(mem->rw_copy);
-			mem->rw_copy = NULL;
-		}
-	}
-
-	return module_post_finalize(info->hdr, info->sechdrs, mod);
+	return module_finalize(info->hdr, info->sechdrs, mod);
 }
 
 /* Call module constructors. */
@@ -3417,6 +3380,7 @@ static int load_module(struct load_info *info, const char __user *uargs,
 			       mod->mem[type].size);
 	}
 
+	module_memory_restore_rox(mod);
 	module_deallocate(mod, info);
  free_copy:
 	/*
diff --git a/kernel/module/strict_rwx.c b/kernel/module/strict_rwx.c
index 239e5013359d..ce47b6346f27 100644
--- a/kernel/module/strict_rwx.c
+++ b/kernel/module/strict_rwx.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include
 #include "internal.h"
 
 static int module_set_memory(const struct module *mod, enum mod_mem_type type,
@@ -32,12 +33,12 @@ static int module_set_memory(const struct module *mod, enum mod_mem_type type,
 int module_enable_text_rox(const struct module *mod)
 {
 	for_class_mod_mem_type(type, text) {
+		const struct module_memory *mem = &mod->mem[type];
 		int ret;
 
-		if (mod->mem[type].is_rox)
-			continue;
-
-		if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX))
+		if (mem->is_rox)
+			ret = execmem_restore_rox(mem->base, mem->size);
+		else if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX))
 			ret = module_set_memory(mod, type, set_memory_rox);
 		else
 			ret = module_set_memory(mod, type, set_memory_x);