From: Jessica Yu
To: Torsten Duwe
Date: Mon, 5 Nov 2018 18:57:22 +0100
Subject: [PATCH] arm64/module: use plt section indices for relocations
Message-ID: <20181105175722.removbbpilqxu7tr@redbean>
In-Reply-To: <20181001141652.5478C68BE1@newverein.lst.de>
References: <20181001140910.086E768BC7@newverein.lst.de> <20181001141652.5478C68BE1@newverein.lst.de>
Cc: Arnd Bergmann, Julien Thierry, Catalin Marinas, Ard Biesheuvel,
    Will Deacon, linux-kernel@vger.kernel.org, Steven Rostedt,
    AKASHI Takahiro, Ingo Molnar, Josh Poimboeuf,
    live-patching@vger.kernel.org, linux-arm-kernel@lists.infradead.org

Instead of saving a pointer to the .plt and .init.plt sections to apply
plt-based relocations, save and use their section indices.

The mod->arch.{core,init}.plt pointers were problematic for livepatch
because they pointed within temporary section headers (provided by the
module loader via info->sechdrs) that would be freed after module load.
Since livepatch modules may need to apply relocations post-module-load
(for example, to patch a module that is loaded later), using section
indices to offset into the section headers (instead of accessing them
through a saved pointer) allows livepatch modules on arm64 to pass in
their own copy of the section headers to apply_relocate_add() to apply
delayed relocations.

Signed-off-by: Jessica Yu
---

Note: Addressed Will's comment about the pltsec -> plt_info rename and
removed that change to reduce unnecessary code churn. That is the only
change from the first version. I didn't include the Acks for this
reason, so let me know if this version is OK as well.

 arch/arm64/include/asm/module.h |  8 +++++---
 arch/arm64/kernel/module-plts.c | 36 ++++++++++++++++++++----------------
 arch/arm64/kernel/module.c      |  9 +++++----
 3 files changed, 30 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/include/asm/module.h b/arch/arm64/include/asm/module.h
index 97d0ef12e2ff..f81c1847312d 100644
--- a/arch/arm64/include/asm/module.h
+++ b/arch/arm64/include/asm/module.h
@@ -22,7 +22,7 @@
 
 #ifdef CONFIG_ARM64_MODULE_PLTS
 struct mod_plt_sec {
-	struct elf64_shdr	*plt;
+	int			plt_shndx;
 	int			plt_num_entries;
 	int			plt_max_entries;
 };
@@ -36,10 +36,12 @@ struct mod_arch_specific {
 };
 #endif
 
-u64 module_emit_plt_entry(struct module *mod, void *loc, const Elf64_Rela *rela,
+u64 module_emit_plt_entry(struct module *mod, Elf64_Shdr *sechdrs,
+			  void *loc, const Elf64_Rela *rela,
 			  Elf64_Sym *sym);
 
-u64 module_emit_veneer_for_adrp(struct module *mod, void *loc, u64 val);
+u64 module_emit_veneer_for_adrp(struct module *mod, Elf64_Shdr *sechdrs,
+				void *loc, u64 val);
 
 #ifdef CONFIG_RANDOMIZE_BASE
 extern u64 module_alloc_base;
diff --git a/arch/arm64/kernel/module-plts.c b/arch/arm64/kernel/module-plts.c
index f0690c2ca3e0..c04b0d6abde0 100644
--- a/arch/arm64/kernel/module-plts.c
+++ b/arch/arm64/kernel/module-plts.c
@@ -16,12 +16,13 @@ static bool in_init(const struct module *mod, void *loc)
 	return (u64)loc - (u64)mod->init_layout.base < mod->init_layout.size;
 }
 
-u64 module_emit_plt_entry(struct module *mod, void *loc, const Elf64_Rela *rela,
+u64 module_emit_plt_entry(struct module *mod, Elf64_Shdr *sechdrs,
+			  void *loc, const Elf64_Rela *rela,
 			  Elf64_Sym *sym)
 {
 	struct mod_plt_sec *pltsec = !in_init(mod, loc) ? &mod->arch.core :
							  &mod->arch.init;
-	struct plt_entry *plt = (struct plt_entry *)pltsec->plt->sh_addr;
+	struct plt_entry *plt = (struct plt_entry *)(sechdrs + pltsec->plt_shndx)->sh_addr;
 	int i = pltsec->plt_num_entries;
 	u64 val = sym->st_value + rela->r_addend;
 
@@ -43,11 +44,12 @@ u64 module_emit_plt_entry(struct module *mod, void *loc, const Elf64_Rela *rela,
 }
 
 #ifdef CONFIG_ARM64_ERRATUM_843419
-u64 module_emit_veneer_for_adrp(struct module *mod, void *loc, u64 val)
+u64 module_emit_veneer_for_adrp(struct module *mod, Elf64_Shdr *sechdrs,
+				void *loc, u64 val)
 {
 	struct mod_plt_sec *pltsec = !in_init(mod, loc) ? &mod->arch.core :
 							  &mod->arch.init;
-	struct plt_entry *plt = (struct plt_entry *)pltsec->plt->sh_addr;
+	struct plt_entry *plt = (struct plt_entry *)(sechdrs + pltsec->plt_shndx)->sh_addr;
 	int i = pltsec->plt_num_entries++;
 	u32 mov0, mov1, mov2, br;
 	int rd;
@@ -202,7 +204,7 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
 	unsigned long core_plts = 0;
 	unsigned long init_plts = 0;
 	Elf64_Sym *syms = NULL;
-	Elf_Shdr *tramp = NULL;
+	Elf_Shdr *pltsec, *tramp = NULL;
 	int i;
 
 	/*
@@ -211,9 +213,9 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
 	 */
 	for (i = 0; i < ehdr->e_shnum; i++) {
 		if (!strcmp(secstrings + sechdrs[i].sh_name, ".plt"))
-			mod->arch.core.plt = sechdrs + i;
+			mod->arch.core.plt_shndx = i;
 		else if (!strcmp(secstrings + sechdrs[i].sh_name, ".init.plt"))
-			mod->arch.init.plt = sechdrs + i;
+			mod->arch.init.plt_shndx = i;
 		else if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE) &&
 			 !strcmp(secstrings + sechdrs[i].sh_name,
 				 ".text.ftrace_trampoline"))
@@ -222,7 +224,7 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
 			syms = (Elf64_Sym *)sechdrs[i].sh_addr;
 	}
 
-	if (!mod->arch.core.plt || !mod->arch.init.plt) {
+	if (!mod->arch.core.plt_shndx || !mod->arch.init.plt_shndx) {
 		pr_err("%s: module PLT section(s) missing\n", mod->name);
 		return -ENOEXEC;
 	}
@@ -254,17 +256,19 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
 					    sechdrs[i].sh_info, dstsec);
 	}
 
-	mod->arch.core.plt->sh_type = SHT_NOBITS;
-	mod->arch.core.plt->sh_flags = SHF_EXECINSTR | SHF_ALLOC;
-	mod->arch.core.plt->sh_addralign = L1_CACHE_BYTES;
-	mod->arch.core.plt->sh_size = (core_plts + 1) * sizeof(struct plt_entry);
+	pltsec = sechdrs + mod->arch.core.plt_shndx;
+	pltsec->sh_type = SHT_NOBITS;
+	pltsec->sh_flags = SHF_EXECINSTR | SHF_ALLOC;
+	pltsec->sh_addralign = L1_CACHE_BYTES;
+	pltsec->sh_size = (core_plts + 1) * sizeof(struct plt_entry);
 	mod->arch.core.plt_num_entries = 0;
 	mod->arch.core.plt_max_entries = core_plts;
 
-	mod->arch.init.plt->sh_type = SHT_NOBITS;
-	mod->arch.init.plt->sh_flags = SHF_EXECINSTR | SHF_ALLOC;
-	mod->arch.init.plt->sh_addralign = L1_CACHE_BYTES;
-	mod->arch.init.plt->sh_size = (init_plts + 1) * sizeof(struct plt_entry);
+	pltsec = sechdrs + mod->arch.init.plt_shndx;
+	pltsec->sh_type = SHT_NOBITS;
+	pltsec->sh_flags = SHF_EXECINSTR | SHF_ALLOC;
+	pltsec->sh_addralign = L1_CACHE_BYTES;
+	pltsec->sh_size = (init_plts + 1) * sizeof(struct plt_entry);
 	mod->arch.init.plt_num_entries = 0;
 	mod->arch.init.plt_max_entries = init_plts;
 
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index f0f27aeefb73..c2abe59c6f8f 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -198,7 +198,8 @@ static int reloc_insn_imm(enum aarch64_reloc_op op, __le32 *place, u64 val,
 	return 0;
 }
 
-static int reloc_insn_adrp(struct module *mod, __le32 *place, u64 val)
+static int reloc_insn_adrp(struct module *mod, Elf64_Shdr *sechdrs,
+			   __le32 *place, u64 val)
 {
 	u32 insn;
 
@@ -215,7 +216,7 @@ static int reloc_insn_adrp(struct module *mod, __le32 *place, u64 val)
 		insn &= ~BIT(31);
 	} else {
 		/* out of range for ADR -> emit a veneer */
-		val = module_emit_veneer_for_adrp(mod, place, val & ~0xfff);
+		val = module_emit_veneer_for_adrp(mod, sechdrs, place, val & ~0xfff);
 		if (!val)
 			return -ENOEXEC;
 		insn = aarch64_insn_gen_branch_imm((u64)place, val,
@@ -368,7 +369,7 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
 		case R_AARCH64_ADR_PREL_PG_HI21_NC:
 			overflow_check = false;
 		case R_AARCH64_ADR_PREL_PG_HI21:
-			ovf = reloc_insn_adrp(me, loc, val);
+			ovf = reloc_insn_adrp(me, sechdrs, loc, val);
 			if (ovf && ovf != -ERANGE)
 				return ovf;
 			break;
@@ -413,7 +414,7 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
 
 		if (IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) &&
 		    ovf == -ERANGE) {
-			val = module_emit_plt_entry(me, loc, &rel[i], sym);
+			val = module_emit_plt_entry(me, sechdrs, loc, &rel[i], sym);
 			if (!val)
 				return -ENOEXEC;
 			ovf = reloc_insn_imm(RELOC_OP_PREL, loc, val, 2,
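
For reference, a rough sketch (not part of the patch) of the kind of caller
this change is meant to enable: a livepatch-style user keeps its own copy of
the module's section headers and hands that copy to apply_relocate_add(),
which after this change only needs mod->arch.{core,init}.plt_shndx to locate
the PLT sections. The wrapper name, its parameters, and the klp_sechdrs copy
are hypothetical; only the apply_relocate_add() signature is taken from the
kernel.

/*
 * Hypothetical illustration only: apply delayed relocations for one
 * relocation section using a privately kept copy of the section headers
 * (klp_sechdrs) instead of the loader's temporary info->sechdrs, which
 * is freed after module load.
 */
static int example_apply_delayed_relocs(struct module *mod,
					Elf64_Shdr *klp_sechdrs, /* caller's own copy */
					const char *strtab,
					unsigned int symindex,
					unsigned int relsec)
{
	/*
	 * With plt_shndx stored in mod->arch.{core,init},
	 * apply_relocate_add() indexes into whichever sechdrs array it is
	 * given, so a copy that outlives module load works here.
	 */
	return apply_relocate_add(klp_sechdrs, strtab, symindex, relsec, mod);
}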