From patchwork Thu Nov 22 08:46:45 2018
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 10693655
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: mark.rutland@arm.com, Ard Biesheuvel, marc.zyngier@arm.com,
    catalin.marinas@arm.com, will.deacon@arm.com, Torsten Duwe, Jessica Yu
Subject: [PATCH 1/2] arm64/insn: add support for emitting ADR/ADRP instructions
Date: Thu, 22 Nov 2018 09:46:45 +0100
Message-Id: <20181122084646.3247-2-ard.biesheuvel@linaro.org>
In-Reply-To: <20181122084646.3247-1-ard.biesheuvel@linaro.org>
References: <20181122084646.3247-1-ard.biesheuvel@linaro.org>

Add support for emitting ADR and ADRP instructions so we can switch
over our PLT generation code in a subsequent patch.
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/insn.h |  8 ++++++
 arch/arm64/kernel/insn.c      | 29 ++++++++++++++++++++
 2 files changed, 37 insertions(+)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index c6802dea6cab..9c01f04db64d 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -261,6 +261,11 @@ enum aarch64_insn_prfm_policy {
        AARCH64_INSN_PRFM_POLICY_STRM,
 };
 
+enum aarch64_insn_adr_type {
+       AARCH64_INSN_ADR_TYPE_ADRP,
+       AARCH64_INSN_ADR_TYPE_ADR,
+};
+
 #define __AARCH64_INSN_FUNCS(abbr, mask, val)  \
 static __always_inline bool aarch64_insn_is_##abbr(u32 code) \
 { return (code & (mask)) == (val); } \
@@ -393,6 +398,9 @@ u32 aarch64_insn_gen_add_sub_imm(enum aarch64_insn_register dst,
                                 enum aarch64_insn_register src,
                                 int imm, enum aarch64_insn_variant variant,
                                 enum aarch64_insn_adsb_type type);
+u32 aarch64_insn_gen_adr(unsigned long pc, unsigned long addr,
+                        enum aarch64_insn_register reg,
+                        enum aarch64_insn_adr_type type);
 u32 aarch64_insn_gen_bitfield(enum aarch64_insn_register dst,
                              enum aarch64_insn_register src,
                              int immr, int imms,
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 2b3413549734..7820a4a688fa 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -1239,6 +1239,35 @@ u32 aarch64_insn_gen_logical_shifted_reg(enum aarch64_insn_register dst,
        return aarch64_insn_encode_immediate(AARCH64_INSN_IMM_6, insn, shift);
 }
 
+u32 aarch64_insn_gen_adr(unsigned long pc, unsigned long addr,
+                        enum aarch64_insn_register reg,
+                        enum aarch64_insn_adr_type type)
+{
+       u32 insn;
+       s32 offset;
+
+       switch (type) {
+       case AARCH64_INSN_ADR_TYPE_ADR:
+               insn = aarch64_insn_get_adr_value();
+               offset = addr - pc;
+               break;
+       case AARCH64_INSN_ADR_TYPE_ADRP:
+               insn = aarch64_insn_get_adrp_value();
+               offset = (addr - ALIGN_DOWN(pc, SZ_4K)) >> 12;
+               break;
+       default:
+               pr_err("%s: unknown adr encoding %d\n", __func__, type);
+               return AARCH64_BREAK_FAULT;
+       }
+
+       if (offset < -SZ_1M || offset >= SZ_1M)
+               return AARCH64_BREAK_FAULT;
+
+       insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RD, insn, reg);
+
+       return aarch64_insn_encode_immediate(AARCH64_INSN_IMM_ADR, insn, offset);
+}
+
 /*
  * Decode the imm field of a branch, and return the byte offset as a
  * signed value (so it can be used when computing a new branch
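[Editor's note: for reference, the ADRP/ADD encoding that the new helper produces
can be sketched in stand-alone user-space C as follows. The addresses, register
number and function names below are invented for illustration, and the bit fields
are restated locally from the ARMv8 ADRP and ADD (immediate) formats rather than
calling the kernel's aarch64_insn_gen_adr().]

#include <stdint.h>
#include <stdio.h>

/* Encode "adrp Xd, <target page>": page-scaled, PC-relative 21-bit immediate. */
static uint32_t encode_adrp(uint64_t pc, uint64_t target, unsigned rd)
{
        int64_t pages = (int64_t)((target >> 12) - (pc >> 12));
        uint32_t immlo = pages & 0x3;            /* bits [30:29] */
        uint32_t immhi = (pages >> 2) & 0x7ffff; /* bits [23:5]  */

        return 0x90000000u | (immlo << 29) | (immhi << 5) | (rd & 0x1f);
}

/* Encode 64-bit "add Xd, Xn, #imm12" with no shift. */
static uint32_t encode_add_imm64(unsigned rd, unsigned rn, uint32_t imm12)
{
        return 0x91000000u | ((imm12 & 0xfff) << 10) | ((rn & 0x1f) << 5) |
               (rd & 0x1f);
}

int main(void)
{
        uint64_t pc     = 0xffff000010001000ULL; /* hypothetical veneer address */
        uint64_t target = 0xffff0000109a2468ULL; /* hypothetical branch target  */

        printf("adrp x16, <page>       -> 0x%08x\n",
               (unsigned)encode_adrp(pc, target, 16));
        printf("add  x16, x16, #0x%03llx -> 0x%08x\n",
               (unsigned long long)(target & 0xfff),
               (unsigned)encode_add_imm64(16, 16, target & 0xfff));
        return 0;
}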
From patchwork Thu Nov 22 08:46:46 2018
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 10693657
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: mark.rutland@arm.com, Ard Biesheuvel, marc.zyngier@arm.com,
    catalin.marinas@arm.com, will.deacon@arm.com, Torsten Duwe, Jessica Yu
Subject: [PATCH 2/2] arm64/module: switch to ADRP/ADD sequences for PLT entries
Date: Thu, 22 Nov 2018 09:46:46 +0100
Message-Id: <20181122084646.3247-3-ard.biesheuvel@linaro.org>
In-Reply-To: <20181122084646.3247-1-ard.biesheuvel@linaro.org>
References: <20181122084646.3247-1-ard.biesheuvel@linaro.org>

Now that we have switched to the small code model entirely, and reduced
the extended KASLR range to 4 GB, we can be sure that the targets of
relative branches that are out of range are in range for an ADRP/ADD
pair, which is one instruction shorter than our current MOVN/MOVK/MOVK
sequence, and is more idiomatic, so it is more likely to be implemented
efficiently by micro-architectures.

So switch over the ordinary PLT code and the special handling of the
Cortex-A53 ADRP erratum, as well as the ftrace trampoline handling.

Signed-off-by: Ard Biesheuvel
Reviewed-by: Torsten Duwe
---
 arch/arm64/include/asm/module.h | 36 ++------
 arch/arm64/kernel/ftrace.c      |  2 +-
 arch/arm64/kernel/module-plts.c | 93 +++++++++++++++-----
 arch/arm64/kernel/module.c      |  4 +-
 4 files changed, 82 insertions(+), 53 deletions(-)

diff --git a/arch/arm64/include/asm/module.h b/arch/arm64/include/asm/module.h
index 97d0ef12e2ff..9ce31b056ac9 100644
--- a/arch/arm64/include/asm/module.h
+++ b/arch/arm64/include/asm/module.h
@@ -56,39 +56,19 @@ struct plt_entry {
         * is exactly what we are dealing with here, we are free to use x16
         * as a scratch register in the PLT veneers.
         */
-       __le32  mov0;   /* movn x16, #0x....                    */
-       __le32  mov1;   /* movk x16, #0x...., lsl #16           */
-       __le32  mov2;   /* movk x16, #0x...., lsl #32           */
+       __le32  adrp;   /* adrp x16, ....                       */
+       __le32  add;    /* add  x16, x16, #0x....               */
        __le32  br;     /* br   x16                             */
 };
 
-static inline struct plt_entry get_plt_entry(u64 val)
+static inline bool is_forbidden_offset_for_adrp(void *place)
 {
-       /*
-        * MOVK/MOVN/MOVZ opcode:
-        * +--------+------------+--------+-----------+-------------+---------+
-        * | sf[31] | opc[30:29] | 100101 | hw[22:21] | imm16[20:5] | Rd[4:0] |
-        * +--------+------------+--------+-----------+-------------+---------+
-        *
-        * Rd  := 0x10 (x16)
-        * hw  := 0b00 (no shift), 0b01 (lsl #16), 0b10 (lsl #32)
-        * opc := 0b11 (MOVK), 0b00 (MOVN), 0b10 (MOVZ)
-        * sf  := 1 (64-bit variant)
-        */
-       return (struct plt_entry){
-               cpu_to_le32(0x92800010 | (((~val      ) & 0xffff)) << 5),
-               cpu_to_le32(0xf2a00010 | ((( val >> 16) & 0xffff)) << 5),
-               cpu_to_le32(0xf2c00010 | ((( val >> 32) & 0xffff)) << 5),
-               cpu_to_le32(0xd61f0200)
-       };
+       return IS_ENABLED(CONFIG_ARM64_ERRATUM_843419) &&
+              cpus_have_const_cap(ARM64_WORKAROUND_843419) &&
+              ((u64)place & 0xfff) >= 0xff8;
 }
 
-static inline bool plt_entries_equal(const struct plt_entry *a,
-                                    const struct plt_entry *b)
-{
-       return a->mov0 == b->mov0 &&
-              a->mov1 == b->mov1 &&
-              a->mov2 == b->mov2;
-}
+struct plt_entry get_plt_entry(u64 dst, void *pc);
+bool plt_entries_equal(const struct plt_entry *a, const struct plt_entry *b);
 
 #endif /* __ASM_MODULE_H */
diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
index 50986e388d2b..2135665a8ab3 100644
--- a/arch/arm64/kernel/ftrace.c
+++ b/arch/arm64/kernel/ftrace.c
@@ -104,7 +104,7 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
                 * is added in the future, but for now, the pr_err() below
                 * deals with a theoretical issue only.
                 */
-               trampoline = get_plt_entry(addr);
+               trampoline = get_plt_entry(addr, mod->arch.ftrace_trampoline);
                if (!plt_entries_equal(mod->arch.ftrace_trampoline,
                                       &trampoline)) {
                        if (!plt_entries_equal(mod->arch.ftrace_trampoline,
diff --git a/arch/arm64/kernel/module-plts.c b/arch/arm64/kernel/module-plts.c
index f0690c2ca3e0..3c6e5f3a4973 100644
--- a/arch/arm64/kernel/module-plts.c
+++ b/arch/arm64/kernel/module-plts.c
@@ -11,6 +11,55 @@
 #include
 #include
 
+static struct plt_entry __get_adrp_add_pair(u64 dst, u64 pc,
+                                           enum aarch64_insn_register reg)
+{
+       u32 adrp, add;
+
+       adrp = aarch64_insn_gen_adr(pc, dst, reg, AARCH64_INSN_ADR_TYPE_ADRP);
+       add = aarch64_insn_gen_add_sub_imm(reg, reg, dst % SZ_4K,
+                                          AARCH64_INSN_VARIANT_64BIT,
+                                          AARCH64_INSN_ADSB_ADD);
+
+       return (struct plt_entry){ cpu_to_le32(adrp), cpu_to_le32(add) };
+}
+
+struct plt_entry get_plt_entry(u64 dst, void *pc)
+{
+       struct plt_entry plt;
+       static u32 br;
+
+       if (!br)
+               br = aarch64_insn_gen_branch_reg(AARCH64_INSN_REG_16,
+                                                AARCH64_INSN_BRANCH_NOLINK);
+
+       plt = __get_adrp_add_pair(dst, (u64)pc, AARCH64_INSN_REG_16);
+       plt.br = cpu_to_le32(br);
+
+       return plt;
+}
+
+bool plt_entries_equal(const struct plt_entry *a, const struct plt_entry *b)
+{
+       u64 p, q;
+
+       /*
+        * Check whether both entries refer to the same target:
+        * do the cheapest checks first.
+        */
+       if (a->add != b->add || a->br != b->br)
+               return false;
+
+       p = ALIGN_DOWN((u64)a, SZ_4K);
+       q = ALIGN_DOWN((u64)b, SZ_4K);
+
+       if (a->adrp == b->adrp && p == q)
+               return true;
+
+       return (p + aarch64_insn_adrp_get_offset(le32_to_cpu(a->adrp))) ==
+              (q + aarch64_insn_adrp_get_offset(le32_to_cpu(b->adrp)));
+}
+
 static bool in_init(const struct module *mod, void *loc)
 {
        return (u64)loc - (u64)mod->init_layout.base < mod->init_layout.size;
@@ -23,19 +72,23 @@ u64 module_emit_plt_entry(struct module *mod, void *loc, const Elf64_Rela *rela,
                                                          &mod->arch.init;
        struct plt_entry *plt = (struct plt_entry *)pltsec->plt->sh_addr;
        int i = pltsec->plt_num_entries;
+       int j = i - 1;
        u64 val = sym->st_value + rela->r_addend;
 
-       plt[i] = get_plt_entry(val);
+       if (is_forbidden_offset_for_adrp(&plt[i].adrp))
+               i++;
+
+       plt[i] = get_plt_entry(val, &plt[i]);
 
        /*
         * Check if the entry we just created is a duplicate. Given that the
         * relocations are sorted, this will be the last entry we allocated.
         * (if one exists).
         */
-       if (i > 0 && plt_entries_equal(plt + i, plt + i - 1))
-               return (u64)&plt[i - 1];
+       if (j >= 0 && plt_entries_equal(plt + i, plt + j))
+               return (u64)&plt[j];
 
-       pltsec->plt_num_entries++;
+       pltsec->plt_num_entries += i - j;
        if (WARN_ON(pltsec->plt_num_entries > pltsec->plt_max_entries))
                return 0;
 
@@ -49,35 +102,24 @@ u64 module_emit_veneer_for_adrp(struct module *mod, void *loc, u64 val)
                                                          &mod->arch.init;
        struct plt_entry *plt = (struct plt_entry *)pltsec->plt->sh_addr;
        int i = pltsec->plt_num_entries++;
-       u32 mov0, mov1, mov2, br;
+       u32 br;
        int rd;
 
        if (WARN_ON(pltsec->plt_num_entries > pltsec->plt_max_entries))
                return 0;
 
+       if (is_forbidden_offset_for_adrp(&plt[i].adrp))
+               i = pltsec->plt_num_entries++;
+
        /* get the destination register of the ADRP instruction */
        rd = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RD,
                                          le32_to_cpup((__le32 *)loc));
 
-       /* generate the veneer instructions */
-       mov0 = aarch64_insn_gen_movewide(rd, (u16)~val, 0,
-                                        AARCH64_INSN_VARIANT_64BIT,
-                                        AARCH64_INSN_MOVEWIDE_INVERSE);
-       mov1 = aarch64_insn_gen_movewide(rd, (u16)(val >> 16), 16,
-                                        AARCH64_INSN_VARIANT_64BIT,
-                                        AARCH64_INSN_MOVEWIDE_KEEP);
-       mov2 = aarch64_insn_gen_movewide(rd, (u16)(val >> 32), 32,
-                                        AARCH64_INSN_VARIANT_64BIT,
-                                        AARCH64_INSN_MOVEWIDE_KEEP);
        br = aarch64_insn_gen_branch_imm((u64)&plt[i].br, (u64)loc + 4,
                                         AARCH64_INSN_BRANCH_NOLINK);
 
-       plt[i] = (struct plt_entry){
-                       cpu_to_le32(mov0),
-                       cpu_to_le32(mov1),
-                       cpu_to_le32(mov2),
-                       cpu_to_le32(br)
-               };
+       plt[i] = __get_adrp_add_pair(val, (u64)&plt[i], rd);
+       plt[i].br = cpu_to_le32(br);
 
        return (u64)&plt[i];
 }
@@ -193,6 +235,15 @@ static unsigned int count_plts(Elf64_Sym *syms, Elf64_Rela *rela, int num,
                        break;
                }
        }
+
+       if (IS_ENABLED(CONFIG_ARM64_ERRATUM_843419) &&
+           cpus_have_const_cap(ARM64_WORKAROUND_843419))
+               /*
+                * Add some slack so we can skip PLT slots that may trigger
+                * the erratum due to the placement of the ADRP instruction.
+                */
+               ret += DIV_ROUND_UP(ret, (SZ_4K / sizeof(struct plt_entry)));
+
        return ret;
 }
 
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index f0f27aeefb73..3b6dc4ce7ec7 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -202,9 +202,7 @@ static int reloc_insn_adrp(struct module *mod, __le32 *place, u64 val)
 {
        u32 insn;
 
-       if (!IS_ENABLED(CONFIG_ARM64_ERRATUM_843419) ||
-           !cpus_have_const_cap(ARM64_WORKAROUND_843419) ||
-           ((u64)place & 0xfff) < 0xff8)
+       if (!is_forbidden_offset_for_adrp(place))
                return reloc_insn_imm(RELOC_OP_PAGE, place, val, 12, 21,
                                      AARCH64_INSN_IMM_ADR);
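
[Editor's note: to make the plt_entries_equal() rule above concrete: two PLT
entries count as equal when their ADD and BR words match and their ADRP
instructions resolve to the same 4 KB page, even though the ADRP immediates
differ whenever the entries themselves sit on different pages. The stand-alone
sketch below restates that comparison in user-space C; the local decoder stands
in for aarch64_insn_adrp_get_offset(), and the instruction words in main() are
hand-encoded for a hypothetical target 16 MB above the first entry's page.]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct fake_plt_entry {            /* mirrors adrp/add/br, host-endian for simplicity */
        uint32_t adrp, add, br;
};

/* Decode the signed, page-scaled immediate of an ADRP instruction. */
static int64_t adrp_byte_offset(uint32_t insn)
{
        uint32_t immlo = (insn >> 29) & 0x3;
        uint32_t immhi = (insn >> 5) & 0x7ffff;
        int32_t imm21 = (int32_t)(((immhi << 2) | immlo) << 11) >> 11; /* sign-extend */

        return (int64_t)imm21 << 12;   /* ADRP works in 4 KB pages */
}

static bool entries_equal(const struct fake_plt_entry *a, uint64_t a_addr,
                          const struct fake_plt_entry *b, uint64_t b_addr)
{
        if (a->add != b->add || a->br != b->br)
                return false;

        uint64_t pa = a_addr & ~0xfffULL, pb = b_addr & ~0xfffULL;

        if (a->adrp == b->adrp && pa == pb)
                return true;

        /* Different pages: equal iff both ADRPs land on the same target page. */
        return pa + adrp_byte_offset(a->adrp) == pb + adrp_byte_offset(b->adrp);
}

int main(void)
{
        /* Two entries on adjacent pages, both reaching the page 16 MB above entry a. */
        struct fake_plt_entry a = { 0x90008010, 0x91000210, 0xd61f0200 };
        struct fake_plt_entry b = { 0xf0007ff0, 0x91000210, 0xd61f0200 };

        printf("%s\n", entries_equal(&a, 0x10000000, &b, 0x10001000) ?
               "equal" : "different");
        return 0;
}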