From patchwork Thu May 10 16:23:45 2018
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 10392085
From: Steve Capper <steve.capper@arm.com>
To: linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com
Cc: Steve Capper <steve.capper@arm.com>, ard.biesheuvel@linaro.org
Subject: [PATCH v3 6/8] arm64: module-plts: Extend veneer to address 52-bit VAs
Date: Thu, 10 May 2018 17:23:45 +0100
Message-Id: <20180510162347.3858-7-steve.capper@arm.com>
In-Reply-To: <20180510162347.3858-1-steve.capper@arm.com>
References: <20180510162347.3858-1-steve.capper@arm.com>

From: Ard Biesheuvel <ard.biesheuvel@linaro.org>

In preparation for 52-bit VA support in the Linux kernel, we extend the
PLT veneer to support 52-bit addresses via an extra movk
instruction.

[Steve: code from Ard off-list, changed the #ifdef logic to inequality]
Signed-off-by: Steve Capper <steve.capper@arm.com>
---

New in V3 of the series.

I'm not sure if this is strictly necessary as the VAs of the module space
will fit within 48 bits of addressing even when a 52-bit VA space is
enabled. However, this may act to future-proof the 52-bit VA support
should any future adjustments be made to the VA space.

---
 arch/arm64/include/asm/module.h | 13 ++++++++++++-
 arch/arm64/kernel/module-plts.c | 12 ++++++++++++
 2 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/module.h b/arch/arm64/include/asm/module.h
index 97d0ef12e2ff..30b8ca95d19a 100644
--- a/arch/arm64/include/asm/module.h
+++ b/arch/arm64/include/asm/module.h
@@ -59,6 +59,9 @@ struct plt_entry {
 	__le32	mov0;	/* movn	x16, #0x....			*/
 	__le32	mov1;	/* movk	x16, #0x...., lsl #16	*/
 	__le32	mov2;	/* movk	x16, #0x...., lsl #32	*/
+#if CONFIG_ARM64_VA_BITS > 48
+	__le32	mov3;	/* movk	x16, #0x...., lsl #48	*/
+#endif
 	__le32	br;	/* br	x16				*/
 };

@@ -71,7 +74,8 @@ static inline struct plt_entry get_plt_entry(u64 val)
	 * +--------+------------+--------+-----------+-------------+---------+
	 *
	 * Rd     := 0x10 (x16)
-	 * hw     := 0b00 (no shift), 0b01 (lsl #16), 0b10 (lsl #32)
+	 * hw     := 0b00 (no shift), 0b01 (lsl #16), 0b10 (lsl #32),
+	 *           0b11 (lsl #48)
	 * opc    := 0b11 (MOVK), 0b00 (MOVN), 0b10 (MOVZ)
	 * sf     := 1 (64-bit variant)
	 */
@@ -79,6 +83,9 @@ static inline struct plt_entry get_plt_entry(u64 val)
 		cpu_to_le32(0x92800010 | (((~val      ) & 0xffff)) << 5),
 		cpu_to_le32(0xf2a00010 | ((( val >> 16) & 0xffff)) << 5),
 		cpu_to_le32(0xf2c00010 | ((( val >> 32) & 0xffff)) << 5),
+#if CONFIG_ARM64_VA_BITS > 48
+		cpu_to_le32(0xf2e00010 | ((( val >> 48) & 0xffff)) << 5),
+#endif
 		cpu_to_le32(0xd61f0200)
 	};
 }
@@ -86,6 +93,10 @@ static inline struct plt_entry get_plt_entry(u64 val)
 static inline bool plt_entries_equal(const struct plt_entry *a,
 				     const struct plt_entry *b)
 {
+#if CONFIG_ARM64_VA_BITS > 48
+	if (a->mov3 != b->mov3)
+		return false;
+#endif
 	return a->mov0 == b->mov0 &&
 	       a->mov1 == b->mov1 &&
 	       a->mov2 == b->mov2;
diff --git a/arch/arm64/kernel/module-plts.c b/arch/arm64/kernel/module-plts.c
index f0690c2ca3e0..4d5617e09943 100644
--- a/arch/arm64/kernel/module-plts.c
+++ b/arch/arm64/kernel/module-plts.c
@@ -50,6 +50,9 @@ u64 module_emit_veneer_for_adrp(struct module *mod, void *loc, u64 val)
 	struct plt_entry *plt = (struct plt_entry *)pltsec->plt->sh_addr;
 	int i = pltsec->plt_num_entries++;
 	u32 mov0, mov1, mov2, br;
+#if CONFIG_ARM64_VA_BITS > 48
+	u32 mov3;
+#endif
 	int rd;

 	if (WARN_ON(pltsec->plt_num_entries > pltsec->plt_max_entries))
@@ -69,6 +72,12 @@ u64 module_emit_veneer_for_adrp(struct module *mod, void *loc, u64 val)
 	mov2 = aarch64_insn_gen_movewide(rd, (u16)(val >> 32), 32,
 					 AARCH64_INSN_VARIANT_64BIT,
 					 AARCH64_INSN_MOVEWIDE_KEEP);
+#if CONFIG_ARM64_VA_BITS > 48
+	mov3 = aarch64_insn_gen_movewide(rd, (u16)(val >> 48), 48,
+					 AARCH64_INSN_VARIANT_64BIT,
+					 AARCH64_INSN_MOVEWIDE_KEEP);
+#endif
+
 	br = aarch64_insn_gen_branch_imm((u64)&plt[i].br, (u64)loc + 4,
 					 AARCH64_INSN_BRANCH_NOLINK);

@@ -76,6 +85,9 @@ u64 module_emit_veneer_for_adrp(struct module *mod, void *loc, u64 val)
 		cpu_to_le32(mov0),
 		cpu_to_le32(mov1),
 		cpu_to_le32(mov2),
+#if CONFIG_ARM64_VA_BITS > 48
+		cpu_to_le32(mov3),
+#endif
 		cpu_to_le32(br)
 	};
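For reference, the movn/movk sequence that get_plt_entry() emits can be sketched outside the kernel. The Python below is a hedged illustration, not kernel code: the helper names `plt_entry_words` and `simulate_veneer` are invented here, but the opcode constants (0x92800010 for movn x16, 0xf2?00010 for movk x16 with the hw shift field at bits 21-22, 0xd61f0200 for br x16) mirror the diff above.

```python
MASK64 = (1 << 64) - 1

def plt_entry_words(val, va_bits=52):
    """Build the veneer words for target address `val`, mirroring
    get_plt_entry(): one movn for bits 15:0, then one movk per
    remaining 16-bit chunk, then br x16."""
    # movn x16, #(~val & 0xffff): sets x16 = NOT(imm), so the
    # untouched upper bits come out as all-ones.
    words = [0x92800010 | ((~val & 0xffff) << 5)]
    chunks = 4 if va_bits > 48 else 3   # the extra movk ... lsl #48
    for hw in range(1, chunks):
        imm16 = (val >> (16 * hw)) & 0xffff
        # movk x16, #imm16, lsl #(16 * hw); hw lives in bits 21-22
        words.append(0xf2800010 | (hw << 21) | (imm16 << 5))
    words.append(0xd61f0200)            # br x16
    return words

def simulate_veneer(words):
    """Execute the mov sequence to recover the value loaded into x16."""
    x16 = ~((words[0] >> 5) & 0xffff) & MASK64      # movn
    for w in words[1:-1]:                           # each movk
        imm16 = (w >> 5) & 0xffff
        hw = (w >> 21) & 0x3
        x16 = (x16 & ~(0xffff << (16 * hw)) & MASK64) | (imm16 << (16 * hw))
    return x16
```

This also shows why three instructions sufficed for 48-bit VAs: the movn leaves bits 63:48 as all-ones, which matches sign-extended kernel addresses; the fourth movk is only needed once bits above 47 can differ from all-ones.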