From patchwork Sun Sep 3 12:07:31 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 9936113
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org, kernel-hardening@lists.openwall.com
Cc: Ard Biesheuvel, Arnd Bergmann, Nicolas Pitre, Russell King, Kees Cook,
 Thomas Garnier, Marc Zyngier, Mark Rutland, Tony Lindgren, Matt Fleming,
 Dave Martin
Date: Sun, 3 Sep 2017 13:07:31 +0100
Message-Id: <20170903120757.14968-4-ard.biesheuvel@linaro.org>
In-Reply-To: <20170903120757.14968-1-ard.biesheuvel@linaro.org>
References: <20170903120757.14968-1-ard.biesheuvel@linaro.org>
Subject: [kernel-hardening] [PATCH v2 03/29] ARM: assembler: introduce adr_l,
 ldr_l and str_l macros

Like arm64, ARM supports position independent code sequences that produce
symbol references with a greater reach than the ordinary adr/ldr
instructions. Currently, we use open coded instruction sequences involving
literals and arithmetic operations.

Instead, we can use movw/movt pairs on v7 CPUs, circumventing the D-cache
entirely. For older CPUs, we can emit the literal into a subsection,
allowing it to be emitted out of line while retaining the ability to
perform arithmetic on label offsets.
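The literal-plus-arithmetic scheme above relies on ARM's PC read-ahead:
reading pc from an instruction yields that instruction's address plus 8 in
ARM state. A purely illustrative sketch (Python, not part of the patch) of
how a stored offset and the pc value read by the add recombine into the
symbol's address:

```python
def resolve_pc_relative(sym_addr, label_111_addr, pc_bias=8):
    """Model the pre-v7 sequence: the literal at 222 stores
    sym - (111 + pc_bias), and 'add reg, reg, pc' executed at 111
    reads pc as 111 + pc_bias, so the addition recovers sym."""
    literal = sym_addr - (label_111_addr + pc_bias)  # value assembled at 222:
    pc_as_read = label_111_addr + pc_bias            # what pc reads as at 111:
    return literal + pc_as_read                      # result of the add

# The symbol address comes back regardless of where the code is loaded.
assert resolve_pc_relative(0x80123456, 0x80001000) == 0x80123456
```

The same arithmetic holds for any load address, which is what makes the
sequence position independent.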
E.g., on pre-v7 CPUs, we can emit a PC-relative reference as follows:

       ldr          <reg>, 222f
111:   add          <reg>, <reg>, pc
       .subsection  1
222:   .long        <sym> - (111b + 8)
       .previous

This is allowed by the assembler because, unlike ordinary sections,
subsections are combined into a single section in the object file, and
so the label references are not true cross-section references that are
visible as relocations. Note that we could even do something like

       add          <reg>, pc, #(222f - 111f) & ~0xfff
       ldr          <reg>, [<reg>, #(222f - 111f) & 0xfff]
111:   add          <reg>, <reg>, pc
       .subsection  1
222:   .long        <sym> - (111b + 8)
       .previous

if it turns out that the 4 KB range of the ldr instruction is
insufficient to reach the literal in the subsection, although this is
currently not a problem (of the 98 objects built from .S files in a
multi_v7_defconfig build, only 11 have .text sections that are over
1 KB, and the largest one [entry-armv.o] is 3308 bytes)

Subsections have been available in binutils since 2004 at least, so they
should not cause any issues with older toolchains.

So use the above to implement the macros mov_l, adr_l, ldr_l and str_l,
all of which will use movw/movt pairs on v7 and later CPUs, and use
PC-relative literals otherwise.

Cc: Russell King
Signed-off-by: Ard Biesheuvel
Reviewed-by: Nicolas Pitre
---
 arch/arm/include/asm/assembler.h | 76 ++++++++++++++++++++
 1 file changed, 76 insertions(+)

diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index ad301f107dd2..341e4ed1ef84 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -518,4 +518,80 @@ THUMB(	orr	\reg , \reg , #PSR_T_BIT	)
 #endif
 	.endm
 
+	.macro		__adldst_l, op, reg, sym, tmp
+	.if		__LINUX_ARM_ARCH__ < 7
+	ldr		\tmp, 111f
+	.subsection	1
+	.align		2
+111:	.long		\sym - (222f + 8)
+	.previous
+	.else
+	/*
+	 * In Thumb-2 builds, the PC bias depends on whether we are currently
+	 * emitting into a .arm or a .thumb section. So emit a nop and take
+	 * its size, so we can infer the execution mode and PC bias from it.
+	 */
+ ARM(	.set		.Lnopsize, 4		)
+ THUMB(	.pushsection	".discard.nop", "x", %note	)
+ THUMB(	111:	nop			)
+ THUMB(	.set		.Lnopsize, . - 111b	)
+ THUMB(	.popsection			)
+
+	movw		\tmp, #:lower16:\sym - (222f + 2 * .Lnopsize)
+	movt		\tmp, #:upper16:\sym - (222f + 2 * .Lnopsize)
+	.endif
+222:
+	.ifc		\op, add
+	add		\reg, \tmp, pc
+	.elseif		.Lnopsize == 2	@ Thumb-2 mode
+	add		\tmp, \tmp, pc
+	\op		\reg, [\tmp]
+	.else
+	\op		\reg, [pc, \tmp]
+	.endif
+	.endm
+
+	/*
+	 * mov_l - move a constant value or [relocated] address into a register
+	 */
+	.macro		mov_l, dst:req, imm:req
+	.if		__LINUX_ARM_ARCH__ < 7
+	ldr		\dst, =\imm
+	.else
+	movw		\dst, #:lower16:\imm
+	movt		\dst, #:upper16:\imm
+	.endif
+	.endm
+
+	/*
+	 * adr_l - adr pseudo-op with unlimited range
+	 *
+	 * @dst: destination register
+	 * @sym: name of the symbol
+	 */
+	.macro		adr_l, dst:req, sym:req
+	__adldst_l	add, \dst, \sym, \dst
+	.endm
+
+	/*
+	 * ldr_l - ldr pseudo-op with unlimited range
+	 *
+	 * @dst: destination register
+	 * @sym: name of the symbol
+	 */
+	.macro		ldr_l, dst:req, sym:req
+	__adldst_l	ldr, \dst, \sym, \dst
+	.endm
+
+	/*
+	 * str_l - str pseudo-op with unlimited range
+	 *
+	 * @src: source register
+	 * @sym: name of the symbol
+	 * @tmp: mandatory scratch register
+	 */
+	.macro		str_l, src:req, sym:req, tmp:req
+	__adldst_l	str, \src, \sym, \tmp
+	.endm
+
 #endif /* __ASM_ASSEMBLER_H__ */
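As a side note on the fallback add/ldr pair described in the commit message:
it works because the two masks split the literal's offset into a 4 KB-aligned
part (which fits the add's immediate) and a 12-bit remainder (which fits the
ldr's immediate field), and the pair always reconstructs the full offset. An
illustrative sketch (Python, not part of the patch):

```python
def split_offset(offset):
    """Split a byte offset the way the commit message's fallback sequence
    does: 'add <reg>, pc, #high' materialises the bits above 4 KB, and
    'ldr <reg>, [<reg>, #low]' supplies the rest via its 12-bit immediate."""
    high = offset & ~0xfff   # page-aligned part, encoded in the add
    low = offset & 0xfff     # 12-bit remainder, encoded in the ldr
    return high, low

# Any offset recombines exactly, and the low part always fits 12 bits.
for off in (0x0, 0x7ff, 0x2345, 0x1f000):
    high, low = split_offset(off)
    assert high + low == off   # the pair reconstructs the offset
    assert low < 0x1000        # within ldr's 12-bit immediate range
```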