From patchwork Mon Aug 14 12:53:43 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 9898787
From: Ard Biesheuvel
To: kernel-hardening@lists.openwall.com
Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel, Arnd Bergmann,
 Nicolas Pitre, Russell King, Kees Cook, Thomas Garnier, Marc Zyngier,
 Mark Rutland, Tony Lindgren, Matt Fleming, Dave Martin
Date: Mon, 14 Aug 2017 13:53:43 +0100
Message-Id: <20170814125411.22604-3-ard.biesheuvel@linaro.org>
In-Reply-To: <20170814125411.22604-1-ard.biesheuvel@linaro.org>
References: <20170814125411.22604-1-ard.biesheuvel@linaro.org>
Subject: [kernel-hardening] [PATCH 02/30] ARM: assembler: introduce adr_l,
 ldr_l and str_l macros

Like arm64, ARM supports position independent code sequences that
produce symbol references with a greater reach than the ordinary
adr/ldr instructions.
Currently, we use open coded instruction sequences involving literals
and arithmetic operations. Instead, we can use movw/movt pairs on v7
CPUs, circumventing the D-cache entirely.

For older CPUs, we can emit the literal into a subsection, allowing it
to be emitted out of line while retaining the ability to perform
arithmetic on label offsets.

E.g., on pre-v7 CPUs, we can emit a PC-relative reference as follows:

       ldr          <reg>, 222f
  111: add          <reg>, <reg>, pc
       .subsection  1
  222: .long        <sym> - (111b + 8)
       .previous

This is allowed by the assembler because, unlike ordinary sections,
subsections are combined into a single section in the object file, and
so the label references are not true cross-section references that are
visible as relocations. Note that we could even do something like

       add          <reg>, pc, #(222f - 111f) & ~0xfff
       ldr          <reg>, [<reg>, #(222f - 111f) & 0xfff]
  111: add          <reg>, <reg>, pc
       .subsection  1
  222: .long        <sym> - (111b + 8)
       .previous

if it turns out that the 4 KB range of the ldr instruction is
insufficient to reach the literal in the subsection, although this is
currently not a problem (of the 98 objects built from .S files in a
multi_v7_defconfig build, only 11 have .text sections that are over
1 KB, and the largest one [entry-armv.o] is 3308 bytes).

Subsections have been available in binutils since 2004 at least, so
they should not cause any issues with older toolchains.

So use the above to implement the macros mov_l, adr_l, adrm_l (using
ldm to load multiple literals at once), ldr_l and str_l, all of which
will use movw/movt pairs on v7 and later CPUs, and use PC-relative
literals otherwise.

Cc: Russell King
Signed-off-by: Ard Biesheuvel
---
 arch/arm/include/asm/assembler.h | 71 ++++++++++++++++++++
 1 file changed, 71 insertions(+)

diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index ad301f107dd2..516ebaf4ff38 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -518,4 +518,75 @@ THUMB(	orr	\reg , \reg , #PSR_T_BIT	)
 #endif
 	.endm
 
+#ifdef CONFIG_THUMB2_KERNEL
+#define ARM_PC_BIAS		4
+#else
+#define ARM_PC_BIAS		8
+#endif
+
+	.macro		__adldst_l, op, reg, sym, tmp, c
+	.if		__LINUX_ARM_ARCH__ < 7
+	ldr\c		\tmp, 111f
+	.subsection	1
+	.align		2
+111:	.long		\sym - (222f + ARM_PC_BIAS)
+	.previous
+	.else
+	W(movw\c\())	\tmp, #:lower16:\sym - (222f + ARM_PC_BIAS)
+	W(movt\c\())	\tmp, #:upper16:\sym - (222f + ARM_PC_BIAS)
+	.endif
+222:
+	.ifc		\op, add
+	add\c		\reg, \tmp, pc
+	.elseif		CONFIG_THUMB2_KERNEL == 1
+	add		\tmp, \tmp, pc
+	\op\c		\reg, [\tmp]
+	.else
+	\op\c		\reg, [pc, \tmp]
+	.endif
+	.endm
+
+	/*
+	 * mov_l - move a constant value or [relocated] address into a register
+	 */
+	.macro		mov_l, dst:req, imm:req, cond
+	.if		__LINUX_ARM_ARCH__ < 7
+	ldr\cond	\dst, =\imm
+	.else
+	W(movw\cond\())	\dst, #:lower16:\imm
+	W(movt\cond\())	\dst, #:upper16:\imm
+	.endif
+	.endm
+
+	/*
+	 * adr_l - adr pseudo-op with unlimited range
+	 *
+	 * @dst: destination register
+	 * @sym: name of the symbol
+	 */
+	.macro		adr_l, dst:req, sym:req, cond
+	__adldst_l	add, \dst, \sym, \dst, \cond
+	.endm
+
+	/*
+	 * ldr_l - ldr pseudo-op with unlimited range
+	 *
+	 * @dst: destination register
+	 * @sym: name of the symbol
+	 */
+	.macro		ldr_l, dst:req, sym:req, cond
+	__adldst_l	ldr, \dst, \sym, \dst, \cond
+	.endm
+
+	/*
+	 * str_l - str pseudo-op with unlimited range
+	 *
+	 * @src: source register
+	 * @sym: name of the symbol
+	 * @tmp: mandatory scratch register
+	 */
+	.macro		str_l, src:req, sym:req, tmp:req, cond
+	__adldst_l	str, \src, \sym, \tmp, \cond
+	.endm
+
 #endif /* __ASM_ASSEMBLER_H__ */
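
For illustration only (not part of the patch itself), a caller in a .S
file could use the new macros along the following lines; the symbol
names and register choices below are made-up placeholders:

	@ Illustrative sketch: symbol names are placeholders, not real
	@ kernel symbols.
	adr_l	r0, __some_asm_label		@ r0 := address of __some_asm_label
	ldr_l	r1, __some_word			@ r1 := 32-bit value stored at __some_word
	str_l	r1, __some_other_word, r2	@ store r1 at __some_other_word, r2 is scratch
	mov_l	r3, 0x87654321			@ r3 := arbitrary 32-bit constant

On v7 and later CPUs, each of these expands to a movw/movt pair (plus
the trailing add/ldr/str for the symbol-based macros); on older CPUs,
adr_l, ldr_l and str_l load their PC-relative offset from a literal
emitted into subsection 1, while mov_l falls back to an ordinary
ldr =imm literal.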