From patchwork Mon Jan 22 16:28:30 2018
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 10178861
From: Ard Biesheuvel
Date: Mon, 22 Jan 2018 16:28:30 +0000
Subject: Re: per-task stack canaries for arm64
To: Kees Cook
Cc: Mark Rutland, "linux-hardened@lists.openwall.com", Will Deacon,
 Andy Lutomirski, Ramana Radhakrishnan, "uros@gcc.gnu.org", nd,
 Thomas Garnier, linux-arm-kernel, Laura Abbott

On 22 January 2018 at 12:42, Ard Biesheuvel wrote:
> On 22 January 2018 at 12:26, Kees Cook wrote:
>> On Mon, Jan 22, 2018 at 9:59 PM, Ard Biesheuvel wrote:
>>> OK, so I have done a bit of homework, and I think this shouldn't be
>>> too difficult after all.
>>>
>>> I realized that we don't really need per-CPU references to
>>> __stack_chk_guard, given that we already keep the task struct pointer
>>> in sp_el0. This means we should be able to do
>>>
>>>   mrs   x0, sp_el0
>>>   movz  x1, :abs_g0:__stack_chk_guard_tsk_offset
>>>   add   x0, x0, x1
>>>   ldr   x2, [x0]
>>>
>>> where __stack_chk_guard_tsk_offset is exposed via an ELF symbol.
>>> (Analogous to x86, this could be implemented at the GCC level using
>>> arbitrary sysreg and symbol names to remain flexible.)
>>>
>>> So my next question (mainly to Ramana): would it be possible to emit
>>> the mrs/movz/add sequence above from targetm.stack_protect_guard() in
>>> a way that will allow expand_normal() to expand it in a useful manner
>>> (e.g., as an INDIRECT_REF that can be matched by the memory_operand in
>>> AArch64's stack_protect_set)? If that is the case, we could implement
>>> this in a GCC plugin in the kernel rather than relying on upstream GCC
>>> changes (and having to commit to this ABI long term).
>>
>> Though this would ultimately be implemented in core GCC, yes? (I don't
>> want to have GCC plugins be a dependency for stack protector...)
>>
>
> Oh absolutely. But being able to expose this more widely early on,
> while GCC 9 is still under development, would help prevent cockups
> where we end up with a flawed ABI and it's too late to do something
> about it.

OK, so it appears the only feasible way to implement this is via a
backend function, which is what i386 and rs6000 do as well.

Taking that code as an example, I managed to piece together a GCC patch
that uses essentially the sequence above, and interestingly, I could
compile the mainline kernel with virtually no changes (see below), and
it just works, using a different stack canary value for each task.

This means we would not need any build-time compiler feature test to
enable the functionality, which is an advantage. So all that remains is
to agree on how to parameterize this so that we don't paint ourselves
into a corner.
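(For illustration only, here is a hand-written C sketch of what the
compiler-emitted prologue load amounts to, assuming a kernel context
where sp_el0 holds the current task_struct pointer and the linker
provides __stack_chk_guard_tsk_offset as an absolute symbol; the helper
name is made up for this sketch, it is not part of the patch:)

  static inline unsigned long current_task_stack_canary(void)
  {
          unsigned long tsk, off, canary;

          /* task_struct pointer from sp_el0, canary offset from the
           * absolute symbol, then load the canary itself */
          asm ("mrs\t%0, sp_el0\n\t"
               "movz\t%1, #:abs_g0:__stack_chk_guard_tsk_offset\n\t"
               "add\t%0, %0, %1\n\t"
               "ldr\t%2, [%0]"
               : "=r" (tsk), "=r" (off), "=r" (canary));

          return canary;
  }

Note that the movz/:abs_g0: form covers a 16-bit offset, whereas the
:lo12: add used in the patch below only covers 4 KB, which is exactly
the range question raised next.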
Using a system register name and an offset symbol name like x86 does
sounds flexible enough, although an absolute offset value such as
__stack_chk_guard_tsk_offset is rather different from an ordinary
symbol reference, given that we cannot use an adrp/add pair to load it.
Also, the range of the offset determines which sequence to emit: I am
using an add with a :lo12: relocation in my patch, but I think we
probably want to avoid limiting ourselves to 4 KB here.

Comments?

Kernel patch
---------------------8<--------------------------
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 1303e04110cd..4b639a3c15bb 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -164,5 +164,6 @@ int main(void)
   DEFINE(SDEI_EVENT_INTREGS, offsetof(struct sdei_registered_event, interrupted_regs));
   DEFINE(SDEI_EVENT_PRIORITY, offsetof(struct sdei_registered_event, priority));
 #endif
+  DEFINE(TSK_STACK_CANARY, offsetof(struct task_struct, stack_canary));
   return 0;
 }
diff --git a/arch/arm64/kernel/image.h b/arch/arm64/kernel/image.h
index c7fcb232fe47..a4dd7350111a 100644
--- a/arch/arm64/kernel/image.h
+++ b/arch/arm64/kernel/image.h
@@ -18,6 +18,8 @@
 #ifndef __ASM_IMAGE_H
 #define __ASM_IMAGE_H
 
+#include <generated/asm-offsets.h>
+
 #ifndef LINKER_SCRIPT
 #error This file should only be included in vmlinux.lds.S
 #endif
@@ -118,4 +120,6 @@
 __efistub_screen_info = KALLSYMS_HIDE(screen_info);
 #endif
 
+PROVIDE(__stack_chk_guard_tsk_offset = ABSOLUTE(TSK_STACK_CANARY));
+
 #endif /* __ASM_IMAGE_H */
---------------------8<--------------------------

GCC patch
---------------------8<--------------------------
diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index 174310c9f1bc..dc4928f1f14c 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -17208,6 +17208,12 @@ aarch64_select_early_remat_modes (sbitmap modes)
     }
 }
 
+static tree
+aarch64_init_stack_protect_guard (void)
+{
+  return NULL_TREE;
+}
+
 /* Target-specific selftests.  */
 
 #if CHECKING_P
@@ -17682,6 +17688,9 @@ aarch64_libgcc_floating_mode_supported_p
 #define TARGET_RUN_TARGET_SELFTESTS selftest::aarch64_run_selftests
 #endif /* #if CHECKING_P */
 
+#undef TARGET_STACK_PROTECT_GUARD
+#define TARGET_STACK_PROTECT_GUARD aarch64_init_stack_protect_guard
+
 struct gcc_target targetm = TARGET_INITIALIZER;
 
 #include "gt-aarch64.h"
diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md
index a6ecb3913094..cd591b65d8e6 100644
--- a/gcc/config/aarch64/aarch64.md
+++ b/gcc/config/aarch64/aarch64.md
@@ -168,6 +168,7 @@
     UNSPEC_INSR
     UNSPEC_CLASTB
     UNSPEC_FADDA
+    UNSPEC_TSK_STACK_CANARY
 ])
 
 (define_c_enum "unspecv" [
@@ -5834,6 +5835,15 @@
   DONE;
 })
 
+(define_insn "aarch64_load_current_stack_canary"
+  [(set (match_operand:DI 0 "register_operand" "=r")
+	(unspec:DI [(const_int 0)] UNSPEC_TSK_STACK_CANARY))]
+  ""
+  "mrs\\t%0, sp_el0\;add\\t%0, %0, :lo12:__stack_chk_guard_tsk_offset"
+  [(set_attr "type" "multiple")
+   (set_attr "length" "8")]
+)
+
 ;; Named patterns for stack smashing protection.
 (define_expand "stack_protect_set"
   [(match_operand 0 "memory_operand")
@@ -5842,6 +5852,11 @@
 {
   machine_mode mode = GET_MODE (operands[0]);
 
+  rtx reg = gen_reg_rtx (Pmode);
+
+  operands[1] = gen_rtx_MEM (Pmode, reg);
+  emit_insn (gen_aarch64_load_current_stack_canary (reg));
+
   emit_insn ((mode == DImode
	      ? gen_stack_protect_set_di
	      : gen_stack_protect_set_si) (operands[0], operands[1]));
@@ -5867,6 +5882,11 @@
   rtx result;
   machine_mode mode = GET_MODE (operands[0]);
 
+  rtx reg = gen_reg_rtx (Pmode);
+
+  operands[1] = gen_rtx_MEM (Pmode, reg);
+  emit_insn (gen_aarch64_load_current_stack_canary (reg));
+
   result = gen_reg_rtx(mode);
 
   emit_insn ((mode == DImode