From patchwork Thu Aug 2 13:21:29 2018
From: Ard Biesheuvel
To: kernel-hardening@lists.openwall.com
Cc: keescook@chromium.org, christoffer.dall@arm.com, will.deacon@arm.com,
	catalin.marinas@arm.com, mark.rutland@arm.com, labbott@fedoraproject.org,
	linux-arm-kernel@lists.infradead.org, Ard Biesheuvel
Subject: [RFC/PoC PATCH 0/3] arm64: basic ROP mitigation
Date: Thu, 2 Aug 2018 15:21:29 +0200
Message-Id: <20180802132133.23999-1-ard.biesheuvel@linaro.org>

This is a proof of concept I cooked up, primarily to trigger a discussion
about whether there is a point to doing anything like this, and if there
is, what the pitfalls are.
Also, while I am not aware of any similar implementations, the idea is so
simple that I would be surprised if nobody else thought of the same thing
way before I did.

The idea is that we can significantly limit the kernel's attack surface
for ROP based attacks by clearing the stack pointer's sign bit before
returning from a function, and setting it again right after proceeding
from the [expected] return address. This should make it much more
difficult to return to arbitrary gadgets, given that each gadget relies on
being chained to the next via a return address popped off the stack, and
this is difficult when the stack pointer is invalid. (An illustrative
sketch of what the instrumentation boils down to follows the diffstat
below.)

Of course, 4 additional instructions per function return are not exactly
free, but they are just movs and adds, and leaf functions are disregarded
unless they allocate a stack frame (this comes for free because
simple_return insns are disregarded by the plugin).

Please shoot, preferably with better ideas ...

Ard Biesheuvel (3):
  arm64: use wrapper macro for bl/blx instructions from asm code
  gcc: plugins: add ROP shield plugin for arm64
  arm64: enable ROP protection by clearing SP bit #55 across function
    returns

 arch/Kconfig                                  |   4 +
 arch/arm64/Kconfig                            |  10 ++
 arch/arm64/include/asm/assembler.h            |  21 +++-
 arch/arm64/kernel/entry-ftrace.S              |   6 +-
 arch/arm64/kernel/entry.S                     | 104 +++++++++-------
 arch/arm64/kernel/head.S                      |   4 +-
 arch/arm64/kernel/probes/kprobes_trampoline.S |   2 +-
 arch/arm64/kernel/sleep.S                     |   6 +-
 drivers/firmware/efi/libstub/Makefile         |   3 +-
 scripts/Makefile.gcc-plugins                  |   7 ++
 scripts/gcc-plugins/arm64_rop_shield_plugin.c | 116 ++++++++++++++++++
 11 files changed, 228 insertions(+), 55 deletions(-)
 create mode 100644 scripts/gcc-plugins/arm64_rop_shield_plugin.c
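
For illustration, this is roughly what the instrumentation boils down to
(a hand-written sketch, not the exact code emitted by the plugin; it
assumes kernel stacks live in the TTBR1 half of the address space, i.e.
bit #55 of SP is set on entry, so the bit can be toggled with a mov/sub
and a mov/add pair):

  caller:
	bl	some_function
	mov	x16, #(1 << 55)
	add	sp, sp, x16		// set bit 55 again: SP becomes valid
					// only once execution resumes at the
					// expected return address

  some_function:
	...
	mov	x16, #(1 << 55)
	sub	sp, sp, x16		// clear bit 55: SP now points outside
					// the kernel VA range
	ret

A gadget entered via a forged return address inherits the invalid SP, so
popping the next element of the ROP chain off the stack should fault
instead of continuing the chain.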