From patchwork Tue Aug 28 20:14:15 2018
X-Patchwork-Submitter: Jann Horn
X-Patchwork-Id: 10579165
Date: Tue, 28 Aug 2018 22:14:15 +0200
In-Reply-To: <20180828201421.157735-1-jannh@google.com>
Message-Id: <20180828201421.157735-2-jannh@google.com>
References: <20180828201421.157735-1-jannh@google.com>
Subject: [PATCH v3 1/7] x86: refactor kprobes_fault() like
 kprobe_exceptions_notify()
From: Jann Horn
To: Kees Cook, Thomas Gleixner, Ingo Molnar, x86@kernel.org, Andy Lutomirski,
 kernel-hardening@lists.openwall.com, jannh@google.com
Cc: linux-kernel@vger.kernel.org, dvyukov@google.com, Masami Hiramatsu,
 "Naveen N. Rao", Anil S Keshavamurthy, "David S. Miller", Alexander Viro,
 linux-fsdevel@vger.kernel.org, Borislav Petkov

This is an extension of commit b506a9d08bae ("x86: code clarification patch
to Kprobes arch code"). As that commit explains, even though kprobe_running()
can't be called with preemption enabled, you don't have to disable
preemption - if preemption is on, you can't be in a kprobe.

Also, use X86_TRAP_PF instead of 14.

Signed-off-by: Jann Horn
Acked-by: Masami Hiramatsu
---
v3:
- avoid unnecessary branch on return value and split up the checks
  (Borislav Petkov)

 arch/x86/mm/fault.c | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index b9123c497e0a..bcdaae1d5bf5 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -44,17 +44,19 @@ kmmio_fault(struct pt_regs *regs, unsigned long addr)
 static nokprobe_inline int kprobes_fault(struct pt_regs *regs)
 {
-        int ret = 0;
-
-        /* kprobe_running() needs smp_processor_id() */
-        if (kprobes_built_in() && !user_mode(regs)) {
-                preempt_disable();
-                if (kprobe_running() && kprobe_fault_handler(regs, 14))
-                        ret = 1;
-                preempt_enable();
-        }
-
-        return ret;
+        if (!kprobes_built_in())
+                return 0;
+        if (user_mode(regs))
+                return 0;
+        /*
+         * To be potentially processing a kprobe fault and to be allowed to
+         * call kprobe_running(), we have to be non-preemptible.
+         */
+        if (preemptible())
+                return 0;
+        if (!kprobe_running())
+                return 0;
+        return kprobe_fault_handler(regs, X86_TRAP_PF);
 }
 
 /*

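[Editorial illustration, not part of the patch: a stand-alone user-space C
sketch of the check order used in the refactored kprobes_fault(). The kprobe
predicates are stubbed out as assumptions; only the control flow mirrors the
kernel code, showing that early returns make the preempt_disable()/
preempt_enable() pair unnecessary.]

#include <stdio.h>
#include <stdbool.h>

/* Stubs standing in for the kernel predicates used by kprobes_fault(). */
static bool kprobes_built_in(void) { return true; }
static bool user_mode(void)        { return false; }
static bool preemptible(void)      { return false; }
static bool kprobe_running(void)   { return true; }
static int  kprobe_fault_handler(int trapnr) { (void)trapnr; return 1; }

#define X86_TRAP_PF 14 /* page-fault vector; replaces the bare literal 14 */

static int kprobes_fault_model(void)
{
        if (!kprobes_built_in())
                return 0;
        if (user_mode())
                return 0;
        /* If we can be preempted, we cannot be inside a kprobe handler. */
        if (preemptible())
                return 0;
        if (!kprobe_running())
                return 0;
        return kprobe_fault_handler(X86_TRAP_PF);
}

int main(void)
{
        printf("handled=%d\n", kprobes_fault_model());
        return 0;
}
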
From patchwork Tue Aug 28 20:14:16 2018
X-Patchwork-Submitter: Jann Horn
X-Patchwork-Id: 10579167
Date: Tue, 28 Aug 2018 22:14:16 +0200
In-Reply-To: <20180828201421.157735-1-jannh@google.com>
Message-Id: <20180828201421.157735-3-jannh@google.com>
References: <20180828201421.157735-1-jannh@google.com>
Subject: [PATCH v3 2/7] x86: inline kprobe_exceptions_notify() into
 do_general_protection()
From: Jann Horn
To: Kees Cook, Thomas Gleixner, Ingo Molnar, x86@kernel.org, Andy Lutomirski,
 kernel-hardening@lists.openwall.com, jannh@google.com
Cc: linux-kernel@vger.kernel.org, dvyukov@google.com, Masami Hiramatsu,
 "Naveen N. Rao", Anil S Keshavamurthy, "David S. Miller", Alexander Viro,
 linux-fsdevel@vger.kernel.org, Borislav Petkov

The opaque plumbing of #GP from do_general_protection() through notify_die()
into kprobe_exceptions_notify() makes it hard to understand what's going on.

Suggested-by: Andy Lutomirski
Signed-off-by: Jann Horn
Acked-by: Masami Hiramatsu
---
 arch/x86/kernel/kprobes/core.c | 31 +------------------------------
 arch/x86/kernel/traps.c        | 10 ++++++++++
 2 files changed, 11 insertions(+), 30 deletions(-)

diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index b0d1e81c96bb..467ac22691b0 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -1028,42 +1028,13 @@ int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
                 if (fixup_exception(regs, trapnr))
                         return 1;
 
-                /*
-                 * fixup routine could not handle it,
-                 * Let do_page_fault() fix it.
-                 */
+                /* fixup routine could not handle it. */
         }
 
         return 0;
 }
 NOKPROBE_SYMBOL(kprobe_fault_handler);
 
-/*
- * Wrapper routine for handling exceptions.
- */
-int kprobe_exceptions_notify(struct notifier_block *self, unsigned long val,
-                             void *data)
-{
-        struct die_args *args = data;
-        int ret = NOTIFY_DONE;
-
-        if (args->regs && user_mode(args->regs))
-                return ret;
-
-        if (val == DIE_GPF) {
-                /*
-                 * To be potentially processing a kprobe fault and to
-                 * trust the result from kprobe_running(), we have
-                 * be non-preemptible.
-                 */
-                if (!preemptible() && kprobe_running() &&
-                    kprobe_fault_handler(args->regs, args->trapnr))
-                        ret = NOTIFY_STOP;
-        }
-        return ret;
-}
-NOKPROBE_SYMBOL(kprobe_exceptions_notify);
-
 bool arch_within_kprobe_blacklist(unsigned long addr)
 {
         bool is_in_entry_trampoline_section = false;
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index e6db475164ed..bf9ab1aaa175 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -556,6 +556,16 @@ do_general_protection(struct pt_regs *regs, long error_code)
 
         tsk->thread.error_code = error_code;
         tsk->thread.trap_nr = X86_TRAP_GP;
+
+        /*
+         * To be potentially processing a kprobe fault and to
+         * trust the result from kprobe_running(), we have to
+         * be non-preemptible.
+         */
+        if (!preemptible() && kprobe_running() &&
+            kprobe_fault_handler(regs, X86_TRAP_GP))
+                return;
+
         if (notify_die(DIE_GPF, "general protection fault", regs, error_code,
                        X86_TRAP_GP, SIGSEGV) != NOTIFY_STOP)
                 die("general protection fault", regs, error_code);

From patchwork Tue Aug 28 20:14:17 2018
X-Patchwork-Submitter: Jann Horn
X-Patchwork-Id: 10579169
Date: Tue, 28 Aug 2018 22:14:17 +0200
In-Reply-To: <20180828201421.157735-1-jannh@google.com>
Message-Id: <20180828201421.157735-4-jannh@google.com>
References: <20180828201421.157735-1-jannh@google.com>
Subject: [PATCH v3 3/7] x86: stop calling fixup_exception() from
 kprobe_fault_handler()
From: Jann Horn
To: Kees Cook, Thomas Gleixner, Ingo Molnar, x86@kernel.org, Andy Lutomirski,
 kernel-hardening@lists.openwall.com, jannh@google.com
Cc: linux-kernel@vger.kernel.org, dvyukov@google.com, Masami Hiramatsu,
 "Naveen N. Rao", Anil S Keshavamurthy, "David S. Miller", Alexander Viro,
 linux-fsdevel@vger.kernel.org, Borislav Petkov

This removes the call into exception fixup that was added in commit
c28f896634f2 ("[PATCH] kprobes: fix broken fault handling for x86_64").

On X86, kprobe_fault_handler() is called from two places:
do_general_protection() (for #GP) and kprobes_fault() (for #PF). In both
paths, the fixup_exception() call in the kprobe fault handler is redundant.

For #GP, fixup_exception() is called immediately before
kprobe_fault_handler() is invoked - if someone wanted to fix up our #GP,
they've already done so, no need to try again. (This assumes that the
kprobe's fault handler isn't going to do something crazy like changing RIP
so that it suddenly points to an instruction that does userspace access.)

For #PF on a kernel address from kernel space, after the kprobe fault
handler has run, we'll go into no_context(), which calls fixup_exception().

Acked-by: Masami Hiramatsu
Signed-off-by: Jann Horn
---
 arch/x86/kernel/kprobes/core.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 467ac22691b0..7315ac202aad 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -1021,13 +1021,6 @@ int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
                 if (cur->fault_handler && cur->fault_handler(cur, regs, trapnr))
                         return 1;
 
-                /*
-                 * In case the user-specified fault handler returned
-                 * zero, try to fix up.
-                 */
-                if (fixup_exception(regs, trapnr))
-                        return 1;
-
                 /* fixup routine could not handle it. */
         }

From patchwork Tue Aug 28 20:14:18 2018
X-Patchwork-Submitter: Jann Horn
X-Patchwork-Id: 10579171
Date: Tue, 28 Aug 2018 22:14:18 +0200
In-Reply-To: <20180828201421.157735-1-jannh@google.com>
Message-Id: <20180828201421.157735-5-jannh@google.com>
References: <20180828201421.157735-1-jannh@google.com>
Subject: [PATCH v3 4/7] x86: introduce _ASM_EXTABLE_UA for uaccess fixups
From: Jann Horn
To: Kees Cook, Thomas Gleixner, Ingo Molnar, x86@kernel.org, Andy Lutomirski,
 kernel-hardening@lists.openwall.com, jannh@google.com
Cc: linux-kernel@vger.kernel.org, dvyukov@google.com, Masami Hiramatsu,
 "Naveen N. Rao", Anil S Keshavamurthy, "David S. Miller", Alexander Viro,
 linux-fsdevel@vger.kernel.org, Borislav Petkov

Currently, most fixups for attempting to access userspace memory are handled
using _ASM_EXTABLE, which is also used for various other types of fixups
(e.g. safe MSR access, IRET failures, and a bunch of other things).

In order to make it possible to add special safety checks to uaccess fixups
(in particular, checking whether the fault address is actually in userspace),
introduce a new exception table handler ex_handler_uaccess() and wire it up
to all the user access fixups (excluding ones that already use
_ASM_EXTABLE_EX).

Signed-off-by: Jann Horn
---
NOTE: I haven't touched the two uaccess fixups in asm/fpu/internal.h because
I'm not sure about what other kinds of exceptions those might have to handle.

 arch/x86/include/asm/asm.h          |  10 ++-
 arch/x86/include/asm/fpu/internal.h |   2 +-
 arch/x86/include/asm/futex.h        |   6 +-
 arch/x86/include/asm/uaccess.h      |  22 ++---
 arch/x86/lib/checksum_32.S          |   4 +-
 arch/x86/lib/copy_user_64.S         |  90 ++++++++++----------
 arch/x86/lib/csum-copy_64.S         |   8 +-
 arch/x86/lib/getuser.S              |  12 +--
 arch/x86/lib/putuser.S              |  10 +--
 arch/x86/lib/usercopy_32.c          | 126 ++++++++++++++--------------
 arch/x86/lib/usercopy_64.c          |   4 +-
 arch/x86/mm/extable.c               |   8 ++
 12 files changed, 160 insertions(+), 142 deletions(-)

diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h
index 990770f9e76b..6467757bb39f 100644
--- a/arch/x86/include/asm/asm.h
+++ b/arch/x86/include/asm/asm.h
@@ -130,6 +130,9 @@
 # define _ASM_EXTABLE(from, to) \
         _ASM_EXTABLE_HANDLE(from, to, ex_handler_default)
 
+# define _ASM_EXTABLE_UA(from, to) \
+        _ASM_EXTABLE_HANDLE(from, to, ex_handler_uaccess)
+
 # define _ASM_EXTABLE_FAULT(from, to) \
         _ASM_EXTABLE_HANDLE(from, to, ex_handler_fault)
 
@@ -165,8 +168,8 @@
         jmp copy_user_handle_tail
         .previous
 
-        _ASM_EXTABLE(100b,103b)
-        _ASM_EXTABLE(101b,103b)
+        _ASM_EXTABLE_UA(100b, 103b)
+        _ASM_EXTABLE_UA(101b, 103b)
         .endm
 #else
@@ -182,6 +185,9 @@
 # define _ASM_EXTABLE(from, to) \
         _ASM_EXTABLE_HANDLE(from, to, ex_handler_default)
 
+# define _ASM_EXTABLE_UA(from, to) \
+        _ASM_EXTABLE_HANDLE(from, to, ex_handler_uaccess)
+
 # define _ASM_EXTABLE_FAULT(from, to) \
         _ASM_EXTABLE_HANDLE(from, to, ex_handler_fault)
 
diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h
index a38bf5a1e37a..068ff7689268 100644
--- a/arch/x86/include/asm/fpu/internal.h
+++ b/arch/x86/include/asm/fpu/internal.h
@@ -226,7 +226,7 @@ static inline void copy_fxregs_to_kernel(struct fpu *fpu)
                      "3: movl $-2,%[err]\n\t" \
                      "jmp 2b\n\t" \
                      ".popsection\n\t" \
-                     _ASM_EXTABLE(1b, 3b) \
+                     _ASM_EXTABLE_UA(1b, 3b) \
                      : [err] "=r" (err) \
                      : "D" (st), "m" (*st), "a" (lmask), "d" (hmask) \
                      : "memory")
diff --git a/arch/x86/include/asm/futex.h b/arch/x86/include/asm/futex.h
index de4d68852d3a..13c83fe97988 100644
--- a/arch/x86/include/asm/futex.h
+++ b/arch/x86/include/asm/futex.h
@@ -20,7 +20,7 @@
                      "3:\tmov\t%3, %1\n" \
                      "\tjmp\t2b\n" \
                      "\t.previous\n" \
-                     _ASM_EXTABLE(1b, 3b) \
+                     _ASM_EXTABLE_UA(1b, 3b) \
                      : "=r" (oldval), "=r" (ret), "+m" (*uaddr) \
                      : "i" (-EFAULT), "0" (oparg), "1" (0))
 
@@ -36,8 +36,8 @@
                      "4:\tmov\t%5, %1\n" \
                      "\tjmp\t3b\n" \
                      "\t.previous\n" \
-                     _ASM_EXTABLE(1b, 4b) \
-                     _ASM_EXTABLE(2b, 4b) \
+                     _ASM_EXTABLE_UA(1b, 4b) \
+                     _ASM_EXTABLE_UA(2b, 4b) \
                      : "=&a" (oldval), "=&r" (ret), \
                        "+m" (*uaddr), "=&r" (tem) \
                      : "r" (oparg), "i" (-EFAULT), "1" (0))
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index aae77eb8491c..b5e58cc0c5e7 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -198,8 +198,8 @@ __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
                      "4: movl %3,%0\n" \
                      " jmp 3b\n" \
                      ".previous\n" \
-                     _ASM_EXTABLE(1b, 4b) \
-                     _ASM_EXTABLE(2b, 4b) \
+                     _ASM_EXTABLE_UA(1b, 4b) \
+                     _ASM_EXTABLE_UA(2b, 4b) \
                      : "=r" (err) \
                      : "A" (x), "r" (addr), "i" (errret), "0" (err))
 
@@ -340,8 +340,8 @@ do { \
                      " xorl %%edx,%%edx\n" \
                      " jmp 3b\n" \
                      ".previous\n" \
-                     _ASM_EXTABLE(1b, 4b) \
-                     _ASM_EXTABLE(2b, 4b) \
+                     _ASM_EXTABLE_UA(1b, 4b) \
+                     _ASM_EXTABLE_UA(2b, 4b) \
                      : "=r" (retval), "=&A"(x) \
                      : "m" (__m(__ptr)), "m" __m(((u32 __user *)(__ptr)) + 1), \
                        "i" (errret), "0" (retval)); \
@@ -386,7 +386,7 @@ do { \
                      " xor"itype" %"rtype"1,%"rtype"1\n" \
                      " jmp 2b\n" \
                      ".previous\n" \
-                     _ASM_EXTABLE(1b, 3b) \
+                     _ASM_EXTABLE_UA(1b, 3b) \
                      : "=r" (err), ltype(x) \
                      : "m" (__m(addr)), "i" (errret), "0" (err))
 
@@ -398,7 +398,7 @@ do { \
                      "3: mov %3,%0\n" \
                      " jmp 2b\n" \
                      ".previous\n" \
-                     _ASM_EXTABLE(1b, 3b) \
+                     _ASM_EXTABLE_UA(1b, 3b) \
                      : "=r" (err), ltype(x) \
                      : "m" (__m(addr)), "i" (errret), "0" (err))
 
@@ -474,7 +474,7 @@ struct __large_struct { unsigned long buf[100]; };
                      "3: mov %3,%0\n" \
                      " jmp 2b\n" \
                      ".previous\n" \
-                     _ASM_EXTABLE(1b, 3b) \
+                     _ASM_EXTABLE_UA(1b, 3b) \
                      : "=r"(err) \
                      : ltype(x), "m" (__m(addr)), "i" (errret), "0" (err))
 
@@ -602,7 +602,7 @@ extern void __cmpxchg_wrong_size(void)
                      "3:\tmov %3, %0\n" \
                      "\tjmp 2b\n" \
                      "\t.previous\n" \
-                     _ASM_EXTABLE(1b, 3b) \
+                     _ASM_EXTABLE_UA(1b, 3b) \
                      : "+r" (__ret), "=a" (__old), "+m" (*(ptr)) \
                      : "i" (-EFAULT), "q" (__new), "1" (__old) \
                      : "memory" \
@@ -618,7 +618,7 @@ extern void __cmpxchg_wrong_size(void)
                      "3:\tmov %3, %0\n" \
                      "\tjmp 2b\n" \
                      "\t.previous\n" \
-                     _ASM_EXTABLE(1b, 3b) \
+                     _ASM_EXTABLE_UA(1b, 3b) \
                      : "+r" (__ret), "=a" (__old), "+m" (*(ptr)) \
                      : "i" (-EFAULT), "r" (__new), "1" (__old) \
                      : "memory" \
@@ -634,7 +634,7 @@ extern void __cmpxchg_wrong_size(void)
                      "3:\tmov %3, %0\n" \
                      "\tjmp 2b\n" \
                      "\t.previous\n" \
-                     _ASM_EXTABLE(1b, 3b) \
+                     _ASM_EXTABLE_UA(1b, 3b) \
                      : "+r" (__ret), "=a" (__old), "+m" (*(ptr)) \
                      : "i" (-EFAULT), "r" (__new), "1" (__old) \
                      : "memory" \
@@ -653,7 +653,7 @@ extern void __cmpxchg_wrong_size(void)
                      "3:\tmov %3, %0\n" \
                      "\tjmp 2b\n" \
                      "\t.previous\n" \
-                     _ASM_EXTABLE(1b, 3b) \
+                     _ASM_EXTABLE_UA(1b, 3b) \
                      : "+r" (__ret), "=a" (__old), "+m" (*(ptr)) \
                      : "i" (-EFAULT), "r" (__new), "1" (__old) \
                      : "memory" \
diff --git a/arch/x86/lib/checksum_32.S b/arch/x86/lib/checksum_32.S
index 46e71a74e612..ad8e0906d1ea 100644
--- a/arch/x86/lib/checksum_32.S
+++ b/arch/x86/lib/checksum_32.S
@@ -273,11 +273,11 @@ unsigned int csum_partial_copy_generic (const char *src, char *dst,
 
 #define SRC(y...) \
         9999: y; \
-        _ASM_EXTABLE(9999b, 6001f)
+        _ASM_EXTABLE_UA(9999b, 6001f)
 
 #define DST(y...) \
         9999: y; \
-        _ASM_EXTABLE(9999b, 6002f)
+        _ASM_EXTABLE_UA(9999b, 6002f)
 
 #ifndef CONFIG_X86_USE_PPRO_CHECKSUM
diff --git a/arch/x86/lib/copy_user_64.S b/arch/x86/lib/copy_user_64.S
index 020f75cc8cf6..db4e5aa0858b 100644
--- a/arch/x86/lib/copy_user_64.S
+++ b/arch/x86/lib/copy_user_64.S
@@ -92,26 +92,26 @@ ENTRY(copy_user_generic_unrolled)
 60:     jmp copy_user_handle_tail /* ecx is zerorest also */
         .previous
 
-        _ASM_EXTABLE(1b,30b)
-        _ASM_EXTABLE(2b,30b)
-        _ASM_EXTABLE(3b,30b)
-        _ASM_EXTABLE(4b,30b)
-        _ASM_EXTABLE(5b,30b)
-        _ASM_EXTABLE(6b,30b)
-        _ASM_EXTABLE(7b,30b)
-        _ASM_EXTABLE(8b,30b)
-        _ASM_EXTABLE(9b,30b)
-        _ASM_EXTABLE(10b,30b)
-        _ASM_EXTABLE(11b,30b)
-        _ASM_EXTABLE(12b,30b)
-        _ASM_EXTABLE(13b,30b)
-        _ASM_EXTABLE(14b,30b)
-        _ASM_EXTABLE(15b,30b)
-        _ASM_EXTABLE(16b,30b)
-        _ASM_EXTABLE(18b,40b)
-        _ASM_EXTABLE(19b,40b)
-        _ASM_EXTABLE(21b,50b)
-        _ASM_EXTABLE(22b,50b)
+        _ASM_EXTABLE_UA(1b, 30b)
+        _ASM_EXTABLE_UA(2b, 30b)
+        _ASM_EXTABLE_UA(3b, 30b)
+        _ASM_EXTABLE_UA(4b, 30b)
+        _ASM_EXTABLE_UA(5b, 30b)
+        _ASM_EXTABLE_UA(6b, 30b)
+        _ASM_EXTABLE_UA(7b, 30b)
+        _ASM_EXTABLE_UA(8b, 30b)
+        _ASM_EXTABLE_UA(9b, 30b)
+        _ASM_EXTABLE_UA(10b, 30b)
+        _ASM_EXTABLE_UA(11b, 30b)
+        _ASM_EXTABLE_UA(12b, 30b)
+        _ASM_EXTABLE_UA(13b, 30b)
+        _ASM_EXTABLE_UA(14b, 30b)
+        _ASM_EXTABLE_UA(15b, 30b)
+        _ASM_EXTABLE_UA(16b, 30b)
+        _ASM_EXTABLE_UA(18b, 40b)
+        _ASM_EXTABLE_UA(19b, 40b)
+        _ASM_EXTABLE_UA(21b, 50b)
+        _ASM_EXTABLE_UA(22b, 50b)
 ENDPROC(copy_user_generic_unrolled)
 EXPORT_SYMBOL(copy_user_generic_unrolled)
 
@@ -156,8 +156,8 @@ ENTRY(copy_user_generic_string)
         jmp copy_user_handle_tail
         .previous
 
-        _ASM_EXTABLE(1b,11b)
-        _ASM_EXTABLE(3b,12b)
+        _ASM_EXTABLE_UA(1b, 11b)
+        _ASM_EXTABLE_UA(3b, 12b)
 ENDPROC(copy_user_generic_string)
 EXPORT_SYMBOL(copy_user_generic_string)
 
@@ -189,7 +189,7 @@ ENTRY(copy_user_enhanced_fast_string)
         jmp copy_user_handle_tail
         .previous
 
-        _ASM_EXTABLE(1b,12b)
+        _ASM_EXTABLE_UA(1b, 12b)
 ENDPROC(copy_user_enhanced_fast_string)
 EXPORT_SYMBOL(copy_user_enhanced_fast_string)
 
@@ -319,27 +319,27 @@ ENTRY(__copy_user_nocache)
         jmp copy_user_handle_tail
         .previous
 
-        _ASM_EXTABLE(1b,.L_fixup_4x8b_copy)
-        _ASM_EXTABLE(2b,.L_fixup_4x8b_copy)
-        _ASM_EXTABLE(3b,.L_fixup_4x8b_copy)
-        _ASM_EXTABLE(4b,.L_fixup_4x8b_copy)
-        _ASM_EXTABLE(5b,.L_fixup_4x8b_copy)
-        _ASM_EXTABLE(6b,.L_fixup_4x8b_copy)
-        _ASM_EXTABLE(7b,.L_fixup_4x8b_copy)
-        _ASM_EXTABLE(8b,.L_fixup_4x8b_copy)
-        _ASM_EXTABLE(9b,.L_fixup_4x8b_copy)
-        _ASM_EXTABLE(10b,.L_fixup_4x8b_copy)
-        _ASM_EXTABLE(11b,.L_fixup_4x8b_copy)
-        _ASM_EXTABLE(12b,.L_fixup_4x8b_copy)
-        _ASM_EXTABLE(13b,.L_fixup_4x8b_copy)
-        _ASM_EXTABLE(14b,.L_fixup_4x8b_copy)
-        _ASM_EXTABLE(15b,.L_fixup_4x8b_copy)
-        _ASM_EXTABLE(16b,.L_fixup_4x8b_copy)
-        _ASM_EXTABLE(20b,.L_fixup_8b_copy)
-        _ASM_EXTABLE(21b,.L_fixup_8b_copy)
-        _ASM_EXTABLE(30b,.L_fixup_4b_copy)
-        _ASM_EXTABLE(31b,.L_fixup_4b_copy)
-        _ASM_EXTABLE(40b,.L_fixup_1b_copy)
-        _ASM_EXTABLE(41b,.L_fixup_1b_copy)
+        _ASM_EXTABLE_UA(1b, .L_fixup_4x8b_copy)
+        _ASM_EXTABLE_UA(2b, .L_fixup_4x8b_copy)
+        _ASM_EXTABLE_UA(3b, .L_fixup_4x8b_copy)
+        _ASM_EXTABLE_UA(4b, .L_fixup_4x8b_copy)
+        _ASM_EXTABLE_UA(5b, .L_fixup_4x8b_copy)
+        _ASM_EXTABLE_UA(6b, .L_fixup_4x8b_copy)
+        _ASM_EXTABLE_UA(7b, .L_fixup_4x8b_copy)
+        _ASM_EXTABLE_UA(8b, .L_fixup_4x8b_copy)
+        _ASM_EXTABLE_UA(9b, .L_fixup_4x8b_copy)
+        _ASM_EXTABLE_UA(10b, .L_fixup_4x8b_copy)
+        _ASM_EXTABLE_UA(11b, .L_fixup_4x8b_copy)
+        _ASM_EXTABLE_UA(12b, .L_fixup_4x8b_copy)
+        _ASM_EXTABLE_UA(13b, .L_fixup_4x8b_copy)
+        _ASM_EXTABLE_UA(14b, .L_fixup_4x8b_copy)
+        _ASM_EXTABLE_UA(15b, .L_fixup_4x8b_copy)
+        _ASM_EXTABLE_UA(16b, .L_fixup_4x8b_copy)
+        _ASM_EXTABLE_UA(20b, .L_fixup_8b_copy)
+        _ASM_EXTABLE_UA(21b, .L_fixup_8b_copy)
+        _ASM_EXTABLE_UA(30b, .L_fixup_4b_copy)
+        _ASM_EXTABLE_UA(31b, .L_fixup_4b_copy)
+        _ASM_EXTABLE_UA(40b, .L_fixup_1b_copy)
+        _ASM_EXTABLE_UA(41b, .L_fixup_1b_copy)
 ENDPROC(__copy_user_nocache)
 EXPORT_SYMBOL(__copy_user_nocache)
diff --git a/arch/x86/lib/csum-copy_64.S b/arch/x86/lib/csum-copy_64.S
index 45a53dfe1859..a4a379e79259 100644
--- a/arch/x86/lib/csum-copy_64.S
+++ b/arch/x86/lib/csum-copy_64.S
@@ -31,14 +31,18 @@
         .macro source
 10:
-        _ASM_EXTABLE(10b, .Lbad_source)
+        _ASM_EXTABLE_UA(10b, .Lbad_source)
         .endm
 
         .macro dest
 20:
-        _ASM_EXTABLE(20b, .Lbad_dest)
+        _ASM_EXTABLE_UA(20b, .Lbad_dest)
         .endm
 
+        /*
+         * No _ASM_EXTABLE_UA; this is used for intentional prefetch on a
+         * potentially unmapped kernel address.
+         */
         .macro ignore L=.Lignore
 30:
         _ASM_EXTABLE(30b, \L)
         .endm
diff --git a/arch/x86/lib/getuser.S b/arch/x86/lib/getuser.S
index 49b167f73215..74fdff968ea3 100644
--- a/arch/x86/lib/getuser.S
+++ b/arch/x86/lib/getuser.S
@@ -132,12 +132,12 @@ bad_get_user_8:
 END(bad_get_user_8)
 #endif
 
-        _ASM_EXTABLE(1b,bad_get_user)
-        _ASM_EXTABLE(2b,bad_get_user)
-        _ASM_EXTABLE(3b,bad_get_user)
+        _ASM_EXTABLE_UA(1b, bad_get_user)
+        _ASM_EXTABLE_UA(2b, bad_get_user)
+        _ASM_EXTABLE_UA(3b, bad_get_user)
 #ifdef CONFIG_X86_64
-        _ASM_EXTABLE(4b,bad_get_user)
+        _ASM_EXTABLE_UA(4b, bad_get_user)
 #else
-        _ASM_EXTABLE(4b,bad_get_user_8)
-        _ASM_EXTABLE(5b,bad_get_user_8)
+        _ASM_EXTABLE_UA(4b, bad_get_user_8)
+        _ASM_EXTABLE_UA(5b, bad_get_user_8)
 #endif
diff --git a/arch/x86/lib/putuser.S b/arch/x86/lib/putuser.S
index 96dce5fe2a35..d2e5c9c39601 100644
--- a/arch/x86/lib/putuser.S
+++ b/arch/x86/lib/putuser.S
@@ -94,10 +94,10 @@ bad_put_user:
         EXIT
 END(bad_put_user)
 
-        _ASM_EXTABLE(1b,bad_put_user)
-        _ASM_EXTABLE(2b,bad_put_user)
-        _ASM_EXTABLE(3b,bad_put_user)
-        _ASM_EXTABLE(4b,bad_put_user)
+        _ASM_EXTABLE_UA(1b, bad_put_user)
+        _ASM_EXTABLE_UA(2b, bad_put_user)
+        _ASM_EXTABLE_UA(3b, bad_put_user)
+        _ASM_EXTABLE_UA(4b, bad_put_user)
 #ifdef CONFIG_X86_32
-        _ASM_EXTABLE(5b,bad_put_user)
+        _ASM_EXTABLE_UA(5b, bad_put_user)
 #endif
diff --git a/arch/x86/lib/usercopy_32.c b/arch/x86/lib/usercopy_32.c
index 7add8ba06887..71fb58d44d58 100644
--- a/arch/x86/lib/usercopy_32.c
+++ b/arch/x86/lib/usercopy_32.c
@@ -47,8 +47,8 @@ do { \
                 "3: lea 0(%2,%0,4),%0\n" \
                 "   jmp 2b\n" \
                 ".previous\n" \
-                _ASM_EXTABLE(0b,3b) \
-                _ASM_EXTABLE(1b,2b) \
+                _ASM_EXTABLE_UA(0b, 3b) \
+                _ASM_EXTABLE_UA(1b, 2b) \
                 : "=&c"(size), "=&D" (__d0) \
                 : "r"(size & 3), "0"(size / 4), "1"(addr), "a"(0)); \
 } while (0)
@@ -153,44 +153,44 @@ __copy_user_intel(void __user *to, const void *from, unsigned long size)
                        "101: lea 0(%%eax,%0,4),%0\n"
                        "     jmp 100b\n"
                        ".previous\n"
-                       _ASM_EXTABLE(1b,100b)
-                       _ASM_EXTABLE(2b,100b)
-                       _ASM_EXTABLE(3b,100b)
-                       _ASM_EXTABLE(4b,100b)
-                       _ASM_EXTABLE(5b,100b)
-                       _ASM_EXTABLE(6b,100b)
-                       _ASM_EXTABLE(7b,100b)
-                       _ASM_EXTABLE(8b,100b)
-                       _ASM_EXTABLE(9b,100b)
-                       _ASM_EXTABLE(10b,100b)
-                       _ASM_EXTABLE(11b,100b)
-                       _ASM_EXTABLE(12b,100b)
-                       _ASM_EXTABLE(13b,100b)
-                       _ASM_EXTABLE(14b,100b)
-                       _ASM_EXTABLE(15b,100b)
-                       _ASM_EXTABLE(16b,100b)
-                       _ASM_EXTABLE(17b,100b)
-                       _ASM_EXTABLE(18b,100b)
-                       _ASM_EXTABLE(19b,100b)
-                       _ASM_EXTABLE(20b,100b)
-                       _ASM_EXTABLE(21b,100b)
-                       _ASM_EXTABLE(22b,100b)
-                       _ASM_EXTABLE(23b,100b)
-                       _ASM_EXTABLE(24b,100b)
-                       _ASM_EXTABLE(25b,100b)
-                       _ASM_EXTABLE(26b,100b)
-                       _ASM_EXTABLE(27b,100b)
-                       _ASM_EXTABLE(28b,100b)
-                       _ASM_EXTABLE(29b,100b)
-                       _ASM_EXTABLE(30b,100b)
-                       _ASM_EXTABLE(31b,100b)
-                       _ASM_EXTABLE(32b,100b)
-                       _ASM_EXTABLE(33b,100b)
-                       _ASM_EXTABLE(34b,100b)
-                       _ASM_EXTABLE(35b,100b)
-                       _ASM_EXTABLE(36b,100b)
-                       _ASM_EXTABLE(37b,100b)
-                       _ASM_EXTABLE(99b,101b)
+                       _ASM_EXTABLE_UA(1b, 100b)
+                       _ASM_EXTABLE_UA(2b, 100b)
+                       _ASM_EXTABLE_UA(3b, 100b)
+                       _ASM_EXTABLE_UA(4b, 100b)
+                       _ASM_EXTABLE_UA(5b, 100b)
+                       _ASM_EXTABLE_UA(6b, 100b)
+                       _ASM_EXTABLE_UA(7b, 100b)
+                       _ASM_EXTABLE_UA(8b, 100b)
+                       _ASM_EXTABLE_UA(9b, 100b)
+                       _ASM_EXTABLE_UA(10b, 100b)
+                       _ASM_EXTABLE_UA(11b, 100b)
+                       _ASM_EXTABLE_UA(12b, 100b)
+                       _ASM_EXTABLE_UA(13b, 100b)
+                       _ASM_EXTABLE_UA(14b, 100b)
+                       _ASM_EXTABLE_UA(15b, 100b)
+                       _ASM_EXTABLE_UA(16b, 100b)
+                       _ASM_EXTABLE_UA(17b, 100b)
+                       _ASM_EXTABLE_UA(18b, 100b)
+                       _ASM_EXTABLE_UA(19b, 100b)
+                       _ASM_EXTABLE_UA(20b, 100b)
+                       _ASM_EXTABLE_UA(21b, 100b)
+                       _ASM_EXTABLE_UA(22b, 100b)
+                       _ASM_EXTABLE_UA(23b, 100b)
+                       _ASM_EXTABLE_UA(24b, 100b)
+                       _ASM_EXTABLE_UA(25b, 100b)
+                       _ASM_EXTABLE_UA(26b, 100b)
+                       _ASM_EXTABLE_UA(27b, 100b)
+                       _ASM_EXTABLE_UA(28b, 100b)
+                       _ASM_EXTABLE_UA(29b, 100b)
+                       _ASM_EXTABLE_UA(30b, 100b)
+                       _ASM_EXTABLE_UA(31b, 100b)
+                       _ASM_EXTABLE_UA(32b, 100b)
+                       _ASM_EXTABLE_UA(33b, 100b)
+                       _ASM_EXTABLE_UA(34b, 100b)
+                       _ASM_EXTABLE_UA(35b, 100b)
+                       _ASM_EXTABLE_UA(36b, 100b)
+                       _ASM_EXTABLE_UA(37b, 100b)
+                       _ASM_EXTABLE_UA(99b, 101b)
                        : "=&c"(size), "=&D" (d0), "=&S" (d1)
                        : "1"(to), "2"(from), "0"(size)
                        : "eax", "edx", "memory");
@@ -259,26 +259,26 @@ static unsigned long __copy_user_intel_nocache(void *to,
                "9:      lea 0(%%eax,%0,4),%0\n"
                "16:     jmp 8b\n"
                ".previous\n"
-               _ASM_EXTABLE(0b,16b)
-               _ASM_EXTABLE(1b,16b)
-               _ASM_EXTABLE(2b,16b)
-               _ASM_EXTABLE(21b,16b)
-               _ASM_EXTABLE(3b,16b)
-               _ASM_EXTABLE(31b,16b)
-               _ASM_EXTABLE(4b,16b)
-               _ASM_EXTABLE(41b,16b)
-               _ASM_EXTABLE(10b,16b)
-               _ASM_EXTABLE(51b,16b)
-               _ASM_EXTABLE(11b,16b)
-               _ASM_EXTABLE(61b,16b)
-               _ASM_EXTABLE(12b,16b)
-               _ASM_EXTABLE(71b,16b)
-               _ASM_EXTABLE(13b,16b)
-               _ASM_EXTABLE(81b,16b)
-               _ASM_EXTABLE(14b,16b)
-               _ASM_EXTABLE(91b,16b)
-               _ASM_EXTABLE(6b,9b)
-               _ASM_EXTABLE(7b,16b)
+               _ASM_EXTABLE_UA(0b, 16b)
+               _ASM_EXTABLE_UA(1b, 16b)
+               _ASM_EXTABLE_UA(2b, 16b)
+               _ASM_EXTABLE_UA(21b, 16b)
+               _ASM_EXTABLE_UA(3b, 16b)
+               _ASM_EXTABLE_UA(31b, 16b)
+               _ASM_EXTABLE_UA(4b, 16b)
+               _ASM_EXTABLE_UA(41b, 16b)
+               _ASM_EXTABLE_UA(10b, 16b)
+               _ASM_EXTABLE_UA(51b, 16b)
+               _ASM_EXTABLE_UA(11b, 16b)
+               _ASM_EXTABLE_UA(61b, 16b)
+               _ASM_EXTABLE_UA(12b, 16b)
+               _ASM_EXTABLE_UA(71b, 16b)
+               _ASM_EXTABLE_UA(13b, 16b)
+               _ASM_EXTABLE_UA(81b, 16b)
+               _ASM_EXTABLE_UA(14b, 16b)
+               _ASM_EXTABLE_UA(91b, 16b)
+               _ASM_EXTABLE_UA(6b, 9b)
+               _ASM_EXTABLE_UA(7b, 16b)
                : "=&c"(size), "=&D" (d0), "=&S" (d1)
                : "1"(to), "2"(from), "0"(size)
                : "eax", "edx", "memory");
@@ -321,9 +321,9 @@ do { \
                 "3: lea 0(%3,%0,4),%0\n" \
                 "   jmp 2b\n" \
                 ".previous\n" \
-                _ASM_EXTABLE(4b,5b) \
-                _ASM_EXTABLE(0b,3b) \
-                _ASM_EXTABLE(1b,2b) \
+                _ASM_EXTABLE_UA(4b, 5b) \
+                _ASM_EXTABLE_UA(0b, 3b) \
+                _ASM_EXTABLE_UA(1b, 2b) \
                 : "=&c"(size), "=&D" (__d0), "=&S" (__d1), "=r"(__d2) \
                 : "3"(size), "0"(size), "1"(to), "2"(from) \
                 : "memory"); \
diff --git a/arch/x86/lib/usercopy_64.c b/arch/x86/lib/usercopy_64.c
index 9c5606d88f61..fefe64436398 100644
--- a/arch/x86/lib/usercopy_64.c
+++ b/arch/x86/lib/usercopy_64.c
@@ -37,8 +37,8 @@ unsigned long __clear_user(void __user *addr, unsigned long size)
                 "3: lea 0(%[size1],%[size8],8),%[size8]\n"
                 "   jmp 2b\n"
                 ".previous\n"
-                _ASM_EXTABLE(0b,3b)
-                _ASM_EXTABLE(1b,2b)
+                _ASM_EXTABLE_UA(0b, 3b)
+                _ASM_EXTABLE_UA(1b, 2b)
                 : [size8] "=&c"(size), [dst] "=&D" (__d0)
                 : [size1] "r"(size & 7), "[size8]" (size / 8), "[dst]"(addr));
         clac();
diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
index 45f5d6cf65ae..0b8b5d889eec 100644
--- a/arch/x86/mm/extable.c
+++ b/arch/x86/mm/extable.c
@@ -108,6 +108,14 @@ __visible bool ex_handler_fprestore(const struct exception_table_entry *fixup,
 }
 EXPORT_SYMBOL_GPL(ex_handler_fprestore);
 
+__visible bool ex_handler_uaccess(const struct exception_table_entry *fixup,
+                                  struct pt_regs *regs, int trapnr)
+{
+        regs->ip = ex_fixup_addr(fixup);
+        return true;
+}
+EXPORT_SYMBOL(ex_handler_uaccess);
+
 __visible bool ex_handler_ext(const struct exception_table_entry *fixup,
                               struct pt_regs *regs, int trapnr)
 {

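[Editorial illustration, not part of the patch: a simplified user-space C
model of exception-table dispatch, the mechanism that ex_handler_uaccess()
plugs into. The struct layout, linear scan, and addresses below are
assumptions made purely for the sketch; the real kernel stores relative
32-bit offsets in a sorted table and finds entries via
search_exception_tables().]

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct regs { unsigned long ip; };

struct extable_entry {
        unsigned long insn;   /* address of the instruction that may fault */
        unsigned long fixup;  /* where to resume if it does */
        bool (*handler)(const struct extable_entry *e, struct regs *regs);
};

/* Analogue of ex_handler_default/ex_handler_uaccess: jump to the fixup. */
static bool handler_uaccess(const struct extable_entry *e, struct regs *regs)
{
        regs->ip = e->fixup;
        return true;
}

static const struct extable_entry table[] = {
        { 0x1000, 0x1400, handler_uaccess },
        { 0x2000, 0x2400, handler_uaccess },
};

/* Analogue of fixup_exception(): find the entry for regs->ip and run it. */
static bool fixup_exception_model(struct regs *regs)
{
        for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++)
                if (table[i].insn == regs->ip)
                        return table[i].handler(&table[i], regs);
        return false; /* no fixup registered: the fault is fatal */
}

int main(void)
{
        struct regs regs = { .ip = 0x1000 };

        if (fixup_exception_model(&regs))
                printf("resumed at 0x%lx\n", regs.ip);
        return 0;
}
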
From patchwork Tue Aug 28 20:14:19 2018
X-Patchwork-Submitter: Jann Horn
X-Patchwork-Id: 10579173
Date: Tue, 28 Aug 2018 22:14:19 +0200
In-Reply-To: <20180828201421.157735-1-jannh@google.com>
Message-Id: <20180828201421.157735-6-jannh@google.com>
References: <20180828201421.157735-1-jannh@google.com>
Subject: [PATCH v3 5/7] x86: plumb error code and fault address through to
 fault handlers
From: Jann Horn
To: Kees Cook, Thomas Gleixner, Ingo Molnar, x86@kernel.org, Andy Lutomirski,
 kernel-hardening@lists.openwall.com, jannh@google.com
Cc: linux-kernel@vger.kernel.org, dvyukov@google.com, Masami Hiramatsu,
 "Naveen N. Rao", Anil S Keshavamurthy, "David S. Miller", Alexander Viro,
 linux-fsdevel@vger.kernel.org, Borislav Petkov

This is preparation for looking at trap number and fault address in the
handlers for uaccess errors. This patch should not change any behavior.

Signed-off-by: Jann Horn
---
v3:
- really plumb the error code through to the handlers (Andy)

 arch/x86/include/asm/extable.h   |  3 +-
 arch/x86/include/asm/ptrace.h    |  2 ++
 arch/x86/kernel/cpu/mcheck/mce.c |  2 +-
 arch/x86/kernel/traps.c          |  6 ++--
 arch/x86/mm/extable.c            | 50 ++++++++++++++++++++++----------
 arch/x86/mm/fault.c              |  2 +-
 6 files changed, 44 insertions(+), 21 deletions(-)

diff --git a/arch/x86/include/asm/extable.h b/arch/x86/include/asm/extable.h
index f9c3a5d502f4..d8c2198d543b 100644
--- a/arch/x86/include/asm/extable.h
+++ b/arch/x86/include/asm/extable.h
@@ -29,7 +29,8 @@ struct pt_regs;
         (b)->handler = (tmp).handler - (delta); \
 } while (0)
 
-extern int fixup_exception(struct pt_regs *regs, int trapnr);
+extern int fixup_exception(struct pt_regs *regs, int trapnr,
+                           unsigned long error_code, unsigned long fault_addr);
 extern int fixup_bug(struct pt_regs *regs, int trapnr);
 extern bool ex_has_fault_handler(unsigned long ip);
 extern void early_fixup_exception(struct pt_regs *regs, int trapnr);
diff --git a/arch/x86/include/asm/ptrace.h b/arch/x86/include/asm/ptrace.h
index 6de1fd3d0097..6c68d4947a8f 100644
--- a/arch/x86/include/asm/ptrace.h
+++ b/arch/x86/include/asm/ptrace.h
@@ -37,8 +37,10 @@ struct pt_regs {
         unsigned short __esh;
         unsigned short fs;
         unsigned short __fsh;
+/* On interrupt, gs and __gsh store the vector number. */
         unsigned short gs;
         unsigned short __gsh;
+/* On interrupt, this is the error code. */
         unsigned long orig_ax;
         unsigned long ip;
         unsigned short cs;
diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
index 953b3ce92dcc..ef8fd1f2ede0 100644
--- a/arch/x86/kernel/cpu/mcheck/mce.c
+++ b/arch/x86/kernel/cpu/mcheck/mce.c
@@ -1315,7 +1315,7 @@ void do_machine_check(struct pt_regs *regs, long error_code)
                 local_irq_disable();
                 ist_end_non_atomic();
         } else {
-                if (!fixup_exception(regs, X86_TRAP_MC))
+                if (!fixup_exception(regs, X86_TRAP_MC, error_code, 0))
                         mce_panic("Failed kernel mode recovery", &m, NULL);
         }
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index bf9ab1aaa175..16c95cb90496 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -206,7 +206,7 @@ do_trap_no_signal(struct task_struct *tsk, int trapnr, char *str,
         }
 
         if (!user_mode(regs)) {
-                if (fixup_exception(regs, trapnr))
+                if (fixup_exception(regs, trapnr, error_code, 0))
                         return 0;
 
                 tsk->thread.error_code = error_code;
@@ -551,7 +551,7 @@ do_general_protection(struct pt_regs *regs, long error_code)
         tsk = current;
         if (!user_mode(regs)) {
-                if (fixup_exception(regs, X86_TRAP_GP))
+                if (fixup_exception(regs, X86_TRAP_GP, error_code, 0))
                         return;
 
                 tsk->thread.error_code = error_code;
@@ -848,7 +848,7 @@ static void math_error(struct pt_regs *regs, int error_code, int trapnr)
         cond_local_irq_enable(regs);
 
         if (!user_mode(regs)) {
-                if (fixup_exception(regs, trapnr))
+                if (fixup_exception(regs, trapnr, error_code, 0))
                         return;
 
                 task->thread.error_code = error_code;
diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
index 0b8b5d889eec..856fa409c536 100644
--- a/arch/x86/mm/extable.c
+++ b/arch/x86/mm/extable.c
@@ -8,7 +8,8 @@
 #include
 
 typedef bool (*ex_handler_t)(const struct exception_table_entry *,
-                             struct pt_regs *, int);
+                             struct pt_regs *, int, unsigned long,
+                             unsigned long);
 
 static inline unsigned long
 ex_fixup_addr(const struct exception_table_entry *x)
@@ -22,7 +23,9 @@ ex_fixup_handler(const struct exception_table_entry *x)
 }
 
 __visible bool ex_handler_default(const struct exception_table_entry *fixup,
-                                  struct pt_regs *regs, int trapnr)
+                                  struct pt_regs *regs, int trapnr,
+                                  unsigned long error_code,
+                                  unsigned long fault_addr)
 {
         regs->ip = ex_fixup_addr(fixup);
         return true;
@@ -30,7 +33,9 @@ __visible bool ex_handler_default(const struct exception_table_entry *fixup,
 EXPORT_SYMBOL(ex_handler_default);
 
 __visible bool ex_handler_fault(const struct exception_table_entry *fixup,
-                                struct pt_regs *regs, int trapnr)
+                                struct pt_regs *regs, int trapnr,
+                                unsigned long error_code,
+                                unsigned long fault_addr)
 {
         regs->ip = ex_fixup_addr(fixup);
         regs->ax = trapnr;
@@ -43,7 +48,9 @@ EXPORT_SYMBOL_GPL(ex_handler_fault);
  * result of a refcount inc/dec/add/sub.
  */
 __visible bool ex_handler_refcount(const struct exception_table_entry *fixup,
-                                   struct pt_regs *regs, int trapnr)
+                                   struct pt_regs *regs, int trapnr,
+                                   unsigned long error_code,
+                                   unsigned long fault_addr)
 {
         /* First unconditionally saturate the refcount. */
         *(int *)regs->cx = INT_MIN / 2;
@@ -96,7 +103,9 @@ EXPORT_SYMBOL(ex_handler_refcount);
  * out all the FPU registers) if we can't restore from the task's FPU state.
  */
 __visible bool ex_handler_fprestore(const struct exception_table_entry *fixup,
-                                    struct pt_regs *regs, int trapnr)
+                                    struct pt_regs *regs, int trapnr,
+                                    unsigned long error_code,
+                                    unsigned long fault_addr)
 {
         regs->ip = ex_fixup_addr(fixup);
 
@@ -109,7 +118,9 @@ __visible bool ex_handler_fprestore(const struct exception_table_entry *fixup,
 EXPORT_SYMBOL_GPL(ex_handler_fprestore);
 
 __visible bool ex_handler_uaccess(const struct exception_table_entry *fixup,
-                                  struct pt_regs *regs, int trapnr)
+                                  struct pt_regs *regs, int trapnr,
+                                  unsigned long error_code,
+                                  unsigned long fault_addr)
 {
         regs->ip = ex_fixup_addr(fixup);
         return true;
@@ -117,7 +128,9 @@ __visible bool ex_handler_uaccess(const struct exception_table_entry *fixup,
 EXPORT_SYMBOL(ex_handler_uaccess);
 
 __visible bool ex_handler_ext(const struct exception_table_entry *fixup,
-                              struct pt_regs *regs, int trapnr)
+                              struct pt_regs *regs, int trapnr,
+                              unsigned long error_code,
+                              unsigned long fault_addr)
 {
         /* Special hack for uaccess_err */
         current->thread.uaccess_err = 1;
@@ -127,7 +140,9 @@ __visible bool ex_handler_ext(const struct exception_table_entry *fixup,
 EXPORT_SYMBOL(ex_handler_ext);
 
 __visible bool ex_handler_rdmsr_unsafe(const struct exception_table_entry *fixup,
-                                       struct pt_regs *regs, int trapnr)
+                                       struct pt_regs *regs, int trapnr,
+                                       unsigned long error_code,
+                                       unsigned long fault_addr)
 {
         if (pr_warn_once("unchecked MSR access error: RDMSR from 0x%x at rIP: 0x%lx (%pF)\n",
                          (unsigned int)regs->cx, regs->ip, (void *)regs->ip))
@@ -142,7 +157,9 @@ __visible bool ex_handler_rdmsr_unsafe(const struct exception_table_entry *fixup
 EXPORT_SYMBOL(ex_handler_rdmsr_unsafe);
 
 __visible bool ex_handler_wrmsr_unsafe(const struct exception_table_entry *fixup,
-                                       struct pt_regs *regs, int trapnr)
+                                       struct pt_regs *regs, int trapnr,
+                                       unsigned long error_code,
+                                       unsigned long fault_addr)
 {
         if (pr_warn_once("unchecked MSR access error: WRMSR to 0x%x (tried to write 0x%08x%08x) at rIP: 0x%lx (%pF)\n",
                          (unsigned int)regs->cx, (unsigned int)regs->dx,
@@ -156,12 +173,14 @@ __visible bool ex_handler_wrmsr_unsafe(const struct exception_table_entry *fixup
 EXPORT_SYMBOL(ex_handler_wrmsr_unsafe);
 
 __visible bool ex_handler_clear_fs(const struct exception_table_entry *fixup,
-                                   struct pt_regs *regs, int trapnr)
+                                   struct pt_regs *regs, int trapnr,
+                                   unsigned long error_code,
+                                   unsigned long fault_addr)
 {
         if (static_cpu_has(X86_BUG_NULL_SEG))
                 asm volatile ("mov %0, %%fs" : : "rm" (__USER_DS));
         asm volatile ("mov %0, %%fs" : : "rm" (0));
-        return ex_handler_default(fixup, regs, trapnr);
+        return ex_handler_default(fixup, regs, trapnr, error_code, fault_addr);
 }
 EXPORT_SYMBOL(ex_handler_clear_fs);
 
@@ -178,7 +197,8 @@ __visible bool ex_has_fault_handler(unsigned long ip)
         return handler == ex_handler_fault;
 }
 
-int fixup_exception(struct pt_regs *regs, int trapnr)
+int fixup_exception(struct pt_regs *regs, int trapnr, unsigned long error_code,
+                    unsigned long fault_addr)
 {
         const struct exception_table_entry *e;
         ex_handler_t handler;
@@ -202,7 +222,7 @@ int fixup_exception(struct pt_regs *regs, int trapnr)
                 return 0;
 
         handler = ex_fixup_handler(e);
-        return handler(e, regs, trapnr);
+        return handler(e, regs, trapnr, error_code, fault_addr);
 }
 
 extern unsigned int early_recursion_flag;
@@ -238,9 +258,9 @@ void __init early_fixup_exception(struct pt_regs *regs, int trapnr)
          * result in a hard-to-debug panic.
          *
          * Keep in mind that not all vectors actually get here.  Early
-         * fage faults, for example, are special.
+         * page faults, for example, are special.
          */
-        if (fixup_exception(regs, trapnr))
+        if (fixup_exception(regs, trapnr, regs->orig_ax, 0))
                 return;
 
         if (fixup_bug(regs, trapnr))
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index bcdaae1d5bf5..840833469551 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -711,7 +711,7 @@ no_context(struct pt_regs *regs, unsigned long error_code,
         int sig;
 
         /* Are we prepared to handle this kernel fault? */
-        if (fixup_exception(regs, X86_TRAP_PF)) {
+        if (fixup_exception(regs, X86_TRAP_PF, error_code, address)) {
                 /*
                  * Any interrupt that takes a fault gets the fixup. This makes
                  * the below recursive fault logic only apply to a faults from

ANB0VdaaJ4oPVmKutihv0mhKCgI76AgUwcB5OOoZyVVLf2Qa5AMoOXkJubQMng4m3eTi+DqXw+8y140DIQ== X-Received: by 2002:a81:2304:: with SMTP id j4-v6mr941579ywj.101.1535487298418; Tue, 28 Aug 2018 13:14:58 -0700 (PDT) Date: Tue, 28 Aug 2018 22:14:20 +0200 In-Reply-To: <20180828201421.157735-1-jannh@google.com> Message-Id: <20180828201421.157735-7-jannh@google.com> Mime-Version: 1.0 References: <20180828201421.157735-1-jannh@google.com> X-Mailer: git-send-email 2.19.0.rc0.228.g281dcd1b4d0-goog Subject: [PATCH v3 6/7] x86: BUG() when uaccess helpers fault on kernel addresses From: Jann Horn To: Kees Cook , Thomas Gleixner , Ingo Molnar , x86@kernel.org, Andy Lutomirski , kernel-hardening@lists.openwall.com, jannh@google.com Cc: linux-kernel@vger.kernel.org, dvyukov@google.com, Masami Hiramatsu , "Naveen N. Rao" , Anil S Keshavamurthy , "David S. Miller" , Alexander Viro , linux-fsdevel@vger.kernel.org, Borislav Petkov X-Virus-Scanned: ClamAV using ClamSMTP There have been multiple kernel vulnerabilities that permitted userspace to pass completely unchecked pointers through to userspace accessors: - the waitid() bug - commit 96ca579a1ecc ("waitid(): Add missing access_ok() checks") - the sg/bsg read/write APIs - the infiniband read/write APIs These don't happen all that often, but when they do happen, it is hard to test for them properly; and it is probably also hard to discover them with fuzzing. Even when an unmapped kernel address is supplied to such buggy code, it just returns -EFAULT instead of doing a proper BUG() or at least WARN(). This patch attempts to make such misbehaving code a bit more visible by refusing to do a fixup in the pagefault handler code when a userspace accessor causes #PF on a kernel address and the current context isn't whitelisted. Signed-off-by: Jann Horn --- v3: - whitelist exact_copy_from_user(), at least for now - the alternative would be a somewhat complicated refactor (Kees Cook) arch/x86/mm/extable.c | 58 +++++++++++++++++++++++++++++++++++++++++++ fs/namespace.c | 2 ++ include/linux/sched.h | 6 +++++ mm/maccess.c | 6 +++++ 4 files changed, 72 insertions(+) diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c index 856fa409c536..6521134057e8 100644 --- a/arch/x86/mm/extable.c +++ b/arch/x86/mm/extable.c @@ -117,11 +117,67 @@ __visible bool ex_handler_fprestore(const struct exception_table_entry *fixup, } EXPORT_SYMBOL_GPL(ex_handler_fprestore); +/* Helper to check whether a uaccess fault indicates a kernel bug. */ +static bool bogus_uaccess(struct pt_regs *regs, int trapnr, + unsigned long fault_addr) +{ + /* This is the normal case: #PF with a fault address in userspace. */ + if (trapnr == X86_TRAP_PF && fault_addr < TASK_SIZE_MAX) + return false; + + /* + * This code can be reached for machine checks, but only if the #MC + * handler has already decided that it looks like a candidate for fixup. + * This e.g. happens when attempting to access userspace memory which + * the CPU can't access because of uncorrectable bad memory. + */ + if (trapnr == X86_TRAP_MC) + return false; + + /* + * There are two remaining exception types we might encounter here: + * - #PF for faulting accesses to kernel addresses + * - #GP for faulting accesses to noncanonical addresses + * Complain about anything else. + */ + if (trapnr != X86_TRAP_PF && trapnr != X86_TRAP_GP) { + WARN(1, "unexpected trap %d in uaccess\n", trapnr); + return false; + } + + /* + * This is a faulting memory access in kernel space, on a kernel + * address, in a usercopy function. This can e.g. 
diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
index 856fa409c536..6521134057e8 100644
--- a/arch/x86/mm/extable.c
+++ b/arch/x86/mm/extable.c
@@ -117,11 +117,67 @@ __visible bool ex_handler_fprestore(const struct exception_table_entry *fixup,
 }
 EXPORT_SYMBOL_GPL(ex_handler_fprestore);

+/* Helper to check whether a uaccess fault indicates a kernel bug. */
+static bool bogus_uaccess(struct pt_regs *regs, int trapnr,
+			  unsigned long fault_addr)
+{
+	/* This is the normal case: #PF with a fault address in userspace. */
+	if (trapnr == X86_TRAP_PF && fault_addr < TASK_SIZE_MAX)
+		return false;
+
+	/*
+	 * This code can be reached for machine checks, but only if the #MC
+	 * handler has already decided that it looks like a candidate for fixup.
+	 * This e.g. happens when attempting to access userspace memory which
+	 * the CPU can't access because of uncorrectable bad memory.
+	 */
+	if (trapnr == X86_TRAP_MC)
+		return false;
+
+	/*
+	 * There are two remaining exception types we might encounter here:
+	 *  - #PF for faulting accesses to kernel addresses
+	 *  - #GP for faulting accesses to noncanonical addresses
+	 * Complain about anything else.
+	 */
+	if (trapnr != X86_TRAP_PF && trapnr != X86_TRAP_GP) {
+		WARN(1, "unexpected trap %d in uaccess\n", trapnr);
+		return false;
+	}
+
+	/*
+	 * This is a faulting memory access in kernel space, on a kernel
+	 * address, in a usercopy function. This can e.g. be caused by improper
+	 * use of helpers like __put_user and by improper attempts to access
+	 * userspace addresses in KERNEL_DS regions.
+	 * The (semi-)legitimate exceptions are probe_kernel_{read,write}(),
+	 * which can be invoked from places like kgdb, /dev/mem (for reading)
+	 * and privileged BPF code (for reading).
+	 * The probe_kernel_*() functions set the kernel_uaccess_faults_ok flag
+	 * to tell us that faulting on kernel addresses, and even noncanonical
+	 * addresses, in a userspace accessor does not necessarily imply a
+	 * kernel bug; root might just be doing weird stuff.
+	 */
+	if (current->kernel_uaccess_faults_ok)
+		return false;
+
+	/* This is bad. Refuse the fixup so that we go into die(). */
+	if (trapnr == X86_TRAP_PF) {
+		pr_emerg("BUG: pagefault on kernel address 0x%lx in non-whitelisted uaccess\n",
+			 fault_addr);
+	} else {
+		pr_emerg("BUG: GPF in non-whitelisted uaccess (non-canonical address?)\n");
+	}
+	return true;
+}
+
 __visible bool ex_handler_uaccess(const struct exception_table_entry *fixup,
 				  struct pt_regs *regs, int trapnr,
 				  unsigned long error_code,
 				  unsigned long fault_addr)
 {
+	if (bogus_uaccess(regs, trapnr, fault_addr))
+		return false;
 	regs->ip = ex_fixup_addr(fixup);
 	return true;
 }
@@ -132,6 +188,8 @@ __visible bool ex_handler_ext(const struct exception_table_entry *fixup,
 			      unsigned long error_code,
 			      unsigned long fault_addr)
 {
+	if (bogus_uaccess(regs, trapnr, fault_addr))
+		return false;
 	/* Special hack for uaccess_err */
 	current->thread.uaccess_err = 1;
 	regs->ip = ex_fixup_addr(fixup);
diff --git a/fs/namespace.c b/fs/namespace.c
index 99186556f8d3..d86830c86ce8 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -2642,6 +2642,7 @@ static long exact_copy_from_user(void *to, const void __user * from,
 	if (!access_ok(VERIFY_READ, from, n))
 		return n;

+	current->kernel_uaccess_faults_ok++;
 	while (n) {
 		if (__get_user(c, f)) {
 			memset(t, 0, n);
@@ -2651,6 +2652,7 @@ static long exact_copy_from_user(void *to, const void __user * from,
 		f++;
 		n--;
 	}
+	current->kernel_uaccess_faults_ok--;
 	return n;
 }
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 977cb57d7bc9..56dd65f1be4f 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -739,6 +739,12 @@ struct task_struct {
 	unsigned			use_memdelay:1;
 #endif

+	/*
+	 * May usercopy functions fault on kernel addresses?
+	 * This is not just a single bit because this can potentially nest.
+	 */
+	unsigned int			kernel_uaccess_faults_ok;
+
 	unsigned long			atomic_flags; /* Flags requiring atomic access. */

 	struct restart_block		restart_block;
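[Editor's note: as the comment above says, the new field is a counter rather than a single bit because whitelisted sections can nest. A minimal sketch of why a plain flag would not suffice, using hypothetical wrapper helpers that are not part of this patch:]

#include <linux/sched.h>

/* Hypothetical wrappers around the counter added to task_struct above. */
static inline void uaccess_kernel_faults_allow(void)
{
	current->kernel_uaccess_faults_ok++;
}

static inline void uaccess_kernel_faults_disallow(void)
{
	current->kernel_uaccess_faults_ok--;
}

static void inner_helper(void)
{
	uaccess_kernel_faults_allow();
	/* ... may legitimately fault on a kernel address here ... */
	uaccess_kernel_faults_disallow();
}

static void outer_helper(void)
{
	uaccess_kernel_faults_allow();
	inner_helper();
	/*
	 * Still whitelisted: the counter is 1 here. With a single bit,
	 * inner_helper() would already have cleared it, and a legitimate
	 * fault at this point would be misreported as a kernel bug.
	 */
	uaccess_kernel_faults_disallow();
}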
diff --git a/mm/maccess.c b/mm/maccess.c
index ec00be51a24f..f3416632e5a4 100644
--- a/mm/maccess.c
+++ b/mm/maccess.c
@@ -30,8 +30,10 @@ long __probe_kernel_read(void *dst, const void *src, size_t size)
 	set_fs(KERNEL_DS);
 	pagefault_disable();
+	current->kernel_uaccess_faults_ok++;
 	ret = __copy_from_user_inatomic(dst,
 			(__force const void __user *)src, size);
+	current->kernel_uaccess_faults_ok--;
 	pagefault_enable();
 	set_fs(old_fs);
@@ -58,7 +60,9 @@ long __probe_kernel_write(struct void *dst, const void *src, size_t size)
 	set_fs(KERNEL_DS);
 	pagefault_disable();
+	current->kernel_uaccess_faults_ok++;
 	ret = __copy_to_user_inatomic((__force void __user *)dst, src, size);
+	current->kernel_uaccess_faults_ok--;
 	pagefault_enable();
 	set_fs(old_fs);
@@ -94,11 +98,13 @@ long strncpy_from_unsafe(char *dst, const void *unsafe_addr, long count)
 	set_fs(KERNEL_DS);
 	pagefault_disable();
+	current->kernel_uaccess_faults_ok++;
 	do {
 		ret = __get_user(*dst++, (const char __user __force *)src++);
 	} while (dst[-1] && ret == 0 && src - unsafe_addr < count);
+	current->kernel_uaccess_faults_ok--;
 	dst[-1] = '\0';
 	pagefault_enable();
 	set_fs(old_fs);
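[Editor's note: the probe_kernel_*() helpers above are whitelisted precisely because callers such as kgdb, /dev/mem and privileged BPF hand them arbitrary, possibly unmapped kernel addresses and expect a graceful -EFAULT. A hedged sketch of such a caller; the helper and its messages are invented for illustration:]

#include <linux/kernel.h>
#include <linux/uaccess.h>

/*
 * Hypothetical debugging aid: try to dump one word from an arbitrary
 * kernel address. Because probe_kernel_read() bumps
 * kernel_uaccess_faults_ok around its access, a fault on a bogus
 * address still comes back as -EFAULT instead of triggering die().
 */
static void debug_dump_word(const void *addr)
{
	unsigned long val;

	if (probe_kernel_read(&val, addr, sizeof(val)))
		pr_info("debug: %px is not readable\n", addr);
	else
		pr_info("debug: *%px = %#lx\n", addr, val);
}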
From patchwork Tue Aug 28 20:14:21 2018
Date: Tue, 28 Aug 2018 22:14:21 +0200
In-Reply-To: <20180828201421.157735-1-jannh@google.com>
Message-Id: <20180828201421.157735-8-jannh@google.com>
References: <20180828201421.157735-1-jannh@google.com>
Subject: [PATCH v3 7/7] lkdtm: test copy_to_user() on bad kernel pointer under KERNEL_DS
From: Jann Horn
To: Kees Cook , Thomas Gleixner , Ingo Molnar , x86@kernel.org,
 Andy Lutomirski , kernel-hardening@lists.openwall.com, jannh@google.com
Cc: linux-kernel@vger.kernel.org, dvyukov@google.com, Masami Hiramatsu ,
 "Naveen N. Rao" , Anil S Keshavamurthy , "David S. Miller" ,
 Alexander Viro , linux-fsdevel@vger.kernel.org, Borislav Petkov

Test whether the kernel WARN()s when, under KERNEL_DS, a bad kernel
pointer is used as a "userspace" pointer. Should normally be used in
"DIRECT" mode.

Acked-by: Kees Cook
Signed-off-by: Jann Horn
---
 drivers/misc/lkdtm/core.c     |  1 +
 drivers/misc/lkdtm/lkdtm.h    |  1 +
 drivers/misc/lkdtm/usercopy.c | 13 +++++++++++++
 3 files changed, 15 insertions(+)

diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
index 2154d1bfd18b..5a755590d3dc 100644
--- a/drivers/misc/lkdtm/core.c
+++ b/drivers/misc/lkdtm/core.c
@@ -183,6 +183,7 @@ static const struct crashtype crashtypes[] = {
 	CRASHTYPE(USERCOPY_STACK_FRAME_FROM),
 	CRASHTYPE(USERCOPY_STACK_BEYOND),
 	CRASHTYPE(USERCOPY_KERNEL),
+	CRASHTYPE(USERCOPY_KERNEL_DS),
 };

diff --git a/drivers/misc/lkdtm/lkdtm.h b/drivers/misc/lkdtm/lkdtm.h
index 9e513dcfd809..07db641d71d0 100644
--- a/drivers/misc/lkdtm/lkdtm.h
+++ b/drivers/misc/lkdtm/lkdtm.h
@@ -82,5 +82,6 @@ void lkdtm_USERCOPY_STACK_FRAME_TO(void);
 void lkdtm_USERCOPY_STACK_FRAME_FROM(void);
 void lkdtm_USERCOPY_STACK_BEYOND(void);
 void lkdtm_USERCOPY_KERNEL(void);
+void lkdtm_USERCOPY_KERNEL_DS(void);

 #endif
diff --git a/drivers/misc/lkdtm/usercopy.c b/drivers/misc/lkdtm/usercopy.c
index 9725aed305bb..389475b25bb7 100644
--- a/drivers/misc/lkdtm/usercopy.c
+++ b/drivers/misc/lkdtm/usercopy.c
@@ -322,6 +322,19 @@ void lkdtm_USERCOPY_KERNEL(void)
 	vm_munmap(user_addr, PAGE_SIZE);
 }

+void lkdtm_USERCOPY_KERNEL_DS(void)
+{
+	char __user *user_ptr = (char __user *)ERR_PTR(-EINVAL);
+	mm_segment_t old_fs = get_fs();
+	char buf[10] = {0};
+
+	pr_info("attempting copy_to_user on unmapped kernel address\n");
+	set_fs(KERNEL_DS);
+	if (copy_to_user(user_ptr, buf, sizeof(buf)))
+		pr_info("copy_to_user on unmapped kernel address failed\n");
+	set_fs(old_fs);
+}
+
 void __init lkdtm_usercopy_init(void)
 {
 	/* Prepare cache that lacks SLAB_USERCOPY flag. */
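[Editor's note: with CONFIG_LKDTM enabled, the new crash type can be exercised from userspace through lkdtm's debugfs interface, matching the "DIRECT" mode mentioned in the commit message. A minimal sketch, assuming debugfs is mounted at the usual /sys/kernel/debug location; the path and availability depend on the kernel configuration:]

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/*
 * Write the crash type name to lkdtm's DIRECT trigger. Expect a splat
 * in dmesg; with this series applied, the faulting copy_to_user() dies
 * instead of being quietly fixed up.
 */
int main(void)
{
	const char *path = "/sys/kernel/debug/provoke-crash/DIRECT";
	const char *type = "USERCOPY_KERNEL_DS";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, type, strlen(type)) < 0)
		perror("write");
	close(fd);
	return 0;
}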