From patchwork Sat Jan 20 21:06:35 2018
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 10176743
From: Dan Williams
To: tglx@linutronix.de
Cc: linux-arch@vger.kernel.org, kernel-hardening@lists.openwall.com,
    gregkh@linuxfoundation.org, x86@kernel.org, Ingo Molnar,
    Andy Lutomirski, "H. Peter Anvin", torvalds@linux-foundation.org,
    alan@linux.intel.com
Date: Sat, 20 Jan 2018 13:06:35 -0800
Message-ID: <151648239535.34747.4422108674633222531.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <151648235823.34747.15181877619346237802.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <151648235823.34747.15181877619346237802.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.17.1-9-g687f
Subject: [kernel-hardening] [PATCH v4.1 07/10] x86: narrow out of bounds syscalls to sys_read under speculation

The syscall table base is a user controlled function pointer in kernel
space. Like 'get_user', use 'MASK_NOSPEC' to prevent any out of bounds
speculation. While retpoline prevents speculating into the user
controlled target, it does not stop the pointer de-reference; the
concern is leaking memory relative to the syscall table base.

Reported-by: Linus Torvalds
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Peter Anvin" Cc: x86@kernel.org Cc: Andy Lutomirski Signed-off-by: Dan Williams --- arch/x86/entry/entry_64.S | 2 ++ arch/x86/include/asm/smap.h | 9 ++++++++- 2 files changed, 10 insertions(+), 1 deletion(-) diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index 63f4320602a3..584f6d2236b3 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -35,6 +35,7 @@ #include #include #include +#include #include #include #include @@ -260,6 +261,7 @@ entry_SYSCALL_64_fastpath: cmpl $__NR_syscall_max, %eax #endif ja 1f /* return -ENOSYS (already in pt_regs->ax) */ + MASK_NOSPEC %r11 %rax /* sanitize syscall_nr wrt speculation */ movq %r10, %rcx /* diff --git a/arch/x86/include/asm/smap.h b/arch/x86/include/asm/smap.h index 2b4ad4c6a226..3b5b2cf58dc6 100644 --- a/arch/x86/include/asm/smap.h +++ b/arch/x86/include/asm/smap.h @@ -35,7 +35,14 @@ * this directs the cpu to speculate with a NULL ptr rather than * something targeting kernel memory. * - * assumes CF is set from a previous 'cmp TASK_addr_limit, %ptr' + * In the syscall entry path it is possible to speculate past the + * validation of the system call number. Use MASK_NOSPEC to sanitize the + * syscall array index to zero (sys_read) rather than an arbitrary + * target. + * + * assumes CF is set from a previous 'cmp' i.e.: + * cmp TASK_addr_limit, %ptr + * cmp __NR_syscall_max, %idx */ .macro MASK_NOSPEC mask val sbb \mask, \mask