From patchwork Mon Mar 12 03:03:34 2018
X-Patchwork-Submitter: Ben Hutchings
X-Patchwork-Id: 10276135
From: Ben Hutchings
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: akpm@linux-foundation.org, alan@linux.intel.com,
 kernel-hardening@lists.openwall.com, "Linus Torvalds",
 "Dan Williams", "Thomas Gleixner", gregkh@linuxfoundation.org,
 linux-arch@vger.kernel.org
Date: Mon, 12 Mar 2018 03:03:34 +0000
X-Mailer: LinuxStableQueue (scripts by bwh)
Subject: [PATCH 3.2 084/104] x86: Implement array_index_mask_nospec

3.2.101-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Dan Williams

commit babdde2698d482b6c0de1eab4f697cf5856c5859 upstream.

array_index_nospec() uses a mask to sanitize user controllable array
indexes, i.e. generate a 0 mask if 'index' >= 'size', and a ~0 mask
otherwise.  The default array_index_mask_nospec() handles the carry bit
from the (index - size) result in software.  The x86
array_index_mask_nospec() does the same, but the carry bit is handled in
the processor's CF flag, without conditional instructions in the control
flow.
Suggested-by: Linus Torvalds
Signed-off-by: Dan Williams
Signed-off-by: Thomas Gleixner
Cc: linux-arch@vger.kernel.org
Cc: kernel-hardening@lists.openwall.com
Cc: gregkh@linuxfoundation.org
Cc: alan@linux.intel.com
Link: https://lkml.kernel.org/r/151727414808.33451.1873237130672785331.stgit@dwillia2-desk3.amr.corp.intel.com
[bwh: Backported to 3.2: adjust filename, context]
Signed-off-by: Ben Hutchings
---
 arch/x86/include/asm/system.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

--- a/arch/x86/include/asm/system.h
+++ b/arch/x86/include/asm/system.h
@@ -455,6 +455,30 @@ void stop_this_cpu(void *dummy);
 #endif
 
 /**
+ * array_index_mask_nospec() - generate a mask that is ~0UL when the
+ *	bounds check succeeds and 0 otherwise
+ * @index: array element index
+ * @size: number of elements in array
+ *
+ * Returns:
+ *     0 - (index < size)
+ */
+static inline unsigned long array_index_mask_nospec(unsigned long index,
+		unsigned long size)
+{
+	unsigned long mask;
+
+	asm ("cmp %1,%2; sbb %0,%0;"
+			:"=r" (mask)
+			:"r"(size),"r" (index)
+			:"cc");
+	return mask;
+}
+
+/* Override the default implementation from linux/nospec.h. */
+#define array_index_mask_nospec array_index_mask_nospec
+
+/**
  * read_barrier_depends - Flush all pending reads that subsequents reads
  * depend on.
  *
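
For illustration only (not part of the patch): a minimal user-space sketch
of how the cmp/sbb mask above is meant to be consumed.  clamp_index() is a
hypothetical stand-in for the kernel's array_index_nospec() wrapper, which
ANDs the mask into the index; the sketch assumes an x86/x86-64 compiler
with GNU inline asm support.

#include <stdio.h>

/* Same cmp/sbb sequence as the patch: ~0UL if index < size, 0 otherwise. */
static inline unsigned long array_index_mask_nospec(unsigned long index,
		unsigned long size)
{
	unsigned long mask;

	asm ("cmp %1,%2; sbb %0,%0;"
			:"=r" (mask)
			:"r"(size),"r" (index)
			:"cc");
	return mask;
}

/* Hypothetical helper mirroring the way the mask is used: AND it into the
 * index so an out-of-bounds value collapses to 0 instead of steering a
 * speculative load past the end of the array. */
static unsigned long clamp_index(unsigned long index, unsigned long size)
{
	return index & array_index_mask_nospec(index, size);
}

int main(void)
{
	unsigned long size = 16;

	printf("in bounds:     %lu\n", clamp_index(5, size));	/* prints 5 */
	printf("out of bounds: %lu\n", clamp_index(42, size));	/* prints 0 */
	return 0;
}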