From patchwork Sat Jan 27 07:55:29 2018
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 10187353
From: Dan Williams
To: tglx@linutronix.de
Cc: linux-arch@vger.kernel.org, kernel-hardening@lists.openwall.com,
 gregkh@linuxfoundation.org, x86@kernel.org, Ingo Molnar,
 "H. Peter Anvin", torvalds@linux-foundation.org, alan@linux.intel.com
Date: Fri, 26 Jan 2018 23:55:29 -0800
Message-ID: <151703972912.26578.6792656143278523491.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <151703971300.26578.1185595719337719486.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <151703971300.26578.1185595719337719486.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.17.1-9-g687f
Subject: [kernel-hardening] [PATCH v5 03/12] x86: implement array_idx_mask

'array_idx' uses a mask to sanitize user-controllable array indexes,
i.e. it generates a 0 mask if 'idx' >= 'sz', and a ~0 mask otherwise.
The default 'array_idx_mask' handles the carry bit from the (idx - sz)
result in software; the x86 'array_idx_mask' does the same, but the
carry bit is handled in the processor's CF flag, without conditional
instructions in the control flow.
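As a plain-C sketch (illustrative only, not part of this patch), the
documented formula mask = 0 - (idx < sz) can be written as below; the
helper name 'generic_idx_mask' is made up for illustration:

/*
 * Illustrative only: the comparison yields 0 or 1, and subtracting it
 * from 0 produces an all-zeroes or all-ones unsigned long -- the same
 * value the x86 cmp/sbb sequence derives from the CF (borrow) flag.
 */
static inline unsigned long generic_idx_mask(unsigned long idx, unsigned long sz)
{
	return 0UL - (unsigned long)(idx < sz);
}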
Suggested-by: Linus Torvalds
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: x86@kernel.org
Signed-off-by: Dan Williams
---
 arch/x86/include/asm/barrier.h |   22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index 01727dbc294a..30419b674ebd 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -24,6 +24,28 @@
 #define wmb()	asm volatile("sfence" ::: "memory")
 #endif
 
+/**
+ * array_idx_mask - generate a mask for array_idx() that is ~0UL when
+ * the bounds check succeeds and 0 otherwise
+ *
+ * mask = 0 - (idx < sz);
+ */
+#define array_idx_mask array_idx_mask
+static inline unsigned long array_idx_mask(unsigned long idx, unsigned long sz)
+{
+	unsigned long mask;
+
+#ifdef CONFIG_X86_32
+	asm ("cmpl %1,%2; sbbl %0,%0;"
+#else
+	asm ("cmpq %1,%2; sbbq %0,%0;"
+#endif
+			:"=r" (mask)
+			:"r"(sz),"r" (idx)
+			:"cc");
+	return mask;
+}
+
 #ifdef CONFIG_X86_PPRO_FENCE
 #define dma_rmb()	rmb()
 #else
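A minimal usage sketch, for illustration only: 'example_table' and
NR_ENTRIES are hypothetical, and the idx & mask step open-codes what
the changelog says array_idx() does with this mask:

/* Hypothetical table; names and size are made up for this sketch. */
#define NR_ENTRIES 16
static unsigned long example_table[NR_ENTRIES];

unsigned long example_lookup(unsigned long idx)
{
	/* Architectural bounds check; it may be speculated past. */
	if (idx >= NR_ENTRIES)
		return 0;
	/*
	 * The mask is ~0UL when idx < NR_ENTRIES and 0 otherwise, so
	 * even under branch misprediction the load below cannot use an
	 * out-of-bounds index.
	 */
	idx &= array_idx_mask(idx, NR_ENTRIES);
	return example_table[idx];
}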