From patchwork Fri Jul 15 21:44:17 2016
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 9232817
From: Kees Cook
To: linux-kernel@vger.kernel.org
Subject: [PATCH v3 03/11] x86/uaccess: Enable hardened usercopy
Date: Fri, 15 Jul 2016 14:44:17 -0700
Message-Id: <1468619065-3222-4-git-send-email-keescook@chromium.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1468619065-3222-1-git-send-email-keescook@chromium.org>
References: <1468619065-3222-1-git-send-email-keescook@chromium.org>
Cc: Jan Kara, kernel-hardening@lists.openwall.com, Benjamin Herrenschmidt,
    Balbir Singh, Will Deacon, linux-mm@kvack.org, sparclinux@vger.kernel.org,
    linux-ia64@vger.kernel.org, Christoph Lameter, Andrea Arcangeli,
    linux-arch@vger.kernel.org, Michael Ellerman, x86@kernel.org,
    Russell King, linux-arm-kernel@lists.infradead.org, Catalin Marinas,
    PaX Team, Borislav Petkov, Mathias Krause, Fenghua Yu, Rik van Riel,
    Kees Cook, Vitaly Wool, David Rientjes, Tony Luck, Andy Lutomirski,
    Josh Poimboeuf, Andrew Morton, Dmitry Vyukov, Laura Abbott,
    Brad Spengler, Ard Biesheuvel, Pekka Enberg, Daniel Micay,
    Casey Schaufler, Joonsoo Kim, linuxppc-dev@lists.ozlabs.org,
    "David S. Miller"

Enables CONFIG_HARDENED_USERCOPY checks on x86. This is done both in
copy_*_user() and __copy_*_user() because copy_*_user() actually calls
down to _copy_*_user() and not __copy_*_user().

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook
Tested-by: Valdis Kletnieks
---
 arch/x86/Kconfig                  |  2 ++
 arch/x86/include/asm/uaccess.h    | 10 ++++++----
 arch/x86/include/asm/uaccess_32.h |  2 ++
 arch/x86/include/asm/uaccess_64.h |  2 ++
 4 files changed, 12 insertions(+), 4 deletions(-)
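For reference while reviewing: the check_object_size() hook added at each
call site below comes from the core "mm: Hardened usercopy" patch earlier
in this series. A minimal sketch of its shape, simplified for illustration
rather than copied verbatim from that patch:

#include <linux/types.h>

#ifdef CONFIG_HARDENED_USERCOPY
/*
 * Implemented in mm/usercopy.c by the core patch: rejects copies where
 * the kernel object at "ptr" spanning "n" bytes overruns its slab
 * allocation, falls outside the process stack, or overlaps kernel text.
 * "to_user" records the copy direction (true = kernel->user exposure,
 * false = user->kernel overwrite) for the failure report.
 */
extern void __check_object_size(const void *ptr, unsigned long n,
				bool to_user);

static inline void check_object_size(const void *ptr, unsigned long n,
				     bool to_user)
{
	__check_object_size(ptr, n, to_user);
}
#else
/* With CONFIG_HARDENED_USERCOPY=n, the checks compile away entirely. */
static inline void check_object_size(const void *ptr, unsigned long n,
				     bool to_user)
{ }
#endif

Because the hook is an empty inline in the =n case, the per-architecture
wiring below stays a one-line call at each copy site.
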
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 4407f596b72c..39d89e058249 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -80,11 +80,13 @@ config X86
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
 	select HAVE_AOUT if X86_32
 	select HAVE_ARCH_AUDITSYSCALL
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_HUGE_VMAP if X86_64 || X86_PAE
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN if X86_64 && SPARSEMEM_VMEMMAP
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_KMEMCHECK
+	select HAVE_ARCH_LINEAR_KERNEL_MAPPING if X86_64
 	select HAVE_ARCH_MMAP_RND_BITS if MMU
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS if MMU && COMPAT
 	select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 2982387ba817..d3312f0fcdfc 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -742,9 +742,10 @@ copy_from_user(void *to, const void __user *from, unsigned long n)
 	 * case, and do only runtime checking for non-constant sizes.
 	 */
 
-	if (likely(sz < 0 || sz >= n))
+	if (likely(sz < 0 || sz >= n)) {
+		check_object_size(to, n, false);
 		n = _copy_from_user(to, from, n);
-	else if(__builtin_constant_p(n))
+	} else if (__builtin_constant_p(n))
 		copy_from_user_overflow();
 	else
 		__copy_from_user_overflow(sz, n);
@@ -762,9 +763,10 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
 	might_fault();
 	/* See the comment in copy_from_user() above.
 	 */
-	if (likely(sz < 0 || sz >= n))
+	if (likely(sz < 0 || sz >= n)) {
+		check_object_size(from, n, true);
 		n = _copy_to_user(to, from, n);
-	else if(__builtin_constant_p(n))
+	} else if (__builtin_constant_p(n))
 		copy_to_user_overflow();
 	else
 		__copy_to_user_overflow(sz, n);
diff --git a/arch/x86/include/asm/uaccess_32.h b/arch/x86/include/asm/uaccess_32.h
index 4b32da24faaf..7d3bdd1ed697 100644
--- a/arch/x86/include/asm/uaccess_32.h
+++ b/arch/x86/include/asm/uaccess_32.h
@@ -37,6 +37,7 @@ unsigned long __must_check __copy_from_user_ll_nocache_nozero
 static __always_inline unsigned long __must_check
 __copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
 {
+	check_object_size(from, n, true);
 	return __copy_to_user_ll(to, from, n);
 }
 
@@ -95,6 +96,7 @@ static __always_inline unsigned long
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	might_fault();
+	check_object_size(to, n, false);
 	if (__builtin_constant_p(n)) {
 		unsigned long ret;
 
diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index 2eac2aa3e37f..673059a109fe 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -54,6 +54,7 @@ int __copy_from_user_nocheck(void *dst, const void __user *src, unsigned size)
 {
 	int ret = 0;
 
+	check_object_size(dst, size, false);
 	if (!__builtin_constant_p(size))
 		return copy_user_generic(dst, (__force void *)src, size);
 	switch (size) {
@@ -119,6 +120,7 @@ int __copy_to_user_nocheck(void __user *dst, const void *src, unsigned size)
 {
 	int ret = 0;
 
+	check_object_size(src, size, true);
 	if (!__builtin_constant_p(size))
 		return copy_user_generic((__force void *)dst, src, size);
 	switch (size) {
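
To make the effect concrete, here is a hypothetical driver snippet (the
struct and function names are invented for illustration; nothing like it
appears in this patch) showing the class of bug the new calls catch:

#include <linux/slab.h>
#include <linux/types.h>
#include <linux/uaccess.h>

struct foo {			/* assume kmalloc()ed, i.e. a slab object */
	u32 flags;
	char name[16];
};

static long foo_get_name(struct foo *f, void __user *ubuf, unsigned long len)
{
	/*
	 * If "len" is user-controlled and larger than the remainder of
	 * the slab object holding "f", this copy would leak adjacent heap
	 * memory. With this patch applied, copy_to_user() first runs
	 * check_object_size(f->name, len, true); the hardened usercopy
	 * core sees the span overrun f's slab allocation and BUGs,
	 * killing the offending process instead of exposing the data.
	 */
	if (copy_to_user(ubuf, f->name, len))
		return -EFAULT;
	return 0;
}

Because "len" is not a compile-time constant, the existing
__compiletime_object_size() logic in copy_to_user() cannot flag this;
that is exactly the gap the runtime check fills.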