From patchwork Wed Jul 20 20:26:59 2016
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 9240513
From: Kees Cook
To: kernel-hardening@lists.openwall.com
Cc: Jan Kara, Benjamin Herrenschmidt, Balbir Singh, Will Deacon,
	linux-mm@kvack.org, sparclinux@vger.kernel.org,
	linux-ia64@vger.kernel.org, Christoph Lameter, Andrea Arcangeli,
	linux-arch@vger.kernel.org, Michael Ellerman, x86@kernel.org,
	Russell King, linux-arm-kernel@lists.infradead.org,
	Catalin Marinas, PaX Team, Borislav Petkov, Mathias Krause,
	Fenghua Yu, Rik van Riel, Kees Cook, Vitaly Wool,
	David Rientjes, Tony Luck, Andy Lutomirski, Josh Poimboeuf,
	Andrew Morton, Dmitry Vyukov, Laura Abbott, Brad Spengler,
	Ard Biesheuvel, linux-kernel@vger.kernel.org, Pekka Enberg,
	Daniel Micay, Casey Schaufler, Joonsoo Kim,
	linuxppc-dev@lists.ozlabs.org, "David S. Miller"
Subject: [PATCH v4 04/12] x86/uaccess: Enable hardened usercopy
Date: Wed, 20 Jul 2016 13:26:59 -0700
Message-Id: <1469046427-12696-5-git-send-email-keescook@chromium.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1469046427-12696-1-git-send-email-keescook@chromium.org>
References: <1469046427-12696-1-git-send-email-keescook@chromium.org>

Enables CONFIG_HARDENED_USERCOPY checks on x86. The check_object_size()
hook is added to both copy_*_user() and __copy_*_user() because
copy_*_user() calls down to _copy_*_user(), not __copy_*_user().

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook
Tested-by: Valdis Kletnieks
---
 arch/x86/Kconfig                  |  1 +
 arch/x86/include/asm/uaccess.h    | 10 ++++++----
 arch/x86/include/asm/uaccess_32.h |  2 ++
 arch/x86/include/asm/uaccess_64.h |  2 ++
 4 files changed, 11 insertions(+), 4 deletions(-)
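For reviewers who have not followed the earlier patches in this series:
check_object_size() is the entry point introduced by the core hardened
usercopy patch. Roughly, it is an inline that lets copies with a
compile-time-constant size pass through untouched and sends only
runtime-sized copies to the out-of-line object checker. A simplified
sketch of that shape (illustrative, not the exact code from the other
patch):

	static __always_inline void check_object_size(const void *ptr,
						      unsigned long n,
						      bool to_user)
	{
		/*
		 * Constant-size copies are already validated at compile
		 * time; only runtime-sized copies need the slab/stack
		 * object bounds check.
		 */
		if (!__builtin_constant_p(n))
			__check_object_size(ptr, n, to_user);
	}

This is why adding the hook to the fast paths below is effectively free
for the common constant-size copies.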
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 4407f596b72c..762a0349633c 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -80,6 +80,7 @@ config X86
 	select HAVE_ALIGNED_STRUCT_PAGE		if SLUB
 	select HAVE_AOUT			if X86_32
 	select HAVE_ARCH_AUDITSYSCALL
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_HUGE_VMAP		if X86_64 || X86_PAE
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN			if X86_64 && SPARSEMEM_VMEMMAP
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 2982387ba817..d3312f0fcdfc 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -742,9 +742,10 @@ copy_from_user(void *to, const void __user *from, unsigned long n)
 	 * case, and do only runtime checking for non-constant sizes.
 	 */
 
-	if (likely(sz < 0 || sz >= n))
+	if (likely(sz < 0 || sz >= n)) {
+		check_object_size(to, n, false);
 		n = _copy_from_user(to, from, n);
-	else if(__builtin_constant_p(n))
+	} else if (__builtin_constant_p(n))
 		copy_from_user_overflow();
 	else
 		__copy_from_user_overflow(sz, n);
@@ -762,9 +763,10 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
 	might_fault();
 
 	/* See the comment in copy_from_user() above. */
-	if (likely(sz < 0 || sz >= n))
+	if (likely(sz < 0 || sz >= n)) {
+		check_object_size(from, n, true);
 		n = _copy_to_user(to, from, n);
-	else if(__builtin_constant_p(n))
+	} else if (__builtin_constant_p(n))
 		copy_to_user_overflow();
 	else
 		__copy_to_user_overflow(sz, n);
diff --git a/arch/x86/include/asm/uaccess_32.h b/arch/x86/include/asm/uaccess_32.h
index 4b32da24faaf..7d3bdd1ed697 100644
--- a/arch/x86/include/asm/uaccess_32.h
+++ b/arch/x86/include/asm/uaccess_32.h
@@ -37,6 +37,7 @@ unsigned long __must_check __copy_from_user_ll_nocache_nozero
 static __always_inline unsigned long __must_check
 __copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
 {
+	check_object_size(from, n, true);
 	return __copy_to_user_ll(to, from, n);
 }
 
@@ -95,6 +96,7 @@ static __always_inline unsigned long
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	might_fault();
+	check_object_size(to, n, false);
 	if (__builtin_constant_p(n)) {
 		unsigned long ret;
 
diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index 2eac2aa3e37f..673059a109fe 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -54,6 +54,7 @@ int __copy_from_user_nocheck(void *dst, const void __user *src, unsigned size)
 {
 	int ret = 0;
 
+	check_object_size(dst, size, false);
 	if (!__builtin_constant_p(size))
 		return copy_user_generic(dst, (__force void *)src, size);
 	switch (size) {
@@ -119,6 +120,7 @@ int __copy_to_user_nocheck(void __user *dst, const void *src, unsigned size)
 {
 	int ret = 0;
 
+	check_object_size(src, size, true);
 	if (!__builtin_constant_p(size))
 		return copy_user_generic((__force void *)dst, src, size);
 	switch (size) {
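For context on what this buys us: once the hooks above are in place, a
copy whose runtime-sized length runs past the bounds of the source or
destination kernel object is detected before it corrupts (or discloses)
adjacent memory. A contrived, hypothetical driver bug of the sort these
checks catch (foo_dev and foo_set_buf are made-up names for
illustration; nothing in the patch depends on them):

	struct foo_dev {
		u32 flags;
		char buf[64];	/* last member of a kmalloc()ed object */
	};

	static long foo_set_buf(struct foo_dev *f,
				const void __user *arg, size_t len)
	{
		/*
		 * If userspace passes len > sizeof(f->buf), the copy
		 * would run past the end of the heap allocation. With
		 * CONFIG_HARDENED_USERCOPY, check_object_size() sees
		 * that [f->buf, f->buf + len) is not contained within
		 * the slab object and kills the copy instead of letting
		 * it overwrite neighboring heap memory.
		 */
		return copy_from_user(f->buf, arg, len) ? -EFAULT : 0;
	}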