From patchwork Wed Feb 15 20:05:57 2017
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 9574967
Date: Wed, 15 Feb 2017 12:05:57 -0800
From: Kees Cook
To: Russell King
Cc: Mark Rutland, Al Viro, Robin Murphy, linux-kernel@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, kernel-hardening@lists.openwall.com
Message-ID: <20170215200557.GA37325@beast>
Subject: [kernel-hardening] [PATCH] ARM: uaccess: consistently check object sizes

In commit 76624175dcae ("arm64: uaccess: consistently check object
sizes"), the object size checks were moved outside the access_ok() check
so that bad destinations are detected before hitting the
"memset(dest, 0, size)" in the copy_from_user() failure path.

This makes the same change for arm, with attention given to possibly
extracting the uaccess routines into a common header file for all
architectures in the future.
Suggested-by: Mark Rutland
Signed-off-by: Kees Cook
Acked-by: Mark Rutland
---
 arch/arm/include/asm/uaccess.h | 44 ++++++++++++++++++++++++++++++------------
 1 file changed, 32 insertions(+), 12 deletions(-)

diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index 1f59ea051bab..b7e0125c0bbf 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -478,11 +478,10 @@ extern unsigned long __must_check
 arm_copy_from_user(void *to, const void __user *from, unsigned long n);
 
 static inline unsigned long __must_check
-__copy_from_user(void *to, const void __user *from, unsigned long n)
+__arch_copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	unsigned int __ua_flags;
 
-	check_object_size(to, n, false);
 	__ua_flags = uaccess_save_and_enable();
 	n = arm_copy_from_user(to, from, n);
 	uaccess_restore(__ua_flags);
@@ -495,18 +494,15 @@ extern unsigned long __must_check
 __copy_to_user_std(void __user *to, const void *from, unsigned long n);
 
 static inline unsigned long __must_check
-__copy_to_user(void __user *to, const void *from, unsigned long n)
+__arch_copy_to_user(void __user *to, const void *from, unsigned long n)
 {
 #ifndef CONFIG_UACCESS_WITH_MEMCPY
 	unsigned int __ua_flags;
-
-	check_object_size(from, n, true);
 	__ua_flags = uaccess_save_and_enable();
 	n = arm_copy_to_user(to, from, n);
 	uaccess_restore(__ua_flags);
 	return n;
 #else
-	check_object_size(from, n, true);
 	return arm_copy_to_user(to, from, n);
 #endif
 }
@@ -526,25 +522,49 @@ __clear_user(void __user *addr, unsigned long n)
 }
 
 #else
-#define __copy_from_user(to, from, n)	(memcpy(to, (void __force *)from, n), 0)
-#define __copy_to_user(to, from, n)	(memcpy((void __force *)to, from, n), 0)
+#define __arch_copy_from_user(to, from, n)	\
+	(memcpy(to, (void __force *)from, n), 0)
+#define __arch_copy_to_user(to, from, n)	\
+	(memcpy((void __force *)to, from, n), 0)
 #define __clear_user(addr, n)	(memset((void __force *)addr, 0, n), 0)
 #endif
 
-static inline unsigned long __must_check copy_from_user(void *to, const void __user *from, unsigned long n)
+static inline unsigned long __must_check
+__copy_from_user(void *to, const void __user *from, unsigned long n)
+{
+	check_object_size(to, n, false);
+	return __arch_copy_from_user(to, from, n);
+}
+
+static inline unsigned long __must_check
+copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	unsigned long res = n;
+
+	check_object_size(to, n, false);
+
 	if (likely(access_ok(VERIFY_READ, from, n)))
-		res = __copy_from_user(to, from, n);
+		res = __arch_copy_from_user(to, from, n);
 	if (unlikely(res))
 		memset(to + (n - res), 0, res);
 	return res;
 }
 
-static inline unsigned long __must_check copy_to_user(void __user *to, const void *from, unsigned long n)
+static inline unsigned long __must_check
+__copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+	check_object_size(from, n, true);
+
+	return __arch_copy_to_user(to, from, n);
+}
+
+static inline unsigned long __must_check
+copy_to_user(void __user *to, const void *from, unsigned long n)
+{
+	check_object_size(from, n, true);
+
 	if (access_ok(VERIFY_WRITE, to, n))
-		n = __copy_to_user(to, from, n);
+		n = __arch_copy_to_user(to, from, n);
 	return n;
 }