From patchwork Thu Dec 17 18:37:30 2015
X-Patchwork-Submitter: Linus Torvalds
X-Patchwork-Id: 7876141
Date: Thu, 17 Dec 2015 10:37:30 -0800 (PST)
From: Linus Torvalds
To: Peter Anvin, Ingo Molnar, Catalin Marinas, Will Deacon
Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 2/3] Add 'unsafe' user access functions for batched accesses
User-Agent: Alpine 2.20 (LFD 67 2015-01-07)
MIME-Version: 1.0

From: Linus Torvalds
Date: Thu, 17 Dec 2015 09:57:27 -0800
Subject: [PATCH 2/3] Add 'unsafe' user access functions for batched accesses

The naming is meant to discourage random use: the helper functions are
not really any more "unsafe" than the traditional double-underscore
functions (which need the address range checking), but they do need even
more infrastructure around them, and should not be used willy-nilly.

In addition to checking the access range, these user access functions
require that you wrap the user access with a "user_access_{begin,end}()"
pair around it.  That allows architectures that implement kernel user
access control (x86: SMAP, arm64: PAN) to do the user access control in
the wrapping user_access_begin/end part, and then batch up the actual
user space accesses using the new interfaces.

The main (and hopefully only) use for these is the core generic access
helpers, initially just the generic user string functions
(strnlen_user() and strncpy_from_user()).

Signed-off-by: Linus Torvalds
---
 arch/x86/include/asm/uaccess.h | 25 +++++++++++++++++++++++++
 include/linux/uaccess.h        |  7 +++++++
 2 files changed, 32 insertions(+)

diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index cc228f4713da..ca59e4f9254e 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -762,5 +762,30 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
 #undef __copy_from_user_overflow
 #undef __copy_to_user_overflow
 
+/*
+ * The "unsafe" user accesses aren't really "unsafe", but the naming
+ * is a big fat warning: you have to not only do the access_ok()
+ * checking before using them, but you have to surround them with the
+ * user_access_begin/end() pair.
+ */
+#define user_access_begin()	__uaccess_begin()
+#define user_access_end()	__uaccess_end()
+
+#define unsafe_put_user(x, ptr)						\
+({									\
+	int __pu_err;							\
+	__put_user_size((x), (ptr), sizeof(*(ptr)), __pu_err, -EFAULT);	\
+	__builtin_expect(__pu_err, 0);					\
+})
+
+#define unsafe_get_user(x, ptr)						\
+({									\
+	int __gu_err;							\
+	unsigned long __gu_val;						\
+	__get_user_size(__gu_val, (ptr), sizeof(*(ptr)), __gu_err, -EFAULT);	\
+	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
+	__builtin_expect(__gu_err, 0);					\
+})
+
 #endif /* _ASM_X86_UACCESS_H */
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 558129af828a..349557825428 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -111,4 +111,11 @@ extern long strncpy_from_unsafe(char *dst, const void *unsafe_addr, long count);
 #define probe_kernel_address(addr, retval)		\
 	probe_kernel_read(&retval, addr, sizeof(retval))
 
+#ifndef user_access_begin
+#define user_access_begin() do { } while (0)
+#define user_access_end() do { } while (0)
+#define unsafe_get_user(x, ptr) __get_user(x, ptr)
+#define unsafe_put_user(x, ptr) __put_user(x, ptr)
+#endif
+
 #endif	/* __LINUX_UACCESS_H__ */
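
For illustration, here is a minimal sketch (not part of the patch) of how a
batched caller is meant to use these interfaces.  The function name
copy_flags_from_user() and its arguments are made up for this example; the
real intended users are the generic string functions mentioned in the
changelog.  The caller still does the access_ok() range check itself, then
opens the user access window (SMAP/PAN) once around the whole batch:

static long copy_flags_from_user(unsigned long *dst,
				 const unsigned long __user *src,
				 unsigned long count)
{
	unsigned long i;

	/* The range check still happens up front, outside the batch */
	if (!access_ok(VERIFY_READ, src, count * sizeof(*src)))
		return -EFAULT;

	/*
	 * Open the user access window once (this is where SMAP/PAN gets
	 * relaxed), batch the individual accesses inside it, and close
	 * the window again on every exit path.
	 */
	user_access_begin();
	for (i = 0; i < count; i++) {
		if (unsafe_get_user(dst[i], &src[i]))
			goto efault;
	}
	user_access_end();
	return 0;

efault:
	user_access_end();
	return -EFAULT;
}

On architectures without SMAP/PAN support the generic fallbacks in
include/linux/uaccess.h make user_access_begin()/user_access_end() no-ops
and map the unsafe_*_user() calls back to __get_user()/__put_user(), so
the same caller works everywhere.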