From patchwork Wed Oct 10 15:55:44 2018
X-Patchwork-Submitter: James Morse
X-Patchwork-Id: 10634783
From: James Morse
To: linux-arm-kernel@lists.infradead.org
Cc: Mark Rutland, Catalin Marinas, Will Deacon, James Morse, Julien Thierry
Subject: [PATCH] Revert "arm64: uaccess: implement unsafe accessors"
Date: Wed, 10 Oct 2018 16:55:44 +0100
Message-Id: <20181010155544.19125-1-james.morse@arm.com>
X-Mailer: git-send-email 2.19.0

This reverts commit a1f33941f7e103bcf471eaf8461b212223c642d6.

The unsafe accessors allow the PAN enable/disable calls to be made once
for a group of accesses.
Adding these means we can now have sequences that look like this:

| user_access_begin();
| unsafe_put_user(static-value, x, err);
| unsafe_put_user(helper-that-sleeps(), x, err);
| user_access_end();

Calling schedule() without taking an exception doesn't switch the
PSTATE or TTBRs. We can switch out of a uaccess-enabled region, and
run other code with uaccess enabled for a different thread. We can
also switch from uaccess-disabled code back into this region, meaning
the unsafe_put_user()s will fault. With software-PAN, threads that do
this get stuck, as handle_mm_fault() determines the page has already
been mapped in, but we fault again because the page tables aren't
loaded.

To solve this we need code in __switch_to() that saves/restores the
PAN state.

Signed-off-by: James Morse
CC: Julien Thierry
Acked-by: Mark Rutland
Acked-by: Julien Thierry
---
This reverts a patch queued in for-next/core.

 arch/arm64/include/asm/uaccess.h | 61 ++++++++------------------------
 1 file changed, 15 insertions(+), 46 deletions(-)

diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 8ac6e34922e7..07c34087bd5e 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -276,9 +276,11 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
 	: "+r" (err), "=&r" (x)					\
 	: "r" (addr), "i" (-EFAULT))
 
-#define __get_user_err_unsafe(x, ptr, err)			\
+#define __get_user_err(x, ptr, err)				\
 do {								\
 	unsigned long __gu_val;					\
+	__chk_user_ptr(ptr);					\
+	uaccess_enable_not_uao();				\
 	switch (sizeof(*(ptr))) {				\
 	case 1:							\
 		__get_user_asm("ldrb", "ldtrb", "%w", __gu_val, (ptr),	\
@@ -299,24 +301,17 @@ do {								\
 	default:						\
 		BUILD_BUG();					\
 	}							\
-	(x) = (__force __typeof__(*(ptr)))__gu_val;		\
-} while (0)
-
-#define __get_user_err_check(x, ptr, err)			\
-do {								\
-	__chk_user_ptr(ptr);					\
-	uaccess_enable_not_uao();				\
-	__get_user_err_unsafe((x), (ptr), (err));		\
 	uaccess_disable_not_uao();				\
+	(x) = (__force __typeof__(*(ptr)))__gu_val;		\
 } while (0)
 
-#define __get_user_err(x, ptr, err, accessor)			\
+#define __get_user_check(x, ptr, err)				\
 ({								\
 	__typeof__(*(ptr)) __user *__p = (ptr);			\
 	might_fault();						\
 	if (access_ok(VERIFY_READ, __p, sizeof(*__p))) {	\
 		__p = uaccess_mask_ptr(__p);			\
-		accessor((x), __p, (err));			\
+		__get_user_err((x), __p, (err));		\
 	} else {						\
 		(x) = 0; (err) = -EFAULT;			\
 	}							\
@@ -324,14 +319,14 @@ do {								\
 
 #define __get_user_error(x, ptr, err)				\
 ({								\
-	__get_user_err((x), (ptr), (err), __get_user_err_check);	\
+	__get_user_check((x), (ptr), (err));			\
 	(void)0;						\
 })
 
 #define __get_user(x, ptr)					\
 ({								\
 	int __gu_err = 0;					\
-	__get_user_err((x), (ptr), __gu_err, __get_user_err_check);	\
+	__get_user_check((x), (ptr), __gu_err);			\
 	__gu_err;						\
 })
 
@@ -351,9 +346,11 @@ do {								\
 	: "+r" (err)						\
 	: "r" (x), "r" (addr), "i" (-EFAULT))
 
-#define __put_user_err_unsafe(x, ptr, err)			\
+#define __put_user_err(x, ptr, err)				\
 do {								\
 	__typeof__(*(ptr)) __pu_val = (x);			\
+	__chk_user_ptr(ptr);					\
+	uaccess_enable_not_uao();				\
 	switch (sizeof(*(ptr))) {				\
 	case 1:							\
 		__put_user_asm("strb", "sttrb", "%w", __pu_val, (ptr),	\
@@ -374,24 +371,16 @@ do {								\
 	default:						\
 		BUILD_BUG();					\
 	}							\
-} while (0)
-
-
-#define __put_user_err_check(x, ptr, err)			\
-do {								\
-	__chk_user_ptr(ptr);					\
-	uaccess_enable_not_uao();				\
-	__put_user_err_unsafe((x), (ptr), (err));		\
 	uaccess_disable_not_uao();				\
 } while (0)
 
-#define __put_user_err(x, ptr, err, accessor)			\
+#define __put_user_check(x, ptr, err)				\
 ({								\
 	__typeof__(*(ptr)) __user *__p = (ptr);			\
 	might_fault();						\
 	if (access_ok(VERIFY_WRITE, __p, sizeof(*__p))) {	\
 		__p = uaccess_mask_ptr(__p);			\
-		accessor((x), __p, (err));			\
+		__put_user_err((x), __p, (err));		\
 	} else {						\
 		(err) = -EFAULT;				\
 	}							\
@@ -399,39 +388,19 @@ do {								\
 
 #define __put_user_error(x, ptr, err)				\
 ({								\
-	__put_user_err((x), (ptr), (err), __put_user_err_check);	\
+	__put_user_check((x), (ptr), (err));			\
 	(void)0;						\
 })
 
 #define __put_user(x, ptr)					\
 ({								\
 	int __pu_err = 0;					\
-	__put_user_err((x), (ptr), __pu_err, __put_user_err_check);	\
+	__put_user_check((x), (ptr), __pu_err);			\
 	__pu_err;						\
 })
 
 #define put_user	__put_user
 
-
-#define user_access_begin()	uaccess_enable_not_uao()
-#define user_access_end()	uaccess_disable_not_uao()
-
-#define unsafe_get_user(x, ptr, err)				\
-do {								\
-	int __gu_err = 0;					\
-	__get_user_err((x), (ptr), __gu_err, __get_user_err_unsafe);	\
-	if (__gu_err != 0)					\
-		goto err;					\
-} while (0)
-
-#define unsafe_put_user(x, ptr, err)				\
-do {								\
-	int __pu_err = 0;					\
-	__put_user_err((x), (ptr), __pu_err, __put_user_err_unsafe);	\
-	if (__pu_err != 0)					\
-		goto err;					\
-} while (0)
-
 extern unsigned long __must_check __arch_copy_from_user(void *to, const void __user *from, unsigned long n);
 #define raw_copy_from_user(to, from, n)				\
 ({								\