From patchwork Tue Jun 25 04:04:57 2024
X-Patchwork-Submitter: Jisheng Zhang
X-Patchwork-Id: 13710580
From: Jisheng Zhang
To: Paul Walmsley, Palmer Dabbelt, Albert Ou
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/4] riscv: implement user_access_begin and families
Date: Tue, 25 Jun 2024 12:04:57 +0800
Message-ID: <20240625040500.1788-2-jszhang@kernel.org>
In-Reply-To: <20240625040500.1788-1-jszhang@kernel.org>
References: <20240625040500.1788-1-jszhang@kernel.org>
Currently, when a function such as strncpy_from_user() is called, the
user-space access protection is disabled and re-enabled for every word
read. By implementing user_access_begin() and family, the protection is
disabled once at the beginning of the copy and re-enabled once at the end.

The __inttype() macro is borrowed from the x86 implementation.

Signed-off-by: Jisheng Zhang
Reviewed-by: Cyril Bur
---
 arch/riscv/include/asm/uaccess.h | 63 ++++++++++++++++++++++++++++++++
 1 file changed, 63 insertions(+)

diff --git a/arch/riscv/include/asm/uaccess.h b/arch/riscv/include/asm/uaccess.h
index 72ec1d9bd3f3..09d4ca37522c 100644
--- a/arch/riscv/include/asm/uaccess.h
+++ b/arch/riscv/include/asm/uaccess.h
@@ -28,6 +28,19 @@
 #define __disable_user_access()					\
 	__asm__ __volatile__ ("csrc sstatus, %0" : : "r" (SR_SUM) : "memory")
 
+/*
+ * This is the smallest unsigned integer type that can fit a value
+ * (up to 'long long')
+ */
+#define __inttype(x) __typeof__(				\
+	__typefits(x,char,					\
+	__typefits(x,short,					\
+	__typefits(x,int,					\
+	__typefits(x,long,0ULL)))))
+
+#define __typefits(x,type,not)					\
+	__builtin_choose_expr(sizeof(x)<=sizeof(type),(unsigned type)0,not)
+
 /*
  * The exception table consists of pairs of addresses: the first is the
  * address of an instruction that is allowed to fault, and the second is
@@ -335,6 +348,56 @@ do {								\
 		goto err_label;					\
 } while (0)
 
+static __must_check __always_inline bool user_access_begin(const void __user *ptr, size_t len)
+{
+	if (unlikely(!access_ok(ptr,len)))
+		return 0;
+	__enable_user_access();
+	return 1;
+}
+#define user_access_begin(a,b)	user_access_begin(a,b)
+#define user_access_end()	__disable_user_access();
+
+static inline unsigned long user_access_save(void) { return 0UL; }
+static inline void user_access_restore(unsigned long enabled) { }
+
+#define unsafe_put_user(x, ptr, label) do {			\
+	long __kr_err = 0;					\
+	__put_user_nocheck(x, (ptr), __kr_err);			\
+	if (__kr_err) goto label;				\
+} while (0)
+
+#define unsafe_get_user(x, ptr, label) do {			\
+	long __kr_err = 0;					\
+	__inttype(*(ptr)) __gu_val;				\
+	__get_user_nocheck(__gu_val, (ptr), __kr_err);		\
+	(x) = (__force __typeof__(*(ptr)))__gu_val;		\
+	if (__kr_err) goto label;				\
+} while (0)
+
+/*
+ * We want the unsafe accessors to always be inlined and use
+ * the error labels - thus the macro games.
+ */
+#define unsafe_copy_loop(dst, src, len, type, label)		\
+	while (len >= sizeof(type)) {				\
+		unsafe_put_user(*(type *)(src),(type __user *)(dst),label); \
+		dst += sizeof(type);				\
+		src += sizeof(type);				\
+		len -= sizeof(type);				\
+	}
+
+#define unsafe_copy_to_user(_dst,_src,_len,label)		\
+do {								\
+	char __user *__ucu_dst = (_dst);			\
+	const char *__ucu_src = (_src);				\
+	size_t __ucu_len = (_len);				\
+	unsafe_copy_loop(__ucu_dst, __ucu_src, __ucu_len, u64, label); \
+	unsafe_copy_loop(__ucu_dst, __ucu_src, __ucu_len, u32, label); \
+	unsafe_copy_loop(__ucu_dst, __ucu_src, __ucu_len, u16, label); \
+	unsafe_copy_loop(__ucu_dst, __ucu_src, __ucu_len, u8, label); \
+} while (0)
+
 #else /* CONFIG_MMU */
 #include
 #endif /* CONFIG_MMU */
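
For illustration, this is how a caller is expected to pair the new helpers
(hypothetical function, not part of the patch): the SUM protection is
toggled once around the whole sequence, and a faulting access transfers
control to the error label instead of being checked word by word.

static long read_pair_from_user(const u32 __user *uaddr, u32 *a, u32 *b)
{
	if (!user_access_begin(uaddr, 2 * sizeof(u32)))
		return -EFAULT;
	/* protection stays disabled across both accesses */
	unsafe_get_user(*a, &uaddr[0], Efault);
	unsafe_get_user(*b, &uaddr[1], Efault);
	user_access_end();
	return 0;
Efault:
	user_access_end();
	return -EFAULT;
}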

From patchwork Tue Jun 25 04:04:58 2024
X-Patchwork-Submitter: Jisheng Zhang
X-Patchwork-Id: 13710579
From: Jisheng Zhang
To: Paul Walmsley, Palmer Dabbelt, Albert Ou
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/4] riscv: uaccess: use input constraints for ptr of __put_user
Date: Tue, 25 Jun 2024 12:04:58 +0800
Message-ID: <20240625040500.1788-3-jszhang@kernel.org>
In-Reply-To: <20240625040500.1788-1-jszhang@kernel.org>
References: <20240625040500.1788-1-jszhang@kernel.org>

I believe the "=m" output constraint is not necessary: the store is
performed by the instruction in the asm template itself, so we do not need
the compiler to do the write for us. Tell the compiler that the memory
operand is an input we read instead of an output we write.

Signed-off-by: Jisheng Zhang
---
 arch/riscv/include/asm/uaccess.h | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/riscv/include/asm/uaccess.h b/arch/riscv/include/asm/uaccess.h
index 09d4ca37522c..84b084e388a7 100644
--- a/arch/riscv/include/asm/uaccess.h
+++ b/arch/riscv/include/asm/uaccess.h
@@ -186,11 +186,11 @@ do {								\
 	__typeof__(*(ptr)) __x = x;				\
 	__asm__ __volatile__ (					\
 		"1:\n"						\
-		"	" insn " %z2, %1\n"			\
+		"	" insn " %z1, %2\n"			\
 		"2:\n"						\
 		_ASM_EXTABLE_UACCESS_ERR(1b, 2b, %0)		\
-		: "+r" (err), "=m" (*(ptr))			\
-		: "rJ" (__x));					\
+		: "+r" (err)					\
+		: "rJ" (__x), "m"(*(ptr)));			\
 } while (0)
 
 #ifdef CONFIG_64BIT
@@ -203,16 +203,16 @@ do {								\
 	u64 __x = (__typeof__((x)-(x)))(x);			\
 	__asm__ __volatile__ (					\
 		"1:\n"						\
-		"	sw %z3, %1\n"				\
+		"	sw %z1, %3\n"				\
 		"2:\n"						\
-		"	sw %z4, %2\n"				\
+		"	sw %z2, %4\n"				\
 		"3:\n"						\
 		_ASM_EXTABLE_UACCESS_ERR(1b, 3b, %0)		\
 		_ASM_EXTABLE_UACCESS_ERR(2b, 3b, %0)		\
-		: "+r" (err),					\
-		  "=m" (__ptr[__LSW]),				\
-		  "=m" (__ptr[__MSW])				\
-		: "rJ" (__x), "rJ" (__x >> 32));		\
+		: "+r" (err)					\
+		: "rJ" (__x), "rJ" (__x >> 32),			\
+		  "m" (__ptr[__LSW]),				\
+		  "m" (__ptr[__MSW]));				\
 } while (0)
 
 #endif /* CONFIG_64BIT */
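
A reduced, stand-alone illustration of the constraint change (assumed RV64;
put_word_old()/put_word_new() are hypothetical names, not the kernel
macros): in the first form the destination is declared as memory the asm
writes, in the second it is passed as an "m" input so the operand only
supplies the address expression for the sd that performs the store.

static inline void put_word_old(unsigned long x, unsigned long *p)
{
	/* destination listed as an "=m" output operand */
	asm volatile("sd %z1, %0" : "=m" (*p) : "rJ" (x));
}

static inline void put_word_new(unsigned long x, unsigned long *p)
{
	/* destination listed as an "m" input operand; the sd does the store */
	asm volatile("sd %z0, %1" : : "rJ" (x), "m" (*p));
}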

From patchwork Tue Jun 25 04:04:59 2024
X-Patchwork-Submitter: Jisheng Zhang
X-Patchwork-Id: 13710582
From: Jisheng Zhang
To: Paul Walmsley, Palmer Dabbelt, Albert Ou
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 3/4] riscv: uaccess: use 'asm goto' for put_user()
Date: Tue, 25 Jun 2024 12:04:59 +0800
Message-ID: <20240625040500.1788-4-jszhang@kernel.org>
In-Reply-To: <20240625040500.1788-1-jszhang@kernel.org>
References: <20240625040500.1788-1-jszhang@kernel.org>

'asm goto' generates noticeably better code: there is no error value to
test after the access, because a faulting store jumps straight to the
error handling via the exception table.
Signed-off-by: Jisheng Zhang
---
 arch/riscv/include/asm/uaccess.h | 68 +++++++++++++++-----------------
 1 file changed, 32 insertions(+), 36 deletions(-)

diff --git a/arch/riscv/include/asm/uaccess.h b/arch/riscv/include/asm/uaccess.h
index 84b084e388a7..d8c44705b61d 100644
--- a/arch/riscv/include/asm/uaccess.h
+++ b/arch/riscv/include/asm/uaccess.h
@@ -181,61 +181,66 @@ do {								\
 	((x) = (__force __typeof__(x))0, -EFAULT);		\
 })
 
-#define __put_user_asm(insn, x, ptr, err)			\
+#define __put_user_asm(insn, x, ptr, label)			\
 do {								\
 	__typeof__(*(ptr)) __x = x;				\
-	__asm__ __volatile__ (					\
+	asm goto(						\
 		"1:\n"						\
-		"	" insn " %z1, %2\n"			\
-		"2:\n"						\
-		_ASM_EXTABLE_UACCESS_ERR(1b, 2b, %0)		\
-		: "+r" (err)					\
-		: "rJ" (__x), "m"(*(ptr)));			\
+		"	" insn " %z0, %1\n"			\
+		_ASM_EXTABLE(1b, %l2)				\
+		: : "rJ" (__x), "m"(*(ptr)) : : label);		\
 } while (0)
 
 #ifdef CONFIG_64BIT
-#define __put_user_8(x, ptr, err)				\
-	__put_user_asm("sd", x, ptr, err)
+#define __put_user_8(x, ptr, label)				\
+	__put_user_asm("sd", x, ptr, label)
 #else /* !CONFIG_64BIT */
-#define __put_user_8(x, ptr, err)				\
+#define __put_user_8(x, ptr, label)				\
 do {								\
 	u32 __user *__ptr = (u32 __user *)(ptr);		\
 	u64 __x = (__typeof__((x)-(x)))(x);			\
 	__asm__ __volatile__ (					\
 		"1:\n"						\
-		"	sw %z1, %3\n"				\
+		"	sw %z0, %2\n"				\
 		"2:\n"						\
-		"	sw %z2, %4\n"				\
-		"3:\n"						\
-		_ASM_EXTABLE_UACCESS_ERR(1b, 3b, %0)		\
-		_ASM_EXTABLE_UACCESS_ERR(2b, 3b, %0)		\
-		: "+r" (err)					\
-		: "rJ" (__x), "rJ" (__x >> 32),			\
+		"	sw %z1, %3\n"				\
+		_ASM_EXTABLE(1b, %l4)				\
+		_ASM_EXTABLE(2b, %l4)				\
+		: : "rJ" (__x), "rJ" (__x >> 32),		\
 		  "m" (__ptr[__LSW]),				\
-		  "m" (__ptr[__MSW]));				\
+		  "m" (__ptr[__MSW]) : : label);		\
 } while (0)
 #endif /* CONFIG_64BIT */
 
-#define __put_user_nocheck(x, __gu_ptr, __pu_err)		\
+#define __put_user_nocheck(x, __gu_ptr, label)			\
 do {								\
 	switch (sizeof(*__gu_ptr)) {				\
 	case 1:							\
-		__put_user_asm("sb", (x), __gu_ptr, __pu_err);	\
+		__put_user_asm("sb", (x), __gu_ptr, label);	\
 		break;						\
 	case 2:							\
-		__put_user_asm("sh", (x), __gu_ptr, __pu_err);	\
+		__put_user_asm("sh", (x), __gu_ptr, label);	\
 		break;						\
 	case 4:							\
-		__put_user_asm("sw", (x), __gu_ptr, __pu_err);	\
+		__put_user_asm("sw", (x), __gu_ptr, label);	\
 		break;						\
 	case 8:							\
-		__put_user_8((x), __gu_ptr, __pu_err);		\
+		__put_user_8((x), __gu_ptr, label);		\
 		break;						\
 	default:						\
 		BUILD_BUG();					\
 	}							\
 } while (0)
 
+#define __put_user_error(x, ptr, err)				\
+do {								\
+	__label__ err_label;					\
+	__put_user_nocheck(x, ptr, err_label);			\
+	break;							\
+err_label:							\
+	(err) = -EFAULT;					\
+} while (0)
+
 /**
  * __put_user: - Write a simple value into user space, with less checking.
  * @x: Value to copy to user space.
@@ -266,7 +271,7 @@ do {								\
 	__chk_user_ptr(__gu_ptr);				\
 								\
 	__enable_user_access();					\
-	__put_user_nocheck(__val, __gu_ptr, __pu_err);		\
+	__put_user_error(__val, __gu_ptr, __pu_err);		\
 	__disable_user_access();				\
 								\
 	__pu_err;						\
@@ -340,13 +345,7 @@ do {								\
 } while (0)
 
 #define __put_kernel_nofault(dst, src, type, err_label)	\
-do {								\
-	long __kr_err = 0;					\
-								\
-	__put_user_nocheck(*((type *)(src)), (type *)(dst), __kr_err); \
-	if (unlikely(__kr_err))					\
-		goto err_label;					\
-} while (0)
+	__put_user_nocheck(*((type *)(src)), (type *)(dst), err_label);
 
 static __must_check __always_inline bool user_access_begin(const void __user *ptr, size_t len)
 {
@@ -361,11 +360,8 @@ static __must_check __always_inline bool user_access_begin(const void __user *pt
 static inline unsigned long user_access_save(void) { return 0UL; }
 static inline void user_access_restore(unsigned long enabled) { }
 
-#define unsafe_put_user(x, ptr, label) do {			\
-	long __kr_err = 0;					\
-	__put_user_nocheck(x, (ptr), __kr_err);			\
-	if (__kr_err) goto label;				\
-} while (0)
+#define unsafe_put_user(x, ptr, label)				\
+	__put_user_nocheck(x, (ptr), label);
 
 #define unsafe_get_user(x, ptr, label) do {			\
 	long __kr_err = 0;					\
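
A minimal stand-alone sketch of the 'asm goto' shape relied on here
(hypothetical try_store() helper, exception-table plumbing omitted, a
conditional branch stands in for the faulting store): the asm transfers
control directly to a C label, so the fall-through path carries no error
test at all.

static bool try_store(unsigned long v, unsigned long *p)
{
	asm goto("	beqz	%1, %l[failed]\n"	/* stand-in for a fault */
		 "	sd	%0, 0(%1)\n"
		 : /* plain 'asm goto' takes no output operands */
		 : "r" (v), "r" (p)
		 : "memory"
		 : failed);
	return true;
failed:
	return false;
}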

From patchwork Tue Jun 25 04:05:00 2024
X-Patchwork-Submitter: Jisheng Zhang
X-Patchwork-Id: 13710581
From: Jisheng Zhang
To: Paul Walmsley, Palmer Dabbelt, Albert Ou
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 4/4] riscv: uaccess: use 'asm goto output' for get_user
Date: Tue, 25 Jun 2024 12:05:00 +0800
Message-ID: <20240625040500.1788-5-jszhang@kernel.org>
In-Reply-To: <20240625040500.1788-1-jszhang@kernel.org>
References: <20240625040500.1788-1-jszhang@kernel.org>

'asm goto output' generates noticeably better code: there is no error
value to test after the access, because a faulting load jumps straight to
the error handling via the exception table.

Signed-off-by: Jisheng Zhang
---
 arch/riscv/include/asm/uaccess.h | 88 +++++++++++++++++++++++---------
 1 file changed, 63 insertions(+), 25 deletions(-)

diff --git a/arch/riscv/include/asm/uaccess.h b/arch/riscv/include/asm/uaccess.h
index d8c44705b61d..d9c32b4f7d13 100644
--- a/arch/riscv/include/asm/uaccess.h
+++ b/arch/riscv/include/asm/uaccess.h
@@ -63,27 +63,54 @@
  * call.
  */
 
-#define __get_user_asm(insn, x, ptr, err)			\
+#ifdef CONFIG_CC_HAS_ASM_GOTO_OUTPUT
+#define __get_user_asm(insn, x, ptr, label)			\
+	asm_goto_output(					\
+		"1:\n"						\
+		"	" insn " %0, %1\n"			\
+		_ASM_EXTABLE_UACCESS_ERR(1b, %l2, %0)		\
+		: "=&r" (x)					\
+		: "m" (*(ptr)) : : label)
+#else /* !CONFIG_CC_HAS_ASM_GOTO_OUTPUT */
+#define __get_user_asm(insn, x, ptr, label)			\
 do {								\
-	__typeof__(x) __x;					\
+	long __gua_err = 0;					\
 	__asm__ __volatile__ (					\
 		"1:\n"						\
 		"	" insn " %1, %2\n"			\
 		"2:\n"						\
 		_ASM_EXTABLE_UACCESS_ERR_ZERO(1b, 2b, %0, %1)	\
-		: "+r" (err), "=&r" (__x)			\
+		: "+r" (__gua_err), "=&r" (x)			\
 		: "m" (*(ptr)));				\
-	(x) = __x;						\
+	if (__gua_err)						\
+		goto label;					\
 } while (0)
+#endif /* CONFIG_CC_HAS_ASM_GOTO_OUTPUT */
 
 #ifdef CONFIG_64BIT
-#define __get_user_8(x, ptr, err)				\
-	__get_user_asm("ld", x, ptr, err)
+#define __get_user_8(x, ptr, label)				\
+	__get_user_asm("ld", x, ptr, label)
 #else /* !CONFIG_64BIT */
-#define __get_user_8(x, ptr, err)				\
+
+#ifdef CONFIG_CC_HAS_ASM_GOTO_OUTPUT
+#define __get_user_8(x, ptr, label)				\
+	asm_goto_output(					\
+		"1:\n"						\
+		"	lw %0, %2\n"				\
+		"2:\n"						\
+		"	lw %1, %3\n"				\
+		_ASM_EXTABLE_UACCESS_ERR(1b, %l4, %0)		\
+		_ASM_EXTABLE_UACCESS_ERR(2b, %l4, %0)		\
+		: "=&r" (__lo), "=r" (__hi)			\
+		: "m" (__ptr[__LSW]), "m" (__ptr[__MSW])	\
+		: : label)
+
+#else /* !CONFIG_CC_HAS_ASM_GOTO_OUTPUT */
+#define __get_user_8(x, ptr, label)				\
 do {								\
 	u32 __user *__ptr = (u32 __user *)(ptr);		\
 	u32 __lo, __hi;						\
+	long __gu8_err = 0;					\
 	__asm__ __volatile__ (					\
 		"1:\n"						\
 		"	lw %1, %3\n"				\
@@ -92,35 +119,51 @@ do {								\
 		"3:\n"						\
 		_ASM_EXTABLE_UACCESS_ERR_ZERO(1b, 3b, %0, %1)	\
 		_ASM_EXTABLE_UACCESS_ERR_ZERO(2b, 3b, %0, %1)	\
-		: "+r" (err), "=&r" (__lo), "=r" (__hi)		\
+		: "+r" (__gu8_err), "=&r" (__lo), "=r" (__hi)	\
 		: "m" (__ptr[__LSW]), "m" (__ptr[__MSW]));	\
-	if (err)						\
+	if (__gu8_err) {					\
 		__hi = 0;					\
+		goto label;					\
+	}							\
 	(x) = (__typeof__(x))((__typeof__((x)-(x)))(		\
 		(((u64)__hi << 32) | __lo)));			\
 } while (0)
+#endif /* CONFIG_CC_HAS_ASM_GOTO_OUTPUT */
+
 #endif /* CONFIG_64BIT */
 
-#define __get_user_nocheck(x, __gu_ptr, __gu_err)		\
+#define __get_user_nocheck(x, __gu_ptr, label)			\
 do {								\
 	switch (sizeof(*__gu_ptr)) {				\
 	case 1:							\
-		__get_user_asm("lb", (x), __gu_ptr, __gu_err);	\
+		__get_user_asm("lb", (x), __gu_ptr, label);	\
 		break;						\
 	case 2:							\
-		__get_user_asm("lh", (x), __gu_ptr, __gu_err);	\
+		__get_user_asm("lh", (x), __gu_ptr, label);	\
 		break;						\
 	case 4:							\
-		__get_user_asm("lw", (x), __gu_ptr, __gu_err);	\
+		__get_user_asm("lw", (x), __gu_ptr, label);	\
 		break;						\
 	case 8:							\
-		__get_user_8((x), __gu_ptr, __gu_err);		\
+		__get_user_8((x), __gu_ptr, label);		\
 		break;						\
 	default:						\
 		BUILD_BUG();					\
 	}							\
 } while (0)
 
+#define __get_user_error(x, ptr, err)				\
+do {								\
+	__label__ __gu_failed;					\
+								\
+	__get_user_nocheck(x, ptr, __gu_failed);		\
+	err = 0;						\
+	break;							\
+__gu_failed:							\
+	x = 0;							\
+	err = -EFAULT;						\
+} while (0)
+
 /**
  * __get_user: - Get a simple variable from user space, with less checking.
  * @x: Variable to store result.
@@ -145,13 +188,16 @@ do {								\
 ({								\
 	const __typeof__(*(ptr)) __user *__gu_ptr = (ptr);	\
 	long __gu_err = 0;					\
+	__typeof__(x) __gu_val;					\
 								\
 	__chk_user_ptr(__gu_ptr);				\
 								\
 	__enable_user_access();					\
-	__get_user_nocheck(x, __gu_ptr, __gu_err);		\
+	__get_user_error(__gu_val, __gu_ptr, __gu_err);		\
 	__disable_user_access();				\
 								\
+	(x) = __gu_val;						\
+								\
 	__gu_err;						\
 })
 
@@ -336,13 +382,7 @@ unsigned long __must_check clear_user(void __user *to, unsigned long n)
 }
 
 #define __get_kernel_nofault(dst, src, type, err_label)	\
-do {								\
-	long __kr_err = 0;					\
-								\
-	__get_user_nocheck(*((type *)(dst)), (type *)(src), __kr_err); \
-	if (unlikely(__kr_err))					\
-		goto err_label;					\
-} while (0)
+	__get_user_nocheck(*((type *)(dst)), (type *)(src), err_label);
 
 #define __put_kernel_nofault(dst, src, type, err_label)	\
 	__put_user_nocheck(*((type *)(src)), (type *)(dst), err_label);
 
@@ -364,11 +404,9 @@ static inline void user_access_restore(unsigned long enabled) { }
 	__put_user_nocheck(x, (ptr), label);
 
 #define unsafe_get_user(x, ptr, label) do {			\
-	long __kr_err = 0;					\
 	__inttype(*(ptr)) __gu_val;				\
-	__get_user_nocheck(__gu_val, (ptr), __kr_err);		\
+	__get_user_nocheck(__gu_val, (ptr), label);		\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;		\
-	if (__kr_err) goto label;				\
 } while (0)
 
 /*
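
The same sketch with an output operand, which is what
CONFIG_CC_HAS_ASM_GOTO_OUTPUT gates (hypothetical try_load() helper,
assumed compiler support for 'asm goto' with outputs, exception-table
plumbing omitted): the loaded value is an asm output, and the failure path
jumps straight to the C label, so the value is only consumed on the
success path.

static bool try_load(const unsigned long *p, unsigned long *out)
{
	unsigned long v;

	asm goto("	beqz	%1, %l[failed]\n"	/* stand-in for a fault */
		 "	ld	%0, 0(%1)\n"
		 : "=&r" (v)
		 : "r" (p)
		 : "memory"
		 : failed);
	*out = v;
	return true;
failed:
	return false;
}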