From patchwork Wed Oct 4 15:13:58 2023
X-Patchwork-Id: 13408891
From: Clément Léger <cleger@rivosinc.com>
To: Paul Walmsley, Palmer Dabbelt, Albert Ou
Cc: Clément Léger, Atish Patra, Andrew Jones, Evan Green, Björn Töpel,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    Ron Minnich, Daniel Maslowski, Conor Dooley
Subject: [PATCH v2 1/8] riscv: remove unused functions in traps_misaligned.c
Date: Wed, 4 Oct 2023 17:13:58 +0200
Message-ID: <20231004151405.521596-2-cleger@rivosinc.com>
In-Reply-To: <20231004151405.521596-1-cleger@rivosinc.com>

Replace the DECLARE_UNPRIVILEGED_LOAD_FUNCTION()/_STORE_FUNCTION() macros with
the only two functions that are actually called from this file, load_u8() and
store_u8().

Signed-off-by: Clément Léger <cleger@rivosinc.com>
---
 arch/riscv/kernel/traps_misaligned.c | 46 +++++-----------------------
 1 file changed, 7 insertions(+), 39 deletions(-)

diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
index 378f5b151443..e7bfb33089c1 100644
--- a/arch/riscv/kernel/traps_misaligned.c
+++ b/arch/riscv/kernel/traps_misaligned.c
@@ -151,51 +151,19 @@
 #define PRECISION_S 0
 #define PRECISION_D 1

-#define DECLARE_UNPRIVILEGED_LOAD_FUNCTION(type, insn)	\
-static inline type load_##type(const type *addr)	\
-{							\
-	type val;					\
-	asm (#insn " %0, %1"				\
-	: "=&r" (val) : "m" (*addr));			\
-	return val;					\
-}
+static inline u8 load_u8(const u8 *addr)
+{
+	u8 val;

-#define DECLARE_UNPRIVILEGED_STORE_FUNCTION(type, insn)	\
-static inline void store_##type(type *addr, type val)	\
-{							\
-	asm volatile (#insn " %0, %1\n"			\
-	: : "r" (val), "m" (*addr));			\
-}
+	asm volatile("lbu %0, %1" : "=&r" (val) : "m" (*addr));

-DECLARE_UNPRIVILEGED_LOAD_FUNCTION(u8, lbu)
-DECLARE_UNPRIVILEGED_LOAD_FUNCTION(u16, lhu)
-DECLARE_UNPRIVILEGED_LOAD_FUNCTION(s8, lb)
-DECLARE_UNPRIVILEGED_LOAD_FUNCTION(s16, lh)
-DECLARE_UNPRIVILEGED_LOAD_FUNCTION(s32, lw)
-DECLARE_UNPRIVILEGED_STORE_FUNCTION(u8, sb)
-DECLARE_UNPRIVILEGED_STORE_FUNCTION(u16, sh)
-DECLARE_UNPRIVILEGED_STORE_FUNCTION(u32, sw)
-#if defined(CONFIG_64BIT)
-DECLARE_UNPRIVILEGED_LOAD_FUNCTION(u32, lwu)
-DECLARE_UNPRIVILEGED_LOAD_FUNCTION(u64, ld)
-DECLARE_UNPRIVILEGED_STORE_FUNCTION(u64, sd)
-DECLARE_UNPRIVILEGED_LOAD_FUNCTION(ulong, ld)
-#else
-DECLARE_UNPRIVILEGED_LOAD_FUNCTION(u32, lw)
-DECLARE_UNPRIVILEGED_LOAD_FUNCTION(ulong, lw)
-
-static inline u64 load_u64(const u64 *addr)
-{
-	return load_u32((u32 *)addr)
-		+ ((u64)load_u32((u32 *)addr + 1) << 32);
+	return val;
 }

-static inline void store_u64(u64 *addr, u64 val)
+static inline void store_u8(u8 *addr, u8 val)
 {
-	store_u32((u32 *)addr, val);
-	store_u32((u32 *)addr + 1, val >> 32);
+	asm volatile ("sb %0, %1\n" : : "r" (val), "m" (*addr));
 }
-#endif

 static inline ulong get_insn(ulong mepc)
 {
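[Editorial illustration, not part of the series: the byte-wise helpers kept by this
patch are what the emulation code builds on. A misaligned load of N bytes is
assembled from N single-byte accesses, mirroring the val.data_bytes[] loop in
handle_misaligned_load(); the sketch below is plain userspace C, with a
hypothetical emulate_load() name, shown only to make the idea concrete.]

/* Sketch: assemble a little-endian value of `len` bytes (1 <= len <= 8)
 * from individual byte loads, the same way the kernel handler fills
 * val.data_bytes[] via load_u8() so each byte access can fault on its own. */
static unsigned long long emulate_load(const unsigned char *addr, int len)
{
	unsigned long long val = 0;
	int i;

	for (i = 0; i < len; i++)
		val |= (unsigned long long)addr[i] << (8 * i);
	return val;
}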
From patchwork Wed Oct 4 15:13:59 2023
X-Patchwork-Id: 13408887
From: Clément Léger <cleger@rivosinc.com>
To: Paul Walmsley, Palmer Dabbelt, Albert Ou
Cc: Clément Léger, Atish Patra, Andrew Jones, Evan Green, Björn Töpel,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    Ron Minnich, Daniel Maslowski, Conor Dooley
Subject: [PATCH v2 2/8] riscv: add support for misaligned trap handling in S-mode
Date: Wed, 4 Oct 2023 17:13:59 +0200
Message-ID: <20231004151405.521596-3-cleger@rivosinc.com>
In-Reply-To: <20231004151405.521596-1-cleger@rivosinc.com>

Misaligned trap handling is currently only supported for M-mode and uses
direct accesses to user memory. In S-mode, handling a usermode fault requires
the get_user()/put_user() accessors. Implement load_u8(), store_u8() and
get_insn() so that they use these accessors for userspace and direct accesses
for kernel memory and text.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Björn Töpel
---
 arch/riscv/Kconfig                    |   8 ++
 arch/riscv/include/asm/entry-common.h |  14 +++
 arch/riscv/kernel/Makefile            |   2 +-
 arch/riscv/kernel/traps.c             |   9 --
 arch/riscv/kernel/traps_misaligned.c  | 119 +++++++++++++++++++++++---
 5 files changed, 129 insertions(+), 23 deletions(-)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index d607ab0f7c6d..6e167358a897 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -636,6 +636,14 @@ config THREAD_SIZE_ORDER
 	  Specify the Pages of thread stack size (from 4KB to 64KB), which also
 	  affects irq stack size, which is equal to thread stack size.

+config RISCV_MISALIGNED
+	bool "Support misaligned load/store traps for kernel and userspace"
+	default y
+	help
+	  Say Y here if you want the kernel to embed support for misaligned
+	  load/store for both kernel and userspace. When disabled, misaligned
+	  accesses will generate SIGBUS in userspace and panic in kernel.
+
 endmenu # "Platform type"

 menu "Kernel features"

diff --git a/arch/riscv/include/asm/entry-common.h b/arch/riscv/include/asm/entry-common.h
index 6e4dee49d84b..7ab5e34318c8 100644
--- a/arch/riscv/include/asm/entry-common.h
+++ b/arch/riscv/include/asm/entry-common.h
@@ -8,4 +8,18 @@
 void handle_page_fault(struct pt_regs *regs);
 void handle_break(struct pt_regs *regs);

+#ifdef CONFIG_RISCV_MISALIGNED
+int handle_misaligned_load(struct pt_regs *regs);
+int handle_misaligned_store(struct pt_regs *regs);
+#else
+static inline int handle_misaligned_load(struct pt_regs *regs)
+{
+	return -1;
+}
+static inline int handle_misaligned_store(struct pt_regs *regs)
+{
+	return -1;
+}
+#endif
+
 #endif /* _ASM_RISCV_ENTRY_COMMON_H */

diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
index 95cf25d48405..0d874fb24b51 100644
--- a/arch/riscv/kernel/Makefile
+++ b/arch/riscv/kernel/Makefile
@@ -59,7 +59,7 @@ obj-y += patch.o
 obj-y += probes/
 obj-$(CONFIG_MMU) += vdso.o vdso/

-obj-$(CONFIG_RISCV_M_MODE)	+= traps_misaligned.o
+obj-$(CONFIG_RISCV_MISALIGNED)	+= traps_misaligned.o
 obj-$(CONFIG_FPU)		+= fpu.o
 obj-$(CONFIG_RISCV_ISA_V)	+= vector.o
 obj-$(CONFIG_SMP)		+= smpboot.o

diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
index 19807c4d3805..d69779e4b967 100644
--- a/arch/riscv/kernel/traps.c
+++ b/arch/riscv/kernel/traps.c
@@ -179,14 +179,6 @@ asmlinkage __visible __trap_section void do_trap_insn_illegal(struct pt_regs *re
 DO_ERROR_INFO(do_trap_load_fault,
 	SIGSEGV, SEGV_ACCERR, "load access fault");
-#ifndef CONFIG_RISCV_M_MODE
-DO_ERROR_INFO(do_trap_load_misaligned,
-	SIGBUS, BUS_ADRALN, "Oops - load address misaligned");
-DO_ERROR_INFO(do_trap_store_misaligned,
-	SIGBUS, BUS_ADRALN, "Oops - store (or AMO) address misaligned");
-#else
-int handle_misaligned_load(struct pt_regs *regs);
-int handle_misaligned_store(struct pt_regs *regs);
 asmlinkage __visible __trap_section void do_trap_load_misaligned(struct pt_regs *regs)
 {
@@ -229,7 +221,6 @@ asmlinkage __visible __trap_section void do_trap_store_misaligned(struct pt_regs
 		irqentry_nmi_exit(regs, state);
 	}
 }
-#endif

 DO_ERROR_INFO(do_trap_store_fault,
 	SIGSEGV, SEGV_ACCERR, "store (or AMO) access fault");
 DO_ERROR_INFO(do_trap_ecall_s,

diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
index e7bfb33089c1..9daed7d756ae 100644
--- a/arch/riscv/kernel/traps_misaligned.c
+++ b/arch/riscv/kernel/traps_misaligned.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include

 #define INSN_MATCH_LB 0x3
 #define INSN_MASK_LB 0x707f
@@ -151,21 +152,25 @@
 #define PRECISION_S 0
 #define PRECISION_D 1

-static inline u8 load_u8(const u8 *addr)
+#ifdef CONFIG_RISCV_M_MODE
+static inline int load_u8(struct pt_regs *regs, const u8 *addr, u8 *r_val)
 {
 	u8 val;

 	asm volatile("lbu %0, %1" : "=&r" (val) : "m" (*addr));
+	*r_val = val;

-	return val;
+	return 0;
 }

-static inline void store_u8(u8 *addr, u8 val)
+static inline int store_u8(struct pt_regs *regs, u8 *addr, u8 val)
 {
 	asm volatile ("sb %0, %1\n" : : "r" (val), "m" (*addr));
+
+	return 0;
 }

-static inline ulong get_insn(ulong mepc)
+static inline int get_insn(struct pt_regs *regs, ulong mepc, ulong *r_insn)
 {
 	register ulong __mepc asm ("a2") = mepc;
 	ulong val, rvc_mask = 3, tmp;
@@ -194,9 +199,87 @@ static inline ulong get_insn(ulong mepc)
 	: [addr] "r" (__mepc), [rvc_mask] "r" (rvc_mask),
 	  [xlen_minus_16] "i" (XLEN_MINUS_16));

-	return val;
+	*r_insn = val;
+
+	return 0;
+}
+#else
+static inline int load_u8(struct pt_regs *regs, const u8 *addr, u8 *r_val)
+{
+	if (user_mode(regs)) {
+		return __get_user(*r_val, addr);
+	} else {
+		*r_val = *addr;
+		return 0;
+	}
+}
+
+static inline int store_u8(struct pt_regs *regs, u8 *addr, u8 val)
+{
+	if (user_mode(regs)) {
+		return __put_user(val, addr);
+	} else {
+		*addr = val;
+		return 0;
+	}
+}
+
+#define __read_insn(regs, insn, insn_addr)		\
+({							\
+	int __ret;					\
+							\
+	if (user_mode(regs)) {				\
+		__ret = __get_user(insn, insn_addr);	\
+	} else {					\
+		insn = *insn_addr;			\
+		__ret = 0;				\
+	}						\
+							\
+	__ret;						\
+})
+
+static inline int get_insn(struct pt_regs *regs, ulong epc, ulong *r_insn)
+{
+	ulong insn = 0;
+
+	if (epc & 0x2) {
+		ulong tmp = 0;
+		u16 __user *insn_addr = (u16 __user *)epc;
+
+		if (__read_insn(regs, insn, insn_addr))
+			return -EFAULT;
+		/* __get_user() uses regular "lw" which sign extends the loaded
+		 * value; make sure to clear the higher order bits in case we
+		 * "or" it below with the upper 16 bits half.
+		 */
+		insn &= GENMASK(15, 0);
+		if ((insn & __INSN_LENGTH_MASK) != __INSN_LENGTH_32) {
+			*r_insn = insn;
+			return 0;
+		}
+		insn_addr++;
+		if (__read_insn(regs, tmp, insn_addr))
+			return -EFAULT;
+		*r_insn = (tmp << 16) | insn;
+
+		return 0;
+	} else {
+		u32 __user *insn_addr = (u32 __user *)epc;
+
+		if (__read_insn(regs, insn, insn_addr))
+			return -EFAULT;
+		if ((insn & __INSN_LENGTH_MASK) == __INSN_LENGTH_32) {
+			*r_insn = insn;
+			return 0;
+		}
+		insn &= GENMASK(15, 0);
+		*r_insn = insn;

+		return 0;
+	}
+}
+#endif
+
 union reg_data {
 	u8 data_bytes[8];
 	ulong data_ulong;
@@ -207,10 +290,13 @@ int handle_misaligned_load(struct pt_regs *regs)
 {
 	union reg_data val;
 	unsigned long epc = regs->epc;
-	unsigned long insn = get_insn(epc);
-	unsigned long addr = csr_read(mtval);
+	unsigned long insn;
+	unsigned long addr = regs->badaddr;
 	int i, fp = 0, shift = 0, len = 0;

+	if (get_insn(regs, epc, &insn))
+		return -1;
+
 	regs->epc = 0;

 	if ((insn & INSN_MASK_LW) == INSN_MATCH_LW) {
@@ -274,8 +360,10 @@ int handle_misaligned_load(struct pt_regs *regs)
 	}

 	val.data_u64 = 0;
-	for (i = 0; i < len; i++)
-		val.data_bytes[i] = load_u8((void *)(addr + i));
+	for (i = 0; i < len; i++) {
+		if (load_u8(regs, (void *)(addr + i), &val.data_bytes[i]))
+			return -1;
+	}

 	if (fp)
 		return -1;
@@ -290,10 +378,13 @@ int handle_misaligned_store(struct pt_regs *regs)
 {
 	union reg_data val;
 	unsigned long epc = regs->epc;
-	unsigned long insn = get_insn(epc);
-	unsigned long addr = csr_read(mtval);
+	unsigned long insn;
+	unsigned long addr = regs->badaddr;
 	int i, len = 0;

+	if (get_insn(regs, epc, &insn))
+		return -1;
+
 	regs->epc = 0;

 	val.data_ulong = GET_RS2(insn, regs);
@@ -327,8 +418,10 @@
 		return -1;
 	}

-	for (i = 0; i < len; i++)
-		store_u8((void *)(addr + i), val.data_bytes[i]);
+	for (i = 0; i < len; i++) {
+		if (store_u8(regs, (void *)(addr + i), val.data_bytes[i]))
+			return -1;
+	}

 	regs->epc = epc + INSN_LEN(insn);
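[Editorial illustration, not part of the patch: a minimal userspace test that
exercises the S-mode emulation path added above. On a hart that traps on
misaligned accesses, each dereference below enters handle_misaligned_load()/
handle_misaligned_store(); with CONFIG_RISCV_MISALIGNED disabled it would get
SIGBUS instead. The file name and behavior are assumptions for the sketch.]

/* misaligned-test.c: deliberately misaligned loads and stores. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	char buf[16] = { 0 };
	/* Pointers into the middle of buf, guaranteed misaligned. */
	volatile uint32_t *p32 = (volatile uint32_t *)(buf + 1);
	volatile uint64_t *p64 = (volatile uint64_t *)(buf + 3);

	*p32 = 0x12345678;             /* misaligned 32-bit store */
	*p64 = 0x1122334455667788ULL;  /* misaligned 64-bit store */
	printf("0x%x 0x%llx\n", *p32,
	       (unsigned long long)*p64); /* misaligned loads */
	return 0;
}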
From patchwork Wed Oct 4 15:14:00 2023
X-Patchwork-Id: 13408886
From: Clément Léger <cleger@rivosinc.com>
To: Paul Walmsley, Palmer Dabbelt, Albert Ou
Cc: Clément Léger, Atish Patra, Andrew Jones, Evan Green, Björn Töpel,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    Ron Minnich, Daniel Maslowski, Conor Dooley
Subject: [PATCH v2 3/8] riscv: report perf event for misaligned fault
Date: Wed, 4 Oct 2023 17:14:00 +0200
Message-ID: <20231004151405.521596-4-cleger@rivosinc.com>
In-Reply-To: <20231004151405.521596-1-cleger@rivosinc.com>

Add the missing perf_sw_event() calls so that misaligned access faults are
accounted as PERF_COUNT_SW_ALIGNMENT_FAULTS events.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Björn Töpel
---
 arch/riscv/kernel/traps_misaligned.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
index 9daed7d756ae..804f6c5e0e44 100644
--- a/arch/riscv/kernel/traps_misaligned.c
+++ b/arch/riscv/kernel/traps_misaligned.c
@@ -6,6 +6,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -294,6 +295,8 @@ int handle_misaligned_load(struct pt_regs *regs)
 	unsigned long addr = regs->badaddr;
 	int i, fp = 0, shift = 0, len = 0;

+	perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, addr);
+
 	if (get_insn(regs, epc, &insn))
 		return -1;
@@ -382,6 +385,8 @@ int handle_misaligned_store(struct pt_regs *regs)
 	unsigned long addr = regs->badaddr;
 	int i, len = 0;

+	perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, addr);
+
 	if (get_insn(regs, epc, &insn))
 		return -1;
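[Editorial illustration, not from the series: once this patch is applied, the
events can be observed with the standard perf software counter; the sketch
below uses only the documented perf_event_open() interface, and the misaligned
access it performs is a hypothetical test scenario. The same counter is
normally also reachable as the "alignment-faults" event in perf tooling.]

/* Count alignment faults taken by this process. */
#include <linux/perf_event.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	struct perf_event_attr attr;
	long long count = 0;
	char buf[16] = { 0 };
	volatile uint64_t *p = (volatile uint64_t *)(buf + 1); /* misaligned */
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_SOFTWARE;
	attr.config = PERF_COUNT_SW_ALIGNMENT_FAULTS;
	attr.disabled = 1;

	/* Measure the current task on any CPU. */
	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0)
		return 1;

	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	*p = 42;	/* traps and is counted when emulation is in effect */
	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

	read(fd, &count, sizeof(count));
	printf("alignment faults: %lld\n", count);
	return 0;
}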
From patchwork Wed Oct 4 15:14:01 2023
X-Patchwork-Id: 13408889
From: Clément Léger <cleger@rivosinc.com>
To: Paul Walmsley, Palmer Dabbelt, Albert Ou
Cc: Clément Léger, Atish Patra, Andrew Jones, Evan Green, Björn Töpel,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    Ron Minnich, Daniel Maslowski, Conor Dooley
Subject: [PATCH v2 4/8] riscv: add floating point insn support to misaligned access emulation
Date: Wed, 4 Oct 2023 17:14:01 +0200
Message-ID: <20231004151405.521596-5-cleger@rivosinc.com>
In-Reply-To: <20231004151405.521596-1-cleger@rivosinc.com>

This support is partially based on the OpenSBI misaligned emulation of
floating-point instructions. It handles the existing floating-point load/store
instructions (both 32-bit and 64-bit forms as well as the compressed ones).
Since floating-point registers are not part of struct pt_regs, they have to be
modified directly using a bit of assembly. The pt_regs status is also marked
dirty whenever an FP register is modified so that the context switch code
saves the FP state. With this support, Linux is on par with OpenSBI.
Signed-off-by: Clément Léger --- arch/riscv/kernel/fpu.S | 121 +++++++++++++++++++++ arch/riscv/kernel/traps_misaligned.c | 152 ++++++++++++++++++++++++++- 2 files changed, 269 insertions(+), 4 deletions(-) diff --git a/arch/riscv/kernel/fpu.S b/arch/riscv/kernel/fpu.S index dd2205473de7..5dd3161a4dac 100644 --- a/arch/riscv/kernel/fpu.S +++ b/arch/riscv/kernel/fpu.S @@ -104,3 +104,124 @@ ENTRY(__fstate_restore) csrc CSR_STATUS, t1 ret ENDPROC(__fstate_restore) + +#define get_f32(which) fmv.x.s a0, which; j 2f +#define put_f32(which) fmv.s.x which, a1; j 2f +#if __riscv_xlen == 64 +# define get_f64(which) fmv.x.d a0, which; j 2f +# define put_f64(which) fmv.d.x which, a1; j 2f +#else +# define get_f64(which) fsd which, 0(a1); j 2f +# define put_f64(which) fld which, 0(a1); j 2f +#endif + +.macro fp_access_prologue + /* + * Compute jump offset to store the correct FP register since we don't + * have indirect FP register access + */ + sll t0, a0, 3 + la t2, 1f + add t0, t0, t2 + li t1, SR_FS + csrs CSR_STATUS, t1 + jr t0 +1: +.endm + +.macro fp_access_epilogue +2: + csrc CSR_STATUS, t1 + ret +.endm + +#define fp_access_body(__access_func) \ + __access_func(f0); \ + __access_func(f1); \ + __access_func(f2); \ + __access_func(f3); \ + __access_func(f4); \ + __access_func(f5); \ + __access_func(f6); \ + __access_func(f7); \ + __access_func(f8); \ + __access_func(f9); \ + __access_func(f10); \ + __access_func(f11); \ + __access_func(f12); \ + __access_func(f13); \ + __access_func(f14); \ + __access_func(f15); \ + __access_func(f16); \ + __access_func(f17); \ + __access_func(f18); \ + __access_func(f19); \ + __access_func(f20); \ + __access_func(f21); \ + __access_func(f22); \ + __access_func(f23); \ + __access_func(f24); \ + __access_func(f25); \ + __access_func(f26); \ + __access_func(f27); \ + __access_func(f28); \ + __access_func(f29); \ + __access_func(f30); \ + __access_func(f31) + + +#ifdef CONFIG_RISCV_MISALIGNED + +/* + * Disable compressed instructions set to keep a constant offset between FP + * load/store/move instructions + */ +.option norvc +/* + * put_f32_reg - Set a FP register from a register containing the value + * a0 = FP register index to be set + * a1 = value to be loaded in the FP register + */ +SYM_FUNC_START(put_f32_reg) + fp_access_prologue + fp_access_body(put_f32) + fp_access_epilogue +SYM_FUNC_END(put_f32_reg) + +/* + * get_f32_reg - Get a FP register value and return it + * a0 = FP register index to be retrieved + */ +SYM_FUNC_START(get_f32_reg) + fp_access_prologue + fp_access_body(get_f32) + fp_access_epilogue +SYM_FUNC_END(get_f32_reg) + +/* + * put_f64_reg - Set a 64 bits FP register from a value or a pointer. + * a0 = FP register index to be set + * a1 = value/pointer to be loaded in the FP register (when xlen == 32 bits, we + * load the value to a pointer). + */ +SYM_FUNC_START(put_f64_reg) + fp_access_prologue + fp_access_body(put_f64) + fp_access_epilogue +SYM_FUNC_END(put_f64_reg) + +/* + * put_f64_reg - Get a 64 bits FP register value and returned it or store it to + * a pointer. + * a0 = FP register index to be retrieved + * a1 = If xlen == 32, pointer which should be loaded with the FP register value + * or unused if xlen == 64. 
In which case the FP register value is returned + * through a0 + */ +SYM_FUNC_START(get_f64_reg) + fp_access_prologue + fp_access_body(get_f64) + fp_access_epilogue +SYM_FUNC_END(get_f64_reg) + +#endif /* CONFIG_RISCV_MISALIGNED */ diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c index 804f6c5e0e44..041fd2dbd955 100644 --- a/arch/riscv/kernel/traps_misaligned.c +++ b/arch/riscv/kernel/traps_misaligned.c @@ -153,6 +153,115 @@ #define PRECISION_S 0 #define PRECISION_D 1 +#ifdef CONFIG_FPU + +#define FP_GET_RD(insn) (insn >> 7 & 0x1F) + +extern void put_f32_reg(unsigned long fp_reg, unsigned long value); + +static int set_f32_rd(unsigned long insn, struct pt_regs *regs, + unsigned long val) +{ + unsigned long fp_reg = FP_GET_RD(insn); + + put_f32_reg(fp_reg, val); + regs->status |= SR_FS_DIRTY; + + return 0; +} + +extern void put_f64_reg(unsigned long fp_reg, unsigned long value); + +static int set_f64_rd(unsigned long insn, struct pt_regs *regs, u64 val) +{ + unsigned long fp_reg = FP_GET_RD(insn); + unsigned long value; + +#if __riscv_xlen == 32 + value = (unsigned long) &val; +#else + value = val; +#endif + put_f64_reg(fp_reg, value); + regs->status |= SR_FS_DIRTY; + + return 0; +} + +#if __riscv_xlen == 32 +extern void get_f64_reg(unsigned long fp_reg, u64 *value); + +static u64 get_f64_rs(unsigned long insn, u8 fp_reg_offset, + struct pt_regs *regs) +{ + unsigned long fp_reg = (insn >> fp_reg_offset) & 0x1F; + u64 val; + + get_f64_reg(fp_reg, &val); + regs->status |= SR_FS_DIRTY; + + return val; +} +#else + +extern unsigned long get_f64_reg(unsigned long fp_reg); + +static unsigned long get_f64_rs(unsigned long insn, u8 fp_reg_offset, + struct pt_regs *regs) +{ + unsigned long fp_reg = (insn >> fp_reg_offset) & 0x1F; + unsigned long val; + + val = get_f64_reg(fp_reg); + regs->status |= SR_FS_DIRTY; + + return val; +} + +#endif + +extern unsigned long get_f32_reg(unsigned long fp_reg); + +static unsigned long get_f32_rs(unsigned long insn, u8 fp_reg_offset, + struct pt_regs *regs) +{ + unsigned long fp_reg = (insn >> fp_reg_offset) & 0x1F; + unsigned long val; + + val = get_f32_reg(fp_reg); + regs->status |= SR_FS_DIRTY; + + return val; +} + +#else /* CONFIG_FPU */ +static void set_f32_rd(unsigned long insn, struct pt_regs *regs, + unsigned long val) {} + +static void set_f64_rd(unsigned long insn, struct pt_regs *regs, u64 val) {} + +static unsigned long get_f64_rs(unsigned long insn, u8 fp_reg_offset, + struct pt_regs *regs) +{ + return 0; +} + +static unsigned long get_f32_rs(unsigned long insn, u8 fp_reg_offset, + struct pt_regs *regs) +{ + return 0; +} + +#endif + +#define GET_F64_RS2(insn, regs) (get_f64_rs(insn, 20, regs)) +#define GET_F64_RS2C(insn, regs) (get_f64_rs(insn, 2, regs)) +#define GET_F64_RS2S(insn, regs) (get_f64_rs(RVC_RS2S(insn), 0, regs)) + +#define GET_F32_RS2(insn, regs) (get_f32_rs(insn, 20, regs)) +#define GET_F32_RS2C(insn, regs) (get_f32_rs(insn, 2, regs)) +#define GET_F32_RS2S(insn, regs) (get_f32_rs(RVC_RS2S(insn), 0, regs)) + #ifdef CONFIG_RISCV_M_MODE static inline int load_u8(struct pt_regs *regs, const u8 *addr, u8 *r_val) { @@ -362,15 +471,21 @@ int handle_misaligned_load(struct pt_regs *regs) return -1; } + if (!IS_ENABLED(CONFIG_FPU) && fp) + return -EOPNOTSUPP; + val.data_u64 = 0; for (i = 0; i < len; i++) { if (load_u8(regs, (void *)(addr + i), &val.data_bytes[i])) return -1; } - if (fp) - return -1; - SET_RD(insn, regs, val.data_ulong << shift >> shift); + if (!fp) + SET_RD(insn, regs, val.data_ulong << shift 
>> shift);
+	else if (len == 8)
+		set_f64_rd(insn, regs, val.data_u64);
+	else
+		set_f32_rd(insn, regs, val.data_ulong);

 	regs->epc = epc + INSN_LEN(insn);

@@ -383,7 +498,7 @@ int handle_misaligned_store(struct pt_regs *regs)
 	unsigned long epc = regs->epc;
 	unsigned long insn;
 	unsigned long addr = regs->badaddr;
-	int i, len = 0;
+	int i, len = 0, fp = 0;

 	perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, addr);

@@ -400,6 +515,14 @@
 	} else if ((insn & INSN_MASK_SD) == INSN_MATCH_SD) {
 		len = 8;
 #endif
+	} else if ((insn & INSN_MASK_FSD) == INSN_MATCH_FSD) {
+		fp = 1;
+		len = 8;
+		val.data_u64 = GET_F64_RS2(insn, regs);
+	} else if ((insn & INSN_MASK_FSW) == INSN_MATCH_FSW) {
+		fp = 1;
+		len = 4;
+		val.data_ulong = GET_F32_RS2(insn, regs);
 	} else if ((insn & INSN_MASK_SH) == INSN_MATCH_SH) {
 		len = 2;
 #if defined(CONFIG_64BIT)
@@ -418,11 +541,32 @@
 		   ((insn >> SH_RD) & 0x1f)) {
 		len = 4;
 		val.data_ulong = GET_RS2C(insn, regs);
+	} else if ((insn & INSN_MASK_C_FSD) == INSN_MATCH_C_FSD) {
+		fp = 1;
+		len = 8;
+		val.data_u64 = GET_F64_RS2S(insn, regs);
+	} else if ((insn & INSN_MASK_C_FSDSP) == INSN_MATCH_C_FSDSP) {
+		fp = 1;
+		len = 8;
+		val.data_u64 = GET_F64_RS2C(insn, regs);
+#if !defined(CONFIG_64BIT)
+	} else if ((insn & INSN_MASK_C_FSW) == INSN_MATCH_C_FSW) {
+		fp = 1;
+		len = 4;
+		val.data_ulong = GET_F32_RS2S(insn, regs);
+	} else if ((insn & INSN_MASK_C_FSWSP) == INSN_MATCH_C_FSWSP) {
+		fp = 1;
+		len = 4;
+		val.data_ulong = GET_F32_RS2C(insn, regs);
+#endif
 	} else {
 		regs->epc = epc;
 		return -1;
 	}

+	if (!IS_ENABLED(CONFIG_FPU) && fp)
+		return -EOPNOTSUPP;
+
 	for (i = 0; i < len; i++) {
 		if (store_u8(regs, (void *)(addr + i), val.data_bytes[i]))
 			return -1;
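[Editorial illustration, not part of the patch: a userspace sketch of the kind
of access this patch makes survivable. Assumption: the toolchain emits a plain
fld/fsd for the dereference (the usual case on riscv64 unless -mstrict-align
is used); with this patch the resulting misaligned FP access is emulated
instead of leaving the process with an unhandled fault.]

#include <stdio.h>

int main(void)
{
	char buf[16] = { 0 };
	/* Offset 1 is misaligned for an 8-byte FP access. */
	volatile double *pd = (volatile double *)(buf + 1);

	*pd = 3.5;              /* misaligned fsd, emulated via store_u8() */
	printf("%f\n", *pd);    /* misaligned fld, emulated via load_u8() */
	return 0;
}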
From patchwork Wed Oct 4 15:14:02 2023
X-Patchwork-Id: 13408888
From: Clément Léger <cleger@rivosinc.com>
To: Paul Walmsley, Palmer Dabbelt, Albert Ou
Cc: Clément Léger, Atish Patra, Andrew Jones, Evan Green, Björn Töpel,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    Ron Minnich, Daniel Maslowski, Conor Dooley
Subject: [PATCH v2 5/8] riscv: add support for sysctl unaligned_enabled control
Date: Wed, 4 Oct 2023 17:14:02 +0200
Message-ID: <20231004151405.521596-6-cleger@rivosinc.com>
In-Reply-To: <20231004151405.521596-1-cleger@rivosinc.com>

This sysctl tuning option allows the user to disable misaligned access
handling globally on the system. It will also be used by the misaligned access
detection code to temporarily disable misaligned access handling.
Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Björn Töpel
---
 arch/riscv/Kconfig                   | 1 +
 arch/riscv/kernel/traps_misaligned.c | 9 +++++++++
 2 files changed, 10 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 6e167358a897..1313f83bb0cb 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -638,6 +638,7 @@ config THREAD_SIZE_ORDER

 config RISCV_MISALIGNED
 	bool "Support misaligned load/store traps for kernel and userspace"
+	select SYSCTL_ARCH_UNALIGN_ALLOW
 	default y
 	help
 	  Say Y here if you want the kernel to embed support for misaligned

diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
index 041fd2dbd955..b5fb1ff078e3 100644
--- a/arch/riscv/kernel/traps_misaligned.c
+++ b/arch/riscv/kernel/traps_misaligned.c
@@ -396,6 +396,9 @@ union reg_data {
 	u64 data_u64;
 };

+/* sysctl hooks */
+int unaligned_enabled __read_mostly = 1;	/* Enabled by default */
+
 int handle_misaligned_load(struct pt_regs *regs)
 {
 	union reg_data val;
@@ -406,6 +409,9 @@ int handle_misaligned_load(struct pt_regs *regs)

 	perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, addr);

+	if (!unaligned_enabled)
+		return -1;
+
 	if (get_insn(regs, epc, &insn))
 		return -1;
@@ -502,6 +508,9 @@ int handle_misaligned_store(struct pt_regs *regs)

 	perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, addr);

+	if (!unaligned_enabled)
+		return -1;
+
 	if (get_insn(regs, epc, &insn))
 		return -1;
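[Editorial illustration, not part of the patch: how the control would typically
be toggled from userspace. Assumption: SYSCTL_ARCH_UNALIGN_ALLOW exposes
unaligned_enabled as /proc/sys/kernel/unaligned-trap, as on other architectures
that select it; writing "0" disables the emulation (misaligned userspace
accesses then get SIGBUS), writing "1" re-enables it.]

#include <stdio.h>

int main(int argc, char **argv)
{
	FILE *f = fopen("/proc/sys/kernel/unaligned-trap", "w");

	if (!f) {
		perror("unaligned-trap");
		return 1;
	}
	/* e.g. "./unaligned-ctl 0" to disable, "./unaligned-ctl 1" to enable */
	fputs(argc > 1 ? argv[1] : "1", f);
	fclose(f);
	return 0;
}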
From patchwork Wed Oct 4 15:14:03 2023
X-Patchwork-Id: 13408890
From: Clément Léger <cleger@rivosinc.com>
To: Paul Walmsley, Palmer Dabbelt, Albert Ou
Cc: Clément Léger, Atish Patra, Andrew Jones, Evan Green, Björn Töpel,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    Ron Minnich, Daniel Maslowski, Conor Dooley
Subject: [PATCH v2 6/8] riscv: annotate check_unaligned_access_boot_cpu() with __init
Date: Wed, 4 Oct 2023 17:14:03 +0200
Message-ID: <20231004151405.521596-7-cleger@rivosinc.com>
In-Reply-To: <20231004151405.521596-1-cleger@rivosinc.com>

This function is only called as an initcall, so annotate it with __init.
Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Evan Green
---
 arch/riscv/kernel/cpufeature.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index 1cfbba65d11a..356e5677eeb1 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -645,7 +645,7 @@ void check_unaligned_access(int cpu)
 	__free_pages(page, get_order(MISALIGNED_BUFFER_SIZE));
 }

-static int check_unaligned_access_boot_cpu(void)
+static int __init check_unaligned_access_boot_cpu(void)
 {
 	check_unaligned_access(0);
 	return 0;
From patchwork Wed Oct 4 15:14:04 2023
X-Patchwork-Id: 13408892
From: Clément Léger <cleger@rivosinc.com>
To: Paul Walmsley, Palmer Dabbelt, Albert Ou
Cc: Clément Léger, Atish Patra, Andrew Jones, Evan Green, Björn Töpel,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    Ron Minnich, Daniel Maslowski, Conor Dooley
Subject: [PATCH v2 7/8] riscv: report misaligned accesses emulation to hwprobe
Date: Wed, 4 Oct 2023 17:14:04 +0200
Message-ID: <20231004151405.521596-8-cleger@rivosinc.com>
In-Reply-To: <20231004151405.521596-1-cleger@rivosinc.com>

hwprobe provides a way to report whether misaligned accesses are emulated. In
order to correctly populate that feature, we can check whether a misaligned
access actually traps when performed. This can be checked using an exception
table entry which will actually be used when a misaligned access is done from
kernel mode.
Signed-off-by: Clément Léger --- arch/riscv/include/asm/cpufeature.h | 18 +++++++++ arch/riscv/kernel/cpufeature.c | 4 ++ arch/riscv/kernel/smpboot.c | 2 +- arch/riscv/kernel/traps_misaligned.c | 56 ++++++++++++++++++++++++++++ 4 files changed, 79 insertions(+), 1 deletion(-) diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h index d0345bd659c9..e4ae6af51876 100644 --- a/arch/riscv/include/asm/cpufeature.h +++ b/arch/riscv/include/asm/cpufeature.h @@ -32,4 +32,22 @@ extern struct riscv_isainfo hart_isa[NR_CPUS]; void check_unaligned_access(int cpu); +#ifdef CONFIG_RISCV_MISALIGNED +bool unaligned_ctl_available(void); +bool check_unaligned_access_emulated(int cpu); +void unaligned_emulation_finish(void); +#else +static inline bool unaligned_ctl_available(void) +{ + return false; +} + +static inline bool check_unaligned_access_emulated(int cpu) +{ + return false; +} + +static inline void unaligned_emulation_finish(void) {} +#endif + #endif diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c index 356e5677eeb1..fbbde800bc21 100644 --- a/arch/riscv/kernel/cpufeature.c +++ b/arch/riscv/kernel/cpufeature.c @@ -568,6 +568,9 @@ void check_unaligned_access(int cpu) void *src; long speed = RISCV_HWPROBE_MISALIGNED_SLOW; + if (check_unaligned_access_emulated(cpu)) + return; + page = alloc_pages(GFP_NOWAIT, get_order(MISALIGNED_BUFFER_SIZE)); if (!page) { pr_warn("Can't alloc pages to measure memcpy performance"); @@ -648,6 +651,7 @@ void check_unaligned_access(int cpu) static int __init check_unaligned_access_boot_cpu(void) { check_unaligned_access(0); + unaligned_emulation_finish(); return 0; } diff --git a/arch/riscv/kernel/smpboot.c b/arch/riscv/kernel/smpboot.c index 1b8da4e40a4d..5d9858d6ad26 100644 --- a/arch/riscv/kernel/smpboot.c +++ b/arch/riscv/kernel/smpboot.c @@ -245,8 +245,8 @@ asmlinkage __visible void smp_callin(void) riscv_ipi_enable(); numa_add_cpu(curr_cpuid); - set_cpu_online(curr_cpuid, 1); check_unaligned_access(curr_cpuid); + set_cpu_online(curr_cpuid, 1); if (has_vector()) { if (riscv_v_setup_vsize()) diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c index b5fb1ff078e3..d99b95084b6c 100644 --- a/arch/riscv/kernel/traps_misaligned.c +++ b/arch/riscv/kernel/traps_misaligned.c @@ -14,6 +14,8 @@ #include #include #include +#include +#include #define INSN_MATCH_LB 0x3 #define INSN_MASK_LB 0x707f @@ -396,6 +398,8 @@ union reg_data { u64 data_u64; }; +static bool unaligned_ctl __read_mostly; + /* sysctl hooks */ int unaligned_enabled __read_mostly = 1; /* Enabled by default */ @@ -409,6 +413,8 @@ int handle_misaligned_load(struct pt_regs *regs) perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, addr); + *this_cpu_ptr(&misaligned_access_speed) = RISCV_HWPROBE_MISALIGNED_EMULATED; + if (!unaligned_enabled) return -1; @@ -585,3 +591,53 @@ int handle_misaligned_store(struct pt_regs *regs) return 0; } + +bool check_unaligned_access_emulated(int cpu) +{ + long *mas_ptr = per_cpu_ptr(&misaligned_access_speed, cpu); + unsigned long tmp_var, tmp_val; + bool misaligned_emu_detected; + + *mas_ptr = RISCV_HWPROBE_MISALIGNED_UNKNOWN; + + __asm__ __volatile__ ( + " "REG_L" %[tmp], 1(%[ptr])\n" + : [tmp] "=r" (tmp_val) : [ptr] "r" (&tmp_var) : "memory"); + + misaligned_emu_detected = (*mas_ptr == RISCV_HWPROBE_MISALIGNED_EMULATED); + /* + * If unaligned_ctl is already set, this means that we detected that all + * CPUS uses emulated misaligned access at boot time. 
If that changed + * when hotplugging the new cpu, this is something we don't handle. + */ + if (unlikely(unaligned_ctl && !misaligned_emu_detected)) { + pr_crit("CPU misaligned accesses non homogeneous (expected all emulated)\n"); + while (true) + cpu_relax(); + } + + return misaligned_emu_detected; +} + +void __init unaligned_emulation_finish(void) +{ + int cpu; + + /* + * We can only support PR_UNALIGN controls if all CPUs have misaligned + * accesses emulated since tasks requesting such control can run on any + * CPU. + */ + for_each_present_cpu(cpu) { + if (per_cpu(misaligned_access_speed, cpu) != + RISCV_HWPROBE_MISALIGNED_EMULATED) { + return; + } + } + unaligned_ctl = true; +} + +bool unaligned_ctl_available(void) +{ + return unaligned_ctl; +} From patchwork Wed Oct 4 15:14:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?b?Q2zDqW1lbnQgTMOpZ2Vy?= X-Patchwork-Id: 13408896 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id BA7D8E7C4D4 for ; Wed, 4 Oct 2023 15:14:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=V+04qVmmI07cK3Fna6WMPuulFnzZU7F5MvjN1RerLdg=; b=vkDbnOqshklSXI 9rzuRhTGK1UZEtAmsBwFsWzXxLSrN1ElNgx3G6xpSX8/VmTzXERYzRFEXbhCwnUx55ITsyZwxSNDX A2+YWLSfr1XI/Pu79QLvkpVFYcpnQ7W373GsA+gGX91ivRCB+3ylOSdtq6tmAxnT8tsSKR5uE8IDb u2MmD1gJgm7+hndsCxFxui+4xrnsTmtZKxVf3ipjlhSXnOf4ZDZQgxANh2Q55bPiyMUNCS6YI1WdM pZ8iCESMrTwn8Qi2lkNUktx1dRtPumQ99mv9pDBl+TCs8fNuwxiTqGVASxIFrqV0fn2PmilmKM9X+ GGYIGNHvULqmdQhVnJMA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qo3aA-000Jgr-0K; Wed, 04 Oct 2023 15:14:38 +0000 Received: from mail-wm1-x32b.google.com ([2a00:1450:4864:20::32b]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qo3a4-000JcN-2u for linux-riscv@lists.infradead.org; Wed, 04 Oct 2023 15:14:34 +0000 Received: by mail-wm1-x32b.google.com with SMTP id 5b1f17b1804b1-4064e3c7c07so3944725e9.1 for ; Wed, 04 Oct 2023 08:14:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=rivosinc-com.20230601.gappssmtp.com; s=20230601; t=1696432471; x=1697037271; darn=lists.infradead.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=mgvk3biMyb3dWXWlkLQojD9wXEb7N0NOjcs65taIzUI=; b=mIYNHuyg4HaaFYxDuAMNhfkyGj5s+G3hPMnWJneSk7hn+YjBRZymR/t3oe4reTzGpR kMjbZ5dNjGEy4dYslZnkt3CuBykmcyiDAbsFRoGqK/puIfZCZjOv5p9SJC14jcvt2Zgd ZzQhMorkziFWoiheMFF/AUB5Y6qhz7oVqvff4+dzup5Q/LZ68FStB+i9nTzaKaaIRq1F 6AsMh6eFBJSqpk3zcHu+EJDSyzPFSCOmQ1NYTkAxcSqAZbgpKB5t7Jd8fg8JV9pNOvJq UM1/SPRUjy6b1WNqewQJSo5lWr/fYPw1sECwvHckTZF6L4nogytAV3fVSUyiJCDjCDLj eBgg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; 
t=1696432471; x=1697037271; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=mgvk3biMyb3dWXWlkLQojD9wXEb7N0NOjcs65taIzUI=; b=AJycp1YuDBwbpqJf/YltV5mmPQ84ccgLOqXkeqQRPbPhKvLHwUZYCjwprtDXG7GeAT bSjLzPyGj5mk842qgVeF6GTA4eFtKmQ5Oe6dkgritbJBaYIzGQ32ATC9yD5dqCT6Y1r/ Us6Bia+chrzOTexFEz+QXrQ9YmU+Gh8ZOpE903ZPUFDnCGgeG6LK+uomsPLYrOsX2BT8 478hp5G8IZOkiLYfm/ZwNHeF5XQZKPRQrexyWMk3ag8g3OTWp52OIJ5SNVEi3CszxIBC K2202mbHmeJW9KaVo6nD0AIJd+DsgWqQh+UZvPRNJJj8z7+IGE+/rMmTIgrNI6aUPnad 6NEA== X-Gm-Message-State: AOJu0YyC1bQKLUdRxNltMKp09jPWtqHG5tQxfz/7+angWnKLkr7xnRsd 8l5QYrppE9E1MMDMu6cknExBNA== X-Google-Smtp-Source: AGHT+IFWFyxvkpKoO/6/23fHvmhwPdqiEfb/7rp6Yc0TRcJio42GVyA9Nn9VHADu7IcrpwduTsF9nw== X-Received: by 2002:a05:600c:5192:b0:405:1ba2:4fcf with SMTP id fa18-20020a05600c519200b004051ba24fcfmr2488360wmb.4.1696432471665; Wed, 04 Oct 2023 08:14:31 -0700 (PDT) Received: from carbon-x1.. ([2a01:e0a:999:a3a0:9474:8d75:5115:42cb]) by smtp.gmail.com with ESMTPSA id i2-20020a05600c290200b00402f7b50517sm1768764wmd.40.2023.10.04.08.14.30 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 04 Oct 2023 08:14:31 -0700 (PDT) From: =?utf-8?b?Q2zDqW1lbnQgTMOpZ2Vy?= To: Paul Walmsley , Palmer Dabbelt , Albert Ou Cc: =?utf-8?b?Q2zDqW1lbnQgTMOpZ2Vy?= , Atish Patra , Andrew Jones , Evan Green , =?utf-8?q?Bj=C3=B6rn_Topel?= , linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Ron Minnich , Daniel Maslowski , Conor Dooley Subject: [PATCH v2 8/8] riscv: add support for PR_SET_UNALIGN and PR_GET_UNALIGN Date: Wed, 4 Oct 2023 17:14:05 +0200 Message-ID: <20231004151405.521596-9-cleger@rivosinc.com> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20231004151405.521596-1-cleger@rivosinc.com> References: <20231004151405.521596-1-cleger@rivosinc.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231004_081432_945149_EA391D2F X-CRM114-Status: GOOD ( 14.98 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Now that trap support is ready to handle misalignment errors in S-mode, allow the user to control the behavior of misaligned accesses using prctl(PR_SET_UNALIGN). Add an align_ctl flag in thread_struct, which will be used to determine whether we should send SIGBUS to the process on such a fault.
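A short user-space sketch of the knob this patch wires up. PR_SET_UNALIGN, PR_GET_UNALIGN and the PR_UNALIGN_* flags are the long-standing generic prctl interface from <linux/prctl.h>; the only new part is that riscv now honours them. Illustrative only, not part of the patch.

/* Illustrative only: flip the per-task unaligned-access policy. */
#include <stdio.h>
#include <sys/prctl.h>		/* prctl(), pulls in the PR_* constants */

int main(void)
{
	unsigned long cur = 0;

	/*
	 * Per this patch, this fails with EINVAL unless all CPUs emulate
	 * misaligned accesses (unaligned_ctl_available() returns false).
	 */
	if (prctl(PR_SET_UNALIGN, PR_UNALIGN_SIGBUS)) {
		perror("PR_SET_UNALIGN");
		return 1;
	}

	/* get_unalign_ctl() stores an unsigned long at the given address */
	if (prctl(PR_GET_UNALIGN, &cur) == 0)
		printf("unalign control: %s\n",
		       (cur & PR_UNALIGN_SIGBUS) ? "SIGBUS" : "fix up silently");

	return 0;
}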
Signed-off-by: Clément Léger Reviewed-by: Björn Töpel --- arch/riscv/include/asm/processor.h | 9 +++++++++ arch/riscv/kernel/process.c | 18 ++++++++++++++++++ arch/riscv/kernel/traps_misaligned.c | 6 ++++++ 3 files changed, 33 insertions(+) diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h index 3e23e1786d05..adbe520d07c5 100644 --- a/arch/riscv/include/asm/processor.h +++ b/arch/riscv/include/asm/processor.h @@ -8,6 +8,7 @@ #include #include +#include #include @@ -82,6 +83,7 @@ struct thread_struct { unsigned long bad_cause; unsigned long vstate_ctrl; struct __riscv_v_ext_state vstate; + unsigned long align_ctl; }; /* Whitelist the fstate from the task_struct for hardened usercopy */ @@ -94,6 +96,7 @@ static inline void arch_thread_struct_whitelist(unsigned long *offset, #define INIT_THREAD { \ .sp = sizeof(init_stack) + (long)&init_stack, \ + .align_ctl = PR_UNALIGN_NOPRINT, \ } #define task_pt_regs(tsk) \ @@ -134,6 +137,12 @@ extern long riscv_v_vstate_ctrl_set_current(unsigned long arg); extern long riscv_v_vstate_ctrl_get_current(void); #endif /* CONFIG_RISCV_ISA_V */ +extern int get_unalign_ctl(struct task_struct *tsk, unsigned long addr); +extern int set_unalign_ctl(struct task_struct *tsk, unsigned int val); + +#define GET_UNALIGN_CTL(tsk, addr) get_unalign_ctl((tsk), (addr)) +#define SET_UNALIGN_CTL(tsk, val) set_unalign_ctl((tsk), (val)) + #endif /* __ASSEMBLY__ */ #endif /* _ASM_RISCV_PROCESSOR_H */ diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c index e32d737e039f..4f21d970a129 100644 --- a/arch/riscv/kernel/process.c +++ b/arch/riscv/kernel/process.c @@ -25,6 +25,7 @@ #include #include #include +#include register unsigned long gp_in_global __asm__("gp"); @@ -41,6 +42,23 @@ void arch_cpu_idle(void) cpu_do_idle(); } +int set_unalign_ctl(struct task_struct *tsk, unsigned int val) +{ + if (!unaligned_ctl_available()) + return -EINVAL; + + tsk->thread.align_ctl = val; + return 0; +} + +int get_unalign_ctl(struct task_struct *tsk, unsigned long adr) +{ + if (!unaligned_ctl_available()) + return -EINVAL; + + return put_user(tsk->thread.align_ctl, (unsigned long __user *)adr); +} + void __show_regs(struct pt_regs *regs) { show_regs_print_info(KERN_DEFAULT); diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c index d99b95084b6c..bba301b5194d 100644 --- a/arch/riscv/kernel/traps_misaligned.c +++ b/arch/riscv/kernel/traps_misaligned.c @@ -418,6 +418,9 @@ int handle_misaligned_load(struct pt_regs *regs) if (!unaligned_enabled) return -1; + if (user_mode(regs) && (current->thread.align_ctl & PR_UNALIGN_SIGBUS)) + return -1; + if (get_insn(regs, epc, &insn)) return -1; @@ -517,6 +520,9 @@ int handle_misaligned_store(struct pt_regs *regs) if (!unaligned_enabled) return -1; + if (user_mode(regs) && (current->thread.align_ctl & PR_UNALIGN_SIGBUS)) + return -1; + if (get_insn(regs, epc, &insn)) return -1;
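To close the series out, an illustrative user-space check of the new behaviour: once PR_UNALIGN_SIGBUS is set, a misaligned access that would previously have been fixed up by the emulation path is expected to raise SIGBUS instead. The deliberately unaligned dereference below is undefined behaviour in strict C and only works as a demo when the compiler emits a plain word load; on hardware that handles misaligned accesses natively no trap is taken and no signal is raised at all. Sketch only, not part of the patch.

/* Illustrative only: observe SIGBUS on a misaligned load. */
#include <signal.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/prctl.h>

static void on_sigbus(int sig)
{
	static const char msg[] = "SIGBUS on misaligned access\n";

	(void)sig;
	/* write() is async-signal-safe, printf() is not */
	write(STDOUT_FILENO, msg, sizeof(msg) - 1);
	_exit(0);
}

int main(void)
{
	static char buf[16];
	volatile uint32_t val;

	signal(SIGBUS, on_sigbus);

	if (prctl(PR_SET_UNALIGN, PR_UNALIGN_SIGBUS))
		return 1;	/* e.g. not every CPU emulates misaligned accesses */

	/* buf + 1 is not 4-byte aligned; expect SIGBUS with this series applied */
	val = *(volatile uint32_t *)(buf + 1);
	(void)val;

	return 0;
}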