From patchwork Tue Sep 26 15:03:15 2023
X-Patchwork-Submitter: Clément Léger
X-Patchwork-Id: 13399315
From: Clément Léger <cleger@rivosinc.com>
To: Paul Walmsley, Palmer Dabbelt, Albert Ou
Cc: Clément Léger, Atish Patra, Andrew Jones, Evan Green, Björn Topel,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
 Ron Minnich, Daniel Maslowski
Subject: [PATCH 6/7] riscv: report misaligned accesses emulation to hwprobe
Date: Tue, 26 Sep 2023 17:03:15 +0200
Message-Id: <20230926150316.1129648-7-cleger@rivosinc.com>
In-Reply-To: <20230926150316.1129648-1-cleger@rivosinc.com>
References: <20230926150316.1129648-1-cleger@rivosinc.com>

hwprobe provides a way to report whether misaligned accesses are
emulated. In order to correctly populate that feature, we can check
whether a misaligned access actually traps by performing one from
kernel mode with an exception table fixup in place: if the fixup is
taken, the access raised an exception and the kernel has to emulate
it.
Signed-off-by: Clément Léger <cleger@rivosinc.com>
---
 arch/riscv/include/asm/cpufeature.h  |  6 +++
 arch/riscv/kernel/cpufeature.c       |  6 ++-
 arch/riscv/kernel/setup.c            |  1 +
 arch/riscv/kernel/traps_misaligned.c | 63 +++++++++++++++++++++++++++-
 4 files changed, 74 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h
index d0345bd659c9..c1f0ef02cd7d 100644
--- a/arch/riscv/include/asm/cpufeature.h
+++ b/arch/riscv/include/asm/cpufeature.h
@@ -8,6 +8,7 @@
 
 #include <linux/bitmap.h>
 #include <asm/hwcap.h>
+#include <asm/hwprobe.h>
 
 /*
  * These are probed via a device_initcall(), via either the SBI or directly
@@ -32,4 +33,9 @@ extern struct riscv_isainfo hart_isa[NR_CPUS];
 
 void check_unaligned_access(int cpu);
 
+bool unaligned_ctl_available(void);
+
+bool check_unaligned_access_emulated(int cpu);
+void unaligned_emulation_finish(void);
+
 #endif
diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index 1cfbba65d11a..fbbde800bc21 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -568,6 +568,9 @@ void check_unaligned_access(int cpu)
 	void *src;
 	long speed = RISCV_HWPROBE_MISALIGNED_SLOW;
 
+	if (check_unaligned_access_emulated(cpu))
+		return;
+
 	page = alloc_pages(GFP_NOWAIT, get_order(MISALIGNED_BUFFER_SIZE));
 	if (!page) {
 		pr_warn("Can't alloc pages to measure memcpy performance");
@@ -645,9 +648,10 @@ void check_unaligned_access(int cpu)
 	__free_pages(page, get_order(MISALIGNED_BUFFER_SIZE));
 }
 
-static int check_unaligned_access_boot_cpu(void)
+static int __init check_unaligned_access_boot_cpu(void)
 {
 	check_unaligned_access(0);
+	unaligned_emulation_finish();
 	return 0;
 }
 
diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
index e600aab116a4..3af6ad4df7cf 100644
--- a/arch/riscv/kernel/setup.c
+++ b/arch/riscv/kernel/setup.c
@@ -26,6 +26,7 @@
 #include <asm/acpi.h>
 #include <asm/alternative.h>
 #include <asm/cacheflush.h>
+#include <asm/cpufeature.h>
 #include <asm/cpu_ops.h>
 #include <asm/early_ioremap.h>
 #include <asm/pgtable.h>
diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
index b5fb1ff078e3..fa81f6952fa4 100644
--- a/arch/riscv/kernel/traps_misaligned.c
+++ b/arch/riscv/kernel/traps_misaligned.c
@@ -9,11 +9,14 @@
 #include <linux/perf_event.h>
 #include <linux/irq.h>
 #include <linux/stringify.h>
+#include <linux/prctl.h>
 
 #include <asm/processor.h>
 #include <asm/ptrace.h>
 #include <asm/csr.h>
 #include <asm/entry-common.h>
+#include <asm/hwprobe.h>
+#include <asm/cpufeature.h>
 
 #define INSN_MATCH_LB			0x3
 #define INSN_MASK_LB			0x707f
@@ -396,8 +399,10 @@ union reg_data {
 	u64 data_u64;
 };
 
+static bool unaligned_ctl __read_mostly;
+
 /* sysctl hooks */
-int unaligned_enabled __read_mostly = 1;	/* Enabled by default */
+int unaligned_enabled __read_mostly;
 
 int handle_misaligned_load(struct pt_regs *regs)
 {
@@ -412,6 +417,9 @@ int handle_misaligned_load(struct pt_regs *regs)
 	if (!unaligned_enabled)
 		return -1;
 
+	if (user_mode(regs) && (current->thread.align_ctl & PR_UNALIGN_SIGBUS))
+		return -1;
+
 	if (get_insn(regs, epc, &insn))
 		return -1;
 
@@ -511,6 +519,9 @@ int handle_misaligned_store(struct pt_regs *regs)
 	if (!unaligned_enabled)
 		return -1;
 
+	if (user_mode(regs) && (current->thread.align_ctl & PR_UNALIGN_SIGBUS))
+		return -1;
+
 	if (get_insn(regs, epc, &insn))
 		return -1;
 
@@ -585,3 +596,53 @@ int handle_misaligned_store(struct pt_regs *regs)
 
 	return 0;
 }
+
+bool check_unaligned_access_emulated(int cpu)
+{
+	unsigned long emulated = 1, tmp_var;
+
+	/* Use a fixup to detect if misaligned access triggered an exception */
+	__asm__ __volatile__ (
+		"1:\n"
+		"	"REG_L" %[tmp], 1(%[ptr])\n"
+		"	li %[emulated], 0\n"
+		"2:\n"
+		_ASM_EXTABLE(1b, 2b)
+		: [emulated] "+r" (emulated), [tmp] "=r" (tmp_var)
+		: [ptr] "r" (&tmp_var)
+		: "memory");
+
+	if (!emulated)
+		return false;
+
+	per_cpu(misaligned_access_speed, cpu) =
+			RISCV_HWPROBE_MISALIGNED_EMULATED;
+
+	return true;
+}
+
+void __init unaligned_emulation_finish(void)
+{
+	int cpu;
+
+	/*
+	 * We can only support PR_UNALIGN controls if all CPUs have misaligned
+	 * accesses emulated since tasks requesting such control can run on any
+	 * CPU.
+	 */
+	for_each_possible_cpu(cpu) {
+		if (per_cpu(misaligned_access_speed, cpu) !=
+					RISCV_HWPROBE_MISALIGNED_EMULATED) {
+			goto out;
+		}
+	}
+	unaligned_ctl = true;
+
+out:
+	unaligned_enabled = 1;
+}
+
+bool unaligned_ctl_available(void)
+{
+	return unaligned_ctl;
+}
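
Not part of the patch: a minimal userspace sketch of how the value
reported here can be consumed through the hwprobe syscall, assuming
kernel headers that provide <asm/hwprobe.h> and the __NR_riscv_hwprobe
syscall number:

  #include <stdio.h>
  #include <unistd.h>
  #include <sys/syscall.h>
  #include <asm/hwprobe.h>

  int main(void)
  {
          struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_CPUPERF_0 };

          /* cpusetsize == 0 and cpus == NULL: query across all online CPUs */
          if (syscall(__NR_riscv_hwprobe, &pair, 1, 0, NULL, 0))
                  return 1;

          if ((pair.value & RISCV_HWPROBE_MISALIGNED_MASK) ==
              RISCV_HWPROBE_MISALIGNED_EMULATED)
                  printf("misaligned accesses are emulated by the kernel\n");

          return 0;
  }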
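
Likewise illustrative only: the PR_UNALIGN_SIGBUS checks added to the
load/store handlers above mean that, once unaligned_ctl is available
(all CPUs emulate misaligned accesses) and the PR_SET_UNALIGN wiring
from the rest of this series is in place, a task can opt into SIGBUS
instead of transparent emulation:

  #include <stdio.h>
  #include <sys/prctl.h>

  int main(void)
  {
          /*
           * Request SIGBUS on misaligned accesses for this task; the
           * handlers above then return -1 instead of emulating, and
           * the trap code delivers the signal.
           */
          if (prctl(PR_SET_UNALIGN, PR_UNALIGN_SIGBUS) == -1)
                  perror("PR_SET_UNALIGN");
          return 0;
  }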