From patchwork Mon Feb 10 20:26:38 2025
X-Patchwork-Submitter: Deepak Gupta
X-Patchwork-Id: 13968614
From: Deepak Gupta
Date: Mon, 10 Feb 2025 12:26:38 -0800
Subject: [PATCH v10 05/27] riscv: usercfi state for task and save/restore of CSR_SSP on trap entry/exit
Message-Id: <20250210-v5_user_cfi_series-v10-5-163dcfa31c60@rivosinc.com>
References: <20250210-v5_user_cfi_series-v10-0-163dcfa31c60@rivosinc.com>
In-Reply-To: <20250210-v5_user_cfi_series-v10-0-163dcfa31c60@rivosinc.com>
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", Andrew Morton, "Liam R. Howlett", Vlastimil Babka,
    Lorenzo Stoakes, Paul Walmsley, Palmer Dabbelt, Albert Ou, Conor Dooley,
    Rob Herring, Krzysztof Kozlowski, Arnd Bergmann, Christian Brauner,
    Peter Zijlstra, Oleg Nesterov, Eric Biederman, Kees Cook, Jonathan Corbet,
    Shuah Khan, Jann Horn, Conor Dooley
Howlett" , Vlastimil Babka , Lorenzo Stoakes , Paul Walmsley , Palmer Dabbelt , Albert Ou , Conor Dooley , Rob Herring , Krzysztof Kozlowski , Arnd Bergmann , Christian Brauner , Peter Zijlstra , Oleg Nesterov , Eric Biederman , Kees Cook , Jonathan Corbet , Shuah Khan , Jann Horn , Conor Dooley Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org, devicetree@vger.kernel.org, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, alistair.francis@wdc.com, richard.henderson@linaro.org, jim.shu@sifive.com, andybnac@gmail.com, kito.cheng@sifive.com, charlie@rivosinc.com, atishp@rivosinc.com, evan@rivosinc.com, cleger@rivosinc.com, alexghiti@rivosinc.com, samitolvanen@google.com, broonie@kernel.org, rick.p.edgecombe@intel.com, Deepak Gupta X-Mailer: b4 0.14.0 X-Rspam-User: X-Rspamd-Server: rspam11 X-Rspamd-Queue-Id: 7B99916000E X-Stat-Signature: grmnng4omn94dggywy9f78efkanybqec X-HE-Tag: 1739219215-379455 X-HE-Meta: U2FsdGVkX18dbltULBi41MSBd72yR7ClSWvQU6NKtZ7gTk5ortbtFrXrO/Z8mLuYv7Tc8/mugQjz0KtpVFCU6b8OeLpAViNGtbpweGg6Z0Ls1bGr9EFWj8dM6lICJkEd/xviafxaSpKpQuteQT9NSyaZP8ODwc4h0gdRPrP53IW5C5M+ubWrycT99Cm7l0jCrrL9B1d3/T5NC/FQkAq/PIYXgxpVEcxq08NOoqC7BN5hEYTUqC3kqPcktd5siqmu92X8MEayN4HqzEjOHG4YLCUKQB0QeraGOKJdYZFybjiCa5Yzx+XkIMnUBSfuNpQQBvztCoG4kpf/VEo6l32gGzdxC5PnK+uJ9NLjqRhq8Mu9qJO35+6fgw23qj+PyLjF3Yv7uNRWieEtL9OpTHIzD4r++TFpn9o+Tk0+vT6yxVWNFhQUWzWpHoEn/t4oFruJuPz28aBY9jBmo+sJ3lUcLubRsgzWVd9EzZFTScYpYGWu9V0VTujle8xNJj8fVBeohpHKYGXEiWzxcQB6HiEOudHEf7Wu5gXzySouniQelWX2z7k5MNy6A1GUo4ZHc5qE71sdxy4otfKIPy8KB+tMLPqdOm8VSb4Gve4uOeblrmPEkRs+PIZhwU4OxqsvLT64EGYdYlLUiHyDKi+VpqC6rEvYVadFQjMmCD+Gfkbfk4S19FHNR5A/JTYZ/yByQElcnwCqeit3FYybxi8oEKsuIa47gF914RmsnGKZt9VDi2zyw5huIkJ2imEx6w8JbgogElfpDW/owv7Oi9OrfrsA2J7HqYY/EcW/FA9AdtdcDTor5dAMB6IVZOoTCO4Lilw/BPsAmCo+TDTeXSUc81eaSZmF+rbBbMACJ533bCFdHOwPM9/eZGZb3SEUz13GVxMHs99ugVdjg96sadNqiACP8cu/pjUYslklNhgy+10knXL/tbOSztvpM/s5sHHRkVNn1I+71DIiPXFHOtt2jMu 2YA4AFvR gITgkyBGyAAhniMCh9HyczeAnQQR9eIvUGrcKY/OCRaTrJsgHIPN0EWkk7+7lmq4Jlk9rQleNoa2WZzuR4lhYCguVIaTqxD2IsL22kfkJ9DLF6yAXk3XgV25+s5Gg7fIZGC/IGBD7uf7h6WJbrSxjz86M8GpxpUj8IBRdt1p1H7Kjv3tf+KCIPALoplvhfcLZstQZkHX+zZtgfzsxY3jEBUHD9RQLH69E0/E/TGorZ42O+3CGW0ODkq5QqcU3gfbPnD9PaFUDrCbSbfw81m6xnXDKWvllqEtvyRpzOaQDOdZtUd4rSBhPZveHg2Kpw0ukvyY+bm5TX0jH6s+Ks8aVgqTmvkjrzyhbZ3XjUdUCf7DmV4WYPcCZArKHFfpaGY2Qv9hvOeFZuP/ZjomAb9MiHvfMwGp/L4Bz801WJxKN1F4fyeqq7r+oEuOww2ChduemC7QqVdoYpjtMVXvPwAmrR1CcUJ/Vswy7rWSMf18oe9gz3IslyEd0GATKeaFXSjouVBQpYxQDqBq7E1y6mBEzUJnE3dF+x7kQgFcJ X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Carves out space in arch specific thread struct for cfi status and shadow stack in usermode on riscv. This patch does following - defines a new structure cfi_status with status bit for cfi feature - defines shadow stack pointer, base and size in cfi_status structure - defines offsets to new member fields in thread in asm-offsets.c - Saves and restore shadow stack pointer on trap entry (U --> S) and exit (S --> U) Shadow stack save/restore is gated on feature availiblity and implemented using alternative. CSR can be context switched in `switch_to` as well but soon as kernel shadow stack support gets rolled in, shadow stack pointer will need to be switched at trap entry/exit point (much like `sp`). 
It can be argued that kernel shadow stack deployment may not be as prevalent
as user-mode use of this feature. But even a minimal deployment of kernel
shadow stacks means the feature needs to be supported, hence the save/restore
of the shadow stack pointer in entry.S instead of in `switch_to.h`. (For
contrast, a rough sketch of a `switch_to`-based variant follows the diff
below.)

Signed-off-by: Deepak Gupta
Reviewed-by: Charlie Jenkins
---
 arch/riscv/include/asm/processor.h   |  1 +
 arch/riscv/include/asm/thread_info.h |  3 +++
 arch/riscv/include/asm/usercfi.h     | 24 ++++++++++++++++++++++++
 arch/riscv/kernel/asm-offsets.c      |  4 ++++
 arch/riscv/kernel/entry.S            | 26 ++++++++++++++++++++++++++
 5 files changed, 58 insertions(+)

diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
index e3aba3336e63..d851bb5c6da0 100644
--- a/arch/riscv/include/asm/processor.h
+++ b/arch/riscv/include/asm/processor.h
@@ -14,6 +14,7 @@
 #include 
 #include 
+#include 
 
 #define arch_get_mmap_end(addr, len, flags)	\
 ({						\
diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
index f5916a70879a..a0cfe00c2ca6 100644
--- a/arch/riscv/include/asm/thread_info.h
+++ b/arch/riscv/include/asm/thread_info.h
@@ -62,6 +62,9 @@ struct thread_info {
 	long			user_sp;	/* User stack pointer */
 	int			cpu;
 	unsigned long		syscall_work;	/* SYSCALL_WORK_ flags */
+#ifdef CONFIG_RISCV_USER_CFI
+	struct cfi_status	user_cfi_state;
+#endif
 #ifdef CONFIG_SHADOW_CALL_STACK
 	void			*scs_base;
 	void			*scs_sp;
diff --git a/arch/riscv/include/asm/usercfi.h b/arch/riscv/include/asm/usercfi.h
new file mode 100644
index 000000000000..5f2027c51917
--- /dev/null
+++ b/arch/riscv/include/asm/usercfi.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0
+ * Copyright (C) 2024 Rivos, Inc.
+ * Deepak Gupta
+ */
+#ifndef _ASM_RISCV_USERCFI_H
+#define _ASM_RISCV_USERCFI_H
+
+#ifndef __ASSEMBLY__
+#include 
+
+#ifdef CONFIG_RISCV_USER_CFI
+struct cfi_status {
+	unsigned long ubcfi_en : 1; /* Enable for backward cfi. */
+	unsigned long rsvd : ((sizeof(unsigned long) * 8) - 1);
+	unsigned long user_shdw_stk; /* Current user shadow stack pointer */
+	unsigned long shdw_stk_base; /* Base address of shadow stack */
+	unsigned long shdw_stk_size; /* size of shadow stack */
+};
+
+#endif /* CONFIG_RISCV_USER_CFI */
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_USERCFI_H */
diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
index e89455a6a0e5..0c188aaf3925 100644
--- a/arch/riscv/kernel/asm-offsets.c
+++ b/arch/riscv/kernel/asm-offsets.c
@@ -50,6 +50,10 @@ void asm_offsets(void)
 #endif
 
 	OFFSET(TASK_TI_CPU_NUM, task_struct, thread_info.cpu);
+#ifdef CONFIG_RISCV_USER_CFI
+	OFFSET(TASK_TI_CFI_STATUS, task_struct, thread_info.user_cfi_state);
+	OFFSET(TASK_TI_USER_SSP, task_struct, thread_info.user_cfi_state.user_shdw_stk);
+#endif
 	OFFSET(TASK_THREAD_F0,  task_struct, thread.fstate.f[0]);
 	OFFSET(TASK_THREAD_F1,  task_struct, thread.fstate.f[1]);
 	OFFSET(TASK_THREAD_F2,  task_struct, thread.fstate.f[2]);
diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
index 33a5a9f2a0d4..68c99124ea55 100644
--- a/arch/riscv/kernel/entry.S
+++ b/arch/riscv/kernel/entry.S
@@ -147,6 +147,20 @@ SYM_CODE_START(handle_exception)
 
 	REG_L s0, TASK_TI_USER_SP(tp)
 	csrrc s1, CSR_STATUS, t0
+	/*
+	 * If previous mode was U, capture shadow stack pointer and save it away
+	 * Zero CSR_SSP at the same time for sanitization.
+	 */
+	ALTERNATIVE("nop; nop; nop; nop",
+		    __stringify(			\
+		    andi s2, s1, SR_SPP;		\
+		    bnez s2, skip_ssp_save;		\
+		    csrrw s2, CSR_SSP, x0;		\
+		    REG_S s2, TASK_TI_USER_SSP(tp);	\
+		    skip_ssp_save:),
+		    0,
+		    RISCV_ISA_EXT_ZICFISS,
+		    CONFIG_RISCV_USER_CFI)
 	csrr s2, CSR_EPC
 	csrr s3, CSR_TVAL
 	csrr s4, CSR_CAUSE
@@ -236,6 +250,18 @@ SYM_CODE_START_NOALIGN(ret_from_exception)
 	 * structures again.
 	 */
 	csrw CSR_SCRATCH, tp
+
+	/*
+	 * Going back to U mode, restore shadow stack pointer
+	 */
+	ALTERNATIVE("nop; nop",
+		    __stringify(			\
+		    REG_L s3, TASK_TI_USER_SSP(tp);	\
+		    csrw CSR_SSP, s3),
+		    0,
+		    RISCV_ISA_EXT_ZICFISS,
+		    CONFIG_RISCV_USER_CFI)
+
 1:
 #ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE
 	move	a0, sp
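For contrast with the entry.S placement chosen above, a `switch_to`-based
variant would look roughly like the sketch below. This is illustrative only
and not part of the patch; the helper name is hypothetical, and it simply
reuses the cfi_status field and the RISCV_ISA_EXT_ZICFISS identifier from
this series to show the design alternative the commit message argues against.

#include <asm/cpufeature.h>
#include <asm/csr.h>
#include <linux/sched/task_stack.h>

/*
 * Illustrative only: how CSR_SSP switching might look if it were done in
 * __switch_to() instead of at trap entry/exit. Once kernel shadow stacks
 * exist, the trap-path placement is preferred, much like `sp` handling.
 */
static inline void switch_to_user_ssp(struct task_struct *prev,
				      struct task_struct *next)
{
	if (!riscv_has_extension_unlikely(RISCV_ISA_EXT_ZICFISS))
		return;

	/* Stash the outgoing task's user shadow stack pointer... */
	task_thread_info(prev)->user_cfi_state.user_shdw_stk = csr_read(CSR_SSP);
	/* ...and install the incoming task's saved value. */
	csr_write(CSR_SSP, task_thread_info(next)->user_cfi_state.user_shdw_stk);
}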