From patchwork Fri Mar 14 21:39:36 2025
X-Patchwork-Submitter: Deepak Gupta
X-Patchwork-Id: 14017504
From: Deepak Gupta
Date: Fri, 14 Mar 2025 14:39:36 -0700
Subject: [PATCH v12 17/28] riscv/signal: save and restore of shadow stack for
 signal
Message-Id: <20250314-v5_user_cfi_series-v12-17-e51202b53138@rivosinc.com>
References: <20250314-v5_user_cfi_series-v12-0-e51202b53138@rivosinc.com>
In-Reply-To: <20250314-v5_user_cfi_series-v12-0-e51202b53138@rivosinc.com>
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 x86@kernel.org, "H. Peter Anvin", Andrew Morton, "Liam R. Howlett",
 Vlastimil Babka, Lorenzo Stoakes, Paul Walmsley, Palmer Dabbelt, Albert Ou,
 Conor Dooley, Rob Herring, Krzysztof Kozlowski, Arnd Bergmann,
 Christian Brauner, Peter Zijlstra, Oleg Nesterov, Eric Biederman, Kees Cook,
 Jonathan Corbet, Shuah Khan, Jann Horn, Conor Dooley
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-riscv@lists.infradead.org,
 devicetree@vger.kernel.org, linux-arch@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org,
 alistair.francis@wdc.com, richard.henderson@linaro.org, jim.shu@sifive.com,
 andybnac@gmail.com, kito.cheng@sifive.com, charlie@rivosinc.com,
 atishp@rivosinc.com, evan@rivosinc.com, cleger@rivosinc.com,
 alexghiti@rivosinc.com, samitolvanen@google.com, broonie@kernel.org,
 rick.p.edgecombe@intel.com, Deepak Gupta, Andy Chiu
X-Mailer: b4 0.14.0

Save the shadow stack pointer in the sigcontext structure while delivering
a signal, and restore the shadow stack pointer from sigcontext on
sigreturn. As part of the save operation, the kernel uses `ssamoswap` to
save a snapshot of the current shadow stack on the shadow stack itself
(which can be called a save token).
During restore on sigreturn, the kernel retrieves the token from the top of
the shadow stack and validates it. This ensures that user mode can't
arbitrarily pivot to any shadow stack address without holding a token, and
thus provides a strong security assurance for the window between signal
delivery and sigreturn.

Use an ABI-compatible way of saving/restoring the shadow stack pointer into
the signal stack. This follows what the Vector extension does, where extra
registers are placed in the form of an extension header + extension body on
the stack. The extension header indicates the size of the extra
architectural state plus the size of the header itself, and a magic
identifier of the extension. The extension body then contains the new
architectural state in the form defined by uapi.

Signed-off-by: Andy Chiu
Signed-off-by: Deepak Gupta
---
 arch/riscv/include/asm/usercfi.h         | 10 ++++
 arch/riscv/include/uapi/asm/ptrace.h     |  4 ++
 arch/riscv/include/uapi/asm/sigcontext.h |  1 +
 arch/riscv/kernel/signal.c               | 80 ++++++++++++++++++++++++++++++++
 arch/riscv/kernel/usercfi.c              | 56 ++++++++++++++++++++++
 5 files changed, 151 insertions(+)

diff --git a/arch/riscv/include/asm/usercfi.h b/arch/riscv/include/asm/usercfi.h
index a8cec7c14d1d..361f59edbdef 100644
--- a/arch/riscv/include/asm/usercfi.h
+++ b/arch/riscv/include/asm/usercfi.h
@@ -8,6 +8,7 @@
 #ifndef __ASSEMBLY__
 #include
 #include
+#include

 struct task_struct;
 struct kernel_clone_args;

@@ -35,6 +36,9 @@ bool is_shstk_locked(struct task_struct *task);
 bool is_shstk_allocated(struct task_struct *task);
 void set_shstk_lock(struct task_struct *task);
 void set_shstk_status(struct task_struct *task, bool enable);
+unsigned long get_active_shstk(struct task_struct *task);
+int restore_user_shstk(struct task_struct *tsk, unsigned long shstk_ptr);
+int save_user_shstk(struct task_struct *tsk, unsigned long *saved_shstk_ptr);
 bool is_indir_lp_enabled(struct task_struct *task);
 bool is_indir_lp_locked(struct task_struct *task);
 void set_indir_lp_status(struct task_struct *task, bool enable);
@@ -72,6 +76,12 @@ void set_indir_lp_lock(struct task_struct *task);

 #define set_indir_lp_lock(task)

+#define restore_user_shstk(tsk, shstk_ptr) -EINVAL
+
+#define save_user_shstk(tsk, saved_shstk_ptr) -EINVAL
+
+#define get_active_shstk(task) 0UL
+
 #endif /* CONFIG_RISCV_USER_CFI */

 #endif /* __ASSEMBLY__ */
diff --git a/arch/riscv/include/uapi/asm/ptrace.h b/arch/riscv/include/uapi/asm/ptrace.h
index a38268b19c3d..659ea3af5680 100644
--- a/arch/riscv/include/uapi/asm/ptrace.h
+++ b/arch/riscv/include/uapi/asm/ptrace.h
@@ -127,6 +127,10 @@ struct __riscv_v_regset_state {
  */
 #define RISCV_MAX_VLENB (8192)

+struct __sc_riscv_cfi_state {
+	unsigned long ss_ptr;	/* shadow stack pointer */
+};
+
 #endif /* __ASSEMBLY__ */
 #endif /* _UAPI_ASM_RISCV_PTRACE_H */
diff --git a/arch/riscv/include/uapi/asm/sigcontext.h b/arch/riscv/include/uapi/asm/sigcontext.h
index cd4f175dc837..f37e4beffe03 100644
--- a/arch/riscv/include/uapi/asm/sigcontext.h
+++ b/arch/riscv/include/uapi/asm/sigcontext.h
@@ -10,6 +10,7 @@

 /* The Magic number for signal context frame header. */
 #define RISCV_V_MAGIC		0x53465457
+#define RISCV_ZICFISS_MAGIC	0x9487
 #define END_MAGIC		0x0

 /* The size of END signal context header. */
diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c
index 80c70dccf09f..a7472a6fcdca 100644
--- a/arch/riscv/kernel/signal.c
+++ b/arch/riscv/kernel/signal.c
@@ -22,11 +22,13 @@
 #include
 #include
 #include
+#include

 unsigned long signal_minsigstksz __ro_after_init;

 extern u32 __user_rt_sigreturn[2];
 static size_t riscv_v_sc_size __ro_after_init;
+static size_t riscv_zicfiss_sc_size __ro_after_init;

 #define DEBUG_SIG 0

@@ -140,6 +142,62 @@ static long __restore_v_state(struct pt_regs *regs, void __user *sc_vec)
 	return copy_from_user(current->thread.vstate.datap, datap, riscv_v_vsize);
 }

+static long save_cfiss_state(struct pt_regs *regs, void __user *sc_cfi)
+{
+	struct __sc_riscv_cfi_state __user *state = sc_cfi;
+	unsigned long ss_ptr = 0;
+	long err = 0;
+
+	if (!IS_ENABLED(CONFIG_RISCV_USER_CFI) || !is_shstk_enabled(current))
+		return 0;
+
+	/*
+	 * Save a pointer to shadow stack itself on shadow stack as a form of token.
+	 * A token on shadow gives following properties
+	 *	- Safe save and restore for shadow stack switching. Any save of shadow stack
+	 *	  must have had saved a token on shadow stack. Similarly any restore of shadow
+	 *	  stack must check the token before restore. Since writing to shadow stack with
+	 *	  address of shadow stack itself is not easily allowed. A restore without a save
+	 *	  is quite difficult for an attacker to perform.
+	 *	- A natural break. A token in shadow stack provides a natural break in shadow stack
+	 *	  So a single linear range can be bucketed into different shadow stack segments. Any
+	 *	  sspopchk will detect the condition and fault to kernel as sw check exception.
+	 */
+	err |= save_user_shstk(current, &ss_ptr);
+	err |= __put_user(ss_ptr, &state->ss_ptr);
+	if (unlikely(err))
+		return -EFAULT;
+
+	return riscv_zicfiss_sc_size;
+}
+
+static long __restore_cfiss_state(struct pt_regs *regs, void __user *sc_cfi)
+{
+	struct __sc_riscv_cfi_state __user *state = sc_cfi;
+	unsigned long ss_ptr = 0;
+	long err;
+
+	/*
+	 * Restore shadow stack as a form of token stored on shadow stack itself as a safe
+	 * way to restore.
+	 * A token on shadow gives following properties
+	 *	- Safe save and restore for shadow stack switching. Any save of shadow stack
+	 *	  must have had saved a token on shadow stack. Similarly any restore of shadow
+	 *	  stack must check the token before restore. Since writing to shadow stack with
+	 *	  address of shadow stack itself is not easily allowed. A restore without a save
+	 *	  is quite difficult for an attacker to perform.
+	 *	- A natural break. A token in shadow stack provides a natural break in shadow stack
+	 *	  So a single linear range can be bucketed into different shadow stack segments.
+	 */
+	err = __copy_from_user(&ss_ptr, &state->ss_ptr, sizeof(unsigned long));
+
+	if (unlikely(err))
+		return err;
+
+	return restore_user_shstk(current, ss_ptr);
+}
+
 struct arch_ext_priv {
 	__u32 magic;
 	long (*save)(struct pt_regs *regs, void __user *sc_vec);
@@ -150,6 +208,10 @@ struct arch_ext_priv arch_ext_list[] = {
 		.magic	= RISCV_V_MAGIC,
 		.save	= &save_v_state,
 	},
+	{
+		.magic	= RISCV_ZICFISS_MAGIC,
+		.save	= &save_cfiss_state,
+	},
 };

 const size_t nr_arch_exts = ARRAY_SIZE(arch_ext_list);

@@ -202,6 +264,12 @@ static long restore_sigcontext(struct pt_regs *regs,
 			err = __restore_v_state(regs, sc_ext_ptr);
 			break;
+		case RISCV_ZICFISS_MAGIC:
+			if (!is_shstk_enabled(current) || size != riscv_zicfiss_sc_size)
+				return -EINVAL;
+
+			err = __restore_cfiss_state(regs, sc_ext_ptr);
+			break;
 		default:
 			return -EINVAL;
 		}
@@ -222,6 +290,10 @@ static size_t get_rt_frame_size(bool cal_all)
 		if (cal_all || riscv_v_vstate_query(task_pt_regs(current)))
 			total_context_size += riscv_v_sc_size;
 	}
+
+	if (is_shstk_enabled(current))
+		total_context_size += riscv_zicfiss_sc_size;
+
 	/*
 	 * Preserved a __riscv_ctx_hdr for END signal context header if an
 	 * extension uses __riscv_extra_ext_header
@@ -365,6 +437,11 @@ static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
 #ifdef CONFIG_MMU
 	regs->ra = (unsigned long)VDSO_SYMBOL(
 		current->mm->context.vdso, rt_sigreturn);
+
+	/* if bcfi is enabled x1 (ra) and x5 (t0) must match. not sure if we need this? */
+	if (is_shstk_enabled(current))
+		regs->t0 = regs->ra;
+
 #else
 	/*
 	 * For the nommu case we don't have a VDSO. Instead we push two
@@ -493,6 +570,9 @@ void __init init_rt_signal_env(void)
 {
 	riscv_v_sc_size = sizeof(struct __riscv_ctx_hdr) +
 			  sizeof(struct __sc_riscv_v_state) + riscv_v_vsize;
+
+	riscv_zicfiss_sc_size = sizeof(struct __riscv_ctx_hdr) +
+				sizeof(struct __sc_riscv_cfi_state);
 	/*
 	 * Determine the stack space required for guaranteed signal delivery.
 	 * The signal_minsigstksz will be populated into the AT_MINSIGSTKSZ entry
diff --git a/arch/riscv/kernel/usercfi.c b/arch/riscv/kernel/usercfi.c
index 7937bcef9271..d31d89618763 100644
--- a/arch/riscv/kernel/usercfi.c
+++ b/arch/riscv/kernel/usercfi.c
@@ -52,6 +52,11 @@ void set_active_shstk(struct task_struct *task, unsigned long shstk_addr)
 	task->thread_info.user_cfi_state.user_shdw_stk = shstk_addr;
 }

+unsigned long get_active_shstk(struct task_struct *task)
+{
+	return task->thread_info.user_cfi_state.user_shdw_stk;
+}
+
 void set_shstk_status(struct task_struct *task, bool enable)
 {
 	if (!cpu_supports_shadow_stack())
@@ -170,6 +175,57 @@ static int create_rstor_token(unsigned long ssp, unsigned long *token_addr)
 	return 0;
 }

+/*
+ * Save user shadow stack pointer on shadow stack itself and return pointer to saved location
+ * returns -EFAULT if operation was unsuccessful
+ */
+int save_user_shstk(struct task_struct *tsk, unsigned long *saved_shstk_ptr)
+{
+	unsigned long ss_ptr = 0;
+	unsigned long token_loc = 0;
+	int ret = 0;
+
+	if (saved_shstk_ptr == NULL)
+		return -EINVAL;
+
+	ss_ptr = get_active_shstk(tsk);
+	ret = create_rstor_token(ss_ptr, &token_loc);
+
+	if (!ret) {
+		*saved_shstk_ptr = token_loc;
+		set_active_shstk(tsk, token_loc);
+	}
+
+	return ret;
+}
+
+/*
+ * Restores user shadow stack pointer from token on shadow stack for task `tsk`
+ * returns -EFAULT if operation was unsuccessful
+ */
+int restore_user_shstk(struct task_struct *tsk, unsigned long shstk_ptr)
+{
+	unsigned long token = 0;
+
+	token = amo_user_shstk((unsigned long __user *)shstk_ptr, 0);
+
+	if (token == -1)
+		return -EFAULT;
+
+	/* invalid token, return EINVAL */
+	if ((token - shstk_ptr) != SHSTK_ENTRY_SIZE) {
+		pr_info_ratelimited(
+			"%s[%d]: bad restore token in %s: pc=%p sp=%p, token=%p, shstk_ptr=%p\n",
+			tsk->comm, task_pid_nr(tsk), __func__, (void *)(task_pt_regs(tsk)->epc),
+			(void *)(task_pt_regs(tsk)->sp), (void *)token, (void *)shstk_ptr);
+		return -EINVAL;
+	}
+
+	/* all checks passed, set active shstk and return success */
+	set_active_shstk(tsk, token);
+	return 0;
+}
+
 static unsigned long allocate_shadow_stack(unsigned long addr, unsigned long size,
 					   unsigned long token_offset, bool set_tok)
 {