From patchwork Thu Apr 15 22:14:13 2021
X-Patchwork-Submitter: Yu-cheng Yu
X-Patchwork-Id: 12206167
From: Yu-cheng Yu
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
        linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
        linux-mm@kvack.org, linux-arch@vger.kernel.org,
        linux-api@vger.kernel.org, Arnd Bergmann, Andy Lutomirski,
        Balbir Singh, Borislav Petkov, Cyrill Gorcunov, Dave Hansen,
        Eugene Syromiatnikov, Florian Weimer, "H.J. Lu", Jann Horn,
        Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
        Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
Shankar" , Vedvyas Shanbhogue , Dave Martin , Weijiang Yang , Pengfei Xu , Haitao Huang Cc: Yu-cheng Yu Subject: [PATCH v25 24/30] x86/cet/shstk: Introduce shadow stack token setup/verify routines Date: Thu, 15 Apr 2021 15:14:13 -0700 Message-Id: <20210415221419.31835-25-yu-cheng.yu@intel.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20210415221419.31835-1-yu-cheng.yu@intel.com> References: <20210415221419.31835-1-yu-cheng.yu@intel.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: D0838A0003B9 X-Stat-Signature: 7ernn8ryyphduc7bruftr9qh3c11gy5u X-Rspamd-Server: rspam02 Received-SPF: none (intel.com>: No applicable sender policy available) receiver=imf15; identity=mailfrom; envelope-from=""; helo=mga11.intel.com; client-ip=192.55.52.93 X-HE-DKIM-Result: none/none X-HE-Tag: 1618524940-902402 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: A shadow stack restore token marks a restore point of the shadow stack, and the address in a token must point directly above the token, which is within the same shadow stack. This is distinctively different from other pointers on the shadow stack, since those pointers point to executable code area. The restore token can be used as an extra protection for signal handling. To deliver a signal, create a shadow stack restore token and put the token and the signal restorer address on the shadow stack. In sigreturn, verify the token and restore from it the shadow stack pointer. Introduce token setup and verify routines. Also introduce WRUSS, which is a kernel-mode instruction but writes directly to user shadow stack. It is used to construct user signal stack as described above. Signed-off-by: Yu-cheng Yu Cc: Kees Cook --- v25: - Update inline assembly syntax, use %[]. - Change token address from (unsigned long) to (u64/u32 __user *). - Change -EPERM to -EFAULT. 
 arch/x86/include/asm/cet.h           |   9 ++
 arch/x86/include/asm/special_insns.h |  32 +++++++
 arch/x86/kernel/shstk.c              | 126 +++++++++++++++++++++++++++
 3 files changed, 167 insertions(+)

diff --git a/arch/x86/include/asm/cet.h b/arch/x86/include/asm/cet.h
index 8b83ded577cc..ef6155213b7e 100644
--- a/arch/x86/include/asm/cet.h
+++ b/arch/x86/include/asm/cet.h
@@ -20,6 +20,10 @@ int shstk_setup_thread(struct task_struct *p, unsigned long clone_flags,
                        unsigned long stack_size);
 void shstk_free(struct task_struct *p);
 void shstk_disable(void);
+int shstk_setup_rstor_token(bool ia32, unsigned long rstor,
+                            unsigned long *token_addr, unsigned long *new_ssp);
+int shstk_check_rstor_token(bool ia32, unsigned long token_addr,
+                            unsigned long *new_ssp);
 #else
 static inline int shstk_setup(void) { return 0; }
 static inline int shstk_setup_thread(struct task_struct *p,
@@ -27,6 +31,11 @@ static inline int shstk_setup_thread(struct task_struct *p,
                                      unsigned long stack_size) { return 0; }
 static inline void shstk_free(struct task_struct *p) {}
 static inline void shstk_disable(void) {}
+static inline int shstk_setup_rstor_token(bool ia32, unsigned long rstor,
+                                          unsigned long *token_addr,
+                                          unsigned long *new_ssp) { return 0; }
+static inline int shstk_check_rstor_token(bool ia32, unsigned long token_addr,
+                                          unsigned long *new_ssp) { return 0; }
 #endif
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 1d3cbaef4bb7..5a0488923cae 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -234,6 +234,38 @@ static inline void clwb(volatile void *__p)
                 : [pax] "a" (p));
 }
 
+#ifdef CONFIG_X86_SHADOW_STACK
+#if defined(CONFIG_IA32_EMULATION) || defined(CONFIG_X86_X32)
+static inline int write_user_shstk_32(u32 __user *addr, u32 val)
+{
+        asm_volatile_goto("1: wrussd %[val], (%[addr])\n"
+                          _ASM_EXTABLE(1b, %l[fail])
+                          :: [addr] "r" (addr), [val] "r" (val)
+                          :: fail);
+        return 0;
+fail:
+        return -EFAULT;
+}
+#else
+static inline int write_user_shstk_32(u32 __user *addr, u32 val)
+{
+        WARN_ONCE(1, "%s used but not supported.\n", __func__);
+        return -EFAULT;
+}
+#endif
+
+static inline int write_user_shstk_64(u64 __user *addr, u64 val)
+{
+        asm_volatile_goto("1: wrussq %[val], (%[addr])\n"
+                          _ASM_EXTABLE(1b, %l[fail])
+                          :: [addr] "r" (addr), [val] "r" (val)
+                          :: fail);
+        return 0;
+fail:
+        return -EFAULT;
+}
+#endif /* CONFIG_X86_SHADOW_STACK */
+
 #define nop() asm volatile ("nop")
 
 static inline void serialize(void)
diff --git a/arch/x86/kernel/shstk.c b/arch/x86/kernel/shstk.c
index d387df84b7f1..48a0c87414ef 100644
--- a/arch/x86/kernel/shstk.c
+++ b/arch/x86/kernel/shstk.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include
 
 static void start_update_msrs(void)
 {
@@ -176,3 +177,128 @@ void shstk_disable(void)
 
         shstk_free(current);
 }
+
+static unsigned long _get_user_shstk_addr(void)
+{
+        struct fpu *fpu = &current->thread.fpu;
+        unsigned long ssp = 0;
+
+        fpregs_lock();
+
+        if (fpregs_state_valid(fpu, smp_processor_id())) {
+                rdmsrl(MSR_IA32_PL3_SSP, ssp);
+        } else {
+                struct cet_user_state *p;
+
+                p = get_xsave_addr(&fpu->state.xsave, XFEATURE_CET_USER);
+                if (p)
+                        ssp = p->user_ssp;
+        }
+
+        fpregs_unlock();
+        return ssp;
+}
+
+#define TOKEN_MODE_MASK 3UL
+#define TOKEN_MODE_64   1UL
+#define IS_TOKEN_64(token)      (((token) & TOKEN_MODE_MASK) == TOKEN_MODE_64)
+#define IS_TOKEN_32(token)      (((token) & TOKEN_MODE_MASK) == 0)
+
+/*
+ * Create a restore token on the shadow stack.  A token is always 8-byte
+ * and aligned to 8.
+ */
+static int _create_rstor_token(bool ia32, unsigned long ssp,
+                               unsigned long *token_addr)
+{
+        unsigned long addr;
+
+        *token_addr = 0;
+
+        if ((!ia32 && !IS_ALIGNED(ssp, 8)) || !IS_ALIGNED(ssp, 4))
+                return -EINVAL;
+
+        addr = ALIGN_DOWN(ssp, 8) - 8;
+
+        /* Is the token for 64-bit? */
+        if (!ia32)
+                ssp |= TOKEN_MODE_64;
+
+        if (write_user_shstk_64((u64 __user *)addr, (u64)ssp))
+                return -EFAULT;
+
+        *token_addr = addr;
+        return 0;
+}
+
+/*
+ * Create a restore token on the shadow stack, and then push the user-mode
+ * function return address.
+ */
+int shstk_setup_rstor_token(bool ia32, unsigned long ret_addr,
+                            unsigned long *token_addr, unsigned long *new_ssp)
+{
+        struct cet_status *cet = &current->thread.cet;
+        unsigned long ssp = 0;
+        int err = 0;
+
+        if (cet->shstk_size) {
+                if (!ret_addr)
+                        return -EINVAL;
+
+                ssp = _get_user_shstk_addr();
+                err = _create_rstor_token(ia32, ssp, token_addr);
+                if (err)
+                        return err;
+
+                if (ia32) {
+                        *new_ssp = *token_addr - sizeof(u32);
+                        err = write_user_shstk_32((u32 __user *)*new_ssp, (u32)ret_addr);
+                } else {
+                        *new_ssp = *token_addr - sizeof(u64);
+                        err = write_user_shstk_64((u64 __user *)*new_ssp, (u64)ret_addr);
+                }
+        }
+
+        return err;
+}
+
+/*
+ * Verify that token_addr points to a valid token, and then set *new_ssp
+ * according to the token.
+ */
+int shstk_check_rstor_token(bool ia32, unsigned long token_addr, unsigned long *new_ssp)
+{
+        unsigned long token;
+
+        *new_ssp = 0;
+
+        if (!IS_ALIGNED(token_addr, 8))
+                return -EINVAL;
+
+        if (get_user(token, (unsigned long __user *)token_addr))
+                return -EFAULT;
+
+        /* Is the 64-bit mode flag correct? */
+        if (!ia32 && !IS_TOKEN_64(token))
+                return -EINVAL;
+        else if (ia32 && !IS_TOKEN_32(token))
+                return -EINVAL;
+
+        token &= ~TOKEN_MODE_MASK;
+
+        /*
+         * Is the restore address properly aligned?
+         */
+        if ((!ia32 && !IS_ALIGNED(token, 8)) || !IS_ALIGNED(token, 4))
+                return -EINVAL;
+
+        /*
+         * Was the token placed properly?
+         */
+        if (((ALIGN_DOWN(token, 8) - 8) != token_addr) || token >= TASK_SIZE_MAX)
+                return -EINVAL;
+
+        *new_ssp = token;
+        return 0;
+}
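
[Editor's note, not part of the patch: a small user-space sketch that
replays the checks shstk_check_rstor_token() performs at sigreturn on plain
values, with made-up addresses.  The kernel version additionally reads the
token with get_user() and rejects tokens at or above TASK_SIZE_MAX; this
sketch only shows the alignment, mode-flag, and placement rules for a
64-bit token.]

#include <stdint.h>
#include <stdio.h>

#define TOKEN_MODE_MASK 3ULL
#define TOKEN_MODE_64   1ULL
#define ALIGN_DOWN(x, a)        ((x) & ~((uint64_t)(a) - 1))

/* Mirrors the 64-bit path of shstk_check_rstor_token(). */
static int check_rstor_token(uint64_t token_addr, uint64_t token, uint64_t *new_ssp)
{
        *new_ssp = 0;

        if (token_addr & 7)                             /* token slot must be 8-byte aligned */
                return -1;
        if ((token & TOKEN_MODE_MASK) != TOKEN_MODE_64) /* 64-bit flag must be set */
                return -1;

        token &= ~TOKEN_MODE_MASK;                      /* recover the recorded SSP */

        if (token & 7)                                  /* recorded SSP must be aligned too */
                return -1;
        if (ALIGN_DOWN(token, 8) - 8 != token_addr)     /* token must sit right below it */
                return -1;

        *new_ssp = token;
        return 0;
}

int main(void)
{
        uint64_t new_ssp;
        /* Token at 0x7f1234567ff8 recording SSP 0x7f1234568000 (flag bit set). */
        int err = check_rstor_token(0x7f1234567ff8ULL, 0x7f1234568001ULL, &new_ssp);

        /* Prints: err=0 new_ssp=0x7f1234568000 */
        printf("err=%d new_ssp=%#llx\n", err, (unsigned long long)new_ssp);
        return 0;
}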