From patchwork Thu Apr 1 22:10:58 2021
X-Patchwork-Submitter: Yu-cheng Yu
X-Patchwork-Id: 12179701
From: Yu-cheng Yu
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
    linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org,
    Arnd Bergmann, Andy Lutomirski, Balbir Singh, Borislav Petkov,
    Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov, Florian Weimer,
    "H.J. Lu", Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz,
    Nadav Amit, Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
    "Ravi V. Shankar", Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
    Pengfei Xu, Haitao Huang
Cc: Yu-cheng Yu
Subject: [PATCH v24 24/30] x86/cet/shstk: Introduce shadow stack token setup/verify routines
Date: Thu, 1 Apr 2021 15:10:58 -0700
Message-Id: <20210401221104.31584-25-yu-cheng.yu@intel.com>
In-Reply-To: <20210401221104.31584-1-yu-cheng.yu@intel.com>
References: <20210401221104.31584-1-yu-cheng.yu@intel.com>

A shadow stack restore token marks a restore point of the shadow stack.
The address in a token must point directly above the token, within the
same shadow stack.  This is distinctly different from other pointers on
the shadow stack, which point into the executable code area.

The restore token can be used as extra protection for signal handling.
To deliver a signal, create a shadow stack restore token and put the
token and the signal restorer address on the shadow stack.  In sigreturn,
verify the token and restore the shadow stack pointer from it.

Introduce token setup and verify routines.  Also introduce WRUSS, a
kernel-mode instruction that writes directly to the user shadow stack.
It is used to construct the user signal stack as described above.

Signed-off-by: Yu-cheng Yu
Cc: Kees Cook
---
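Not part of the patch: below is a minimal user-space C sketch of the token
layout described above, assuming a made-up shadow stack pointer value.  It
mirrors the arithmetic of _create_rstor_token() in the diff (the token
occupies the 8-byte slot just below the 8-byte-aligned SSP, bit 0 marks a
64-bit token, and the restorer address goes in the slot below the token);
align_down() and the example addresses are illustrative, not kernel APIs.

/* Illustration only -- not kernel code. */
#include <stdint.h>
#include <stdio.h>

#define TOKEN_MODE_64	1UL

static uint64_t align_down(uint64_t x, uint64_t a)
{
	return x & ~(a - 1);
}

int main(void)
{
	uint64_t ssp = 0x7ffd1000;			/* hypothetical shadow stack pointer */
	uint64_t token_addr = align_down(ssp, 8) - 8;	/* 8-byte slot just below ssp */
	uint64_t token = ssp | TOKEN_MODE_64;		/* 64-bit token: bit 0 set */
	uint64_t new_ssp = token_addr - sizeof(uint64_t); /* slot for the restorer address */

	/* The token value (mode bits masked off) points directly above the token itself. */
	printf("token at    %#llx, value %#llx\n",
	       (unsigned long long)token_addr, (unsigned long long)token);
	printf("restorer at %#llx\n", (unsigned long long)new_ssp);
	return 0;
}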
 arch/x86/include/asm/cet.h           |   9 ++
 arch/x86/include/asm/special_insns.h |  32 +++++++
 arch/x86/kernel/shstk.c              | 126 +++++++++++++++++++++++++++
 3 files changed, 167 insertions(+)

diff --git a/arch/x86/include/asm/cet.h b/arch/x86/include/asm/cet.h
index 8b83ded577cc..ef6155213b7e 100644
--- a/arch/x86/include/asm/cet.h
+++ b/arch/x86/include/asm/cet.h
@@ -20,6 +20,10 @@ int shstk_setup_thread(struct task_struct *p, unsigned long clone_flags,
 		       unsigned long stack_size);
 void shstk_free(struct task_struct *p);
 void shstk_disable(void);
+int shstk_setup_rstor_token(bool ia32, unsigned long rstor,
+			    unsigned long *token_addr, unsigned long *new_ssp);
+int shstk_check_rstor_token(bool ia32, unsigned long token_addr,
+			    unsigned long *new_ssp);
 #else
 static inline int shstk_setup(void) { return 0; }
 static inline int shstk_setup_thread(struct task_struct *p,
@@ -27,6 +31,11 @@ static inline int shstk_setup_thread(struct task_struct *p,
 				     unsigned long stack_size) { return 0; }
 static inline void shstk_free(struct task_struct *p) {}
 static inline void shstk_disable(void) {}
+static inline int shstk_setup_rstor_token(bool ia32, unsigned long rstor,
+					  unsigned long *token_addr,
+					  unsigned long *new_ssp) { return 0; }
+static inline int shstk_check_rstor_token(bool ia32, unsigned long token_addr,
+					  unsigned long *new_ssp) { return 0; }
 #endif
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 1d3cbaef4bb7..c41c371f6c7d 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -234,6 +234,38 @@ static inline void clwb(volatile void *__p)
 		: [pax] "a" (p));
 }
 
+#ifdef CONFIG_X86_SHADOW_STACK
+#if defined(CONFIG_IA32_EMULATION) || defined(CONFIG_X86_X32)
+static inline int write_user_shstk_32(unsigned long addr, unsigned int val)
+{
+	asm_volatile_goto("1: wrussd %1, (%0)\n"
+			  _ASM_EXTABLE(1b, %l[fail])
+			  :: "r" (addr), "r" (val)
+			  :: fail);
+	return 0;
+fail:
+	return -EPERM;
+}
+#else
+static inline int write_user_shstk_32(unsigned long addr, unsigned int val)
+{
+	WARN_ONCE(1, "%s used but not supported.\n", __func__);
+	return -EFAULT;
+}
+#endif
+
+static inline int write_user_shstk_64(unsigned long addr, unsigned long val)
+{
+	asm_volatile_goto("1: wrussq %1, (%0)\n"
+			  _ASM_EXTABLE(1b, %l[fail])
+			  :: "r" (addr), "r" (val)
+			  :: fail);
+	return 0;
+fail:
+	return -EPERM;
+}
+#endif /* CONFIG_X86_SHADOW_STACK */
+
 #define nop() asm volatile ("nop")
 
 static inline void serialize(void)
diff --git a/arch/x86/kernel/shstk.c b/arch/x86/kernel/shstk.c
index 9c80785535b9..6fa98b228ee3 100644
--- a/arch/x86/kernel/shstk.c
+++ b/arch/x86/kernel/shstk.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include
 
 static void start_update_msrs(void)
 {
@@ -181,3 +182,128 @@ void shstk_disable(void)
 
 	shstk_free(current);
 }
+
+static unsigned long _get_user_shstk_addr(void)
+{
+	struct fpu *fpu = &current->thread.fpu;
+	unsigned long ssp = 0;
+
+	fpregs_lock();
+
+	if (fpregs_state_valid(fpu, smp_processor_id())) {
+		rdmsrl(MSR_IA32_PL3_SSP, ssp);
+	} else {
+		struct cet_user_state *p;
+
+		p = get_xsave_addr(&fpu->state.xsave, XFEATURE_CET_USER);
+		if (p)
+			ssp = p->user_ssp;
+	}
+
+	fpregs_unlock();
+	return ssp;
+}
+
+#define TOKEN_MODE_MASK	3UL
+#define TOKEN_MODE_64	1UL
+#define IS_TOKEN_64(token) (((token) & TOKEN_MODE_MASK) == TOKEN_MODE_64)
+#define IS_TOKEN_32(token) (((token) & TOKEN_MODE_MASK) == 0)
+
+/*
+ * Create a restore token on the shadow stack.  A token is always 8 bytes
+ * and aligned to 8.
+ */
+static int _create_rstor_token(bool ia32, unsigned long ssp,
+			       unsigned long *token_addr)
+{
+	unsigned long addr;
+
+	*token_addr = 0;
+
+	if ((!ia32 && !IS_ALIGNED(ssp, 8)) || !IS_ALIGNED(ssp, 4))
+		return -EINVAL;
+
+	addr = ALIGN_DOWN(ssp, 8) - 8;
+
+	/* Is the token for 64-bit? */
+	if (!ia32)
+		ssp |= TOKEN_MODE_64;
+
+	if (write_user_shstk_64(addr, ssp))
+		return -EFAULT;
+
+	*token_addr = addr;
+	return 0;
+}
+
+/*
+ * Create a restore token on shadow stack, and then push the user-mode
+ * function return address.
+ */
+int shstk_setup_rstor_token(bool ia32, unsigned long ret_addr,
+			    unsigned long *token_addr, unsigned long *new_ssp)
+{
+	struct cet_status *cet = &current->thread.cet;
+	unsigned long ssp = 0;
+	int err = 0;
+
+	if (cet->shstk_size) {
+		if (!ret_addr)
+			return -EINVAL;
+
+		ssp = _get_user_shstk_addr();
+		err = _create_rstor_token(ia32, ssp, token_addr);
+		if (err)
+			return err;
+
+		if (ia32) {
+			*new_ssp = *token_addr - sizeof(u32);
+			err = write_user_shstk_32(*new_ssp, (unsigned int)ret_addr);
+		} else {
+			*new_ssp = *token_addr - sizeof(u64);
+			err = write_user_shstk_64(*new_ssp, ret_addr);
+		}
+	}
+
+	return err;
+}
+
+/*
+ * Verify token_addr points to a valid token, and then set *new_ssp
+ * according to the token.
+ */
+int shstk_check_rstor_token(bool ia32, unsigned long token_addr, unsigned long *new_ssp)
+{
+	unsigned long token;
+
+	*new_ssp = 0;
+
+	if (!IS_ALIGNED(token_addr, 8))
+		return -EINVAL;
+
+	if (get_user(token, (unsigned long __user *)token_addr))
+		return -EFAULT;
+
+	/* Is 64-bit mode flag correct? */
+	if (!ia32 && !IS_TOKEN_64(token))
+		return -EINVAL;
+	else if (ia32 && !IS_TOKEN_32(token))
+		return -EINVAL;
+
+	token &= ~TOKEN_MODE_MASK;
+
+	/*
+	 * Restore address properly aligned?
+	 */
+	if ((!ia32 && !IS_ALIGNED(token, 8)) || !IS_ALIGNED(token, 4))
+		return -EINVAL;
+
+	/*
+	 * Token was placed properly?
+	 */
+	if (((ALIGN_DOWN(token, 8) - 8) != token_addr) || token >= TASK_SIZE_MAX)
+		return -EINVAL;
+
+	*new_ssp = token;
+	return 0;
+}
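
For readers following the sigreturn path, here is a small user-space C
sketch (not part of the patch) that mirrors the checks
shstk_check_rstor_token() applies to a token read back from the shadow
stack.  check_rstor_token(), is_aligned(), EXAMPLE_TASK_SIZE_MAX and the
sample addresses are invented for illustration; they stand in for the
kernel's IS_ALIGNED(), ALIGN_DOWN() and TASK_SIZE_MAX.

/* Illustration only -- a user-space mirror of shstk_check_rstor_token(). */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TOKEN_MODE_MASK		3UL
#define TOKEN_MODE_64		1UL
#define EXAMPLE_TASK_SIZE_MAX	0x00007fffffffffffUL	/* placeholder limit */

static bool is_aligned(uint64_t x, uint64_t a)
{
	return (x & (a - 1)) == 0;
}

/* Return 0 and set *new_ssp if the token at token_addr looks valid. */
static int check_rstor_token(bool ia32, uint64_t token_addr, uint64_t token,
			     uint64_t *new_ssp)
{
	*new_ssp = 0;

	/* The token slot itself must be 8-byte aligned. */
	if (!is_aligned(token_addr, 8))
		return -1;

	/* The mode bits must match the task's mode. */
	if (!ia32 && (token & TOKEN_MODE_MASK) != TOKEN_MODE_64)
		return -1;
	if (ia32 && (token & TOKEN_MODE_MASK) != 0)
		return -1;

	/* Strip the mode bits to recover the shadow stack pointer to restore. */
	token &= ~TOKEN_MODE_MASK;

	/* The restore address must be properly aligned and below the task limit. */
	if ((!ia32 && !is_aligned(token, 8)) || !is_aligned(token, 4))
		return -1;

	/* The token must sit in the 8-byte slot directly below the restore address. */
	if (((token & ~7UL) - 8) != token_addr || token >= EXAMPLE_TASK_SIZE_MAX)
		return -1;

	*new_ssp = token;
	return 0;
}

int main(void)
{
	uint64_t token_addr = 0x7ffd0ff8;		/* made-up shadow stack slot */
	uint64_t token = 0x7ffd1000 | TOKEN_MODE_64;	/* token created just below 0x7ffd1000 */
	uint64_t new_ssp;
	int ret = check_rstor_token(false, token_addr, token, &new_ssp);

	printf("check: %d, new_ssp: %#llx\n", ret, (unsigned long long)new_ssp);
	return 0;
}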