From patchwork Mon Jul 31 06:32:57 2023
X-Patchwork-Submitter: "Li, Xin3"
X-Patchwork-Id: 13333761
From: Xin Li
To: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-edac@vger.kernel.org, linux-hyperv@vger.kernel.org,
 kvm@vger.kernel.org, xen-devel@lists.xenproject.org
Cc: Jonathan Corbet, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
 Dave Hansen, x86@kernel.org, "H. Peter Anvin", Andy Lutomirski,
 Oleg Nesterov, Tony Luck, "K. Y. Srinivasan", Haiyang Zhang, Wei Liu,
 Dexuan Cui, Paolo Bonzini, Wanpeng Li, Vitaly Kuznetsov,
 Sean Christopherson, Peter Zijlstra, Juergen Gross, Stefano Stabellini,
 Oleksandr Tyshchenko, Josh Poimboeuf, "Paul E. McKenney",
 Catalin Marinas, Randy Dunlap, Steven Rostedt, Kim Phillips, Xin Li,
 Hyeonggon Yoo <42.hyeyoo@gmail.com>, "Liam R. Howlett",
 Sebastian Reichel, "Kirill A. Shutemov", Suren Baghdasaryan,
 Pawan Gupta, Jiaxi Chen, Babu Moger, Jim Mattson, Sandipan Das,
 Lai Jiangshan, Hans de Goede, Reinette Chatre, Daniel Sneddon,
 Breno Leitao, Nikunj A Dadhania, Brian Gerst, Sami Tolvanen,
 Alexander Potapenko, Andrew Morton, Arnd Bergmann, "Eric W. Biederman",
 Kees Cook, Masami Hiramatsu, Masahiro Yamada, Ze Gao, Fei Li, Conghui,
 Ashok Raj, "Jason A.
Donenfeld" , Mark Rutland , Jacob Pan , Jiapeng Chong , Jane Malalane , David Woodhouse , Boris Ostrovsky , Arnaldo Carvalho de Melo , Yantengsi , Christophe Leroy , Sathvika Vasireddy Subject: [PATCH v9 16/36] x86/fred: Allow single-step trap and NMI when starting a new task Date: Sun, 30 Jul 2023 23:32:57 -0700 Message-Id: <20230731063317.3720-17-xin3.li@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230731063317.3720-1-xin3.li@intel.com> References: <20230731063317.3720-1-xin3.li@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-edac@vger.kernel.org From: "H. Peter Anvin (Intel)" Entering a new task is logically speaking a return from a system call (exec, fork, clone, etc.). As such, if ptrace enables single stepping a single step exception should be allowed to trigger immediately upon entering user space. This is not optional. NMI should *never* be disabled in user space. As such, this is an optional, opportunistic way to catch errors. Allow single-step trap and NMI when starting a new task, thus once the new task enters user space, single-step trap and NMI are both enabled immediately. Signed-off-by: H. Peter Anvin (Intel) Tested-by: Shan Kang Signed-off-by: Xin Li --- Changes since v8: * Use high-order 48 bits above the lowest 16 bit SS only when FRED is enabled (Thomas Gleixner). --- arch/x86/kernel/process_64.c | 23 +++++++++++++++++------ 1 file changed, 17 insertions(+), 6 deletions(-) diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c index 6d5fed29f552..0b47871a6141 100644 --- a/arch/x86/kernel/process_64.c +++ b/arch/x86/kernel/process_64.c @@ -56,6 +56,7 @@ #include #include #include +#include #ifdef CONFIG_IA32_EMULATION /* Not included via unistd.h */ #include @@ -507,8 +508,18 @@ void x86_gsbase_write_task(struct task_struct *task, unsigned long gsbase) static void start_thread_common(struct pt_regs *regs, unsigned long new_ip, unsigned long new_sp, - unsigned int _cs, unsigned int _ss, unsigned int _ds) + u16 _cs, u16 _ss, u16 _ds) { + /* + * Paranoia: High-order 48 bits above the lowest 16 bit SS are + * discarded by the legacy IRET instruction on all Intel, AMD, + * and Cyrix/Centaur/VIA CPUs, thus can be set unconditionally, + * even when FRED is not enabled. But we choose the safer side + * to use these bits only when FRED is enabled. + */ + const unsigned long ssx_flags = cpu_feature_enabled(X86_FEATURE_FRED) ? + (FRED_SSX_SOFTWARE_INITIATED | FRED_SSX_NMI) : 0; + WARN_ON_ONCE(regs != current_pt_regs()); if (static_cpu_has(X86_BUG_NULL_SEG)) { @@ -522,11 +533,11 @@ start_thread_common(struct pt_regs *regs, unsigned long new_ip, loadsegment(ds, _ds); load_gs_index(0); - regs->ip = new_ip; - regs->sp = new_sp; - regs->cs = _cs; - regs->ss = _ss; - regs->flags = X86_EFLAGS_IF; + regs->ip = new_ip; + regs->sp = new_sp; + regs->csx = _cs; + regs->ssx = _ss | ssx_flags; + regs->flags = X86_EFLAGS_IF | X86_EFLAGS_FIXED; } void