From patchwork Mon Sep 12 23:39:45 2022
X-Patchwork-Submitter: Pawan Gupta
X-Patchwork-Id: 12974211
Date: Mon, 12 Sep 2022 16:39:45 -0700
From: Pawan Gupta
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    x86@kernel.org, "H. Peter Anvin", "Rafael J. Wysocki", Pavel Machek,
    Andrew Cooper, degoede@redhat.com
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
    Daniel Sneddon, antonio.gomez.iglesias@linux.intel.com
Subject: [PATCH 1/3] x86/tsx: Add feature bit for TSX control MSR support
Message-ID: <8592af5e3b95b197231445beb8c3123948ced15a.1663025154.git.pawan.kumar.gupta@linux.intel.com>
X-Mailing-List: linux-pm@vger.kernel.org

Support for the TSX control MSR is enumerated in MSR_IA32_ARCH_CAPABILITIES.
This is different from how other CPU features are enumerated, i.e. via
CPUID. Enumerating support for TSX control currently incurs the overhead of
reading the MSR every time, which can be avoided.

Set a new feature bit, X86_FEATURE_MSR_TSX_CTRL, when MSR_IA32_TSX_CTRL is
present, and make tsx_ctrl_is_supported() use the feature bit instead of
reading the MSR. The feature bit is also useful for any code that wants to
check for MSR_IA32_TSX_CTRL support without calling tsx_ctrl_is_supported().
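As a rough illustration (not part of this patch), a caller that today would
re-read MSR_IA32_ARCH_CAPABILITIES to decide whether MSR_IA32_TSX_CTRL exists
could instead test the cached synthetic bit; the wrmsrl() shown here is purely
an example of such a gated access:

	/*
	 * Illustrative kernel-context fragment: gate MSR_IA32_TSX_CTRL
	 * accesses on the cached feature bit rather than an MSR read.
	 */
	if (cpu_feature_enabled(X86_FEATURE_MSR_TSX_CTRL))
		wrmsrl(MSR_IA32_TSX_CTRL,
		       TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR);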
Suggested-by: Andrew Cooper
Signed-off-by: Pawan Gupta
---
 arch/x86/include/asm/cpufeatures.h |  1 +
 arch/x86/kernel/cpu/tsx.c          | 30 +++++++++++++++---------------
 2 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index ef4775c6db01..dd173733e40d 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -304,6 +304,7 @@
 #define X86_FEATURE_UNRET		(11*32+15) /* "" AMD BTB untrain return */
 #define X86_FEATURE_USE_IBPB_FW		(11*32+16) /* "" Use IBPB during runtime firmware calls */
 #define X86_FEATURE_RSB_VMEXIT_LITE	(11*32+17) /* "" Fill RSB on VM exit when EIBRS is enabled */
+#define X86_FEATURE_MSR_TSX_CTRL	(11*32+18) /* "" MSR IA32_TSX_CTRL */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
 #define X86_FEATURE_AVX_VNNI		(12*32+ 4) /* AVX VNNI instructions */
diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
index ec7bbac3a9f2..9fe488dbed15 100644
--- a/arch/x86/kernel/cpu/tsx.c
+++ b/arch/x86/kernel/cpu/tsx.c
@@ -60,20 +60,7 @@ static void tsx_enable(void)
 
 static bool tsx_ctrl_is_supported(void)
 {
-	u64 ia32_cap = x86_read_arch_cap_msr();
-
-	/*
-	 * TSX is controlled via MSR_IA32_TSX_CTRL. However, support for this
-	 * MSR is enumerated by ARCH_CAP_TSX_MSR bit in MSR_IA32_ARCH_CAPABILITIES.
-	 *
-	 * TSX control (aka MSR_IA32_TSX_CTRL) is only available after a
-	 * microcode update on CPUs that have their MSR_IA32_ARCH_CAPABILITIES
-	 * bit MDS_NO=1. CPUs with MDS_NO=0 are not planned to get
-	 * MSR_IA32_TSX_CTRL support even after a microcode update. Thus,
-	 * tsx= cmdline requests will do nothing on CPUs without
-	 * MSR_IA32_TSX_CTRL support.
-	 */
-	return !!(ia32_cap & ARCH_CAP_TSX_CTRL_MSR);
+	return cpu_feature_enabled(X86_FEATURE_MSR_TSX_CTRL);
 }
 
 static enum tsx_ctrl_states x86_get_tsx_auto_mode(void)
@@ -191,7 +178,20 @@ void __init tsx_init(void)
 		return;
 	}
 
-	if (!tsx_ctrl_is_supported()) {
+	/*
+	 * TSX is controlled via MSR_IA32_TSX_CTRL. However, support for this
+	 * MSR is enumerated by ARCH_CAP_TSX_MSR bit in MSR_IA32_ARCH_CAPABILITIES.
+	 *
+	 * TSX control (aka MSR_IA32_TSX_CTRL) is only available after a
+	 * microcode update on CPUs that have their MSR_IA32_ARCH_CAPABILITIES
+	 * bit MDS_NO=1. CPUs with MDS_NO=0 are not planned to get
+	 * MSR_IA32_TSX_CTRL support even after a microcode update. Thus,
+	 * tsx= cmdline requests will do nothing on CPUs without
+	 * MSR_IA32_TSX_CTRL support.
+	 */
+	if (x86_read_arch_cap_msr() & ARCH_CAP_TSX_CTRL_MSR) {
+		setup_force_cpu_cap(X86_FEATURE_MSR_TSX_CTRL);
+	} else {
 		tsx_ctrl_state = TSX_CTRL_NOT_SUPPORTED;
 		return;
 	}

From patchwork Mon Sep 12 23:40:47 2022
X-Patchwork-Submitter: Pawan Gupta
X-Patchwork-Id: 12974212
Date: Mon, 12 Sep 2022 16:40:47 -0700
From: Pawan Gupta
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    x86@kernel.org, "H. Peter Anvin", "Rafael J. Wysocki", Pavel Machek,
    Andrew Cooper, degoede@redhat.com
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
    Daniel Sneddon, antonio.gomez.iglesias@linux.intel.com
Subject: [PATCH 2/3] x86/cpu/amd: Add feature bit for MSR_AMD64_LS_CFG enumeration
Message-ID: <034c7f5ac243ee7b40ba1a8cc3f9b10b1e380674.1663025154.git.pawan.kumar.gupta@linux.intel.com>
X-Mailing-List: linux-pm@vger.kernel.org

Currently there is no easy way to enumerate MSR_AMD64_LS_CFG. As this MSR is
supported on AMD CPU families 10h to 18h, set a new feature bit,
X86_FEATURE_MSR_LS_CFG, on these CPU families. The new bit can then be used
to detect support for the MSR.
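As an illustrative, hypothetical consumer (not part of this patch), code that
needs MSR_AMD64_LS_CFG can then test the synthetic bit instead of duplicating
the family-range check; ls_cfg here is just a local example variable:

	u64 ls_cfg;

	/* Only touch MSR_AMD64_LS_CFG where families 10h-18h set the bit. */
	if (boot_cpu_has(X86_FEATURE_MSR_LS_CFG))
		rdmsrl(MSR_AMD64_LS_CFG, ls_cfg);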
Suggested-by: Andrew Cooper
Signed-off-by: Pawan Gupta
---
 arch/x86/include/asm/cpufeatures.h | 1 +
 arch/x86/kernel/cpu/amd.c          | 3 +++
 2 files changed, 4 insertions(+)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index dd173733e40d..90bdb1d98531 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -305,6 +305,7 @@
 #define X86_FEATURE_USE_IBPB_FW		(11*32+16) /* "" Use IBPB during runtime firmware calls */
 #define X86_FEATURE_RSB_VMEXIT_LITE	(11*32+17) /* "" Fill RSB on VM exit when EIBRS is enabled */
 #define X86_FEATURE_MSR_TSX_CTRL	(11*32+18) /* "" MSR IA32_TSX_CTRL */
+#define X86_FEATURE_MSR_LS_CFG		(11*32+19) /* "" MSR AMD64_LS_CFG */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
 #define X86_FEATURE_AVX_VNNI		(12*32+ 4) /* AVX VNNI instructions */
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 48276c0e479d..88f59d630826 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -521,6 +521,9 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
 		__max_die_per_package = nodes_per_socket = ((value >> 3) & 7) + 1;
 	}
 
+	if (c->x86 >= 0x10 && c->x86 <= 0x18)
+		setup_force_cpu_cap(X86_FEATURE_MSR_LS_CFG);
+
 	if (!boot_cpu_has(X86_FEATURE_AMD_SSBD) &&
 	    !boot_cpu_has(X86_FEATURE_VIRT_SSBD) &&
 	    c->x86 >= 0x15 && c->x86 <= 0x17) {

From patchwork Mon Sep 12 23:41:48 2022
X-Patchwork-Submitter: Pawan Gupta
X-Patchwork-Id: 12974213
Date: Mon, 12 Sep 2022 16:41:48 -0700
From: Pawan Gupta
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    x86@kernel.org, "H. Peter Anvin", "Rafael J. Wysocki", Pavel Machek,
    Andrew Cooper, degoede@redhat.com
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
    Daniel Sneddon, antonio.gomez.iglesias@linux.intel.com
Subject: [PATCH 3/3] x86/pm: Add enumeration check before spec MSRs save/restore setup
Message-ID: <206cec042e17f15432d523d12ce6f5ae9267dc77.1663025154.git.pawan.kumar.gupta@linux.intel.com>
X-Mailing-List: linux-pm@vger.kernel.org

On an Intel Atom N2600 (and presumably other Cedar Trail models)
MSR_IA32_TSX_CTRL can be read, causing saved_msr.valid to be set for it by
msr_build_context(). This causes restore_processor_state() to try to restore
it, but writing this MSR is not allowed on the Intel Atom N2600, leading to:

[ 99.955141] unchecked MSR access error: WRMSR to 0x122 (tried to write 0x0000000000000002) at rIP: 0xffffffff8b07a574 (native_write_msr+0x4/0x20)
[ 99.955176] Call Trace:
[ 99.955186]
[ 99.955195]  restore_processor_state+0x275/0x2c0
[ 99.955246]  x86_acpi_suspend_lowlevel+0x10e/0x140
[ 99.955273]  acpi_suspend_enter+0xd3/0x100
[ 99.955297]  suspend_devices_and_enter+0x7e2/0x830
[ 99.955341]  pm_suspend.cold+0x2d2/0x35e
[ 99.955368]  state_store+0x68/0xd0
[ 99.955402]  kernfs_fop_write_iter+0x15e/0x210
[ 99.955442]  vfs_write+0x225/0x4b0
[ 99.955523]  ksys_write+0x59/0xd0
[ 99.955557]  do_syscall_64+0x58/0x80
[ 99.955579]  ? do_syscall_64+0x67/0x80
[ 99.955600]  ? up_read+0x17/0x20
[ 99.955631]  ? lock_is_held_type+0xe3/0x140
[ 99.955670]  ? asm_exc_page_fault+0x22/0x30
[ 99.955688]  ? lockdep_hardirqs_on+0x7d/0x100
[ 99.955710]  entry_SYSCALL_64_after_hwframe+0x63/0xcd
[ 99.955723] RIP: 0033:0x7f7d0fb018f7
[ 99.955741] Code: 0f 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 51 c3 48 83 ec 28 48 89 54 24 18 48 89 74 24
[ 99.955753] RSP: 002b:00007ffd03292ee8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[ 99.955771] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007f7d0fb018f7
[ 99.955781] RDX: 0000000000000004 RSI: 00007ffd03292fd0 RDI: 0000000000000004
[ 99.955790] RBP: 00007ffd03292fd0 R08: 000000000000c0fe R09: 0000000000000000
[ 99.955799] R10: 00007f7d0fb85fb0 R11: 0000000000000246 R12: 0000000000000004
[ 99.955808] R13: 000055df564173e0 R14: 0000000000000004 R15: 00007f7d0fbf49e0
[ 99.955910]

Add speculation control MSRs to the MSR save/restore list only if their
corresponding feature bit is set. This prevents the MSR save/restore code
from touching MSRs that the CPU does not support.
Reported-by: Hans de Goede
Signed-off-by: Pawan Gupta
---
 arch/x86/power/cpu.c | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
index bb176c72891c..b3492dd55e61 100644
--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -511,17 +511,26 @@ static int pm_cpu_check(const struct x86_cpu_id *c)
 	return ret;
 }
 
+struct msr_enumeration {
+	u32 msr_no;
+	u32 feature;
+};
+
 static void pm_save_spec_msr(void)
 {
-	u32 spec_msr_id[] = {
-		MSR_IA32_SPEC_CTRL,
-		MSR_IA32_TSX_CTRL,
-		MSR_TSX_FORCE_ABORT,
-		MSR_IA32_MCU_OPT_CTRL,
-		MSR_AMD64_LS_CFG,
+	struct msr_enumeration msr_enum[] = {
+		{MSR_IA32_SPEC_CTRL,	X86_FEATURE_MSR_SPEC_CTRL},
+		{MSR_IA32_TSX_CTRL,	X86_FEATURE_MSR_TSX_CTRL},
+		{MSR_TSX_FORCE_ABORT,	X86_FEATURE_TSX_FORCE_ABORT},
+		{MSR_IA32_MCU_OPT_CTRL,	X86_FEATURE_SRBDS_CTRL},
+		{MSR_AMD64_LS_CFG,	X86_FEATURE_MSR_LS_CFG},
 	};
+	int i;
 
-	msr_build_context(spec_msr_id, ARRAY_SIZE(spec_msr_id));
+	for (i = 0; i < ARRAY_SIZE(msr_enum); i++) {
+		if (boot_cpu_has(msr_enum[i].feature))
+			msr_build_context(&msr_enum[i].msr_no, 1);
+	}
 }
 
 static int pm_check_save_msr(void)
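A side note on the design: the table-driven form above makes future additions
a one-line change. For example (hypothetical, not part of this series), a new
speculation-control MSR would be gated the same way by appending a
{msr, feature} pair, where MSR_NEW_SPEC_MSR and X86_FEATURE_NEW_SPEC_MSR are
placeholder names:

+		{MSR_AMD64_LS_CFG,	X86_FEATURE_MSR_LS_CFG},
+		{MSR_NEW_SPEC_MSR,	X86_FEATURE_NEW_SPEC_MSR},	/* hypothetical */

pm_save_spec_msr() would then register the new MSR for save/restore only on
CPUs that actually enumerate it.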