From patchwork Mon Apr 3 21:07:12 2023
X-Patchwork-Submitter: Tony Luck
X-Patchwork-Id: 13198786
From: Tony Luck
To: Borislav Petkov
Cc: Yazen Ghannam , Smita.KoralahalliChannabasappa@amd.com,
 dave.hansen@linux.intel.com, x86@kernel.org, linux-edac@vger.kernel.org,
 linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v4 1/5] x86/mce: Remove old CMCI storm mitigation code
Date: Mon, 3 Apr 2023 14:07:12 -0700
Message-Id: <20230403210716.347773-2-tony.luck@intel.com>
In-Reply-To: <20230403210716.347773-1-tony.luck@intel.com>
References: <20230403210716.347773-1-tony.luck@intel.com>

When a "storm" of CMCI is detected this code mitigates by disabling CMCI
interrupt signalling from all of the banks owned by the CPU that saw the
storm.

There are problems with this approach:

1) It is very coarse grained. In all likelihood only one of the banks was
   generating the interrupts, but CMCI is disabled for all. This means
   Linux may delay seeing and processing errors logged from other banks.

2) Although CMCI stands for Corrected Machine Check Interrupt, it is also
   used to signal when an uncorrected error is logged. This is a problem
   because these errors should be handled in a timely manner.

Delete all this code in preparation for a finer grained solution.
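For reference, the mitigation being removed amounts to clearing
MCI_CTL2_CMCI_EN on every bank owned by the storming CPU. The sketch below
is a simplified restatement of cmci_toggle_interrupt_mode(), which is
deleted later in this patch; the helper name is illustrative only and
locking is omitted:

/* Turn CMCI signalling off (or back on) for every bank owned by this CPU. */
static void cmci_toggle_all_owned_banks(bool on)
{
	unsigned long *owned = this_cpu_ptr(mce_banks_owned);
	int bank;
	u64 val;

	for_each_set_bit(bank, owned, MAX_NR_BANKS) {
		rdmsrl(MSR_IA32_MCx_CTL2(bank), val);
		if (on)
			val |= MCI_CTL2_CMCI_EN;
		else
			val &= ~MCI_CTL2_CMCI_EN;
		wrmsrl(MSR_IA32_MCx_CTL2(bank), val);
	}
}

One noisy bank therefore silences corrected-error interrupts from every
bank the CPU owns, which is problem 1) above.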
Signed-off-by: Tony Luck Reviewed-by: Yazen Ghannam Tested-by: Yazen Ghannam --- arch/x86/kernel/cpu/mce/internal.h | 6 -- arch/x86/kernel/cpu/mce/core.c | 20 +--- arch/x86/kernel/cpu/mce/intel.c | 145 ----------------------------- 3 files changed, 1 insertion(+), 170 deletions(-) diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h index 91a415553c27..f9331c6229b4 100644 --- a/arch/x86/kernel/cpu/mce/internal.h +++ b/arch/x86/kernel/cpu/mce/internal.h @@ -41,18 +41,12 @@ struct dentry *mce_get_debugfs_dir(void); extern mce_banks_t mce_banks_ce_disabled; #ifdef CONFIG_X86_MCE_INTEL -unsigned long cmci_intel_adjust_timer(unsigned long interval); -bool mce_intel_cmci_poll(void); -void mce_intel_hcpu_update(unsigned long cpu); void cmci_disable_bank(int bank); void intel_init_cmci(void); void intel_init_lmce(void); void intel_clear_lmce(void); bool intel_filter_mce(struct mce *m); #else -# define cmci_intel_adjust_timer mce_adjust_timer_default -static inline bool mce_intel_cmci_poll(void) { return false; } -static inline void mce_intel_hcpu_update(unsigned long cpu) { } static inline void cmci_disable_bank(int bank) { } static inline void intel_init_cmci(void) { } static inline void intel_init_lmce(void) { } diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c index 2eec60f50057..e7936be84204 100644 --- a/arch/x86/kernel/cpu/mce/core.c +++ b/arch/x86/kernel/cpu/mce/core.c @@ -1588,13 +1588,6 @@ static unsigned long check_interval = INITIAL_CHECK_INTERVAL; static DEFINE_PER_CPU(unsigned long, mce_next_interval); /* in jiffies */ static DEFINE_PER_CPU(struct timer_list, mce_timer); -static unsigned long mce_adjust_timer_default(unsigned long interval) -{ - return interval; -} - -static unsigned long (*mce_adjust_timer)(unsigned long interval) = mce_adjust_timer_default; - static void __start_timer(struct timer_list *t, unsigned long interval) { unsigned long when = jiffies + interval; @@ -1617,15 +1610,9 @@ static void mce_timer_fn(struct timer_list *t) iv = __this_cpu_read(mce_next_interval); - if (mce_available(this_cpu_ptr(&cpu_info))) { + if (mce_available(this_cpu_ptr(&cpu_info))) machine_check_poll(0, this_cpu_ptr(&mce_poll_banks)); - if (mce_intel_cmci_poll()) { - iv = mce_adjust_timer(iv); - goto done; - } - } - /* * Alert userspace if needed. If we logged an MCE, reduce the polling * interval, otherwise increase the polling interval. 
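With the storm hooks gone, the hunk below leaves mce_timer_fn() with only
the generic adaptive cadence described in the comment above: poll faster
while errors keep being logged, back off exponentially toward
check_interval when they stop. Distilled into a standalone helper for
illustration (the helper itself is not part of this patch; the HZ/100
lower bound mirrors the existing clamp in core.c):

/* Halve the poll interval after an error was logged, otherwise double it. */
static unsigned long adjust_poll_interval(unsigned long iv, bool logged)
{
	if (logged)
		iv = max(iv / 2, (unsigned long)HZ / 100);
	else
		iv = min(iv * 2, round_jiffies_relative(check_interval * HZ));

	return iv;
}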
@@ -1635,7 +1622,6 @@ static void mce_timer_fn(struct timer_list *t) else iv = min(iv * 2, round_jiffies_relative(check_interval * HZ)); -done: __this_cpu_write(mce_next_interval, iv); __start_timer(t, iv); } @@ -1972,7 +1958,6 @@ static void mce_zhaoxin_feature_init(struct cpuinfo_x86 *c) intel_init_cmci(); intel_init_lmce(); - mce_adjust_timer = cmci_intel_adjust_timer; } static void mce_zhaoxin_feature_clear(struct cpuinfo_x86 *c) @@ -1985,7 +1970,6 @@ static void __mcheck_cpu_init_vendor(struct cpuinfo_x86 *c) switch (c->x86_vendor) { case X86_VENDOR_INTEL: mce_intel_feature_init(c); - mce_adjust_timer = cmci_intel_adjust_timer; break; case X86_VENDOR_AMD: { @@ -2642,8 +2626,6 @@ static void mce_reenable_cpu(void) static int mce_cpu_dead(unsigned int cpu) { - mce_intel_hcpu_update(cpu); - /* intentionally ignoring frozen here */ if (!cpuhp_tasks_frozen) cmci_rediscover(); diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c index 95275a5e57e0..052bf2708391 100644 --- a/arch/x86/kernel/cpu/mce/intel.c +++ b/arch/x86/kernel/cpu/mce/intel.c @@ -41,15 +41,6 @@ */ static DEFINE_PER_CPU(mce_banks_t, mce_banks_owned); -/* - * CMCI storm detection backoff counter - * - * During storm, we reset this counter to INITIAL_CHECK_INTERVAL in case we've - * encountered an error. If not, we decrement it by one. We signal the end of - * the CMCI storm when it reaches 0. - */ -static DEFINE_PER_CPU(int, cmci_backoff_cnt); - /* * cmci_discover_lock protects against parallel discovery attempts * which could race against each other. @@ -57,21 +48,6 @@ static DEFINE_PER_CPU(int, cmci_backoff_cnt); static DEFINE_RAW_SPINLOCK(cmci_discover_lock); #define CMCI_THRESHOLD 1 -#define CMCI_POLL_INTERVAL (30 * HZ) -#define CMCI_STORM_INTERVAL (HZ) -#define CMCI_STORM_THRESHOLD 15 - -static DEFINE_PER_CPU(unsigned long, cmci_time_stamp); -static DEFINE_PER_CPU(unsigned int, cmci_storm_cnt); -static DEFINE_PER_CPU(unsigned int, cmci_storm_state); - -enum { - CMCI_STORM_NONE, - CMCI_STORM_ACTIVE, - CMCI_STORM_SUBSIDED, -}; - -static atomic_t cmci_storm_on_cpus; static int cmci_supported(int *banks) { @@ -127,124 +103,6 @@ static bool lmce_supported(void) return tmp & FEAT_CTL_LMCE_ENABLED; } -bool mce_intel_cmci_poll(void) -{ - if (__this_cpu_read(cmci_storm_state) == CMCI_STORM_NONE) - return false; - - /* - * Reset the counter if we've logged an error in the last poll - * during the storm. 
- */ - if (machine_check_poll(0, this_cpu_ptr(&mce_banks_owned))) - this_cpu_write(cmci_backoff_cnt, INITIAL_CHECK_INTERVAL); - else - this_cpu_dec(cmci_backoff_cnt); - - return true; -} - -void mce_intel_hcpu_update(unsigned long cpu) -{ - if (per_cpu(cmci_storm_state, cpu) == CMCI_STORM_ACTIVE) - atomic_dec(&cmci_storm_on_cpus); - - per_cpu(cmci_storm_state, cpu) = CMCI_STORM_NONE; -} - -static void cmci_toggle_interrupt_mode(bool on) -{ - unsigned long flags, *owned; - int bank; - u64 val; - - raw_spin_lock_irqsave(&cmci_discover_lock, flags); - owned = this_cpu_ptr(mce_banks_owned); - for_each_set_bit(bank, owned, MAX_NR_BANKS) { - rdmsrl(MSR_IA32_MCx_CTL2(bank), val); - - if (on) - val |= MCI_CTL2_CMCI_EN; - else - val &= ~MCI_CTL2_CMCI_EN; - - wrmsrl(MSR_IA32_MCx_CTL2(bank), val); - } - raw_spin_unlock_irqrestore(&cmci_discover_lock, flags); -} - -unsigned long cmci_intel_adjust_timer(unsigned long interval) -{ - if ((this_cpu_read(cmci_backoff_cnt) > 0) && - (__this_cpu_read(cmci_storm_state) == CMCI_STORM_ACTIVE)) { - mce_notify_irq(); - return CMCI_STORM_INTERVAL; - } - - switch (__this_cpu_read(cmci_storm_state)) { - case CMCI_STORM_ACTIVE: - - /* - * We switch back to interrupt mode once the poll timer has - * silenced itself. That means no events recorded and the timer - * interval is back to our poll interval. - */ - __this_cpu_write(cmci_storm_state, CMCI_STORM_SUBSIDED); - if (!atomic_sub_return(1, &cmci_storm_on_cpus)) - pr_notice("CMCI storm subsided: switching to interrupt mode\n"); - - fallthrough; - - case CMCI_STORM_SUBSIDED: - /* - * We wait for all CPUs to go back to SUBSIDED state. When that - * happens we switch back to interrupt mode. - */ - if (!atomic_read(&cmci_storm_on_cpus)) { - __this_cpu_write(cmci_storm_state, CMCI_STORM_NONE); - cmci_toggle_interrupt_mode(true); - cmci_recheck(); - } - return CMCI_POLL_INTERVAL; - default: - - /* We have shiny weather. Let the poll do whatever it thinks. */ - return interval; - } -} - -static bool cmci_storm_detect(void) -{ - unsigned int cnt = __this_cpu_read(cmci_storm_cnt); - unsigned long ts = __this_cpu_read(cmci_time_stamp); - unsigned long now = jiffies; - int r; - - if (__this_cpu_read(cmci_storm_state) != CMCI_STORM_NONE) - return true; - - if (time_before_eq(now, ts + CMCI_STORM_INTERVAL)) { - cnt++; - } else { - cnt = 1; - __this_cpu_write(cmci_time_stamp, now); - } - __this_cpu_write(cmci_storm_cnt, cnt); - - if (cnt <= CMCI_STORM_THRESHOLD) - return false; - - cmci_toggle_interrupt_mode(false); - __this_cpu_write(cmci_storm_state, CMCI_STORM_ACTIVE); - r = atomic_add_return(1, &cmci_storm_on_cpus); - mce_timer_kick(CMCI_STORM_INTERVAL); - this_cpu_write(cmci_backoff_cnt, INITIAL_CHECK_INTERVAL); - - if (r == 1) - pr_notice("CMCI storm detected: switching to poll mode\n"); - return true; -} - /* * The interrupt handler. This is called on every event. * Just call the poller directly to log any events. 
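For contrast with the per-bank scheme introduced later in this series, the
detector deleted above was purely rate based: count CMCIs on this CPU
inside a one second window and declare a storm once the count exceeds
CMCI_STORM_THRESHOLD (15). A condensed restatement of the deleted
cmci_storm_detect(), with the per-CPU variables passed in explicitly for
illustration:

/* Old-style detection: too many interrupts on one CPU within one interval. */
static bool storm_detected(unsigned int *cnt, unsigned long *ts)
{
	unsigned long now = jiffies;

	if (time_before_eq(now, *ts + CMCI_STORM_INTERVAL)) {
		(*cnt)++;
	} else {
		*cnt = 1;
		*ts = now;
	}

	return *cnt > CMCI_STORM_THRESHOLD;
}

The replacement in the following patches keeps a 64-bit history per bank
instead, so a single noisy bank no longer forces the whole CPU out of
interrupt mode.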
@@ -253,9 +111,6 @@ static bool cmci_storm_detect(void)
  */
 static void intel_threshold_interrupt(void)
 {
-	if (cmci_storm_detect())
-		return;
-
 	machine_check_poll(MCP_TIMESTAMP, this_cpu_ptr(&mce_banks_owned));
 }

From patchwork Mon Apr 3 21:07:13 2023
X-Patchwork-Submitter: Tony Luck
X-Patchwork-Id: 13198787
From: Tony Luck
To: Borislav Petkov
Cc: Yazen Ghannam , Smita.KoralahalliChannabasappa@amd.com,
 dave.hansen@linux.intel.com, x86@kernel.org, linux-edac@vger.kernel.org,
 linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v4 2/5] x86/mce: Add per-bank CMCI storm mitigation
Date: Mon, 3 Apr 2023 14:07:13 -0700
Message-Id: <20230403210716.347773-3-tony.luck@intel.com>
In-Reply-To: <20230403210716.347773-1-tony.luck@intel.com>
References: <20230403210716.347773-1-tony.luck@intel.com>

Add a hook into machine_check_poll() to keep track of per-CPU, per-bank
corrected error logs.

Maintain a bitmap history for each bank showing whether the bank logged a
corrected error or not each time it is polled. In normal operation the
interval between polls of these banks determines how far to shift the
history. The 64 bit width corresponds to about one second.

When a storm is observed the rate of interrupts is reduced by setting a
large threshold value for this bank in IA32_MCi_CTL2.
This bank is added to the bitmap of banks for this CPU to poll. The polling rate is increased to once per second. During a storm each bit in the history indicates the status of the bank each time it is polled. Thus the history covers just over a minute. Declare a storm for that bank if the number of corrected interrupts seen in that history is above some threshold (5 in this RFC code for ease of testing, likely move to 15 for compatibility with previous storm detection). A storm on a bank ends if enough consecutive polls of the bank show no corrected errors (currently 30, may also change). That resets the threshold in IA32_MCi_CTL2 back to 1, removes the bank from the bitmap for polling, and changes the polling rate back to the default. If a CPU with banks in storm mode is taken offline, the new CPU that inherits ownership of those banks takes over management of storm(s) in the inherited bank(s). Signed-off-by: Tony Luck Reviewed-by: Yazen Ghannam Tested-by: Yazen Ghannam --- arch/x86/kernel/cpu/mce/internal.h | 4 +- arch/x86/kernel/cpu/mce/core.c | 26 ++++-- arch/x86/kernel/cpu/mce/intel.c | 139 ++++++++++++++++++++++++++++- 3 files changed, 158 insertions(+), 11 deletions(-) diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h index f9331c6229b4..8d3a740a66ff 100644 --- a/arch/x86/kernel/cpu/mce/internal.h +++ b/arch/x86/kernel/cpu/mce/internal.h @@ -40,6 +40,8 @@ struct dentry *mce_get_debugfs_dir(void); extern mce_banks_t mce_banks_ce_disabled; +void track_cmci_storm(int bank, u64 status); + #ifdef CONFIG_X86_MCE_INTEL void cmci_disable_bank(int bank); void intel_init_cmci(void); @@ -54,7 +56,7 @@ static inline void intel_clear_lmce(void) { } static inline bool intel_filter_mce(struct mce *m) { return false; } #endif -void mce_timer_kick(unsigned long interval); +void mce_timer_kick(bool storm); #ifdef CONFIG_ACPI_APEI int apei_write_mce(struct mce *m); diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c index e7936be84204..20347eb65b8b 100644 --- a/arch/x86/kernel/cpu/mce/core.c +++ b/arch/x86/kernel/cpu/mce/core.c @@ -680,6 +680,8 @@ bool machine_check_poll(enum mcp_flags flags, mce_banks_t *b) barrier(); m.status = mce_rdmsrl(mca_msr_reg(i, MCA_STATUS)); + track_cmci_storm(i, m.status); + /* If this entry is not valid, ignore it */ if (!(m.status & MCI_STATUS_VAL)) continue; @@ -1587,6 +1589,7 @@ static unsigned long check_interval = INITIAL_CHECK_INTERVAL; static DEFINE_PER_CPU(unsigned long, mce_next_interval); /* in jiffies */ static DEFINE_PER_CPU(struct timer_list, mce_timer); +static DEFINE_PER_CPU(bool, storm_poll_mode); static void __start_timer(struct timer_list *t, unsigned long interval) { @@ -1622,22 +1625,29 @@ static void mce_timer_fn(struct timer_list *t) else iv = min(iv * 2, round_jiffies_relative(check_interval * HZ)); - __this_cpu_write(mce_next_interval, iv); - __start_timer(t, iv); + if (__this_cpu_read(storm_poll_mode)) { + __start_timer(t, HZ); + } else { + __this_cpu_write(mce_next_interval, iv); + __start_timer(t, iv); + } } /* - * Ensure that the timer is firing in @interval from now. + * When a storm starts on any bank on this CPU, switch to polling + * once per second. When the storm ends, revert to the default + * polling interval. 
*/ -void mce_timer_kick(unsigned long interval) +void mce_timer_kick(bool storm) { struct timer_list *t = this_cpu_ptr(&mce_timer); - unsigned long iv = __this_cpu_read(mce_next_interval); - __start_timer(t, interval); + __this_cpu_write(storm_poll_mode, storm); - if (interval < iv) - __this_cpu_write(mce_next_interval, interval); + if (storm) + __start_timer(t, HZ); + else + __this_cpu_write(mce_next_interval, check_interval * HZ); } /* Must not be called in IRQ context where del_timer_sync() can deadlock */ diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c index 052bf2708391..4106877de028 100644 --- a/arch/x86/kernel/cpu/mce/intel.c +++ b/arch/x86/kernel/cpu/mce/intel.c @@ -47,8 +47,40 @@ static DEFINE_PER_CPU(mce_banks_t, mce_banks_owned); */ static DEFINE_RAW_SPINLOCK(cmci_discover_lock); +/* + * CMCI storm tracking state + * stormy_bank_count: per-cpu count of MC banks in storm state + * bank_history: bitmask tracking of corrected errors seen in each bank + * bank_time_stamp: last time (in jiffies) that each bank was polled + * cmci_threshold: MCi_CTL2 threshold for each bank when there is no storm + */ +static DEFINE_PER_CPU(int, stormy_bank_count); +static DEFINE_PER_CPU(u64 [MAX_NR_BANKS], bank_history); +static DEFINE_PER_CPU(bool [MAX_NR_BANKS], bank_storm); +static DEFINE_PER_CPU(unsigned long [MAX_NR_BANKS], bank_time_stamp); +static int cmci_threshold[MAX_NR_BANKS]; + +/* Linux non-storm CMCI threshold (may be overridden by BIOS */ #define CMCI_THRESHOLD 1 +/* + * High threshold to limit CMCI rate during storms. Max supported is + * 0x7FFF. Use this slightly smaller value so it has a distinctive + * signature when some asks "Why am I not seeing all corrected errors?" + */ +#define CMCI_STORM_THRESHOLD 32749 + +/* + * How many errors within the history buffer mark the start of a storm + */ +#define STORM_BEGIN_THRESHOLD 5 + +/* + * How many polls of machine check bank without an error before declaring + * the storm is over + */ +#define STORM_END_POLL_THRESHOLD 30 + static int cmci_supported(int *banks) { u64 cap; @@ -103,6 +135,93 @@ static bool lmce_supported(void) return tmp & FEAT_CTL_LMCE_ENABLED; } +/* + * Set a new CMCI threshold value. Preserve the state of the + * MCI_CTL2_CMCI_EN bit in case this happens during a + * cmci_rediscover() operation. 
+ */ +static void cmci_set_threshold(int bank, int thresh) +{ + unsigned long flags; + u64 val; + + raw_spin_lock_irqsave(&cmci_discover_lock, flags); + rdmsrl(MSR_IA32_MCx_CTL2(bank), val); + val &= ~MCI_CTL2_CMCI_THRESHOLD_MASK; + wrmsrl(MSR_IA32_MCx_CTL2(bank), val | thresh); + raw_spin_unlock_irqrestore(&cmci_discover_lock, flags); +} + +static void cmci_storm_begin(int bank) +{ + __set_bit(bank, this_cpu_ptr(mce_poll_banks)); + this_cpu_write(bank_storm[bank], true); + + /* + * If this is the first bank on this CPU to enter storm mode + * start polling + */ + if (this_cpu_inc_return(stormy_bank_count) == 1) + mce_timer_kick(true); +} + +static void cmci_storm_end(int bank) +{ + __clear_bit(bank, this_cpu_ptr(mce_poll_banks)); + this_cpu_write(bank_history[bank], 0ull); + this_cpu_write(bank_storm[bank], false); + + /* If no banks left in storm mode, stop polling */ + if (!this_cpu_dec_return(stormy_bank_count)) + mce_timer_kick(false); +} + +void track_cmci_storm(int bank, u64 status) +{ + unsigned long now = jiffies, delta; + unsigned int shift = 1; + u64 history; + + /* + * When a bank is in storm mode it is polled once per second and + * the history mask will record about the last minute of poll results. + * If it is not in storm mode, then the bank is only checked when + * there is a CMCI interrupt. Check how long it has been since + * this bank was last checked, and adjust the amount of "shift" + * to apply to history. + */ + if (!this_cpu_read(bank_storm[bank])) { + delta = now - this_cpu_read(bank_time_stamp[bank]); + shift = (delta + HZ) / HZ; + } + + /* If has been a long time since the last poll, clear history */ + if (shift >= 64) + history = 0; + else + history = this_cpu_read(bank_history[bank]) << shift; + this_cpu_write(bank_time_stamp[bank], now); + + /* History keeps track of corrected errors. VAL=1 && UC=0 */ + if ((status & (MCI_STATUS_VAL | MCI_STATUS_UC)) == MCI_STATUS_VAL) + history |= 1; + this_cpu_write(bank_history[bank], history); + + if (this_cpu_read(bank_storm[bank])) { + if (history & GENMASK_ULL(STORM_END_POLL_THRESHOLD - 1, 0)) + return; + pr_notice("CPU%d BANK%d CMCI storm subsided\n", smp_processor_id(), bank); + cmci_set_threshold(bank, cmci_threshold[bank]); + cmci_storm_end(bank); + } else { + if (hweight64(history) < STORM_BEGIN_THRESHOLD) + return; + pr_notice("CPU%d BANK%d CMCI storm detected\n", smp_processor_id(), bank); + cmci_set_threshold(bank, CMCI_STORM_THRESHOLD); + cmci_storm_begin(bank); + } +} + /* * The interrupt handler. This is called on every event. * Just call the poller directly to log any events. @@ -147,6 +266,9 @@ static void cmci_discover(int banks) continue; } + if ((val & MCI_CTL2_CMCI_THRESHOLD_MASK) == CMCI_STORM_THRESHOLD) + goto storm; + if (!mca_cfg.bios_cmci_threshold) { val &= ~MCI_CTL2_CMCI_THRESHOLD_MASK; val |= CMCI_THRESHOLD; @@ -159,7 +281,7 @@ static void cmci_discover(int banks) bios_zero_thresh = 1; val |= CMCI_THRESHOLD; } - +storm: val |= MCI_CTL2_CMCI_EN; wrmsrl(MSR_IA32_MCx_CTL2(i), val); rdmsrl(MSR_IA32_MCx_CTL2(i), val); @@ -167,7 +289,14 @@ static void cmci_discover(int banks) /* Did the enable bit stick? 
-- the bank supports CMCI */ if (val & MCI_CTL2_CMCI_EN) { set_bit(i, owned); - __clear_bit(i, this_cpu_ptr(mce_poll_banks)); + if ((val & MCI_CTL2_CMCI_THRESHOLD_MASK) == CMCI_STORM_THRESHOLD) { + pr_notice("CPU%d BANK%d CMCI inherited storm\n", smp_processor_id(), i); + this_cpu_write(bank_history[i], ~0ull); + this_cpu_write(bank_time_stamp[i], jiffies); + cmci_storm_begin(i); + } else { + __clear_bit(i, this_cpu_ptr(mce_poll_banks)); + } /* * We are able to set thresholds for some banks that * had a threshold of 0. This means the BIOS has not
@@ -177,6 +306,10 @@ static void cmci_discover(int banks) if (mca_cfg.bios_cmci_threshold && bios_zero_thresh && (val & MCI_CTL2_CMCI_THRESHOLD_MASK)) bios_wrong_thresh = 1; + + /* Save default threshold for each bank */ + if (cmci_threshold[i] == 0) + cmci_threshold[i] = val & MCI_CTL2_CMCI_THRESHOLD_MASK; } else { WARN_ON(!test_bit(i, this_cpu_ptr(mce_poll_banks))); }
@@ -218,6 +351,8 @@ static void __cmci_disable_bank(int bank) val &= ~MCI_CTL2_CMCI_EN; wrmsrl(MSR_IA32_MCx_CTL2(bank), val); __clear_bit(bank, this_cpu_ptr(mce_banks_owned)); + if ((val & MCI_CTL2_CMCI_THRESHOLD_MASK) == CMCI_STORM_THRESHOLD) + cmci_storm_end(bank); } /*

From patchwork Mon Apr 3 21:07:14 2023
X-Patchwork-Submitter: Tony Luck
X-Patchwork-Id: 13198790
From: Tony Luck
To: Borislav Petkov
Cc: Yazen Ghannam , Smita.KoralahalliChannabasappa@amd.com, dave.hansen@linux.intel.com, x86@kernel.org,
linux-edac@vger.kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v4 3/5] x86/mce: Introduce mce_handle_storm() to deal with begin/end of storms Date: Mon, 3 Apr 2023 14:07:14 -0700 Message-Id: <20230403210716.347773-4-tony.luck@intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230403210716.347773-1-tony.luck@intel.com> References: <20230403210716.347773-1-tony.luck@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-edac@vger.kernel.org From: Smita Koralahalli Intel and AMD need to take different actions when a storm begins or ends. Prepare for the storm code moving from intel.c into core.c by adding a function that checks CPU vendor to pick the right action. No functional changes. [Tony: Changed from function pointer to regular function] Signed-off-by: Smita Koralahalli Signed-off-by: Tony Luck Reviewed-by: Yazen Ghannam Tested-by: Yazen Ghannam --- arch/x86/kernel/cpu/mce/internal.h | 3 +++ arch/x86/kernel/cpu/mce/core.c | 9 +++++++++ arch/x86/kernel/cpu/mce/intel.c | 12 ++++++++++-- 3 files changed, 22 insertions(+), 2 deletions(-) diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h index 8d3a740a66ff..68288099b125 100644 --- a/arch/x86/kernel/cpu/mce/internal.h +++ b/arch/x86/kernel/cpu/mce/internal.h @@ -43,12 +43,14 @@ extern mce_banks_t mce_banks_ce_disabled; void track_cmci_storm(int bank, u64 status); #ifdef CONFIG_X86_MCE_INTEL +void mce_intel_handle_storm(int bank, bool on); void cmci_disable_bank(int bank); void intel_init_cmci(void); void intel_init_lmce(void); void intel_clear_lmce(void); bool intel_filter_mce(struct mce *m); #else +static inline void mce_intel_handle_storm(int bank, bool on) { } static inline void cmci_disable_bank(int bank) { } static inline void intel_init_cmci(void) { } static inline void intel_init_lmce(void) { } @@ -57,6 +59,7 @@ static inline bool intel_filter_mce(struct mce *m) { return false; } #endif void mce_timer_kick(bool storm); +void mce_handle_storm(int bank, bool on); #ifdef CONFIG_ACPI_APEI int apei_write_mce(struct mce *m); diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c index 20347eb65b8b..099d8444aca4 100644 --- a/arch/x86/kernel/cpu/mce/core.c +++ b/arch/x86/kernel/cpu/mce/core.c @@ -1975,6 +1975,15 @@ static void mce_zhaoxin_feature_clear(struct cpuinfo_x86 *c) intel_clear_lmce(); } +void mce_handle_storm(int bank, bool on) +{ + switch (boot_cpu_data.x86_vendor) { + case X86_VENDOR_INTEL: + mce_intel_handle_storm(bank, on); + break; + } +} + static void __mcheck_cpu_init_vendor(struct cpuinfo_x86 *c) { switch (c->x86_vendor) { diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c index 4106877de028..a8248514a689 100644 --- a/arch/x86/kernel/cpu/mce/intel.c +++ b/arch/x86/kernel/cpu/mce/intel.c @@ -152,6 +152,14 @@ static void cmci_set_threshold(int bank, int thresh) raw_spin_unlock_irqrestore(&cmci_discover_lock, flags); } +void mce_intel_handle_storm(int bank, bool on) +{ + if (on) + cmci_set_threshold(bank, CMCI_STORM_THRESHOLD); + else + cmci_set_threshold(bank, cmci_threshold[bank]); +} + static void cmci_storm_begin(int bank) { __set_bit(bank, this_cpu_ptr(mce_poll_banks)); @@ -211,13 +219,13 @@ void track_cmci_storm(int bank, u64 status) if (history & GENMASK_ULL(STORM_END_POLL_THRESHOLD - 1, 0)) return; pr_notice("CPU%d BANK%d CMCI storm subsided\n", smp_processor_id(), bank); - cmci_set_threshold(bank, cmci_threshold[bank]); + mce_handle_storm(bank, false); 
cmci_storm_end(bank); } else { if (hweight64(history) < STORM_BEGIN_THRESHOLD) return; pr_notice("CPU%d BANK%d CMCI storm detected\n", smp_processor_id(), bank); - cmci_set_threshold(bank, CMCI_STORM_THRESHOLD); + mce_handle_storm(bank, true); cmci_storm_begin(bank); } }

From patchwork Mon Apr 3 21:07:15 2023
X-Patchwork-Submitter: Tony Luck
X-Patchwork-Id: 13198788
From: Tony Luck
To: Borislav Petkov
Cc: Yazen Ghannam , Smita.KoralahalliChannabasappa@amd.com,
 dave.hansen@linux.intel.com, x86@kernel.org, linux-edac@vger.kernel.org,
 linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v4 4/5] x86/mce: Move storm handling to core.
Date: Mon, 3 Apr 2023 14:07:15 -0700
Message-Id: <20230403210716.347773-5-tony.luck@intel.com>
In-Reply-To: <20230403210716.347773-1-tony.luck@intel.com>
References: <20230403210716.347773-1-tony.luck@intel.com>

From: Smita Koralahalli

AMD's storm handling for threshold interrupts is similar to Intel's CMCI
storm handling. Hence, make the storm handling code common by moving it to
core.c and removing the vendor exclusivity.

Setting different thresholds in the IA32_MCi_CTL2 register to reduce the
rate of interrupts remains Intel-specific, since AMD's storm handling
differs slightly: it handles storms by turning off the interrupts.
No functional changes. [Tony: Same as Smita's original, plus changes rolled in from prior patches] Signed-off-by: Smita Koralahalli Signed-off-by: Tony Luck Reviewed-by: Yazen Ghannam Tested-by: Yazen Ghannam --- arch/x86/kernel/cpu/mce/internal.h | 18 ++++++ arch/x86/kernel/cpu/mce/core.c | 81 ++++++++++++++++++++++++++ arch/x86/kernel/cpu/mce/intel.c | 93 +----------------------------- 3 files changed, 100 insertions(+), 92 deletions(-) diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h index 68288099b125..d052d80cce7a 100644 --- a/arch/x86/kernel/cpu/mce/internal.h +++ b/arch/x86/kernel/cpu/mce/internal.h @@ -60,6 +60,24 @@ static inline bool intel_filter_mce(struct mce *m) { return false; } void mce_timer_kick(bool storm); void mce_handle_storm(int bank, bool on); +void cmci_storm_begin(int bank); +void cmci_storm_end(int bank); + +DECLARE_PER_CPU(int, stormy_bank_count); +DECLARE_PER_CPU(u64 [MAX_NR_BANKS], bank_history); +DECLARE_PER_CPU(bool [MAX_NR_BANKS], bank_storm); +DECLARE_PER_CPU(unsigned long [MAX_NR_BANKS], bank_time_stamp); + +/* + * How many errors within the history buffer mark the start of a storm + */ +#define STORM_BEGIN_THRESHOLD 5 + +/* + * How many polls of machine check bank without an error before declaring + * the storm is over + */ +#define STORM_END_POLL_THRESHOLD 30 #ifdef CONFIG_ACPI_APEI int apei_write_mce(struct mce *m); diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c index 099d8444aca4..820b317b1b85 100644 --- a/arch/x86/kernel/cpu/mce/core.c +++ b/arch/x86/kernel/cpu/mce/core.c @@ -607,6 +607,87 @@ static struct notifier_block mce_default_nb = { .priority = MCE_PRIO_LOWEST, }; +/* + * CMCI storm tracking state + * stormy_bank_count: per-cpu count of MC banks in storm state + * bank_history: bitmask tracking of corrected errors seen in each bank + * bank_time_stamp: last time (in jiffies) that each bank was polled + */ +DEFINE_PER_CPU(int, stormy_bank_count); +DEFINE_PER_CPU(u64 [MAX_NR_BANKS], bank_history); +DEFINE_PER_CPU(bool [MAX_NR_BANKS], bank_storm); +DEFINE_PER_CPU(unsigned long [MAX_NR_BANKS], bank_time_stamp); + +void cmci_storm_begin(int bank) +{ + __set_bit(bank, this_cpu_ptr(mce_poll_banks)); + this_cpu_write(bank_storm[bank], true); + + /* + * If this is the first bank on this CPU to enter storm mode + * start polling + */ + if (this_cpu_inc_return(stormy_bank_count) == 1) + mce_timer_kick(true); +} + +void cmci_storm_end(int bank) +{ + __clear_bit(bank, this_cpu_ptr(mce_poll_banks)); + this_cpu_write(bank_history[bank], 0ull); + this_cpu_write(bank_storm[bank], false); + + /* If no banks left in storm mode, stop polling */ + if (!this_cpu_dec_return(stormy_bank_count)) + mce_timer_kick(false); +} + +void track_cmci_storm(int bank, u64 status) +{ + unsigned long now = jiffies, delta; + unsigned int shift = 1; + u64 history; + + /* + * When a bank is in storm mode it is polled once per second and + * the history mask will record about the last minute of poll results. + * If it is not in storm mode, then the bank is only checked when + * there is a CMCI interrupt. Check how long it has been since + * this bank was last checked, and adjust the amount of "shift" + * to apply to history. 
+ */ + if (!this_cpu_read(bank_storm[bank])) { + delta = now - this_cpu_read(bank_time_stamp[bank]); + shift = (delta + HZ) / HZ; + } + + /* If has been a long time since the last poll, clear history */ + if (shift >= 64) + history = 0; + else + history = this_cpu_read(bank_history[bank]) << shift; + this_cpu_write(bank_time_stamp[bank], now); + + /* History keeps track of corrected errors. VAL=1 && UC=0 */ + if ((status & (MCI_STATUS_VAL | MCI_STATUS_UC)) == MCI_STATUS_VAL) + history |= 1; + this_cpu_write(bank_history[bank], history); + + if (this_cpu_read(bank_storm[bank])) { + if (history & GENMASK_ULL(STORM_END_POLL_THRESHOLD - 1, 0)) + return; + pr_notice("CPU%d BANK%d CMCI storm subsided\n", smp_processor_id(), bank); + mce_handle_storm(bank, false); + cmci_storm_end(bank); + } else { + if (hweight64(history) < STORM_BEGIN_THRESHOLD) + return; + pr_notice("CPU%d BANK%d CMCI storm detected\n", smp_processor_id(), bank); + mce_handle_storm(bank, true); + cmci_storm_begin(bank); + } +} + /* * Read ADDR and MISC registers. */ diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c index a8248514a689..20c2143a68c1 100644 --- a/arch/x86/kernel/cpu/mce/intel.c +++ b/arch/x86/kernel/cpu/mce/intel.c @@ -47,17 +47,7 @@ static DEFINE_PER_CPU(mce_banks_t, mce_banks_owned); */ static DEFINE_RAW_SPINLOCK(cmci_discover_lock); -/* - * CMCI storm tracking state - * stormy_bank_count: per-cpu count of MC banks in storm state - * bank_history: bitmask tracking of corrected errors seen in each bank - * bank_time_stamp: last time (in jiffies) that each bank was polled - * cmci_threshold: MCi_CTL2 threshold for each bank when there is no storm - */ -static DEFINE_PER_CPU(int, stormy_bank_count); -static DEFINE_PER_CPU(u64 [MAX_NR_BANKS], bank_history); -static DEFINE_PER_CPU(bool [MAX_NR_BANKS], bank_storm); -static DEFINE_PER_CPU(unsigned long [MAX_NR_BANKS], bank_time_stamp); +/* MCi_CTL2 threshold for each bank when there is no storm */ static int cmci_threshold[MAX_NR_BANKS]; /* Linux non-storm CMCI threshold (may be overridden by BIOS */ @@ -70,17 +60,6 @@ static int cmci_threshold[MAX_NR_BANKS]; */ #define CMCI_STORM_THRESHOLD 32749 -/* - * How many errors within the history buffer mark the start of a storm - */ -#define STORM_BEGIN_THRESHOLD 5 - -/* - * How many polls of machine check bank without an error before declaring - * the storm is over - */ -#define STORM_END_POLL_THRESHOLD 30 - static int cmci_supported(int *banks) { u64 cap; @@ -160,76 +139,6 @@ void mce_intel_handle_storm(int bank, bool on) cmci_set_threshold(bank, cmci_threshold[bank]); } -static void cmci_storm_begin(int bank) -{ - __set_bit(bank, this_cpu_ptr(mce_poll_banks)); - this_cpu_write(bank_storm[bank], true); - - /* - * If this is the first bank on this CPU to enter storm mode - * start polling - */ - if (this_cpu_inc_return(stormy_bank_count) == 1) - mce_timer_kick(true); -} - -static void cmci_storm_end(int bank) -{ - __clear_bit(bank, this_cpu_ptr(mce_poll_banks)); - this_cpu_write(bank_history[bank], 0ull); - this_cpu_write(bank_storm[bank], false); - - /* If no banks left in storm mode, stop polling */ - if (!this_cpu_dec_return(stormy_bank_count)) - mce_timer_kick(false); -} - -void track_cmci_storm(int bank, u64 status) -{ - unsigned long now = jiffies, delta; - unsigned int shift = 1; - u64 history; - - /* - * When a bank is in storm mode it is polled once per second and - * the history mask will record about the last minute of poll results. 
- * If it is not in storm mode, then the bank is only checked when - * there is a CMCI interrupt. Check how long it has been since - * this bank was last checked, and adjust the amount of "shift" - * to apply to history. - */ - if (!this_cpu_read(bank_storm[bank])) { - delta = now - this_cpu_read(bank_time_stamp[bank]); - shift = (delta + HZ) / HZ; - } - - /* If has been a long time since the last poll, clear history */ - if (shift >= 64) - history = 0; - else - history = this_cpu_read(bank_history[bank]) << shift; - this_cpu_write(bank_time_stamp[bank], now); - - /* History keeps track of corrected errors. VAL=1 && UC=0 */ - if ((status & (MCI_STATUS_VAL | MCI_STATUS_UC)) == MCI_STATUS_VAL) - history |= 1; - this_cpu_write(bank_history[bank], history); - - if (this_cpu_read(bank_storm[bank])) { - if (history & GENMASK_ULL(STORM_END_POLL_THRESHOLD - 1, 0)) - return; - pr_notice("CPU%d BANK%d CMCI storm subsided\n", smp_processor_id(), bank); - mce_handle_storm(bank, false); - cmci_storm_end(bank); - } else { - if (hweight64(history) < STORM_BEGIN_THRESHOLD) - return; - pr_notice("CPU%d BANK%d CMCI storm detected\n", smp_processor_id(), bank); - mce_handle_storm(bank, true); - cmci_storm_begin(bank); - } -} - /* * The interrupt handler. This is called on every event. * Just call the poller directly to log any events.

From patchwork Mon Apr 3 21:07:16 2023
X-Patchwork-Submitter: Tony Luck
X-Patchwork-Id: 13198789
From: Tony Luck To: Borislav Petkov Cc: Yazen Ghannam , Smita.KoralahalliChannabasappa@amd.com, dave.hansen@linux.intel.com, x86@kernel.org, linux-edac@vger.kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v4 5/5] x86/mce: Handle AMD threshold interrupt storms Date: Mon, 3 Apr 2023 14:07:16 -0700 Message-Id: <20230403210716.347773-6-tony.luck@intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230403210716.347773-1-tony.luck@intel.com> References: <20230403210716.347773-1-tony.luck@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-edac@vger.kernel.org From: Smita Koralahalli Extend the logic of handling CMCI storms to AMD threshold interrupts. Rely on the similar approach as of Intel's CMCI to mitigate storms per CPU and per bank. But, unlike CMCI, do not set thresholds and reduce interrupt rate on a storm. Rather, disable the interrupt on the corresponding CPU and bank. Re-enable back the interrupts if enough consecutive polls of the bank show no corrected errors (30, as programmed by Intel). Turning off the threshold interrupts would be a better solution on AMD systems as other error severities will still be handled even if the threshold interrupts are disabled. [Tony: Small tweak because mce_handle_storm() isn't a pointer now] Signed-off-by: Smita Koralahalli Signed-off-by: Tony Luck Reviewed-by: Yazen Ghannam Tested-by: Yazen Ghannam --- arch/x86/kernel/cpu/mce/internal.h | 2 ++ arch/x86/kernel/cpu/mce/amd.c | 49 ++++++++++++++++++++++++++++++ arch/x86/kernel/cpu/mce/core.c | 3 ++ 3 files changed, 54 insertions(+) diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h index d052d80cce7a..f1a48bc2e904 100644 --- a/arch/x86/kernel/cpu/mce/internal.h +++ b/arch/x86/kernel/cpu/mce/internal.h @@ -224,6 +224,7 @@ extern bool filter_mce(struct mce *m); #ifdef CONFIG_X86_MCE_AMD extern bool amd_filter_mce(struct mce *m); +void mce_amd_handle_storm(int bank, bool on); /* * If MCA_CONFIG[McaLsbInStatusSupported] is set, extract ErrAddr in bits @@ -251,6 +252,7 @@ static __always_inline void smca_extract_err_addr(struct mce *m) #else static inline bool amd_filter_mce(struct mce *m) { return false; } +static inline void mce_amd_handle_storm(int bank, bool on) {} static inline void smca_extract_err_addr(struct mce *m) { } #endif diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c index 23c5072fbbb7..cd79295e2a0a 100644 --- a/arch/x86/kernel/cpu/mce/amd.c +++ b/arch/x86/kernel/cpu/mce/amd.c @@ -468,6 +468,47 @@ static void threshold_restart_bank(void *_tr) wrmsr(tr->b->address, lo, hi); } +static void _reset_block(struct threshold_block *block) +{ + struct thresh_restart tr; + + memset(&tr, 0, sizeof(tr)); + tr.b = block; + threshold_restart_bank(&tr); +} + +static void toggle_interrupt_reset_block(struct threshold_block *block, bool on) +{ + if (!block) + return; + + block->interrupt_enable = !!on; + _reset_block(block); +} + +void mce_amd_handle_storm(int bank, bool on) +{ + struct threshold_block *first_block = NULL, *block = NULL, *tmp = NULL; + struct threshold_bank **bp = this_cpu_read(threshold_banks); + unsigned long flags; + + if (!bp) + return; + + local_irq_save(flags); + + first_block = bp[bank]->blocks; + if (!first_block) + goto end; + + toggle_interrupt_reset_block(first_block, on); + + list_for_each_entry_safe(block, tmp, &first_block->miscj, miscj) + toggle_interrupt_reset_block(block, on); +end: + local_irq_restore(flags); +} + static void 
mce_threshold_block_init(struct threshold_block *b, int offset) { struct thresh_restart tr = { @@ -868,6 +909,7 @@ static void amd_threshold_interrupt(void) struct threshold_block *first_block = NULL, *block = NULL, *tmp = NULL; struct threshold_bank **bp = this_cpu_read(threshold_banks); unsigned int bank, cpu = smp_processor_id(); + u64 status; /* * Validate that the threshold bank has been initialized already. The @@ -881,6 +923,13 @@ static void amd_threshold_interrupt(void) if (!(per_cpu(bank_map, cpu) & (1 << bank))) continue; + rdmsrl(mca_msr_reg(bank, MCA_STATUS), status); + track_cmci_storm(bank, status); + + /* Return early on an interrupt storm */ + if (this_cpu_read(bank_storm[bank])) + return; + first_block = bp[bank]->blocks; if (!first_block) continue; diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c index 820b317b1b85..fac90625d8cb 100644 --- a/arch/x86/kernel/cpu/mce/core.c +++ b/arch/x86/kernel/cpu/mce/core.c @@ -2062,6 +2062,9 @@ void mce_handle_storm(int bank, bool on) case X86_VENDOR_INTEL: mce_intel_handle_storm(bank, on); break; + case X86_VENDOR_AMD: + mce_amd_handle_storm(bank, on); + break; } }
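Taken together, the series replaces the old per-CPU interrupt-rate counter
with per-bank history tracking, and lets each vendor choose how to quiet a
stormy bank: Intel raises the IA32_MCi_CTL2 threshold, AMD turns the
threshold interrupt off. The detection policy itself fits in a few lines of
plain, standalone C. The sketch below is an illustration distilled from
track_cmci_storm() above, not code from the series; it reuses the same
begin/end thresholds and the once-per-second storm polling cadence:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define STORM_BEGIN_THRESHOLD		5	/* set bits needed to declare a storm */
#define STORM_END_POLL_THRESHOLD	30	/* clean polls needed to end a storm */

struct bank_state {
	uint64_t history;	/* bit 0 = most recent poll, set = corrected error seen */
	bool storm;
};

/* Record one poll result; secs_since_last_poll stands in for the jiffies delta. */
static void track_poll(struct bank_state *b, bool corrected_error_seen,
		       unsigned int secs_since_last_poll)
{
	unsigned int shift = b->storm ? 1 : secs_since_last_poll + 1;

	b->history = shift >= 64 ? 0 : b->history << shift;
	if (corrected_error_seen)
		b->history |= 1;

	if (b->storm) {
		/* Storm ends after STORM_END_POLL_THRESHOLD consecutive clean polls. */
		if (!(b->history & ((1ull << STORM_END_POLL_THRESHOLD) - 1))) {
			b->storm = false;
			printf("storm subsided\n");
		}
	} else if (__builtin_popcountll(b->history) >= STORM_BEGIN_THRESHOLD) {
		/* Storm begins once enough recent polls saw a corrected error. */
		b->storm = true;
		printf("storm detected\n");
	}
}

int main(void)
{
	struct bank_state b = { 0 };
	int i;

	for (i = 0; i < 6; i++)		/* a burst of corrected errors ... */
		track_poll(&b, true, 1);
	for (i = 0; i < 31; i++)	/* ... then a quiet minute of polls */
		track_poll(&b, false, 1);

	return 0;
}

On top of this common detection, mce_handle_storm() dispatches to
mce_intel_handle_storm() or mce_amd_handle_storm() to actually throttle or
re-enable the corrected-error signalling for the affected bank.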