From patchwork Fri Mar 17 17:20:38 2023
From: Tony Luck
To: Yazen Ghannam
Cc: Borislav Petkov, Smita.KoralahalliChannabasappa@amd.com, dave.hansen@linux.intel.com, hpa@zytor.com, linux-edac@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v3 1/5] x86/mce: Remove old CMCI storm mitigation code
Date: Fri, 17 Mar 2023 10:20:38 -0700
Message-Id: <20230317172042.117201-2-tony.luck@intel.com>
In-Reply-To: <20230317172042.117201-1-tony.luck@intel.com>
References: <20230317172042.117201-1-tony.luck@intel.com>

When a "storm" of CMCI is detected this code mitigates by disabling CMCI interrupt signalling from all of the banks owned by the CPU that saw the storm.

There are problems with this approach:

1) It is very coarse grained. In all likelihood only one of the banks was generating the interrupts, but CMCI is disabled for all. This means Linux may delay seeing and processing errors logged from other banks.

2) Although CMCI stands for Corrected Machine Check Interrupt, it is also used to signal when an uncorrected error is logged. This is a problem because these errors should be handled in a timely manner.

Delete all this code in preparation for a finer grained solution.
Signed-off-by: Tony Luck --- arch/x86/kernel/cpu/mce/internal.h | 6 -- arch/x86/kernel/cpu/mce/core.c | 20 +--- arch/x86/kernel/cpu/mce/intel.c | 145 ----------------------------- 3 files changed, 1 insertion(+), 170 deletions(-) diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h index 7e03f5b7f6bd..07fef4d74525 100644 --- a/arch/x86/kernel/cpu/mce/internal.h +++ b/arch/x86/kernel/cpu/mce/internal.h @@ -41,18 +41,12 @@ struct dentry *mce_get_debugfs_dir(void); extern mce_banks_t mce_banks_ce_disabled; #ifdef CONFIG_X86_MCE_INTEL -unsigned long cmci_intel_adjust_timer(unsigned long interval); -bool mce_intel_cmci_poll(void); -void mce_intel_hcpu_update(unsigned long cpu); void cmci_disable_bank(int bank); void intel_init_cmci(void); void intel_init_lmce(void); void intel_clear_lmce(void); bool intel_filter_mce(struct mce *m); #else -# define cmci_intel_adjust_timer mce_adjust_timer_default -static inline bool mce_intel_cmci_poll(void) { return false; } -static inline void mce_intel_hcpu_update(unsigned long cpu) { } static inline void cmci_disable_bank(int bank) { } static inline void intel_init_cmci(void) { } static inline void intel_init_lmce(void) { } diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c index 2c8ec5c71712..92c2dee4bf43 100644 --- a/arch/x86/kernel/cpu/mce/core.c +++ b/arch/x86/kernel/cpu/mce/core.c @@ -1598,13 +1598,6 @@ static unsigned long check_interval = INITIAL_CHECK_INTERVAL; static DEFINE_PER_CPU(unsigned long, mce_next_interval); /* in jiffies */ static DEFINE_PER_CPU(struct timer_list, mce_timer); -static unsigned long mce_adjust_timer_default(unsigned long interval) -{ - return interval; -} - -static unsigned long (*mce_adjust_timer)(unsigned long interval) = mce_adjust_timer_default; - static void __start_timer(struct timer_list *t, unsigned long interval) { unsigned long when = jiffies + interval; @@ -1627,15 +1620,9 @@ static void mce_timer_fn(struct timer_list *t) iv = __this_cpu_read(mce_next_interval); - if (mce_available(this_cpu_ptr(&cpu_info))) { + if (mce_available(this_cpu_ptr(&cpu_info))) machine_check_poll(0, this_cpu_ptr(&mce_poll_banks)); - if (mce_intel_cmci_poll()) { - iv = mce_adjust_timer(iv); - goto done; - } - } - /* * Alert userspace if needed. If we logged an MCE, reduce the polling * interval, otherwise increase the polling interval. 
@@ -1645,7 +1632,6 @@ static void mce_timer_fn(struct timer_list *t) else iv = min(iv * 2, round_jiffies_relative(check_interval * HZ)); -done: __this_cpu_write(mce_next_interval, iv); __start_timer(t, iv); } @@ -1982,7 +1968,6 @@ static void mce_zhaoxin_feature_init(struct cpuinfo_x86 *c) intel_init_cmci(); intel_init_lmce(); - mce_adjust_timer = cmci_intel_adjust_timer; } static void mce_zhaoxin_feature_clear(struct cpuinfo_x86 *c) @@ -1995,7 +1980,6 @@ static void __mcheck_cpu_init_vendor(struct cpuinfo_x86 *c) switch (c->x86_vendor) { case X86_VENDOR_INTEL: mce_intel_feature_init(c); - mce_adjust_timer = cmci_intel_adjust_timer; break; case X86_VENDOR_AMD: { @@ -2651,8 +2635,6 @@ static void mce_reenable_cpu(void) static int mce_cpu_dead(unsigned int cpu) { - mce_intel_hcpu_update(cpu); - /* intentionally ignoring frozen here */ if (!cpuhp_tasks_frozen) cmci_rediscover(); diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c index 95275a5e57e0..052bf2708391 100644 --- a/arch/x86/kernel/cpu/mce/intel.c +++ b/arch/x86/kernel/cpu/mce/intel.c @@ -41,15 +41,6 @@ */ static DEFINE_PER_CPU(mce_banks_t, mce_banks_owned); -/* - * CMCI storm detection backoff counter - * - * During storm, we reset this counter to INITIAL_CHECK_INTERVAL in case we've - * encountered an error. If not, we decrement it by one. We signal the end of - * the CMCI storm when it reaches 0. - */ -static DEFINE_PER_CPU(int, cmci_backoff_cnt); - /* * cmci_discover_lock protects against parallel discovery attempts * which could race against each other. @@ -57,21 +48,6 @@ static DEFINE_PER_CPU(int, cmci_backoff_cnt); static DEFINE_RAW_SPINLOCK(cmci_discover_lock); #define CMCI_THRESHOLD 1 -#define CMCI_POLL_INTERVAL (30 * HZ) -#define CMCI_STORM_INTERVAL (HZ) -#define CMCI_STORM_THRESHOLD 15 - -static DEFINE_PER_CPU(unsigned long, cmci_time_stamp); -static DEFINE_PER_CPU(unsigned int, cmci_storm_cnt); -static DEFINE_PER_CPU(unsigned int, cmci_storm_state); - -enum { - CMCI_STORM_NONE, - CMCI_STORM_ACTIVE, - CMCI_STORM_SUBSIDED, -}; - -static atomic_t cmci_storm_on_cpus; static int cmci_supported(int *banks) { @@ -127,124 +103,6 @@ static bool lmce_supported(void) return tmp & FEAT_CTL_LMCE_ENABLED; } -bool mce_intel_cmci_poll(void) -{ - if (__this_cpu_read(cmci_storm_state) == CMCI_STORM_NONE) - return false; - - /* - * Reset the counter if we've logged an error in the last poll - * during the storm. 
- */ - if (machine_check_poll(0, this_cpu_ptr(&mce_banks_owned))) - this_cpu_write(cmci_backoff_cnt, INITIAL_CHECK_INTERVAL); - else - this_cpu_dec(cmci_backoff_cnt); - - return true; -} - -void mce_intel_hcpu_update(unsigned long cpu) -{ - if (per_cpu(cmci_storm_state, cpu) == CMCI_STORM_ACTIVE) - atomic_dec(&cmci_storm_on_cpus); - - per_cpu(cmci_storm_state, cpu) = CMCI_STORM_NONE; -} - -static void cmci_toggle_interrupt_mode(bool on) -{ - unsigned long flags, *owned; - int bank; - u64 val; - - raw_spin_lock_irqsave(&cmci_discover_lock, flags); - owned = this_cpu_ptr(mce_banks_owned); - for_each_set_bit(bank, owned, MAX_NR_BANKS) { - rdmsrl(MSR_IA32_MCx_CTL2(bank), val); - - if (on) - val |= MCI_CTL2_CMCI_EN; - else - val &= ~MCI_CTL2_CMCI_EN; - - wrmsrl(MSR_IA32_MCx_CTL2(bank), val); - } - raw_spin_unlock_irqrestore(&cmci_discover_lock, flags); -} - -unsigned long cmci_intel_adjust_timer(unsigned long interval) -{ - if ((this_cpu_read(cmci_backoff_cnt) > 0) && - (__this_cpu_read(cmci_storm_state) == CMCI_STORM_ACTIVE)) { - mce_notify_irq(); - return CMCI_STORM_INTERVAL; - } - - switch (__this_cpu_read(cmci_storm_state)) { - case CMCI_STORM_ACTIVE: - - /* - * We switch back to interrupt mode once the poll timer has - * silenced itself. That means no events recorded and the timer - * interval is back to our poll interval. - */ - __this_cpu_write(cmci_storm_state, CMCI_STORM_SUBSIDED); - if (!atomic_sub_return(1, &cmci_storm_on_cpus)) - pr_notice("CMCI storm subsided: switching to interrupt mode\n"); - - fallthrough; - - case CMCI_STORM_SUBSIDED: - /* - * We wait for all CPUs to go back to SUBSIDED state. When that - * happens we switch back to interrupt mode. - */ - if (!atomic_read(&cmci_storm_on_cpus)) { - __this_cpu_write(cmci_storm_state, CMCI_STORM_NONE); - cmci_toggle_interrupt_mode(true); - cmci_recheck(); - } - return CMCI_POLL_INTERVAL; - default: - - /* We have shiny weather. Let the poll do whatever it thinks. */ - return interval; - } -} - -static bool cmci_storm_detect(void) -{ - unsigned int cnt = __this_cpu_read(cmci_storm_cnt); - unsigned long ts = __this_cpu_read(cmci_time_stamp); - unsigned long now = jiffies; - int r; - - if (__this_cpu_read(cmci_storm_state) != CMCI_STORM_NONE) - return true; - - if (time_before_eq(now, ts + CMCI_STORM_INTERVAL)) { - cnt++; - } else { - cnt = 1; - __this_cpu_write(cmci_time_stamp, now); - } - __this_cpu_write(cmci_storm_cnt, cnt); - - if (cnt <= CMCI_STORM_THRESHOLD) - return false; - - cmci_toggle_interrupt_mode(false); - __this_cpu_write(cmci_storm_state, CMCI_STORM_ACTIVE); - r = atomic_add_return(1, &cmci_storm_on_cpus); - mce_timer_kick(CMCI_STORM_INTERVAL); - this_cpu_write(cmci_backoff_cnt, INITIAL_CHECK_INTERVAL); - - if (r == 1) - pr_notice("CMCI storm detected: switching to poll mode\n"); - return true; -} - /* * The interrupt handler. This is called on every event. * Just call the poller directly to log any events. 
@@ -253,9 +111,6 @@ static bool cmci_storm_detect(void) */ static void intel_threshold_interrupt(void) { - if (cmci_storm_detect()) - return; - machine_check_poll(MCP_TIMESTAMP, this_cpu_ptr(&mce_banks_owned)); }
From patchwork Fri Mar 17 17:20:39 2023
From: Tony Luck
To: Yazen Ghannam
Cc: Borislav Petkov, Smita.KoralahalliChannabasappa@amd.com, dave.hansen@linux.intel.com, hpa@zytor.com, linux-edac@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v3 2/5] x86/mce: Add per-bank CMCI storm mitigation
Date: Fri, 17 Mar 2023 10:20:39 -0700
Message-Id: <20230317172042.117201-3-tony.luck@intel.com>
In-Reply-To: <20230317172042.117201-1-tony.luck@intel.com>
References: <20230317172042.117201-1-tony.luck@intel.com>

Add a hook into machine_check_poll() to keep track of per-CPU, per-bank corrected error logs.

Maintain a bitmap history for each bank showing whether the bank logged a corrected error or not each time it is polled. In normal operation the interval between polls of these banks determines how far to shift the history. The 64 bit width corresponds to about one second.
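As a rough illustration of the history bookkeeping just described, the aging of the per-bank bitmap can be modeled as follows. This is a simplified, self-contained sketch for this write-up only, not the code added by the patch (the real implementation is track_cmci_storm() in the diff below); the function name and parameters here are invented for illustration.

/*
 * Simplified model of the per-bank history window:
 * bit 0 is the most recent poll, older polls sit at higher bit positions.
 */
static unsigned long long age_history(unsigned long long history,
				      unsigned int polls_elapsed,
				      int saw_corrected_error)
{
	if (polls_elapsed >= 64)
		history = 0;			/* window has expired, start over */
	else
		history <<= polls_elapsed;	/* push older results toward bit 63 */

	if (saw_corrected_error)
		history |= 1;			/* record the current poll in bit 0 */

	return history;
}

With this layout, "number of corrected errors seen in the history" is just the population count of the word, and "N consecutive clean polls" means the low N bits are all zero, which is what the storm begin/end checks in this patch test.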
When a storm is observed the Rate of interrupts is reduced by setting a large threshold value for this bank in IA32_MCi_CTL2. This bank is added to the bitmap of banks for this CPU to poll. The polling rate is increased to once per second. During a storm each bit in the history indicates the status of the bank each time it is polled. Thus the history covers just over a minute. Declare a storm for that bank if the number of corrected interrupts seen in that history is above some threshold (5 in this RFC code for ease of testing, likely move to 15 for compatibility with previous storm detection). A storm on a bank ends if enough consecutive polls of the bank show no corrected errors (currently 30, may also change). That resets the threshold in IA32_MCi_CTL2 back to 1, removes the bank from the bitmap for polling, and changes the polling rate back to the default. If a CPU with banks in storm mode is taken offline, the new CPU that inherits ownership of those banks takes over management of storm(s) in the inherited bank(s). Signed-off-by: Tony Luck --- arch/x86/kernel/cpu/mce/internal.h | 4 +- arch/x86/kernel/cpu/mce/core.c | 26 ++++-- arch/x86/kernel/cpu/mce/intel.c | 139 ++++++++++++++++++++++++++++- 3 files changed, 158 insertions(+), 11 deletions(-) diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h index 07fef4d74525..72fbec8f6c3c 100644 --- a/arch/x86/kernel/cpu/mce/internal.h +++ b/arch/x86/kernel/cpu/mce/internal.h @@ -40,6 +40,8 @@ struct dentry *mce_get_debugfs_dir(void); extern mce_banks_t mce_banks_ce_disabled; +void track_cmci_storm(int bank, u64 status); + #ifdef CONFIG_X86_MCE_INTEL void cmci_disable_bank(int bank); void intel_init_cmci(void); @@ -54,7 +56,7 @@ static inline void intel_clear_lmce(void) { } static inline bool intel_filter_mce(struct mce *m) { return false; } #endif -void mce_timer_kick(unsigned long interval); +void mce_timer_kick(bool storm); #ifdef CONFIG_ACPI_APEI int apei_write_mce(struct mce *m); diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c index 92c2dee4bf43..776d4724b1e0 100644 --- a/arch/x86/kernel/cpu/mce/core.c +++ b/arch/x86/kernel/cpu/mce/core.c @@ -694,6 +694,8 @@ bool machine_check_poll(enum mcp_flags flags, mce_banks_t *b) barrier(); m.status = mce_rdmsrl(mca_msr_reg(i, MCA_STATUS)); + track_cmci_storm(i, m.status); + /* If this entry is not valid, ignore it */ if (!(m.status & MCI_STATUS_VAL)) continue; @@ -1597,6 +1599,7 @@ static unsigned long check_interval = INITIAL_CHECK_INTERVAL; static DEFINE_PER_CPU(unsigned long, mce_next_interval); /* in jiffies */ static DEFINE_PER_CPU(struct timer_list, mce_timer); +static DEFINE_PER_CPU(bool, storm_poll_mode); static void __start_timer(struct timer_list *t, unsigned long interval) { @@ -1632,22 +1635,29 @@ static void mce_timer_fn(struct timer_list *t) else iv = min(iv * 2, round_jiffies_relative(check_interval * HZ)); - __this_cpu_write(mce_next_interval, iv); - __start_timer(t, iv); + if (__this_cpu_read(storm_poll_mode)) { + __start_timer(t, HZ); + } else { + __this_cpu_write(mce_next_interval, iv); + __start_timer(t, iv); + } } /* - * Ensure that the timer is firing in @interval from now. + * When a storm starts on any bank on this CPU, switch to polling + * once per second. When the storm ends, revert to the default + * polling interval. 
*/ -void mce_timer_kick(unsigned long interval) +void mce_timer_kick(bool storm) { struct timer_list *t = this_cpu_ptr(&mce_timer); - unsigned long iv = __this_cpu_read(mce_next_interval); - __start_timer(t, interval); + __this_cpu_write(storm_poll_mode, storm); - if (interval < iv) - __this_cpu_write(mce_next_interval, interval); + if (storm) + __start_timer(t, HZ); + else + __this_cpu_write(mce_next_interval, check_interval * HZ); } /* Must not be called in IRQ context where del_timer_sync() can deadlock */ diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c index 052bf2708391..4106877de028 100644 --- a/arch/x86/kernel/cpu/mce/intel.c +++ b/arch/x86/kernel/cpu/mce/intel.c @@ -47,8 +47,40 @@ static DEFINE_PER_CPU(mce_banks_t, mce_banks_owned); */ static DEFINE_RAW_SPINLOCK(cmci_discover_lock); +/* + * CMCI storm tracking state + * stormy_bank_count: per-cpu count of MC banks in storm state + * bank_history: bitmask tracking of corrected errors seen in each bank + * bank_time_stamp: last time (in jiffies) that each bank was polled + * cmci_threshold: MCi_CTL2 threshold for each bank when there is no storm + */ +static DEFINE_PER_CPU(int, stormy_bank_count); +static DEFINE_PER_CPU(u64 [MAX_NR_BANKS], bank_history); +static DEFINE_PER_CPU(bool [MAX_NR_BANKS], bank_storm); +static DEFINE_PER_CPU(unsigned long [MAX_NR_BANKS], bank_time_stamp); +static int cmci_threshold[MAX_NR_BANKS]; + +/* Linux non-storm CMCI threshold (may be overridden by BIOS */ #define CMCI_THRESHOLD 1 +/* + * High threshold to limit CMCI rate during storms. Max supported is + * 0x7FFF. Use this slightly smaller value so it has a distinctive + * signature when some asks "Why am I not seeing all corrected errors?" + */ +#define CMCI_STORM_THRESHOLD 32749 + +/* + * How many errors within the history buffer mark the start of a storm + */ +#define STORM_BEGIN_THRESHOLD 5 + +/* + * How many polls of machine check bank without an error before declaring + * the storm is over + */ +#define STORM_END_POLL_THRESHOLD 30 + static int cmci_supported(int *banks) { u64 cap; @@ -103,6 +135,93 @@ static bool lmce_supported(void) return tmp & FEAT_CTL_LMCE_ENABLED; } +/* + * Set a new CMCI threshold value. Preserve the state of the + * MCI_CTL2_CMCI_EN bit in case this happens during a + * cmci_rediscover() operation. 
+ */ +static void cmci_set_threshold(int bank, int thresh) +{ + unsigned long flags; + u64 val; + + raw_spin_lock_irqsave(&cmci_discover_lock, flags); + rdmsrl(MSR_IA32_MCx_CTL2(bank), val); + val &= ~MCI_CTL2_CMCI_THRESHOLD_MASK; + wrmsrl(MSR_IA32_MCx_CTL2(bank), val | thresh); + raw_spin_unlock_irqrestore(&cmci_discover_lock, flags); +} + +static void cmci_storm_begin(int bank) +{ + __set_bit(bank, this_cpu_ptr(mce_poll_banks)); + this_cpu_write(bank_storm[bank], true); + + /* + * If this is the first bank on this CPU to enter storm mode + * start polling + */ + if (this_cpu_inc_return(stormy_bank_count) == 1) + mce_timer_kick(true); +} + +static void cmci_storm_end(int bank) +{ + __clear_bit(bank, this_cpu_ptr(mce_poll_banks)); + this_cpu_write(bank_history[bank], 0ull); + this_cpu_write(bank_storm[bank], false); + + /* If no banks left in storm mode, stop polling */ + if (!this_cpu_dec_return(stormy_bank_count)) + mce_timer_kick(false); +} + +void track_cmci_storm(int bank, u64 status) +{ + unsigned long now = jiffies, delta; + unsigned int shift = 1; + u64 history; + + /* + * When a bank is in storm mode it is polled once per second and + * the history mask will record about the last minute of poll results. + * If it is not in storm mode, then the bank is only checked when + * there is a CMCI interrupt. Check how long it has been since + * this bank was last checked, and adjust the amount of "shift" + * to apply to history. + */ + if (!this_cpu_read(bank_storm[bank])) { + delta = now - this_cpu_read(bank_time_stamp[bank]); + shift = (delta + HZ) / HZ; + } + + /* If has been a long time since the last poll, clear history */ + if (shift >= 64) + history = 0; + else + history = this_cpu_read(bank_history[bank]) << shift; + this_cpu_write(bank_time_stamp[bank], now); + + /* History keeps track of corrected errors. VAL=1 && UC=0 */ + if ((status & (MCI_STATUS_VAL | MCI_STATUS_UC)) == MCI_STATUS_VAL) + history |= 1; + this_cpu_write(bank_history[bank], history); + + if (this_cpu_read(bank_storm[bank])) { + if (history & GENMASK_ULL(STORM_END_POLL_THRESHOLD - 1, 0)) + return; + pr_notice("CPU%d BANK%d CMCI storm subsided\n", smp_processor_id(), bank); + cmci_set_threshold(bank, cmci_threshold[bank]); + cmci_storm_end(bank); + } else { + if (hweight64(history) < STORM_BEGIN_THRESHOLD) + return; + pr_notice("CPU%d BANK%d CMCI storm detected\n", smp_processor_id(), bank); + cmci_set_threshold(bank, CMCI_STORM_THRESHOLD); + cmci_storm_begin(bank); + } +} + /* * The interrupt handler. This is called on every event. * Just call the poller directly to log any events. @@ -147,6 +266,9 @@ static void cmci_discover(int banks) continue; } + if ((val & MCI_CTL2_CMCI_THRESHOLD_MASK) == CMCI_STORM_THRESHOLD) + goto storm; + if (!mca_cfg.bios_cmci_threshold) { val &= ~MCI_CTL2_CMCI_THRESHOLD_MASK; val |= CMCI_THRESHOLD; @@ -159,7 +281,7 @@ static void cmci_discover(int banks) bios_zero_thresh = 1; val |= CMCI_THRESHOLD; } - +storm: val |= MCI_CTL2_CMCI_EN; wrmsrl(MSR_IA32_MCx_CTL2(i), val); rdmsrl(MSR_IA32_MCx_CTL2(i), val); @@ -167,7 +289,14 @@ static void cmci_discover(int banks) /* Did the enable bit stick? 
-- the bank supports CMCI */ if (val & MCI_CTL2_CMCI_EN) { set_bit(i, owned); - __clear_bit(i, this_cpu_ptr(mce_poll_banks)); + if ((val & MCI_CTL2_CMCI_THRESHOLD_MASK) == CMCI_STORM_THRESHOLD) { + pr_notice("CPU%d BANK%d CMCI inherited storm\n", smp_processor_id(), i); + this_cpu_write(bank_history[i], ~0ull); + this_cpu_write(bank_time_stamp[i], jiffies); + cmci_storm_begin(i); + } else { + __clear_bit(i, this_cpu_ptr(mce_poll_banks)); + } /* * We are able to set thresholds for some banks that * had a threshold of 0. This means the BIOS has not @@ -177,6 +306,10 @@ static void cmci_discover(int banks) if (mca_cfg.bios_cmci_threshold && bios_zero_thresh && (val & MCI_CTL2_CMCI_THRESHOLD_MASK)) bios_wrong_thresh = 1; + + /* Save default threshold for each bank */ + if (cmci_threshold[i] == 0) + cmci_threshold[i] = val & MCI_CTL2_CMCI_THRESHOLD_MASK; } else { WARN_ON(!test_bit(i, this_cpu_ptr(mce_poll_banks))); } @@ -218,6 +351,8 @@ static void __cmci_disable_bank(int bank) val &= ~MCI_CTL2_CMCI_EN; wrmsrl(MSR_IA32_MCx_CTL2(bank), val); __clear_bit(bank, this_cpu_ptr(mce_banks_owned)); + if ((val & MCI_CTL2_CMCI_THRESHOLD_MASK) == CMCI_STORM_THRESHOLD) + cmci_storm_end(bank); } /*
From patchwork Fri Mar 17 17:20:40 2023
From: Tony Luck
To: Yazen Ghannam
Cc: Borislav Petkov, Smita.KoralahalliChannabasappa@amd.com, dave.hansen@linux.intel.com,
    hpa@zytor.com, linux-edac@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v3 3/5] x86/mce: Introduce mce_handle_storm() to deal with begin/end of storms
Date: Fri, 17 Mar 2023 10:20:40 -0700
Message-Id: <20230317172042.117201-4-tony.luck@intel.com>
In-Reply-To: <20230317172042.117201-1-tony.luck@intel.com>
References: <20230317172042.117201-1-tony.luck@intel.com>

From: Smita Koralahalli

Intel and AMD need to take different actions when a storm begins or ends. Prepare for the storm code moving from intel.c into core.c by adding a function that checks CPU vendor to pick the right action.

No functional changes.

[Tony: Changed from function pointer to regular function]

Signed-off-by: Smita Koralahalli
Signed-off-by: Tony Luck
--- arch/x86/kernel/cpu/mce/internal.h | 3 +++ arch/x86/kernel/cpu/mce/core.c | 9 +++++++++ arch/x86/kernel/cpu/mce/intel.c | 12 ++++++++++-- 3 files changed, 22 insertions(+), 2 deletions(-) diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h index 72fbec8f6c3c..f37816b4d4cf 100644 --- a/arch/x86/kernel/cpu/mce/internal.h +++ b/arch/x86/kernel/cpu/mce/internal.h @@ -43,12 +43,14 @@ extern mce_banks_t mce_banks_ce_disabled; void track_cmci_storm(int bank, u64 status); #ifdef CONFIG_X86_MCE_INTEL +void mce_intel_handle_storm(int bank, bool on); void cmci_disable_bank(int bank); void intel_init_cmci(void); void intel_init_lmce(void); void intel_clear_lmce(void); bool intel_filter_mce(struct mce *m); #else +static inline void mce_intel_handle_storm(int bank, bool on) { } static inline void cmci_disable_bank(int bank) { } static inline void intel_init_cmci(void) { } static inline void intel_init_lmce(void) { } @@ -57,6 +59,7 @@ static inline bool intel_filter_mce(struct mce *m) { return false; } #endif void mce_timer_kick(bool storm); +void mce_handle_storm(int bank, bool on); #ifdef CONFIG_ACPI_APEI int apei_write_mce(struct mce *m); diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c index 776d4724b1e0..f4d2a7ba29f7 100644 --- a/arch/x86/kernel/cpu/mce/core.c +++ b/arch/x86/kernel/cpu/mce/core.c @@ -1985,6 +1985,15 @@ static void mce_zhaoxin_feature_clear(struct cpuinfo_x86 *c) intel_clear_lmce(); } +void mce_handle_storm(int bank, bool on) +{ + switch (boot_cpu_data.x86_vendor) { + case X86_VENDOR_INTEL: + mce_intel_handle_storm(bank, on); + break; + } +} + static void __mcheck_cpu_init_vendor(struct cpuinfo_x86 *c) { switch (c->x86_vendor) { diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c index 4106877de028..4238b73c2143 100644 --- a/arch/x86/kernel/cpu/mce/intel.c +++ b/arch/x86/kernel/cpu/mce/intel.c @@ -152,6 +152,14 @@ static void cmci_set_threshold(int bank, int thresh) raw_spin_unlock_irqrestore(&cmci_discover_lock, flags); } +void mce_intel_handle_storm(int bank, bool on) +{ + if (on) + cmci_set_threshold(bank, cmci_threshold[bank]); + else + cmci_set_threshold(bank, CMCI_STORM_THRESHOLD); +} + static void cmci_storm_begin(int bank) { __set_bit(bank, this_cpu_ptr(mce_poll_banks)); @@ -211,13 +219,13 @@ void track_cmci_storm(int bank, u64 status) if (history & GENMASK_ULL(STORM_END_POLL_THRESHOLD - 1, 0)) return; pr_notice("CPU%d BANK%d CMCI storm subsided\n", smp_processor_id(), bank); - cmci_set_threshold(bank, cmci_threshold[bank]); + mce_handle_storm(bank, true); cmci_storm_end(bank); } else {
if (hweight64(history) < STORM_BEGIN_THRESHOLD) return; pr_notice("CPU%d BANK%d CMCI storm detected\n", smp_processor_id(), bank); - cmci_set_threshold(bank, CMCI_STORM_THRESHOLD); + mce_handle_storm(bank, false); cmci_storm_begin(bank); } }
From patchwork Fri Mar 17 17:20:41 2023
From: Tony Luck
To: Yazen Ghannam
Cc: Borislav Petkov, Smita.KoralahalliChannabasappa@amd.com, dave.hansen@linux.intel.com, hpa@zytor.com, linux-edac@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v3 4/5] x86/mce: Move storm handling to core.
Date: Fri, 17 Mar 2023 10:20:41 -0700
Message-Id: <20230317172042.117201-5-tony.luck@intel.com>
In-Reply-To: <20230317172042.117201-1-tony.luck@intel.com>
References: <20230317172042.117201-1-tony.luck@intel.com>

From: Smita Koralahalli

AMD's storm handling for threshold interrupts is similar to Intel's CMCI storm handling. Hence, make the storm handling code common by moving it to the core and removing the vendor exclusivity.

In contrast, setting different thresholds in the IA32_MCi_CTL2 register to reduce the rate of interrupts remains Intel-specific, since AMD's storm handling differs slightly: it handles storms by turning off the interrupts.
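For easier reference while reading this and the next patch, the vendor dispatch that the common storm code calls into is reproduced below, assembled from the diffs in patches 3 and 5 of this series (the comments are editorial). Note the polarity of the "on" argument: the common code passes true when a storm subsides and false when one is detected, so "on" effectively means "put this bank back into normal interrupt mode".

void mce_handle_storm(int bank, bool on)
{
	switch (boot_cpu_data.x86_vendor) {
	case X86_VENDOR_INTEL:
		/* on: restore the saved CMCI threshold; off: raise it to the storm value */
		mce_intel_handle_storm(bank, on);
		break;
	case X86_VENDOR_AMD:
		/* on: re-enable the threshold interrupt; off: disable it (added in patch 5) */
		mce_amd_handle_storm(bank, on);
		break;
	}
}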
No functional changes. [Tony: Same as Smita's original, plus changes rolled in from prior patches] Signed-off-by: Smita Koralahalli Signed-off-by: Tony Luck Reviewed-by: Yazen Ghannam Tested-by: Yazen Ghannam --- arch/x86/kernel/cpu/mce/internal.h | 18 ++++++ arch/x86/kernel/cpu/mce/core.c | 81 ++++++++++++++++++++++++++ arch/x86/kernel/cpu/mce/intel.c | 93 +----------------------------- 3 files changed, 100 insertions(+), 92 deletions(-) diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h index f37816b4d4cf..9b2c54f30fb9 100644 --- a/arch/x86/kernel/cpu/mce/internal.h +++ b/arch/x86/kernel/cpu/mce/internal.h @@ -60,6 +60,24 @@ static inline bool intel_filter_mce(struct mce *m) { return false; } void mce_timer_kick(bool storm); void mce_handle_storm(int bank, bool on); +void cmci_storm_begin(int bank); +void cmci_storm_end(int bank); + +DECLARE_PER_CPU(int, stormy_bank_count); +DECLARE_PER_CPU(u64 [MAX_NR_BANKS], bank_history); +DECLARE_PER_CPU(bool [MAX_NR_BANKS], bank_storm); +DECLARE_PER_CPU(unsigned long [MAX_NR_BANKS], bank_time_stamp); + +/* + * How many errors within the history buffer mark the start of a storm + */ +#define STORM_BEGIN_THRESHOLD 5 + +/* + * How many polls of machine check bank without an error before declaring + * the storm is over + */ +#define STORM_END_POLL_THRESHOLD 30 #ifdef CONFIG_ACPI_APEI int apei_write_mce(struct mce *m); diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c index f4d2a7ba29f7..d27daa199523 100644 --- a/arch/x86/kernel/cpu/mce/core.c +++ b/arch/x86/kernel/cpu/mce/core.c @@ -613,6 +613,87 @@ static struct notifier_block mce_default_nb = { .priority = MCE_PRIO_LOWEST, }; +/* + * CMCI storm tracking state + * stormy_bank_count: per-cpu count of MC banks in storm state + * bank_history: bitmask tracking of corrected errors seen in each bank + * bank_time_stamp: last time (in jiffies) that each bank was polled + */ +DEFINE_PER_CPU(int, stormy_bank_count); +DEFINE_PER_CPU(u64 [MAX_NR_BANKS], bank_history); +DEFINE_PER_CPU(bool [MAX_NR_BANKS], bank_storm); +DEFINE_PER_CPU(unsigned long [MAX_NR_BANKS], bank_time_stamp); + +void cmci_storm_begin(int bank) +{ + __set_bit(bank, this_cpu_ptr(mce_poll_banks)); + this_cpu_write(bank_storm[bank], true); + + /* + * If this is the first bank on this CPU to enter storm mode + * start polling + */ + if (this_cpu_inc_return(stormy_bank_count) == 1) + mce_timer_kick(true); +} + +void cmci_storm_end(int bank) +{ + __clear_bit(bank, this_cpu_ptr(mce_poll_banks)); + this_cpu_write(bank_history[bank], 0ull); + this_cpu_write(bank_storm[bank], false); + + /* If no banks left in storm mode, stop polling */ + if (!this_cpu_dec_return(stormy_bank_count)) + mce_timer_kick(false); +} + +void track_cmci_storm(int bank, u64 status) +{ + unsigned long now = jiffies, delta; + unsigned int shift = 1; + u64 history; + + /* + * When a bank is in storm mode it is polled once per second and + * the history mask will record about the last minute of poll results. + * If it is not in storm mode, then the bank is only checked when + * there is a CMCI interrupt. Check how long it has been since + * this bank was last checked, and adjust the amount of "shift" + * to apply to history. 
+ */ + if (!this_cpu_read(bank_storm[bank])) { + delta = now - this_cpu_read(bank_time_stamp[bank]); + shift = (delta + HZ) / HZ; + } + + /* If has been a long time since the last poll, clear history */ + if (shift >= 64) + history = 0; + else + history = this_cpu_read(bank_history[bank]) << shift; + this_cpu_write(bank_time_stamp[bank], now); + + /* History keeps track of corrected errors. VAL=1 && UC=0 */ + if ((status & (MCI_STATUS_VAL | MCI_STATUS_UC)) == MCI_STATUS_VAL) + history |= 1; + this_cpu_write(bank_history[bank], history); + + if (this_cpu_read(bank_storm[bank])) { + if (history & GENMASK_ULL(STORM_END_POLL_THRESHOLD - 1, 0)) + return; + pr_notice("CPU%d BANK%d CMCI storm subsided\n", smp_processor_id(), bank); + mce_handle_storm(bank, true); + cmci_storm_end(bank); + } else { + if (hweight64(history) < STORM_BEGIN_THRESHOLD) + return; + pr_notice("CPU%d BANK%d CMCI storm detected\n", smp_processor_id(), bank); + mce_handle_storm(bank, false); + cmci_storm_begin(bank); + } +} + /* * Read ADDR and MISC registers. */ diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c index 4238b73c2143..6cc9aa97c092 100644 --- a/arch/x86/kernel/cpu/mce/intel.c +++ b/arch/x86/kernel/cpu/mce/intel.c @@ -47,17 +47,7 @@ static DEFINE_PER_CPU(mce_banks_t, mce_banks_owned); */ static DEFINE_RAW_SPINLOCK(cmci_discover_lock); -/* - * CMCI storm tracking state - * stormy_bank_count: per-cpu count of MC banks in storm state - * bank_history: bitmask tracking of corrected errors seen in each bank - * bank_time_stamp: last time (in jiffies) that each bank was polled - * cmci_threshold: MCi_CTL2 threshold for each bank when there is no storm - */ -static DEFINE_PER_CPU(int, stormy_bank_count); -static DEFINE_PER_CPU(u64 [MAX_NR_BANKS], bank_history); -static DEFINE_PER_CPU(bool [MAX_NR_BANKS], bank_storm); -static DEFINE_PER_CPU(unsigned long [MAX_NR_BANKS], bank_time_stamp); +/* MCi_CTL2 threshold for each bank when there is no storm */ static int cmci_threshold[MAX_NR_BANKS]; /* Linux non-storm CMCI threshold (may be overridden by BIOS */ @@ -70,17 +60,6 @@ static int cmci_threshold[MAX_NR_BANKS]; */ #define CMCI_STORM_THRESHOLD 32749 -/* - * How many errors within the history buffer mark the start of a storm - */ -#define STORM_BEGIN_THRESHOLD 5 - -/* - * How many polls of machine check bank without an error before declaring - * the storm is over - */ -#define STORM_END_POLL_THRESHOLD 30 - static int cmci_supported(int *banks) { u64 cap; @@ -160,76 +139,6 @@ void mce_intel_handle_storm(int bank, bool on) cmci_set_threshold(bank, CMCI_STORM_THRESHOLD); } -static void cmci_storm_begin(int bank) -{ - __set_bit(bank, this_cpu_ptr(mce_poll_banks)); - this_cpu_write(bank_storm[bank], true); - - /* - * If this is the first bank on this CPU to enter storm mode - * start polling - */ - if (this_cpu_inc_return(stormy_bank_count) == 1) - mce_timer_kick(true); -} - -static void cmci_storm_end(int bank) -{ - __clear_bit(bank, this_cpu_ptr(mce_poll_banks)); - this_cpu_write(bank_history[bank], 0ull); - this_cpu_write(bank_storm[bank], false); - - /* If no banks left in storm mode, stop polling */ - if (!this_cpu_dec_return(stormy_bank_count)) - mce_timer_kick(false); -} - -void track_cmci_storm(int bank, u64 status) -{ - unsigned long now = jiffies, delta; - unsigned int shift = 1; - u64 history; - - /* - * When a bank is in storm mode it is polled once per second and - * the history mask will record about the last minute of poll results. 
- * If it is not in storm mode, then the bank is only checked when - * there is a CMCI interrupt. Check how long it has been since - * this bank was last checked, and adjust the amount of "shift" - * to apply to history. - */ - if (!this_cpu_read(bank_storm[bank])) { - delta = now - this_cpu_read(bank_time_stamp[bank]); - shift = (delta + HZ) / HZ; - } - - /* If has been a long time since the last poll, clear history */ - if (shift >= 64) - history = 0; - else - history = this_cpu_read(bank_history[bank]) << shift; - this_cpu_write(bank_time_stamp[bank], now); - - /* History keeps track of corrected errors. VAL=1 && UC=0 */ - if ((status & (MCI_STATUS_VAL | MCI_STATUS_UC)) == MCI_STATUS_VAL) - history |= 1; - this_cpu_write(bank_history[bank], history); - - if (this_cpu_read(bank_storm[bank])) { - if (history & GENMASK_ULL(STORM_END_POLL_THRESHOLD - 1, 0)) - return; - pr_notice("CPU%d BANK%d CMCI storm subsided\n", smp_processor_id(), bank); - mce_handle_storm(bank, true); - cmci_storm_end(bank); - } else { - if (hweight64(history) < STORM_BEGIN_THRESHOLD) - return; - pr_notice("CPU%d BANK%d CMCI storm detected\n", smp_processor_id(), bank); - mce_handle_storm(bank, false); - cmci_storm_begin(bank); - } -} - /* * The interrupt handler. This is called on every event. * Just call the poller directly to log any events.
From patchwork Fri Mar 17 17:20:42 2023
From: Tony Luck
To: Yazen Ghannam
Cc: Borislav Petkov, Smita.KoralahalliChannabasappa@amd.com, dave.hansen@linux.intel.com, hpa@zytor.com, linux-edac@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v3 5/5] x86/mce: Handle AMD threshold interrupt storms
Date: Fri, 17 Mar 2023 10:20:42 -0700
Message-Id: <20230317172042.117201-6-tony.luck@intel.com>
In-Reply-To: <20230317172042.117201-1-tony.luck@intel.com>
References: <20230317172042.117201-1-tony.luck@intel.com>

From: Smita Koralahalli

Extend the logic of handling CMCI storms to AMD threshold interrupts.

Rely on an approach similar to Intel's CMCI to mitigate storms per CPU and per bank. But, unlike CMCI, do not set thresholds to reduce the interrupt rate during a storm. Rather, disable the interrupt on the corresponding CPU and bank. Re-enable the interrupts once enough consecutive polls of the bank show no corrected errors (30, the same count the Intel code uses).

Turning off the threshold interrupts would be a better solution on AMD systems as other error severities will still be handled even if the threshold interrupts are disabled.

[Tony: Small tweak because mce_handle_storm() isn't a pointer now]

Signed-off-by: Smita Koralahalli
Signed-off-by: Tony Luck
--- arch/x86/kernel/cpu/mce/internal.h | 2 ++ arch/x86/kernel/cpu/mce/amd.c | 49 ++++++++++++++++++++++++++++++ arch/x86/kernel/cpu/mce/core.c | 3 ++ 3 files changed, 54 insertions(+) diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h index 9b2c54f30fb9..b580bd609fdc 100644 --- a/arch/x86/kernel/cpu/mce/internal.h +++ b/arch/x86/kernel/cpu/mce/internal.h @@ -206,7 +206,9 @@ extern bool filter_mce(struct mce *m); #ifdef CONFIG_X86_MCE_AMD extern bool amd_filter_mce(struct mce *m); +void mce_amd_handle_storm(int bank, bool on); #else +static inline void mce_amd_handle_storm(int bank, bool on) {} static inline bool amd_filter_mce(struct mce *m) { return false; } #endif diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c index 1c87501e0fa3..b7f92af065e1 100644 --- a/arch/x86/kernel/cpu/mce/amd.c +++ b/arch/x86/kernel/cpu/mce/amd.c @@ -466,6 +466,47 @@ static void threshold_restart_bank(void *_tr) wrmsr(tr->b->address, lo, hi); } +static void _reset_block(struct threshold_block *block) +{ + struct thresh_restart tr; + + memset(&tr, 0, sizeof(tr)); + tr.b = block; + threshold_restart_bank(&tr); +} + +static void toggle_interrupt_reset_block(struct threshold_block *block, bool on) +{ + if (!block) + return; + + block->interrupt_enable = !!on; + _reset_block(block); +} + +void mce_amd_handle_storm(int bank, bool on) +{ + struct threshold_block *first_block = NULL, *block = NULL, *tmp = NULL; + struct threshold_bank **bp = this_cpu_read(threshold_banks); + unsigned long flags; + + if (!bp) + return; + + local_irq_save(flags); + + first_block = bp[bank]->blocks; + if (!first_block) + goto end; + + toggle_interrupt_reset_block(first_block, on); + + list_for_each_entry_safe(block, tmp, &first_block->miscj, miscj) + toggle_interrupt_reset_block(block, on); +end: + local_irq_restore(flags); +} + static void mce_threshold_block_init(struct threshold_block *b, int offset) { struct thresh_restart tr = { @@ -867,6 +908,7 @@ static void amd_threshold_interrupt(void) struct threshold_block *first_block = NULL, *block = NULL, *tmp = NULL; struct threshold_bank **bp =
this_cpu_read(threshold_banks); unsigned int bank, cpu = smp_processor_id(); + u64 status; /* * Validate that the threshold bank has been initialized already. The @@ -880,6 +922,13 @@ static void amd_threshold_interrupt(void) if (!(per_cpu(bank_map, cpu) & (1 << bank))) continue; + rdmsrl(mca_msr_reg(bank, MCA_STATUS), status); + track_cmci_storm(bank, status); + + /* Return early on an interrupt storm */ + if (this_cpu_read(bank_storm[bank])) + return; + first_block = bp[bank]->blocks; if (!first_block) continue; diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c index d27daa199523..6121f0afe45a 100644 --- a/arch/x86/kernel/cpu/mce/core.c +++ b/arch/x86/kernel/cpu/mce/core.c @@ -2072,6 +2072,9 @@ void mce_handle_storm(int bank, bool on) case X86_VENDOR_INTEL: mce_intel_handle_storm(bank, on); break; + case X86_VENDOR_AMD: + mce_amd_handle_storm(bank, on); + break; } }
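To make the storm life cycle in this series easier to follow, here is a small stand-alone user-space model of the begin/end rules. The two thresholds are copied from the patches above; everything else (function names, the synthetic error pattern, the use of __builtin_popcountll() as a stand-in for the kernel's hweight64()) is invented for illustration and is not kernel code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define STORM_BEGIN_THRESHOLD		5	/* errors in the history to declare a storm */
#define STORM_END_POLL_THRESHOLD	30	/* consecutive clean polls to end a storm */

static uint64_t history;	/* bit 0 = most recent poll of this bank */
static bool storm;

static void poll_bank(bool corrected_error)
{
	history <<= 1;			/* age the window by one poll */
	if (corrected_error)
		history |= 1;

	if (storm) {
		/* any error within the last 30 polls keeps the storm alive */
		if (history & ((1ull << STORM_END_POLL_THRESHOLD) - 1))
			return;
		storm = false;
		history = 0;		/* cmci_storm_end() also clears the history */
		printf("storm subsided\n");
	} else {
		if (__builtin_popcountll(history) < STORM_BEGIN_THRESHOLD)
			return;
		storm = true;
		printf("storm detected\n");
	}
}

int main(void)
{
	int i;

	for (i = 0; i < 10; i++)	/* a burst of corrected errors ... */
		poll_bank(true);	/* ... starts a storm at the fifth one */
	for (i = 0; i < 40; i++)	/* a long quiet period ends it */
		poll_bank(false);
	return 0;
}

The kernel versions differ mainly in that the shift amount is derived from the elapsed time between polls (as in track_cmci_storm() above) and that beginning or ending a storm also reprograms the hardware and the poll timer via mce_handle_storm() and mce_timer_kick().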