From patchwork Thu Mar 30 06:19:55 2017
From: Haozhong Zhang
To: xen-devel@lists.xen.org
Date: Thu, 30 Mar 2017 14:19:55 +0800
Message-Id: <20170330062003.9119-2-haozhong.zhang@intel.com>
In-Reply-To: <20170330062003.9119-1-haozhong.zhang@intel.com>
References: <20170330062003.9119-1-haozhong.zhang@intel.com>
Cc: Haozhong Zhang, Jan Beulich, Andrew Cooper
Subject: [Xen-devel] [PATCH v3 1/9] x86/mce: handle LMCE locally
LMCE is delivered to only one CPU thread, so the MCE handler, barriers,
and softirq handler should proceed without waiting for other CPUs when
handling an LMCE. Note that LMCE is still broadcast to all vcpus as a
regular MCE on Intel CPUs right now.

Signed-off-by: Haozhong Zhang
---
Cc: Jan Beulich
Cc: Andrew Cooper

Changes in v3:
 * Do not add a nowait argument to the currently unused mce_barrier().
 * Do not access the global variable 'severity_cpu' when handling LMCE.
 * Rename the per-cpu flag 'mce_in_process' to 'nonlocal_mce_in_progress'
   to clarify it's for non-local MCE.
 * Add code comments for tricky points.
---
 xen/arch/x86/cpu/mcheck/barrier.c  |  12 ++--
 xen/arch/x86/cpu/mcheck/barrier.h  |  12 +++-
 xen/arch/x86/cpu/mcheck/mcaction.c |   4 +-
 xen/arch/x86/cpu/mcheck/mce.c      | 122 ++++++++++++++++++++++++++++---------
 xen/arch/x86/cpu/mcheck/mce.h      |   2 +
 xen/arch/x86/cpu/mcheck/x86_mca.h  |   4 +-
 6 files changed, 118 insertions(+), 38 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/barrier.c b/xen/arch/x86/cpu/mcheck/barrier.c
index 5dce1fb..0b3b091 100644
--- a/xen/arch/x86/cpu/mcheck/barrier.c
+++ b/xen/arch/x86/cpu/mcheck/barrier.c
@@ -16,11 +16,11 @@ void mce_barrier_dec(struct mce_softirq_barrier *bar)
     atomic_dec(&bar->val);
 }
 
-void mce_barrier_enter(struct mce_softirq_barrier *bar)
+void mce_barrier_enter(struct mce_softirq_barrier *bar, bool nowait)
 {
     int gen;
 
-    if (!mce_broadcast)
+    if ( !mce_broadcast || nowait )
         return;
     atomic_inc(&bar->ingen);
     gen = atomic_read(&bar->outgen);
@@ -34,11 +34,11 @@ void mce_barrier_enter(struct mce_softirq_barrier *bar)
     }
 }
 
-void mce_barrier_exit(struct mce_softirq_barrier *bar)
+void mce_barrier_exit(struct mce_softirq_barrier *bar, bool nowait)
 {
     int gen;
 
-    if ( !mce_broadcast )
+    if ( !mce_broadcast || nowait )
         return;
     atomic_inc(&bar->outgen);
     gen = atomic_read(&bar->ingen);
@@ -54,6 +54,6 @@ void mce_barrier_exit(struct mce_softirq_barrier *bar)
 
 void mce_barrier(struct mce_softirq_barrier *bar)
 {
-    mce_barrier_enter(bar);
-    mce_barrier_exit(bar);
+    mce_barrier_enter(bar, false);
+    mce_barrier_exit(bar, false);
 }
diff --git a/xen/arch/x86/cpu/mcheck/barrier.h b/xen/arch/x86/cpu/mcheck/barrier.h
index 87f7550..f2613e6 100644
--- a/xen/arch/x86/cpu/mcheck/barrier.h
+++ b/xen/arch/x86/cpu/mcheck/barrier.h
@@ -25,6 +25,14 @@ void mce_barrier_init(struct mce_softirq_barrier *);
 void mce_barrier_dec(struct mce_softirq_barrier *);
 
 /*
+ * If nowait is true, mce_barrier_enter/exit() will return immediately
+ * without touching the barrier. It's used when handling an LMCE, which
+ * is received on only one CPU and thus does not invoke
+ * mce_barrier_enter/exit() calls on all CPUs.
+ *
+ * If nowait is false, mce_barrier_enter/exit() will handle the given
+ * barrier as below.
+ *
  * Increment the generation number and the value. The generation number
  * is incremented when entering a barrier. This way, it can be checked
  * on exit if a CPU is trying to re-enter the barrier. This can happen
@@ -36,8 +44,8 @@ void mce_barrier_dec(struct mce_softirq_barrier *);
  * These barrier functions should always be paired, so that the
  * counter value will reach 0 again after all CPUs have exited.
  */
-void mce_barrier_enter(struct mce_softirq_barrier *);
-void mce_barrier_exit(struct mce_softirq_barrier *);
+void mce_barrier_enter(struct mce_softirq_barrier *, bool nowait);
+void mce_barrier_exit(struct mce_softirq_barrier *, bool nowait);
 
 void mce_barrier(struct mce_softirq_barrier *);
diff --git a/xen/arch/x86/cpu/mcheck/mcaction.c b/xen/arch/x86/cpu/mcheck/mcaction.c
index dab9eac..ca17d22 100644
--- a/xen/arch/x86/cpu/mcheck/mcaction.c
+++ b/xen/arch/x86/cpu/mcheck/mcaction.c
@@ -96,7 +96,9 @@ mc_memerr_dhandler(struct mca_binfo *binfo,
                 bank->mc_addr = gfn << PAGE_SHIFT |
                   (bank->mc_addr & (PAGE_SIZE -1 ));
 
-                if (fill_vmsr_data(bank, d, global->mc_gstatus,
+                /* TODO: support injecting LMCE */
+                if (fill_vmsr_data(bank, d,
+                                   global->mc_gstatus & ~MCG_STATUS_LMCE,
                                    vmce_vcpuid == VMCE_INJECT_BROADCAST))
                 {
                     mce_printk(MCE_QUIET, "Fill vMCE# data for DOM%d "
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index 11d0e23..c1a59bf 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -42,6 +42,13 @@ DEFINE_PER_CPU_READ_MOSTLY(struct mca_banks *, poll_bankmask);
 DEFINE_PER_CPU_READ_MOSTLY(struct mca_banks *, no_cmci_banks);
 DEFINE_PER_CPU_READ_MOSTLY(struct mca_banks *, mce_clear_banks);
 
+/*
+ * Flag to indicate that at least one non-local MCE on this CPU has
+ * not been completely handled. It's set by mcheck_cmn_handler() and
+ * cleared by mce_softirq().
+ */
+static DEFINE_PER_CPU(bool, nonlocal_mce_in_progress);
+
 static void intpose_init(void);
 static void mcinfo_clear(struct mc_info *);
 struct mca_banks *mca_allbanks;
@@ -186,7 +193,29 @@ static struct mce_softirq_barrier mce_trap_bar;
  */
 static DEFINE_SPINLOCK(mce_logout_lock);
 
+/*
+ * 'severity_cpu' is used in both mcheck_cmn_handler() and mce_softirq().
+ *
+ * When handling a broadcast MCE, MCE barriers take effect to prevent
+ * 'severity_cpu' being modified in one function on one CPU and accessed
+ * in another on a different CPU.
+ *
+ * When handling LMCE, mce_barrier_enter() and mce_barrier_exit() are
+ * effectively NOPs, so it's possible that mcheck_cmn_handler() is handling
+ * an LMCE on CPUx while mce_softirq() is handling another LMCE on CPUy.
+ * If both are modifying 'severity_cpu', they may interfere with each
+ * other. Therefore, it's better for mcheck_cmn_handler() and mce_softirq()
+ * to avoid accessing 'severity_cpu' when handling LMCE, unless other
+ * approaches are taken to avoid the above interference.
+ */
 static atomic_t severity_cpu = ATOMIC_INIT(-1);
+/*
+ * The following two global variables are used only in mcheck_cmn_handler().
+ * Because there can be at most one MCE (including LMCE) outstanding
+ * on the hardware platform, we don't need to worry about the above
+ * interference, where two code paths handling different MCEs access
+ * the same global variables.
+ */
 static atomic_t found_error = ATOMIC_INIT(0);
 static cpumask_t mce_fatal_cpus;
 
@@ -395,6 +424,7 @@ mcheck_mca_logout(enum mca_source who, struct mca_banks *bankmask,
         sp->errcnt = errcnt;
         sp->ripv = (gstatus & MCG_STATUS_RIPV) != 0;
         sp->eipv = (gstatus & MCG_STATUS_EIPV) != 0;
+        sp->lmce = (gstatus & MCG_STATUS_LMCE) != 0;
         sp->uc = uc;
         sp->pcc = pcc;
         sp->recoverable = recover;
@@ -458,6 +488,7 @@ void mcheck_cmn_handler(const struct cpu_user_regs *regs)
     uint64_t gstatus;
     mctelem_cookie_t mctc = NULL;
     struct mca_summary bs;
+    bool lmce;
 
     mce_spin_lock(&mce_logout_lock);
 
@@ -467,6 +498,10 @@ void mcheck_cmn_handler(const struct cpu_user_regs *regs)
     }
 
     mctc = mcheck_mca_logout(MCA_MCE_SCAN, bankmask, &bs, clear_bank);
+    lmce = bs.lmce;
+    if (!lmce)
+        this_cpu(nonlocal_mce_in_progress) = true;
+
     if (bs.errcnt) {
         /*
          * Uncorrected errors must be dealt with in softirq context.
@@ -488,8 +523,9 @@ void mcheck_cmn_handler(const struct cpu_user_regs *regs)
         }
         atomic_set(&found_error, 1);
 
-        /* The last CPU will be take check/clean-up etc */
-        atomic_set(&severity_cpu, smp_processor_id());
+        /* The last CPU will take care of check/clean-up etc. */
+        if (!lmce)
+            atomic_set(&severity_cpu, smp_processor_id());
 
         mce_printk(MCE_CRITICAL, "MCE: clear_bank map %lx on CPU%d\n",
                    *((unsigned long*)clear_bank), smp_processor_id());
@@ -501,16 +537,16 @@ void mcheck_cmn_handler(const struct cpu_user_regs *regs)
     }
     mce_spin_unlock(&mce_logout_lock);
 
-    mce_barrier_enter(&mce_trap_bar);
+    mce_barrier_enter(&mce_trap_bar, lmce);
     if ( mctc != NULL && mce_urgent_action(regs, mctc))
         cpumask_set_cpu(smp_processor_id(), &mce_fatal_cpus);
-    mce_barrier_exit(&mce_trap_bar);
+    mce_barrier_exit(&mce_trap_bar, lmce);
 
     /*
      * Wait until everybody has processed the trap.
      */
-    mce_barrier_enter(&mce_trap_bar);
-    if (atomic_read(&severity_cpu) == smp_processor_id())
+    mce_barrier_enter(&mce_trap_bar, lmce);
+    if (lmce || atomic_read(&severity_cpu) == smp_processor_id())
     {
         /* According to SDM, if no error bank found on any cpus,
          * something unexpected happening, we can't do any
@@ -527,16 +563,16 @@ void mcheck_cmn_handler(const struct cpu_user_regs *regs)
         }
         atomic_set(&found_error, 0);
     }
-    mce_barrier_exit(&mce_trap_bar);
+    mce_barrier_exit(&mce_trap_bar, lmce);
 
     /* Clear flags after above fatal check */
-    mce_barrier_enter(&mce_trap_bar);
+    mce_barrier_enter(&mce_trap_bar, lmce);
     gstatus = mca_rdmsr(MSR_IA32_MCG_STATUS);
     if ((gstatus & MCG_STATUS_MCIP) != 0) {
         mce_printk(MCE_CRITICAL, "MCE: Clear MCIP@ last step");
         mca_wrmsr(MSR_IA32_MCG_STATUS, 0);
     }
-    mce_barrier_exit(&mce_trap_bar);
+    mce_barrier_exit(&mce_trap_bar, lmce);
 
     raise_softirq(MACHINE_CHECK_SOFTIRQ);
 }
@@ -1700,38 +1736,61 @@ static void mce_softirq(void)
 {
     int cpu = smp_processor_id();
     unsigned int workcpu;
+    /*
+     * On platforms supporting broadcast MCE, if there is a
+     * non-local MCE waiting for processing on this CPU, it implies other
+     * CPUs received the same MCE as well. Therefore, mce_softirq()
+     * will be launched on other CPUs as well and compete with this
+     * mce_softirq() to handle all MCEs on all CPUs. In that case, we
+     * should use MCE barriers to sync with other CPUs. Otherwise, we
+     * do not need to wait for other CPUs.
+     */
+    bool nowait = !this_cpu(nonlocal_mce_in_progress);
 
     mce_printk(MCE_VERBOSE, "CPU%d enter softirq\n", cpu);
 
-    mce_barrier_enter(&mce_inside_bar);
+    mce_barrier_enter(&mce_inside_bar, nowait);
 
     /*
-     * Everybody is here. Now let's see who gets to do the
-     * recovery work. Right now we just see if there's a CPU
-     * that did not have any problems, and pick that one.
-     *
-     * First, just set a default value: the last CPU who reaches this
-     * will overwrite the value and become the default.
+     * When LMCE is being handled and no non-local MCE is waiting for
+     * processing, mce_softirq() does not need to set 'severity_cpu'. And
+     * it should not, because the modification in this function may
+     * interfere with the simultaneous modification in mcheck_cmn_handler()
+     * on another CPU.
      */
-
-    atomic_set(&severity_cpu, cpu);
-
-    mce_barrier_enter(&mce_severity_bar);
-    if (!mctelem_has_deferred(cpu))
+    if (!nowait) {
+        /*
+         * Everybody is here. Now let's see who gets to do the
+         * recovery work. Right now we just see if there's a CPU
+         * that did not have any problems, and pick that one.
+         *
+         * First, just set a default value: the last CPU who reaches this
+         * will overwrite the value and become the default.
+         */
         atomic_set(&severity_cpu, cpu);
 
-    mce_barrier_exit(&mce_severity_bar);
-
-    /* We choose severity_cpu for further processing */
-    if (atomic_read(&severity_cpu) == cpu) {
+        mce_barrier_enter(&mce_severity_bar, nowait);
+        if (!mctelem_has_deferred(cpu))
+            atomic_set(&severity_cpu, cpu);
+        mce_barrier_exit(&mce_severity_bar, nowait);
+    }
+
+    /*
+     * Either no non-local MCE is being handled, or this CPU is chosen as
+     * severity_cpu for further processing ...
+     */
+    if (nowait || atomic_read(&severity_cpu) == cpu) {
 
         mce_printk(MCE_VERBOSE, "CPU%d handling errors\n", cpu);
 
         /* Step1: Fill DOM0 LOG buffer, vMCE injection buffer and
          * vMCE MSRs virtualization buffer
          */
-        for_each_online_cpu(workcpu) {
-            mctelem_process_deferred(workcpu, mce_delayed_action);
-        }
+        if (nowait)
+            mctelem_process_deferred(cpu, mce_delayed_action);
+        else
+            for_each_online_cpu(workcpu)
+                mctelem_process_deferred(workcpu, mce_delayed_action);
 
         /* Step2: Send Log to DOM0 through vIRQ */
         if (dom0_vmce_enabled()) {
@@ -1740,7 +1799,14 @@ static void mce_softirq(void)
         }
     }
 
-    mce_barrier_exit(&mce_inside_bar);
+    mce_barrier_exit(&mce_inside_bar, nowait);
+
+    /*
+     * At this point, either the only LMCE has been handled, or all MCEs on
+     * this CPU waiting for processing have been handled on this CPU (if it
+     * was chosen as severity_cpu) or by mce_softirq() on another CPU.
+     */
+    this_cpu(nonlocal_mce_in_progress) = false;
 }
 
 /* Machine Check owner judge algorithm:
diff --git a/xen/arch/x86/cpu/mcheck/mce.h b/xen/arch/x86/cpu/mcheck/mce.h
index 10e5ceb..3a9134e 100644
--- a/xen/arch/x86/cpu/mcheck/mce.h
+++ b/xen/arch/x86/cpu/mcheck/mce.h
@@ -109,12 +109,14 @@ struct mca_summary {
     int         eipv;   /* meaningful on #MC */
     bool        uc;     /* UC flag */
     bool        pcc;    /* PCC flag */
+    bool        lmce;   /* LMCE flag (Intel only) */
     bool        recoverable; /* software error recoverable flag */
 };
 
 DECLARE_PER_CPU(struct mca_banks *, poll_bankmask);
 DECLARE_PER_CPU(struct mca_banks *, no_cmci_banks);
 DECLARE_PER_CPU(struct mca_banks *, mce_clear_banks);
+DECLARE_PER_CPU(bool, nonlocal_mce_in_progress);
 
 extern bool cmci_support;
 extern bool is_mc_panic;
diff --git a/xen/arch/x86/cpu/mcheck/x86_mca.h b/xen/arch/x86/cpu/mcheck/x86_mca.h
index 34d1921..de03f82 100644
--- a/xen/arch/x86/cpu/mcheck/x86_mca.h
+++ b/xen/arch/x86/cpu/mcheck/x86_mca.h
@@ -42,7 +42,9 @@
 #define MCG_STATUS_RIPV         0x0000000000000001ULL
 #define MCG_STATUS_EIPV         0x0000000000000002ULL
 #define MCG_STATUS_MCIP         0x0000000000000004ULL
-/* Bits 3-63 are reserved */
+#define MCG_STATUS_LMCE         0x0000000000000008ULL /* Intel specific */
+/* Bits 3-63 are reserved on CPUs not supporting LMCE */
+/* Bits 4-63 are reserved on CPUs supporting LMCE */
 
 /* Bitfield of MSR_K8_MCi_STATUS registers */
 /* MCA error code */