diff mbox series

[v3,3/3] mm,hwpoison: add kill_accessing_process() to find error virtual address

Message ID 20210421005728.1994268-4-nao.horiguchi@gmail.com (mailing list archive)
State New
Series mm,hwpoison: fix sending SIGBUS for Action Required MCE

Commit Message

Naoya Horiguchi April 21, 2021, 12:57 a.m. UTC
From: Naoya Horiguchi <naoya.horiguchi@nec.com>

The previous patch solves the infinite MCE loop issue when multiple
MCE events races.  The remaining issue is to make sure that all threads
processing Action Required MCEs send to the current processes the
SIGBUS with the proper virtual address and the error size.

This patch suggests to do page table walk to find the error virtual
address.  If we find multiple virtual addresses in walking, we now can't
determine which one is correct, so we fall back to sending SIGBUS in
kill_me_maybe() without error info as we do now.  This corner case needs
to be solved in the future.

Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
Tested-by: Aili Yao <yaoaili@kingsoft.com>
---
change log v1 -> v2:
- initialize local variables in check_hwpoisoned_entry() and
  hwpoison_pte_range()
- fix and improve logic to calculate error address offset.
---
 arch/x86/kernel/cpu/mce/core.c |  13 ++-
 include/linux/swapops.h        |   5 ++
 mm/memory-failure.c            | 143 ++++++++++++++++++++++++++++++++-
 3 files changed, 158 insertions(+), 3 deletions(-)

Comments

Borislav Petkov April 22, 2021, 5:02 p.m. UTC | #1
On Wed, Apr 21, 2021 at 09:57:28AM +0900, Naoya Horiguchi wrote:
> From: Naoya Horiguchi <naoya.horiguchi@nec.com>
> 
> The previous patch solves the infinite MCE loop issue when multiple

"previous patch" has no meaning when it is in git.

> MCE events races.  The remaining issue is to make sure that all threads

	    "race."

> processing Action Required MCEs send to the current processes the

s/the //

> SIGBUS with the proper virtual address and the error size.
> 
> This patch suggests to do page table walk to find the error virtual

Avoid having "This patch" or "This commit" in the commit message. It is
tautologically useless.

Also, do

$ git grep 'This patch' Documentation/process

for more details.

> address.  If we find multiple virtual addresses in walking, we now can't

Who's "we"?				during the pagetable walk

> determine which one is correct, so we fall back to sending SIGBUS in
> kill_me_maybe() without error info as we do now.  This corner case needs
> to be solved in the future.

Solved how? If you can't map which error comes from which process, you
can't do anything here. You could send SIGBUS to all but you might
injure some innocent bystanders this way.

Just code structuring suggestions below - mm stuff is for someone else
to review properly.

> +static int hwpoison_pte_range(pmd_t *pmdp, unsigned long addr,
> +			      unsigned long end, struct mm_walk *walk)
> +{
> +	struct hwp_walk *hwp = (struct hwp_walk *)walk->private;
> +	int ret = 0;
> +	pte_t *ptep;
> +	spinlock_t *ptl;
> +
> +	ptl = pmd_trans_huge_lock(pmdp, walk->vma);
> +	if (ptl) {

Save yourself an indentation level:

	if (!ptl)
		goto unlock;

> +		pmd_t pmd = *pmdp;
> +
> +		if (pmd_present(pmd)) {

... ditto...

> +			unsigned long pfn = pmd_pfn(pmd);
> +
> +			if (pfn <= hwp->pfn && hwp->pfn < pfn + HPAGE_PMD_NR) {
> +				unsigned long hwpoison_vaddr = addr +
> +					((hwp->pfn - pfn) << PAGE_SHIFT);

... which will allow you to not break those.

> +
> +				ret = set_to_kill(&hwp->tk, hwpoison_vaddr,
> +						  PAGE_SHIFT);
> +			}
> +		}
> +		spin_unlock(ptl);
> +		goto out;
> +	}
> +
> +	if (pmd_trans_unstable(pmdp))
> +		goto out;
> +
> +	ptep = pte_offset_map_lock(walk->vma->vm_mm, pmdp, addr, &ptl);
> +	for (; addr != end; ptep++, addr += PAGE_SIZE) {
> +		ret = check_hwpoisoned_entry(*ptep, addr, PAGE_SHIFT,
> +					     hwp->pfn, &hwp->tk);
> +		if (ret == 1)
> +			break;
> +	}
> +	pte_unmap_unlock(ptep - 1, ptl);
> +out:
> +	cond_resched();
> +	return ret;
> +}
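For illustration, the early-return restructuring Borislav suggests might end up looking like the following untested sketch (the `unlock`/`pte_table`/`out` label names and the hoisted local variables are assumptions, not part of the posted patch; the final shape is up to the author):

```c
static int hwpoison_pte_range(pmd_t *pmdp, unsigned long addr,
			      unsigned long end, struct mm_walk *walk)
{
	struct hwp_walk *hwp = (struct hwp_walk *)walk->private;
	int ret = 0;
	pte_t *ptep;
	spinlock_t *ptl;
	pmd_t pmd;
	unsigned long pfn, hwpoison_vaddr;

	ptl = pmd_trans_huge_lock(pmdp, walk->vma);
	if (!ptl)
		goto pte_table;

	/* Huge-PMD case: check whether the poisoned pfn falls in this PMD. */
	pmd = *pmdp;
	if (!pmd_present(pmd))
		goto unlock;

	pfn = pmd_pfn(pmd);
	if (pfn <= hwp->pfn && hwp->pfn < pfn + HPAGE_PMD_NR) {
		hwpoison_vaddr = addr + ((hwp->pfn - pfn) << PAGE_SHIFT);
		ret = set_to_kill(&hwp->tk, hwpoison_vaddr, PAGE_SHIFT);
	}
unlock:
	spin_unlock(ptl);
	goto out;

pte_table:
	/* Normal PTE table: scan each entry in [addr, end). */
	if (pmd_trans_unstable(pmdp))
		goto out;

	ptep = pte_offset_map_lock(walk->vma->vm_mm, pmdp, addr, &ptl);
	for (; addr != end; ptep++, addr += PAGE_SIZE) {
		ret = check_hwpoisoned_entry(*ptep, addr, PAGE_SHIFT,
					     hwp->pfn, &hwp->tk);
		if (ret == 1)
			break;
	}
	pte_unmap_unlock(ptep - 1, ptl);
out:
	cond_resched();
	return ret;
}
```

This keeps the huge-PMD comparison and the offset calculation on single lines, as the "which will allow you to not break those" comment asks.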
HORIGUCHI NAOYA(堀口 直也) April 23, 2021, 2:18 a.m. UTC | #2
On Thu, Apr 22, 2021 at 07:02:13PM +0200, Borislav Petkov wrote:
> On Wed, Apr 21, 2021 at 09:57:28AM +0900, Naoya Horiguchi wrote:
> > From: Naoya Horiguchi <naoya.horiguchi@nec.com>
> > 
> > The previous patch solves the infinite MCE loop issue when multiple
> 
> "previous patch" has no meaning when it is in git.
> 
> > MCE events races.  The remaining issue is to make sure that all threads
> 
> 	    "race."
> 
> > processing Action Required MCEs send to the current processes the
> 
> s/the //

I'll fix these grammar errors.

> 
> > SIGBUS with the proper virtual address and the error size.
> > 
> > This patch suggests to do page table walk to find the error virtual
> 
> Avoid having "This patch" or "This commit" in the commit message. It is
> tautologically useless.
> 
> Also, do
> 
> $ git grep 'This patch' Documentation/process
> 
> for more details.

I didn't know the following rule:

    Describe your changes in imperative mood, e.g. "make xyzzy do frotz"
    instead of "[This patch] makes xyzzy do frotz" or "[I] changed xyzzy
    to do frotz", as if you are giving orders to the codebase to change
    its behaviour.

I'll follow this in my future posts.

> 
> > address.  If we find multiple virtual addresses in walking, we now can't
> 
> Who's "we"?				during the pagetable walk

I wrongly used the rhetorical "we". I'll change this sentence to the passive form.

> 
> > determine which one is correct, so we fall back to sending SIGBUS in
> > kill_me_maybe() without error info as we do now.  This corner case needs
> > to be solved in the future.
> 
> Solved how?

I don't know exactly.  The MCE subsystem seems to have code extracting the
linear address, so I wonder whether that could be used as a hint to
memory_failure() to find the proper virtual address.

> If you can't map which error comes from which process, you
> can't do anything here. You could send SIGBUS to all but you might
> injure some innocent bystanders this way.

The situation in question is caused by an Action Required MCE, so
we know which process we should send SIGBUS to. So if we choose
to send SIGBUS to all, no innocent bystanders would be affected.
But when the process has multiple virtual addresses associated
with the error physical address, the process receives multiple
SIGBUSes and all but one have a wrong value in si_addr in siginfo_t,
so that's confusing.

> 
> Just code structuring suggestions below - mm stuff is for someone else
> to review properly.

Thank you, I'll update with them.

- Naoya Horiguchi

> 
> > +static int hwpoison_pte_range(pmd_t *pmdp, unsigned long addr,
> > +			      unsigned long end, struct mm_walk *walk)
> > +{
> > +	struct hwp_walk *hwp = (struct hwp_walk *)walk->private;
> > +	int ret = 0;
> > +	pte_t *ptep;
> > +	spinlock_t *ptl;
> > +
> > +	ptl = pmd_trans_huge_lock(pmdp, walk->vma);
> > +	if (ptl) {
> 
> Save yourself an indentation level:
> 
> 	if (!ptl)
> 		goto unlock;
> 
> > +		pmd_t pmd = *pmdp;
> > +
> > +		if (pmd_present(pmd)) {
> 
> ... ditto...
> 
> > +			unsigned long pfn = pmd_pfn(pmd);
> > +
> > +			if (pfn <= hwp->pfn && hwp->pfn < pfn + HPAGE_PMD_NR) {
> > +				unsigned long hwpoison_vaddr = addr +
> > +					((hwp->pfn - pfn) << PAGE_SHIFT);
> 
> ... which will allow you to not break those.
> 
> > +
> > +				ret = set_to_kill(&hwp->tk, hwpoison_vaddr,
> > +						  PAGE_SHIFT);
> > +			}
> > +		}
> > +		spin_unlock(ptl);
> > +		goto out;
> > +	}
> > +
> > +	if (pmd_trans_unstable(pmdp))
> > +		goto out;
> > +
> > +	ptep = pte_offset_map_lock(walk->vma->vm_mm, pmdp, addr, &ptl);
> > +	for (; addr != end; ptep++, addr += PAGE_SIZE) {
> > +		ret = check_hwpoisoned_entry(*ptep, addr, PAGE_SHIFT,
> > +					     hwp->pfn, &hwp->tk);
> > +		if (ret == 1)
> > +			break;
> > +	}
> > +	pte_unmap_unlock(ptep - 1, ptl);
> > +out:
> > +	cond_resched();
> > +	return ret;
> > +}
> 
> 
> -- 
> Regards/Gruss,
>     Boris.
> 
> https://people.kernel.org/tglx/notes-about-netiquette
>
Borislav Petkov April 23, 2021, 11:57 a.m. UTC | #3
On Fri, Apr 23, 2021 at 02:18:34AM +0000, HORIGUCHI NAOYA(堀口 直也) wrote:
> I don't know exactly.  The MCE subsystem seems to have code extracting the
> linear address, so I wonder whether that could be used as a hint to
> memory_failure() to find the proper virtual address.

See "Table 15-3. Address Mode in IA32_MCi_MISC[8:6]" in the SDM -
apparently it can report all kinds of address types, depending on the hw
incarnation or MCA bank type or whatnot. Tony knows :)

> The situation in question is caused by an Action Required MCE, so
> we know which process we should send SIGBUS to. So if we choose
> to send SIGBUS to all, no innocent bystanders would be affected.
> But when the process has multiple virtual addresses associated
> with the error physical address, the process receives multiple
> SIGBUSes and all but one have a wrong value in si_addr in siginfo_t,
> so that's confusing.

Is that scenario real or hypothetical?

Because I'd expect that if we send it a SIGBUS and we poison that page,
then all the VAs mapping it will have to handle the situation that that
page has been poisoned and pulled from under them.

So from a hw perspective, there won't be any more accesses to the faulty
physical page.

In a perfect world, that is...
HORIGUCHI NAOYA(堀口 直也) April 26, 2021, 8:23 a.m. UTC | #4
On Fri, Apr 23, 2021 at 01:57:25PM +0200, Borislav Petkov wrote:
> On Fri, Apr 23, 2021 at 02:18:34AM +0000, HORIGUCHI NAOYA(堀口 直也) wrote:
> > I don't know exactly.  The MCE subsystem seems to have code extracting the
> > linear address, so I wonder whether that could be used as a hint to
> > memory_failure() to find the proper virtual address.
> 
> See "Table 15-3. Address Mode in IA32_MCi_MISC[8:6]" in the SDM -
> apparently it can report all kinds of address types, depending on the hw
> incarnation or MCA bank type or whatnot. Tony knows :)

"15.9.3.2 Architecturally Defined SRAR Errors" says that the register
is supposed to hold the physical address.

    For both the data load and instruction fetch errors, the ADDRV and MISCV
    flags in the IA32_MCi_STATUS register are set to indicate that the offending
    physical address information is available from the IA32_MCi_MISC and the
    IA32_MCi_ADDR registers.

> > The situation in question is caused by an Action Required MCE, so
> > we know which process we should send SIGBUS to. So if we choose
> > to send SIGBUS to all, no innocent bystanders would be affected.
> > But when the process has multiple virtual addresses associated
> > with the error physical address, the process receives multiple
> > SIGBUSes and all but one have a wrong value in si_addr in siginfo_t,
> > so that's confusing.
> 
> Is that scenario real or hypothetical?
> 
> Because I'd expect that if we send it a SIGBUS and we poison that page,
> then all the VAs mapping it will have to handle the situation that that
> page has been poisoned and pulled from under them.

IIUC, the above should be done by the first MCE handling. In the "already
hwpoisoned" case, the page has already been poisoned and all mappings of it
should already be unmapped, so what we additionally need is to send SIGBUS
to tell the application that it should take some action or abort
immediately.

Thanks,
Naoya Horiguchi

Patch

diff --git v5.12-rc8/arch/x86/kernel/cpu/mce/core.c v5.12-rc8_patched/arch/x86/kernel/cpu/mce/core.c
index 7962355436da..3ce23445a48c 100644
--- v5.12-rc8/arch/x86/kernel/cpu/mce/core.c
+++ v5.12-rc8_patched/arch/x86/kernel/cpu/mce/core.c
@@ -1257,19 +1257,28 @@  static void kill_me_maybe(struct callback_head *cb)
 {
 	struct task_struct *p = container_of(cb, struct task_struct, mce_kill_me);
 	int flags = MF_ACTION_REQUIRED;
+	int ret;
 
 	pr_err("Uncorrected hardware memory error in user-access at %llx", p->mce_addr);
 
 	if (!p->mce_ripv)
 		flags |= MF_MUST_KILL;
 
-	if (!memory_failure(p->mce_addr >> PAGE_SHIFT, flags) &&
-	    !(p->mce_kflags & MCE_IN_KERNEL_COPYIN)) {
+	ret = memory_failure(p->mce_addr >> PAGE_SHIFT, flags);
+	if (!ret && !(p->mce_kflags & MCE_IN_KERNEL_COPYIN)) {
 		set_mce_nospec(p->mce_addr >> PAGE_SHIFT, p->mce_whole_page);
 		sync_core();
 		return;
 	}
 
+	/*
+	 * -EHWPOISON from memory_failure() means that it already sent SIGBUS
+	 * to the current process with the proper error info, so no need to
+	 * send it here again.
+	 */
+	if (ret == -EHWPOISON)
+		return;
+
 	if (p->mce_vaddr != (void __user *)-1l) {
 		force_sig_mceerr(BUS_MCEERR_AR, p->mce_vaddr, PAGE_SHIFT);
 	} else {
diff --git v5.12-rc8/include/linux/swapops.h v5.12-rc8_patched/include/linux/swapops.h
index d9b7c9132c2f..98ea67fcf360 100644
--- v5.12-rc8/include/linux/swapops.h
+++ v5.12-rc8_patched/include/linux/swapops.h
@@ -323,6 +323,11 @@  static inline int is_hwpoison_entry(swp_entry_t entry)
 	return swp_type(entry) == SWP_HWPOISON;
 }
 
+static inline unsigned long hwpoison_entry_to_pfn(swp_entry_t entry)
+{
+	return swp_offset(entry);
+}
+
 static inline void num_poisoned_pages_inc(void)
 {
 	atomic_long_inc(&num_poisoned_pages);
diff --git v5.12-rc8/mm/memory-failure.c v5.12-rc8_patched/mm/memory-failure.c
index 39d0ff0339b9..7cc563e1770a 100644
--- v5.12-rc8/mm/memory-failure.c
+++ v5.12-rc8_patched/mm/memory-failure.c
@@ -56,6 +56,7 @@ 
 #include <linux/kfifo.h>
 #include <linux/ratelimit.h>
 #include <linux/page-isolation.h>
+#include <linux/pagewalk.h>
 #include "internal.h"
 #include "ras/ras_event.h"
 
@@ -554,6 +555,141 @@  static void collect_procs(struct page *page, struct list_head *tokill,
 		collect_procs_file(page, tokill, force_early);
 }
 
+struct hwp_walk {
+	struct to_kill tk;
+	unsigned long pfn;
+	int flags;
+};
+
+static int set_to_kill(struct to_kill *tk, unsigned long addr, short shift)
+{
+	/* Abort pagewalk when finding multiple mappings to the error page. */
+	if (tk->addr)
+		return 1;
+	tk->addr = addr;
+	tk->size_shift = shift;
+	return 0;
+}
+
+static int check_hwpoisoned_entry(pte_t pte, unsigned long addr, short shift,
+				unsigned long poisoned_pfn, struct to_kill *tk)
+{
+	unsigned long pfn = 0;
+
+	if (pte_present(pte)) {
+		pfn = pte_pfn(pte);
+	} else {
+		swp_entry_t swp = pte_to_swp_entry(pte);
+
+		if (is_hwpoison_entry(swp))
+			pfn = hwpoison_entry_to_pfn(swp);
+	}
+
+	if (!pfn || pfn != poisoned_pfn)
+		return 0;
+
+	return set_to_kill(tk, addr, shift);
+}
+
+static int hwpoison_pte_range(pmd_t *pmdp, unsigned long addr,
+			      unsigned long end, struct mm_walk *walk)
+{
+	struct hwp_walk *hwp = (struct hwp_walk *)walk->private;
+	int ret = 0;
+	pte_t *ptep;
+	spinlock_t *ptl;
+
+	ptl = pmd_trans_huge_lock(pmdp, walk->vma);
+	if (ptl) {
+		pmd_t pmd = *pmdp;
+
+		if (pmd_present(pmd)) {
+			unsigned long pfn = pmd_pfn(pmd);
+
+			if (pfn <= hwp->pfn && hwp->pfn < pfn + HPAGE_PMD_NR) {
+				unsigned long hwpoison_vaddr = addr +
+					((hwp->pfn - pfn) << PAGE_SHIFT);
+
+				ret = set_to_kill(&hwp->tk, hwpoison_vaddr,
+						  PAGE_SHIFT);
+			}
+		}
+		spin_unlock(ptl);
+		goto out;
+	}
+
+	if (pmd_trans_unstable(pmdp))
+		goto out;
+
+	ptep = pte_offset_map_lock(walk->vma->vm_mm, pmdp, addr, &ptl);
+	for (; addr != end; ptep++, addr += PAGE_SIZE) {
+		ret = check_hwpoisoned_entry(*ptep, addr, PAGE_SHIFT,
+					     hwp->pfn, &hwp->tk);
+		if (ret == 1)
+			break;
+	}
+	pte_unmap_unlock(ptep - 1, ptl);
+out:
+	cond_resched();
+	return ret;
+}
+
+#ifdef CONFIG_HUGETLB_PAGE
+static int hwpoison_hugetlb_range(pte_t *ptep, unsigned long hmask,
+			    unsigned long addr, unsigned long end,
+			    struct mm_walk *walk)
+{
+	struct hwp_walk *hwp = (struct hwp_walk *)walk->private;
+	pte_t pte = huge_ptep_get(ptep);
+	struct hstate *h = hstate_vma(walk->vma);
+
+	return check_hwpoisoned_entry(pte, addr, huge_page_shift(h),
+				      hwp->pfn, &hwp->tk);
+}
+#else
+#define hwpoison_hugetlb_range	NULL
+#endif
+
+static struct mm_walk_ops hwp_walk_ops = {
+	.pmd_entry = hwpoison_pte_range,
+	.hugetlb_entry = hwpoison_hugetlb_range,
+};
+
+/*
+ * Sends SIGBUS to the current process with the error info.
+ *
+ * This function is intended to handle "Action Required" MCEs on already
+ * hardware poisoned pages. They could happen, for example, when
+ * memory_failure() failed to unmap the error page at the first call, or
+ * when multiple local machine checks happened on different CPUs.
+ *
+ * MCE handler currently has no easy access to the error virtual address,
+ * so this function walks page table to find it. One challenge on this is
+ * to reliably get the proper virtual address of the error to report to
+ * applications via SIGBUS. A process could map a page multiple times to
+ * different virtual addresses, then we now have no way to tell which virtual
+ * address was accessed when the Action Required MCE was generated.
+ * So in such a corner case, we now give up and fall back to sending SIGBUS
+ * with no error info.
+ */
+static int kill_accessing_process(struct task_struct *p, unsigned long pfn,
+				  int flags)
+{
+	int ret;
+	struct hwp_walk priv = {
+		.pfn = pfn,
+	};
+	priv.tk.tsk = p;
+
+	mmap_read_lock(p->mm);
+	ret = walk_page_range(p->mm, 0, TASK_SIZE_MAX, &hwp_walk_ops,
+			      (void *)&priv);
+	if (!ret && priv.tk.addr)
+		kill_proc(&priv.tk, pfn, flags);
+	mmap_read_unlock(p->mm);
+	return ret ? -EFAULT : -EHWPOISON;
+}
+
 static const char *action_name[] = {
 	[MF_IGNORED] = "Ignored",
 	[MF_FAILED] = "Failed",
@@ -1228,7 +1364,10 @@  static int memory_failure_hugetlb(unsigned long pfn, int flags)
 	if (TestSetPageHWPoison(head)) {
 		pr_err("Memory failure: %#lx: already hardware poisoned\n",
 		       pfn);
-		return -EHWPOISON;
+		res = -EHWPOISON;
+		if (flags & MF_ACTION_REQUIRED)
+			res = kill_accessing_process(current, page_to_pfn(head), flags);
+		return res;
 	}
 
 	num_poisoned_pages_inc();
@@ -1438,6 +1577,8 @@  int memory_failure(unsigned long pfn, int flags)
 		pr_err("Memory failure: %#lx: already hardware poisoned\n",
 			pfn);
 		res = -EHWPOISON;
+		if (flags & MF_ACTION_REQUIRED)
+			res = kill_accessing_process(current, pfn, flags);
 		goto unlock_mutex;
 	}