[1/1] x86/mm: Forbid the zero page once it has uncorrectable errors

Message ID 20220420210009.65666-1-qiuxu.zhuo@intel.com (mailing list archive)
State New
Series [1/1] x86/mm: Forbid the zero page once it has uncorrectable errors

Commit Message

Zhuo, Qiuxu April 20, 2022, 9 p.m. UTC
Accessing the zero page when it has uncorrectable errors causes
unexpected machine checks. So forbid the zero page from being used by
user-space processes once it has uncorrectable errors. Processes that
have already mapped the zero page with uncorrectable errors will get
killed once they access it. New processes will not use the zero page.

Signed-off-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
---
1) Processes that have already mapped the zero page with uncorrectable
   errors could be recovered by attaching a new zeroed anonymous page.
   But this would require walking the page tables of all such processes
   to update every PTE that points to the zero page (a minimal sketch
   of such a walk follows after these notes). That looks like a big
   modification for a rare problem?

2) Validation tests that sometimes pick a virtual address mapped to the
   zero page for error injection get themselves killed and can't run
   again until the system is rebooted. To avoid injecting errors into
   the zero page (a user-space check for this is also sketched after
   these notes), please refer to the patch:

   https://lore.kernel.org/all/20220419211921.2230752-1-tony.luck@intel.com/
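
A minimal sketch of the page-table walk mentioned in note 1), for
illustration only and not part of this patch: it merely counts the PTEs
of one process that still map the zero page. A real recovery path would
also have to allocate a fresh zeroed page and rewrite each such PTE
under the proper locks.

#include <linux/mm.h>
#include <linux/pagewalk.h>
#include <linux/pgtable.h>

/* Count the present PTEs in one mm that still map the shared zero page. */
static int zero_pte_entry(pte_t *pte, unsigned long addr,
			  unsigned long next, struct mm_walk *walk)
{
	unsigned long *count = walk->private;

	if (pte_present(*pte) && is_zero_pfn(pte_pfn(*pte)))
		(*count)++;
	return 0;
}

static const struct mm_walk_ops zero_walk_ops = {
	.pte_entry = zero_pte_entry,
};

static unsigned long count_zero_page_ptes(struct mm_struct *mm)
{
	unsigned long count = 0;

	mmap_read_lock(mm);	/* walk_page_range() requires the mmap lock */
	walk_page_range(mm, 0, TASK_SIZE, &zero_walk_ops, &count);
	mmap_read_unlock(mm);

	return count;
}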
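
And the user-space side of note 2): before injecting an error at a
virtual address, a test can ask the kernel whether that address is
currently backed by the zero page, via /proc/self/pagemap plus
/proc/kpageflags (KPF_ZERO_PAGE is bit 24). A rough sketch, again not
part of this patch; reading PFNs from pagemap requires CAP_SYS_ADMIN.

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

#define KPF_ZERO_PAGE 24	/* Documentation/admin-guide/mm/pagemap.rst */

/* Return 1 if vaddr is backed by the zero page, 0 if not, -1 on error. */
static int vaddr_is_zero_page(uintptr_t vaddr)
{
	long psize = sysconf(_SC_PAGESIZE);
	uint64_t entry = 0, flags = 0, pfn;
	int fd, ret = -1;

	fd = open("/proc/self/pagemap", O_RDONLY);
	if (fd < 0)
		return -1;
	if (pread(fd, &entry, sizeof(entry),
		  (off_t)(vaddr / psize) * sizeof(entry)) == sizeof(entry))
		ret = 0;
	close(fd);
	if (ret < 0 || !(entry & (1ULL << 63)))	/* bit 63: page present */
		return ret;

	pfn = entry & ((1ULL << 55) - 1);	/* bits 0-54: PFN */
	fd = open("/proc/kpageflags", O_RDONLY);
	if (fd < 0)
		return -1;
	ret = -1;
	if (pread(fd, &flags, sizeof(flags),
		  (off_t)pfn * sizeof(flags)) == sizeof(flags))
		ret = !!(flags & (1ULL << KPF_ZERO_PAGE));
	close(fd);
	return ret;
}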

 arch/x86/include/asm/pgtable.h | 3 +++
 arch/x86/kernel/cpu/mce/core.c | 6 ++++++
 arch/x86/mm/pgtable.c          | 2 ++
 mm/memory-failure.c            | 2 +-
 4 files changed, 12 insertions(+), 1 deletion(-)

Comments

Dave Hansen April 20, 2022, 9:39 p.m. UTC | #1
On 4/20/22 14:00, Qiuxu Zhuo wrote:
> Accessing the zero page when it has uncorrectable errors causes
> unexpected machine checks. So forbid the zero page from being used by
> user-space processes once it has uncorrectable errors. Processes that
> have already mapped the zero page with uncorrectable errors will get
> killed once they access it. New processes will not use the zero page.

There are lots of pages which are entirely fatal if they have
uncorrectable errors.  On my laptop, if there were an error, there is a
0.00000596% chance it will be in the zero page.

Why is this worth special casing this one page?
Zhuo, Qiuxu April 21, 2022, 7:53 a.m. UTC | #2
> From: Hansen, Dave <dave.hansen@intel.com>
> ...
> Subject: Re: [PATCH 1/1] x86/mm: Forbid the zero page once it has
> uncorrectable errors
> ...
> There are lots of pages which are entirely fatal if they have uncorrectable errors.
> On my laptop, if there were an error, there is a 0.00000596% chance it will be in
> the zero page.
> 
> Why is this worth special casing this one page?

Hi Dave,

   Yes, this is a rare problem. I just feel that the fix is simple, so I'm posting it here to see whether you'll consider it.
David Hildenbrand April 21, 2022, 8:50 a.m. UTC | #3
On 21.04.22 09:53, Zhuo, Qiuxu wrote:
>> From: Hansen, Dave <dave.hansen@intel.com>
>> ...
>> Subject: Re: [PATCH 1/1] x86/mm: Forbid the zero page once it has
>> uncorrectable errors
>> ...
>> There are lots of pages which are entirely fatal if they have uncorrectable errors.
>> On my laptop, if there were an error, there is a 0.00000596% chance it will be in
>> the zero page.
>>
>> Why is this worth special casing this one page?
> 
> Hi Dave,
> 
>    Yes, this is a rare problem. I just feel that the fix is simple, so I'm posting it here to see whether you'll consider it.

Patch

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 62ab07e24aef..d4b8693452e5 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -55,6 +55,9 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
 	__visible;
 #define ZERO_PAGE(vaddr) ((void)(vaddr),virt_to_page(empty_zero_page))
 
+extern bool __read_mostly forbids_zeropage;
+#define mm_forbids_zeropage(x)	forbids_zeropage
+
 extern spinlock_t pgd_lock;
 extern struct list_head pgd_list;
 
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 981496e6bc0e..5b3af27cc8fa 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -44,6 +44,7 @@ 
 #include <linux/sync_core.h>
 #include <linux/task_work.h>
 #include <linux/hardirq.h>
+#include <linux/pgtable.h>
 
 #include <asm/intel-family.h>
 #include <asm/processor.h>
@@ -1370,6 +1371,11 @@ static void queue_task_work(struct mce *m, char *msg, void (*func)(struct callback_head *))
 	if (count > 1)
 		return;
 
+	if (is_zero_pfn(current->mce_addr >> PAGE_SHIFT) && !forbids_zeropage) {
+		pr_err("Forbid user-space process from using zero page\n");
+		forbids_zeropage = true;
+	}
+
 	task_work_add(current, &current->mce_kill_me, TWA_RESUME);
 }
 
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 3481b35cb4ec..c0c56bce3acc 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -28,6 +28,8 @@ void paravirt_tlb_remove_table(struct mmu_gather *tlb, void *table)
 
 gfp_t __userpte_alloc_gfp = GFP_PGTABLE_USER | PGTABLE_HIGHMEM;
 
+bool __read_mostly forbids_zeropage;
+
 pgtable_t pte_alloc_one(struct mm_struct *mm)
 {
 	return __pte_alloc_one(mm, __userpte_alloc_gfp);
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index dcb6bb9cf731..30ad7bdeb89f 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1744,7 +1744,7 @@ int memory_failure(unsigned long pfn, int flags)
 		goto unlock_mutex;
 	}
 
-	if (TestSetPageHWPoison(p)) {
+	if (TestSetPageHWPoison(p) || is_zero_pfn(pfn)) {
 		pr_err("Memory failure: %#lx: already hardware poisoned\n",
 			pfn);
 		res = -EHWPOISON;
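
A note on the mm_forbids_zeropage() hook defined in the pgtable.h hunk
above: generic mm code already consults this hook on the anonymous
read-fault path before mapping the shared zero page, so once the x86
definition starts returning true, new faults fall back to a freshly
zeroed private page instead. A hedged paraphrase of that existing check
(based on do_anonymous_page() in mm/memory.c; this is not code added by
the patch):

/*
 * Paraphrase of the existing zero-page decision in mm/memory.c's
 * do_anonymous_page(), for illustration only: a read fault on an
 * anonymous VMA maps the shared zero page only when the per-mm /
 * per-arch hook does not veto it.
 */
static bool may_use_zero_page(struct vm_fault *vmf)
{
	/* Writes always get a private zeroed page. */
	if (vmf->flags & FAULT_FLAG_WRITE)
		return false;
	/* The veto this patch wires up for x86 after a zero-page MCE. */
	if (mm_forbids_zeropage(vmf->vma->vm_mm))
		return false;
	return true;
}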