
[4/4] mm/memory-failure: Fall back to vma_address() when ->notify_failure() fails

Message ID 166153429427.2758201.14605968329933175594.stgit@dwillia2-xfh.jf.intel.com (mailing list archive)
State Accepted
Commit ac87ca0ea0102ec5dfd17c95eb9fa87a8bf54d61
Series mm, xfs, dax: Fixes for memory_failure() handling

Commit Message

Dan Williams Aug. 26, 2022, 5:18 p.m. UTC
In the case where a filesystem is polled to take over the memory failure
and the call returns -EOPNOTSUPP, it indicates that page->index and
page->mapping are valid for reverse mapping the failure address. Introduce
FSDAX_INVALID_PGOFF to distinguish when add_to_kill() is being called
from mf_dax_kill_procs() by a filesystem vs the typical memory_failure()
path.

Otherwise, vma_pgoff_address() is called with an invalid fsdax_pgoff
which then trips this failing signature:

 kernel BUG at mm/memory-failure.c:319!
 invalid opcode: 0000 [#1] PREEMPT SMP PTI
 CPU: 13 PID: 1262 Comm: dax-pmd Tainted: G           OE    N 6.0.0-rc2+ #62
 Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
 RIP: 0010:add_to_kill.cold+0x19d/0x209
 [..]
 Call Trace:
  <TASK>
  collect_procs.part.0+0x2c4/0x460
  memory_failure+0x71b/0xba0
  ? _printk+0x58/0x73
  do_madvise.part.0.cold+0xaf/0xc5
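
For background, a toy illustration of the distinction the new sentinel
draws (a userspace sketch, not kernel code; every name below other than
FSDAX_INVALID_PGOFF is a made-up stand-in for the real helpers):

#include <limits.h>
#include <stdio.h>

typedef unsigned long pgoff_t;

/* Same sentinel value the patch uses: pgoff 0 is a legitimate file
 * offset, so 0 cannot mean "no pgoff supplied"; ULONG_MAX can. */
#define FSDAX_INVALID_PGOFF ULONG_MAX

/* Stand-in for page_address_in_vma(): the page knows its own mapping. */
static unsigned long addr_from_page(void)
{
	return 0x7f0000000000UL;
}

/* Stand-in for vma_pgoff_address(): the caller supplied the offset. */
static unsigned long addr_from_pgoff(pgoff_t pgoff)
{
	return 0x7f0000000000UL + (pgoff << 12);
}

static unsigned long resolve_addr(pgoff_t fsdax_pgoff)
{
	/* Dispatch on "did the caller pass a real pgoff?" rather than on
	 * the page type, mirroring the add_to_kill() hunk below. */
	if (fsdax_pgoff != FSDAX_INVALID_PGOFF)
		return addr_from_pgoff(fsdax_pgoff);
	return addr_from_page();
}

int main(void)
{
	printf("fs-claimed path: %#lx\n", resolve_addr(3));
	printf("generic path:    %#lx\n", resolve_addr(FSDAX_INVALID_PGOFF));
	return 0;
}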

Fixes: c36e20249571 ("mm: introduce mf_dax_kill_procs() for fsdax case")
Cc: Shiyang Ruan <ruansy.fnst@fujitsu.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Darrick J. Wong <djwong@kernel.org>
Cc: Naoya Horiguchi <naoya.horiguchi@nec.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Goldwyn Rodrigues <rgoldwyn@suse.de>
Cc: Jane Chu <jane.chu@oracle.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Ritesh Harjani <riteshh@linux.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 mm/memory-failure.c |   22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

Comments

HORIGUCHI NAOYA(堀口 直也) Aug. 29, 2022, 5:42 a.m. UTC | #1
On Fri, Aug 26, 2022 at 10:18:14AM -0700, Dan Williams wrote:
> In the case where a filesystem is polled to take over the memory failure
> and receives -EOPNOTSUPP it indicates that page->index and page->mapping
> are valid for reverse mapping the failure address. Introduce
> FSDAX_INVALID_PGOFF to distinguish when add_to_kill() is being called
> from mf_dax_kill_procs() by a filesytem vs the typical memory_failure()
> path.
> 
> Otherwise, vma_pgoff_address() is called with an invalid fsdax_pgoff
> which then trips this failing signature:
> 
>  kernel BUG at mm/memory-failure.c:319!
>  invalid opcode: 0000 [#1] PREEMPT SMP PTI
>  CPU: 13 PID: 1262 Comm: dax-pmd Tainted: G           OE    N 6.0.0-rc2+ #62
>  Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
>  RIP: 0010:add_to_kill.cold+0x19d/0x209
>  [..]
>  Call Trace:
>   <TASK>
>   collect_procs.part.0+0x2c4/0x460
>   memory_failure+0x71b/0xba0
>   ? _printk+0x58/0x73
>   do_madvise.part.0.cold+0xaf/0xc5
> 
> Fixes: c36e20249571 ("mm: introduce mf_dax_kill_procs() for fsdax case")
> Cc: Shiyang Ruan <ruansy.fnst@fujitsu.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Darrick J. Wong <djwong@kernel.org>
> Cc: Naoya Horiguchi <naoya.horiguchi@nec.com>
> Cc: Al Viro <viro@zeniv.linux.org.uk>
> Cc: Dave Chinner <david@fromorbit.com>
> Cc: Goldwyn Rodrigues <rgoldwyn@suse.de>
> Cc: Jane Chu <jane.chu@oracle.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Miaohe Lin <linmiaohe@huawei.com>
> Cc: Ritesh Harjani <riteshh@linux.ibm.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>

Acked-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
Miaohe Lin Aug. 30, 2022, 3:30 a.m. UTC | #2
On 2022/8/27 1:18, Dan Williams wrote:
> In the case where a filesystem is polled to take over the memory failure
> and receives -EOPNOTSUPP it indicates that page->index and page->mapping
> are valid for reverse mapping the failure address. Introduce
> FSDAX_INVALID_PGOFF to distinguish when add_to_kill() is being called
> from mf_dax_kill_procs() by a filesytem vs the typical memory_failure()
> path.

Thanks for fixing.
I'm sorry, but I can't find the bug report email. Do you mean mf_dax_kill_procs() can
pass an invalid pgoff to add_to_kill()? But it seems pgoff is guarded against invalid
values by vma_interval_tree_foreach() in collect_procs_fsdax(), so pgoff should be a
valid value. Or am I missing something?

Thanks,
Miaohe Lin

> 
> Otherwise, vma_pgoff_address() is called with an invalid fsdax_pgoff
> which then trips this failing signature:
> 
>  kernel BUG at mm/memory-failure.c:319!
>  invalid opcode: 0000 [#1] PREEMPT SMP PTI
>  CPU: 13 PID: 1262 Comm: dax-pmd Tainted: G           OE    N 6.0.0-rc2+ #62
>  Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
>  RIP: 0010:add_to_kill.cold+0x19d/0x209
>  [..]
>  Call Trace:
>   <TASK>
>   collect_procs.part.0+0x2c4/0x460
>   memory_failure+0x71b/0xba0
>   ? _printk+0x58/0x73
>   do_madvise.part.0.cold+0xaf/0xc5
> 
> Fixes: c36e20249571 ("mm: introduce mf_dax_kill_procs() for fsdax case")
> Cc: Shiyang Ruan <ruansy.fnst@fujitsu.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Darrick J. Wong <djwong@kernel.org>
> Cc: Naoya Horiguchi <naoya.horiguchi@nec.com>
> Cc: Al Viro <viro@zeniv.linux.org.uk>
> Cc: Dave Chinner <david@fromorbit.com>
> Cc: Goldwyn Rodrigues <rgoldwyn@suse.de>
> Cc: Jane Chu <jane.chu@oracle.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Miaohe Lin <linmiaohe@huawei.com>
> Cc: Ritesh Harjani <riteshh@linux.ibm.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---
>  mm/memory-failure.c |   22 ++++++++++++----------
>  1 file changed, 12 insertions(+), 10 deletions(-)
> 
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 8a4294afbfa0..e424a9dac749 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -345,13 +345,17 @@ static unsigned long dev_pagemap_mapping_shift(struct vm_area_struct *vma,
>   * not much we can do.	We just print a message and ignore otherwise.
>   */
>  
> +#define FSDAX_INVALID_PGOFF ULONG_MAX
> +
>  /*
>   * Schedule a process for later kill.
>   * Uses GFP_ATOMIC allocations to avoid potential recursions in the VM.
>   *
> - * Notice: @fsdax_pgoff is used only when @p is a fsdax page.
> - *   In other cases, such as anonymous and file-backend page, the address to be
> - *   killed can be caculated by @p itself.
> + * Note: @fsdax_pgoff is used only when @p is a fsdax page and a
> + * filesystem with a memory failure handler has claimed the
> + * memory_failure event. In all other cases, page->index and
> + * page->mapping are sufficient for mapping the page back to its
> + * corresponding user virtual address.
>   */
>  static void add_to_kill(struct task_struct *tsk, struct page *p,
>  			pgoff_t fsdax_pgoff, struct vm_area_struct *vma,
> @@ -367,11 +371,7 @@ static void add_to_kill(struct task_struct *tsk, struct page *p,
>  
>  	tk->addr = page_address_in_vma(p, vma);
>  	if (is_zone_device_page(p)) {
> -		/*
> -		 * Since page->mapping is not used for fsdax, we need
> -		 * calculate the address based on the vma.
> -		 */
> -		if (p->pgmap->type == MEMORY_DEVICE_FS_DAX)
> +		if (fsdax_pgoff != FSDAX_INVALID_PGOFF)
>  			tk->addr = vma_pgoff_address(fsdax_pgoff, 1, vma);
>  		tk->size_shift = dev_pagemap_mapping_shift(vma, tk->addr);
>  	} else
> @@ -523,7 +523,8 @@ static void collect_procs_anon(struct page *page, struct list_head *to_kill,
>  			if (!page_mapped_in_vma(page, vma))
>  				continue;
>  			if (vma->vm_mm == t->mm)
> -				add_to_kill(t, page, 0, vma, to_kill);
> +				add_to_kill(t, page, FSDAX_INVALID_PGOFF, vma,
> +					    to_kill);
>  		}
>  	}
>  	read_unlock(&tasklist_lock);
> @@ -559,7 +560,8 @@ static void collect_procs_file(struct page *page, struct list_head *to_kill,
>  			 * to be informed of all such data corruptions.
>  			 */
>  			if (vma->vm_mm == t->mm)
> -				add_to_kill(t, page, 0, vma, to_kill);
> +				add_to_kill(t, page, FSDAX_INVALID_PGOFF, vma,
> +					    to_kill);
>  		}
>  	}
>  	read_unlock(&tasklist_lock);
> 
> 
> .
>
Dan Williams Aug. 30, 2022, 3:57 a.m. UTC | #3
Miaohe Lin wrote:
> On 2022/8/27 1:18, Dan Williams wrote:
> > In the case where a filesystem is polled to take over the memory failure
> > and receives -EOPNOTSUPP it indicates that page->index and page->mapping
> > are valid for reverse mapping the failure address. Introduce
> > FSDAX_INVALID_PGOFF to distinguish when add_to_kill() is being called
> > from mf_dax_kill_procs() by a filesytem vs the typical memory_failure()
> > path.
> 
> Thanks for fixing.
> I'm sorry but I can't find the bug report email. 

Report is here:

https://lore.kernel.org/all/63069db388d43_1b3229426@dwillia2-xfh.jf.intel.com.notmuch/

> Do you mean mf_dax_kill_procs() can pass an invalid pgoff to the
> add_to_kill()? 

No, the problem is that ->notify_failure() returns -EOPNOTSUPP, so
memory_failure_dev_pagemap() falls back to mf_generic_kill_procs().
However, mf_generic_kill_procs() ends up passing '0' for fsdax_pgoff from
collect_procs_file() to add_to_kill(). A '0' for fsdax_pgoff results in
vma_pgoff_address() returning -EFAULT, which causes the VM_BUG_ON() in
dev_pagemap_mapping_shift().
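
To make the arithmetic concrete, a userspace toy of the failure described
above (illustrative only; toy_vma and toy_pgoff_to_addr() are stand-ins,
not the kernel's vma_pgoff_address()): with a vma whose vm_pgoff is
non-zero, a hard-coded pgoff of 0 cannot resolve to an address inside the
vma, and in the kernel that -EFAULT then trips the VM_BUG_ON() in
dev_pagemap_mapping_shift().

#include <errno.h>
#include <stdio.h>

#define PAGE_SHIFT 12

struct toy_vma {
	unsigned long vm_start, vm_end;	/* virtual address range */
	unsigned long vm_pgoff;		/* file offset of vm_start, in pages */
};

static long toy_pgoff_to_addr(const struct toy_vma *vma, unsigned long pgoff)
{
	unsigned long addr;

	if (pgoff < vma->vm_pgoff)
		return -EFAULT;		/* offset lies before this mapping */
	addr = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
	if (addr >= vma->vm_end)
		return -EFAULT;		/* offset lies past this mapping */
	return addr;
}

int main(void)
{
	struct toy_vma vma = {
		.vm_start = 0x7f0000010000UL,
		.vm_end   = 0x7f0000020000UL,
		.vm_pgoff = 16,		/* maps file pages [16, 32) */
	};

	/* A pgoff inside the mapping resolves to a user address ... */
	printf("pgoff 20 -> %#lx\n",
	       (unsigned long)toy_pgoff_to_addr(&vma, 20));
	/* ... but the hard-coded 0 the generic path used to pass cannot. */
	printf("pgoff  0 -> %ld\n", toy_pgoff_to_addr(&vma, 0));
	return 0;
}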
Miaohe Lin Aug. 30, 2022, 6:17 a.m. UTC | #4
On 2022/8/30 11:57, Dan Williams wrote:
> Miaohe Lin wrote:
>> On 2022/8/27 1:18, Dan Williams wrote:
>>> In the case where a filesystem is polled to take over the memory failure
>>> and receives -EOPNOTSUPP it indicates that page->index and page->mapping
>>> are valid for reverse mapping the failure address. Introduce
>>> FSDAX_INVALID_PGOFF to distinguish when add_to_kill() is being called
>>> from mf_dax_kill_procs() by a filesytem vs the typical memory_failure()
>>> path.
>>
>> Thanks for fixing.
>> I'm sorry but I can't find the bug report email. 
> 
> Report is here:
> 
> https://lore.kernel.org/all/63069db388d43_1b3229426@dwillia2-xfh.jf.intel.com.notmuch/
> 
>> Do you mean mf_dax_kill_procs() can pass an invalid pgoff to the
>> add_to_kill()? 
> 
> No, the problem is that ->notify_failure() returns -EOPNOTSUPP so
> memory_failure_dev_pagemap() falls back to mf_generic_kill_procs().
> However, mf_generic_kill_procs() end up passing '0' for fsdax_pgoff from
> collect_procs_file() to add_to_kill(). A '0' for fsdax_pgoff results in
> vma_pgoff_address() returning -EFAULT which causes the VM_BUG_ON() in
> dev_pagemap_mapping_shift().

Many thanks for your explanation.

Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>

Thanks,
Miaohe Lin
Christoph Hellwig Sept. 5, 2022, 2:45 p.m. UTC | #5
Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

Patch

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 8a4294afbfa0..e424a9dac749 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -345,13 +345,17 @@  static unsigned long dev_pagemap_mapping_shift(struct vm_area_struct *vma,
  * not much we can do.	We just print a message and ignore otherwise.
  */
 
+#define FSDAX_INVALID_PGOFF ULONG_MAX
+
 /*
  * Schedule a process for later kill.
  * Uses GFP_ATOMIC allocations to avoid potential recursions in the VM.
  *
- * Notice: @fsdax_pgoff is used only when @p is a fsdax page.
- *   In other cases, such as anonymous and file-backend page, the address to be
- *   killed can be caculated by @p itself.
+ * Note: @fsdax_pgoff is used only when @p is a fsdax page and a
+ * filesystem with a memory failure handler has claimed the
+ * memory_failure event. In all other cases, page->index and
+ * page->mapping are sufficient for mapping the page back to its
+ * corresponding user virtual address.
  */
 static void add_to_kill(struct task_struct *tsk, struct page *p,
 			pgoff_t fsdax_pgoff, struct vm_area_struct *vma,
@@ -367,11 +371,7 @@  static void add_to_kill(struct task_struct *tsk, struct page *p,
 
 	tk->addr = page_address_in_vma(p, vma);
 	if (is_zone_device_page(p)) {
-		/*
-		 * Since page->mapping is not used for fsdax, we need
-		 * calculate the address based on the vma.
-		 */
-		if (p->pgmap->type == MEMORY_DEVICE_FS_DAX)
+		if (fsdax_pgoff != FSDAX_INVALID_PGOFF)
 			tk->addr = vma_pgoff_address(fsdax_pgoff, 1, vma);
 		tk->size_shift = dev_pagemap_mapping_shift(vma, tk->addr);
 	} else
@@ -523,7 +523,8 @@  static void collect_procs_anon(struct page *page, struct list_head *to_kill,
 			if (!page_mapped_in_vma(page, vma))
 				continue;
 			if (vma->vm_mm == t->mm)
-				add_to_kill(t, page, 0, vma, to_kill);
+				add_to_kill(t, page, FSDAX_INVALID_PGOFF, vma,
+					    to_kill);
 		}
 	}
 	read_unlock(&tasklist_lock);
@@ -559,7 +560,8 @@  static void collect_procs_file(struct page *page, struct list_head *to_kill,
 			 * to be informed of all such data corruptions.
 			 */
 			if (vma->vm_mm == t->mm)
-				add_to_kill(t, page, 0, vma, to_kill);
+				add_to_kill(t, page, FSDAX_INVALID_PGOFF, vma,
+					    to_kill);
 		}
 	}
 	read_unlock(&tasklist_lock);