Message ID: 20240603092439.3360652-2-wangkefeng.wang@huawei.com
State: New
Series: mm: migrate: support poison recover from migrate folio
On 6/3/2024 2:24 AM, Kefeng Wang wrote:
> There is a memory_failure_queue() call after a failed
> copy_mc_[user]_highpage(); see the callers, e.g. the CoW and KSM page
> copy paths. It is used to mark the source page as hardware poisoned
> and unmap it from other tasks. The upcoming poison recovery from
> migrate folio will do a similar thing, so let's move the
> memory_failure_queue() call into copy_mc_[user]_highpage() instead of
> adding it to each caller. This should also improve the handling of
> poisoned pages in khugepaged.
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>  include/linux/highmem.h |  6 ++++++
>  mm/ksm.c                |  1 -
>  mm/memory.c             | 12 +++---------
>  3 files changed, 9 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
> index 00341b56d291..6b0d6f3c8580 100644
> --- a/include/linux/highmem.h
> +++ b/include/linux/highmem.h
> @@ -352,6 +352,9 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
>  	kunmap_local(vto);
>  	kunmap_local(vfrom);
>
> +	if (ret)
> +		memory_failure_queue(page_to_pfn(from), 0);
> +
>  	return ret;
>  }
>
> @@ -368,6 +371,9 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
>  	kunmap_local(vto);
>  	kunmap_local(vfrom);
>
> +	if (ret)
> +		memory_failure_queue(page_to_pfn(from), 0);
> +
>  	return ret;
>  }
>  #else
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 452ac8346e6e..3d95e5a9f301 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -3091,7 +3091,6 @@ struct folio *ksm_might_need_to_copy(struct folio *folio,
>  	if (copy_mc_user_highpage(folio_page(new_folio, 0), page,
>  				  addr, vma)) {
>  		folio_put(new_folio);
> -		memory_failure_queue(folio_pfn(folio), 0);
>  		return ERR_PTR(-EHWPOISON);
>  	}
>  	folio_set_dirty(new_folio);
> diff --git a/mm/memory.c b/mm/memory.c
> index 63f9f98b47bd..e06de844eaba 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3034,10 +3034,8 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
>  	unsigned long addr = vmf->address;
>
>  	if (likely(src)) {
> -		if (copy_mc_user_highpage(dst, src, addr, vma)) {
> -			memory_failure_queue(page_to_pfn(src), 0);
> +		if (copy_mc_user_highpage(dst, src, addr, vma))
>  			return -EHWPOISON;
> -		}
>  		return 0;
>  	}
>
> @@ -6417,10 +6415,8 @@ static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
>
>  		cond_resched();
>  		if (copy_mc_user_highpage(dst_page, src_page,
> -					  addr + i*PAGE_SIZE, vma)) {
> -			memory_failure_queue(page_to_pfn(src_page), 0);
> +					  addr + i*PAGE_SIZE, vma))
>  			return -EHWPOISON;
> -		}
>  	}
>  	return 0;
>  }
> @@ -6437,10 +6433,8 @@ static int copy_subpage(unsigned long addr, int idx, void *arg)
>  	struct page *dst = nth_page(copy_arg->dst, idx);
>  	struct page *src = nth_page(copy_arg->src, idx);
>
> -	if (copy_mc_user_highpage(dst, src, addr, copy_arg->vma)) {
> -		memory_failure_queue(page_to_pfn(src), 0);
> +	if (copy_mc_user_highpage(dst, src, addr, copy_arg->vma))
>  		return -EHWPOISON;
> -	}
>  	return 0;
>  }

Reviewed-by: Jane Chu <jane.chu@oracle.com>

Thanks!
-jane
On 2024/6/3 17:24, Kefeng Wang wrote:
> There is a memory_failure_queue() call after a failed
> copy_mc_[user]_highpage(); see the callers, e.g. the CoW and KSM page
> copy paths. It is used to mark the source page as hardware poisoned
> and unmap it from other tasks. The upcoming poison recovery from
> migrate folio will do a similar thing, so let's move the
> memory_failure_queue() call into copy_mc_[user]_highpage() instead of
> adding it to each caller. This should also improve the handling of
> poisoned pages in khugepaged.
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>

LGTM. Thanks.

Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>

Thanks.
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 00341b56d291..6b0d6f3c8580 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -352,6 +352,9 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 
+	if (ret)
+		memory_failure_queue(page_to_pfn(from), 0);
+
 	return ret;
 }
 
@@ -368,6 +371,9 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 
+	if (ret)
+		memory_failure_queue(page_to_pfn(from), 0);
+
 	return ret;
 }
 #else
diff --git a/mm/ksm.c b/mm/ksm.c
index 452ac8346e6e..3d95e5a9f301 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -3091,7 +3091,6 @@ struct folio *ksm_might_need_to_copy(struct folio *folio,
 	if (copy_mc_user_highpage(folio_page(new_folio, 0), page,
 				  addr, vma)) {
 		folio_put(new_folio);
-		memory_failure_queue(folio_pfn(folio), 0);
 		return ERR_PTR(-EHWPOISON);
 	}
 	folio_set_dirty(new_folio);
diff --git a/mm/memory.c b/mm/memory.c
index 63f9f98b47bd..e06de844eaba 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3034,10 +3034,8 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
 	unsigned long addr = vmf->address;
 
 	if (likely(src)) {
-		if (copy_mc_user_highpage(dst, src, addr, vma)) {
-			memory_failure_queue(page_to_pfn(src), 0);
+		if (copy_mc_user_highpage(dst, src, addr, vma))
 			return -EHWPOISON;
-		}
 		return 0;
 	}
 
@@ -6417,10 +6415,8 @@ static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
 
 		cond_resched();
 		if (copy_mc_user_highpage(dst_page, src_page,
-					  addr + i*PAGE_SIZE, vma)) {
-			memory_failure_queue(page_to_pfn(src_page), 0);
+					  addr + i*PAGE_SIZE, vma))
 			return -EHWPOISON;
-		}
 	}
 	return 0;
 }
@@ -6437,10 +6433,8 @@ static int copy_subpage(unsigned long addr, int idx, void *arg)
 	struct page *dst = nth_page(copy_arg->dst, idx);
 	struct page *src = nth_page(copy_arg->src, idx);
 
-	if (copy_mc_user_highpage(dst, src, addr, copy_arg->vma)) {
-		memory_failure_queue(page_to_pfn(src), 0);
+	if (copy_mc_user_highpage(dst, src, addr, copy_arg->vma))
 		return -EHWPOISON;
-	}
 	return 0;
 }
There is a memory_failure_queue() call after a failed
copy_mc_[user]_highpage(); see the callers, e.g. the CoW and KSM page
copy paths. It is used to mark the source page as hardware poisoned
and unmap it from other tasks. The upcoming poison recovery from
migrate folio will do a similar thing, so let's move the
memory_failure_queue() call into copy_mc_[user]_highpage() instead of
adding it to each caller. This should also improve the handling of
poisoned pages in khugepaged.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/highmem.h |  6 ++++++
 mm/ksm.c                |  1 -
 mm/memory.c             | 12 +++---------
 3 files changed, 9 insertions(+), 10 deletions(-)
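For context, here is a minimal sketch (not part of this patch) of how the
upcoming migrate-folio poison recovery mentioned above could rely on the new
behaviour. The helper name and shape below are hypothetical; the point is
that, with memory_failure_queue() now called inside copy_mc_highpage(), such
a caller only has to propagate -EHWPOISON, since the poisoned source pfn has
already been queued for memory_failure() handling.

/*
 * Hypothetical illustration only -- the helper name is made up here;
 * the real migrate-folio user arrives later in this series.
 */
#include <linux/highmem.h>
#include <linux/mm.h>

static int copy_folio_mc(struct folio *dst, struct folio *src)
{
	long i, nr = folio_nr_pages(src);

	for (i = 0; i < nr; i++) {
		/*
		 * On a machine check during the copy, copy_mc_highpage()
		 * has already queued the source pfn via
		 * memory_failure_queue(), so just report the poison.
		 */
		if (copy_mc_highpage(folio_page(dst, i), folio_page(src, i)))
			return -EHWPOISON;
		cond_resched();
	}
	return 0;
}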