Message ID | 20240817084941.2375713-3-wangkefeng.wang@huawei.com (mailing list archive)
---|---
State | New |
Series | mm: memory_hotplug: improve do_migrate_range()
On 2024/8/17 16:49, Kefeng Wang wrote:
> Add unmap_posioned_folio() helper which will be reused by
> do_migrate_range() from memory hotplug soon.
>
> Acked-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>  mm/internal.h       |  9 +++++++++
>  mm/memory-failure.c | 43 ++++++++++++++++++++++++++-----------------
>  2 files changed, 35 insertions(+), 17 deletions(-)
>
> diff --git a/mm/internal.h b/mm/internal.h
> index d7aac802efa5..74490b8ac63d 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -1045,6 +1045,8 @@ static inline int find_next_best_node(int node, nodemask_t *used_node_mask)
>  /*
>   * mm/memory-failure.c
>   */
> +#ifdef CONFIG_MEMORY_FAILURE
> +int unmap_posioned_folio(struct folio *folio, enum ttu_flags ttu);
>  void shake_folio(struct folio *folio);
>  extern int hwpoison_filter(struct page *p);
>
> @@ -1065,6 +1067,13 @@ void add_to_kill_ksm(struct task_struct *tsk, struct page *p,
>  		unsigned long ksm_addr);
>  unsigned long page_mapped_in_vma(struct page *page, struct vm_area_struct *vma);
>
> +#else
> +static inline int unmap_posioned_folio(struct folio *folio, enum ttu_flags ttu)
> +{
> +	return 0;
> +}
> +#endif
> +
>  extern unsigned long __must_check vm_mmap_pgoff(struct file *, unsigned long,
>  		unsigned long, unsigned long,
>  		unsigned long, unsigned long);
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 353254537b54..93848330de1f 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1554,6 +1554,30 @@ static int get_hwpoison_page(struct page *p, unsigned long flags)
>  	return ret;
>  }
>
> +int unmap_posioned_folio(struct folio *folio, enum ttu_flags ttu)

Should it be unmap_poisoned_folio? i.e. s/posioned/poisoned/ :)

> +{
> +	if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
> +		struct address_space *mapping;
> +		/*
> +		 * For hugetlb pages in shared mappings, try_to_unmap
> +		 * could potentially call huge_pmd_unshare. Because of
> +		 * this, take semaphore in write mode here and set
> +		 * TTU_RMAP_LOCKED to indicate we have taken the lock
> +		 * at this higher level.
> +		 */
> +		mapping = hugetlb_folio_mapping_lock_write(folio);
> +		if (!mapping)
> +			return -EAGAIN;
> +
> +		try_to_unmap(folio, ttu|TTU_RMAP_LOCKED);
> +		i_mmap_unlock_write(mapping);
> +	} else {
> +		try_to_unmap(folio, ttu);
> +	}
> +
> +	return 0;

It seems the return value is not really needed? We will check folio_mapped() below.

> +}
> +
>  /*
>   * Do all that is necessary to remove user space mappings. Unmap
>   * the pages and send SIGBUS to the processes if the data was dirty.
> @@ -1615,23 +1639,8 @@ static bool hwpoison_user_mappings(struct folio *folio, struct page *p,
>  	 */
>  	collect_procs(folio, p, &tokill, flags & MF_ACTION_REQUIRED);
>
> -	if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
> -		/*
> -		 * For hugetlb pages in shared mappings, try_to_unmap
> -		 * could potentially call huge_pmd_unshare. Because of
> -		 * this, take semaphore in write mode here and set
> -		 * TTU_RMAP_LOCKED to indicate we have taken the lock
> -		 * at this higher level.
> -		 */
> -		mapping = hugetlb_folio_mapping_lock_write(folio);
> -		if (mapping) {
> -			try_to_unmap(folio, ttu|TTU_RMAP_LOCKED);
> -			i_mmap_unlock_write(mapping);
> -		} else
> -			pr_info("%#lx: could not lock mapping for mapped huge page\n", pfn);
> -	} else {
> -		try_to_unmap(folio, ttu);
> -	}
> +	if (unmap_posioned_folio(folio, ttu))
> +		pr_info("%#lx: could not lock mapping for mapped huge page\n", pfn);

It might be better to do the pr_info() inside unmap_posioned_folio()? unmap_posioned_folio()
might fail for other reasons in the future anyway.

Thanks.
.
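For context on the return-value question above: in hwpoison_user_mappings(), success is decided by rechecking folio_mapped() after the unmap, not by the helper's return code. An abridged sketch of that caller-side pattern, condensed from the hunk above (not a verbatim excerpt):

```c
/*
 * Abridged from the diff above: the helper's return value only
 * drives a diagnostic; the folio_mapped() recheck decides success.
 */
if (unmap_posioned_folio(folio, ttu))
	pr_info("%#lx: could not lock mapping for mapped huge page\n", pfn);

unmap_success = !folio_mapped(folio);	/* the authoritative check */
```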
On 2024/8/21 15:40, Miaohe Lin wrote:
> On 2024/8/17 16:49, Kefeng Wang wrote:
>> Add unmap_posioned_folio() helper which will be reused by
>> do_migrate_range() from memory hotplug soon.
>>
>> Acked-by: David Hildenbrand <david@redhat.com>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>> ---
>>  mm/internal.h       |  9 +++++++++
>>  mm/memory-failure.c | 43 ++++++++++++++++++++++++++-----------------
>>  2 files changed, 35 insertions(+), 17 deletions(-)
>>
>> diff --git a/mm/internal.h b/mm/internal.h
>> index d7aac802efa5..74490b8ac63d 100644
>> --- a/mm/internal.h
>> +++ b/mm/internal.h
>> @@ -1045,6 +1045,8 @@ static inline int find_next_best_node(int node, nodemask_t *used_node_mask)
>>  /*
>>   * mm/memory-failure.c
>>   */
>> +#ifdef CONFIG_MEMORY_FAILURE
>> +int unmap_posioned_folio(struct folio *folio, enum ttu_flags ttu);
>>  void shake_folio(struct folio *folio);
>>  extern int hwpoison_filter(struct page *p);
>>
>> @@ -1065,6 +1067,13 @@ void add_to_kill_ksm(struct task_struct *tsk, struct page *p,
>>  		unsigned long ksm_addr);
>>  unsigned long page_mapped_in_vma(struct page *page, struct vm_area_struct *vma);
>>
>> +#else
>> +static inline int unmap_posioned_folio(struct folio *folio, enum ttu_flags ttu)
>> +{
>> +	return 0;
>> +}
>> +#endif
>> +
>>  extern unsigned long __must_check vm_mmap_pgoff(struct file *, unsigned long,
>>  		unsigned long, unsigned long,
>>  		unsigned long, unsigned long);
>> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
>> index 353254537b54..93848330de1f 100644
>> --- a/mm/memory-failure.c
>> +++ b/mm/memory-failure.c
>> @@ -1554,6 +1554,30 @@ static int get_hwpoison_page(struct page *p, unsigned long flags)
>>  	return ret;
>>  }
>>
>> +int unmap_posioned_folio(struct folio *folio, enum ttu_flags ttu)
>
> Should it be unmap_poisoned_folio? i.e. s/posioned/poisoned/ :)

Oh, my bad.

>
>> +{
>> +	if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
>> +		struct address_space *mapping;
>> +		/*
>> +		 * For hugetlb pages in shared mappings, try_to_unmap
>> +		 * could potentially call huge_pmd_unshare. Because of
>> +		 * this, take semaphore in write mode here and set
>> +		 * TTU_RMAP_LOCKED to indicate we have taken the lock
>> +		 * at this higher level.
>> +		 */
>> +		mapping = hugetlb_folio_mapping_lock_write(folio);
>> +		if (!mapping)
>> +			return -EAGAIN;
>> +
>> +		try_to_unmap(folio, ttu|TTU_RMAP_LOCKED);
>> +		i_mmap_unlock_write(mapping);
>> +	} else {
>> +		try_to_unmap(folio, ttu);
>> +	}
>> +
>> +	return 0;
>
> It seems the return value is not really needed? We will check folio_mapped() below.

It was just to keep the behavior the same as before, but see below.

>
>> +}
>> +
>>  /*
>>   * Do all that is necessary to remove user space mappings. Unmap
>>   * the pages and send SIGBUS to the processes if the data was dirty.
>> @@ -1615,23 +1639,8 @@ static bool hwpoison_user_mappings(struct folio *folio, struct page *p,
>>  	 */
>>  	collect_procs(folio, p, &tokill, flags & MF_ACTION_REQUIRED);
>>
>> -	if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
>> -		/*
>> -		 * For hugetlb pages in shared mappings, try_to_unmap
>> -		 * could potentially call huge_pmd_unshare. Because of
>> -		 * this, take semaphore in write mode here and set
>> -		 * TTU_RMAP_LOCKED to indicate we have taken the lock
>> -		 * at this higher level.
>> -		 */
>> -		mapping = hugetlb_folio_mapping_lock_write(folio);
>> -		if (mapping) {
>> -			try_to_unmap(folio, ttu|TTU_RMAP_LOCKED);
>> -			i_mmap_unlock_write(mapping);
>> -		} else
>> -			pr_info("%#lx: could not lock mapping for mapped huge page\n", pfn);
>> -	} else {
>> -		try_to_unmap(folio, ttu);
>> -	}
>> +	if (unmap_posioned_folio(folio, ttu))
>> +		pr_info("%#lx: could not lock mapping for mapped huge page\n", pfn);
>
> It might be better to do the pr_info() inside unmap_posioned_folio()? unmap_posioned_folio()
> might fail for other reasons in the future anyway.

Thinking about it again: we already have another check and print when we recheck
folio_mapped(), so killing the return value and this print for the hugetlb case is
better, and in memory hotplug we don't pr_warn() for this case. I will remove the
return value and this print if there is no objection.

>
> Thanks.
> .
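For illustration, a minimal sketch of the follow-up shape Kefeng describes in the reply above (return value and hugetlb pr_info dropped); this reflects the proposal, not the posted patch:

```c
/* Sketch only: callers rely entirely on a later folio_mapped() recheck. */
void unmap_posioned_folio(struct folio *folio, enum ttu_flags ttu)
{
	if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
		struct address_space *mapping;
		/*
		 * Shared hugetlb mappings: try_to_unmap() may call
		 * huge_pmd_unshare(), so take the i_mmap rwsem in write
		 * mode here and pass TTU_RMAP_LOCKED down.
		 */
		mapping = hugetlb_folio_mapping_lock_write(folio);
		if (!mapping)
			return;	/* caller will observe the folio still mapped */

		try_to_unmap(folio, ttu | TTU_RMAP_LOCKED);
		i_mmap_unlock_write(mapping);
	} else {
		try_to_unmap(folio, ttu);
	}
}
```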
diff --git a/mm/internal.h b/mm/internal.h
index d7aac802efa5..74490b8ac63d 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1045,6 +1045,8 @@ static inline int find_next_best_node(int node, nodemask_t *used_node_mask)
 /*
  * mm/memory-failure.c
  */
+#ifdef CONFIG_MEMORY_FAILURE
+int unmap_posioned_folio(struct folio *folio, enum ttu_flags ttu);
 void shake_folio(struct folio *folio);
 extern int hwpoison_filter(struct page *p);
 
@@ -1065,6 +1067,13 @@ void add_to_kill_ksm(struct task_struct *tsk, struct page *p,
 		unsigned long ksm_addr);
 unsigned long page_mapped_in_vma(struct page *page, struct vm_area_struct *vma);
 
+#else
+static inline int unmap_posioned_folio(struct folio *folio, enum ttu_flags ttu)
+{
+	return 0;
+}
+#endif
+
 extern unsigned long __must_check vm_mmap_pgoff(struct file *, unsigned long,
 		unsigned long, unsigned long,
 		unsigned long, unsigned long);
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 353254537b54..93848330de1f 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1554,6 +1554,30 @@ static int get_hwpoison_page(struct page *p, unsigned long flags)
 	return ret;
 }
 
+int unmap_posioned_folio(struct folio *folio, enum ttu_flags ttu)
+{
+	if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
+		struct address_space *mapping;
+		/*
+		 * For hugetlb pages in shared mappings, try_to_unmap
+		 * could potentially call huge_pmd_unshare. Because of
+		 * this, take semaphore in write mode here and set
+		 * TTU_RMAP_LOCKED to indicate we have taken the lock
+		 * at this higher level.
+		 */
+		mapping = hugetlb_folio_mapping_lock_write(folio);
+		if (!mapping)
+			return -EAGAIN;
+
+		try_to_unmap(folio, ttu|TTU_RMAP_LOCKED);
+		i_mmap_unlock_write(mapping);
+	} else {
+		try_to_unmap(folio, ttu);
+	}
+
+	return 0;
+}
+
 /*
  * Do all that is necessary to remove user space mappings. Unmap
  * the pages and send SIGBUS to the processes if the data was dirty.
@@ -1615,23 +1639,8 @@ static bool hwpoison_user_mappings(struct folio *folio, struct page *p,
 	 */
 	collect_procs(folio, p, &tokill, flags & MF_ACTION_REQUIRED);
 
-	if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
-		/*
-		 * For hugetlb pages in shared mappings, try_to_unmap
-		 * could potentially call huge_pmd_unshare. Because of
-		 * this, take semaphore in write mode here and set
-		 * TTU_RMAP_LOCKED to indicate we have taken the lock
-		 * at this higher level.
-		 */
-		mapping = hugetlb_folio_mapping_lock_write(folio);
-		if (mapping) {
-			try_to_unmap(folio, ttu|TTU_RMAP_LOCKED);
-			i_mmap_unlock_write(mapping);
-		} else
-			pr_info("%#lx: could not lock mapping for mapped huge page\n", pfn);
-	} else {
-		try_to_unmap(folio, ttu);
-	}
+	if (unmap_posioned_folio(folio, ttu))
+		pr_info("%#lx: could not lock mapping for mapped huge page\n", pfn);
 
 	unmap_success = !folio_mapped(folio);
 	if (!unmap_success)
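The changelog notes the helper "will be reused by do_migrate_range() from memory hotplug soon". A hypothetical sketch of that reuse follows; the loop context and the TTU_IGNORE_MLOCK flag choice are assumptions for illustration, not taken from this series:

```c
/*
 * Hypothetical: inside the do_migrate_range() pfn walk, an already
 * hwpoisoned folio would be unmapped instead of migrated.
 */
if (folio_test_hwpoison(folio)) {
	if (folio_mapped(folio))
		unmap_posioned_folio(folio, TTU_IGNORE_MLOCK);
	continue;	/* skip migration for the poisoned folio */
}
```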