Message ID: 20211110082952.19266-3-peterx@redhat.com (mailing list archive)
State: New
Series: mm: Rework zap ptes on swap entries
Hi Peter,

I was having trouble applying this cleanly to any of my local trees so was
wondering which sha1 should I be applying this on top of? Thanks.

 - Alistair

On Wednesday, 10 November 2021 7:29:52 PM AEDT Peter Xu wrote:
> Clean the code up by merging the device private/exclusive swap entry handling
> with the rest, then we merge the pte clear operation too.
>
> The "struct page *page" is defined in multiple places in the function, move
> it upward.
>
> free_swap_and_cache() is only useful for the !non_swap_entry() case, put it
> into the condition.
>
> No functional change intended.
>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  mm/memory.c | 25 ++++++++-----------------
>  1 file changed, 8 insertions(+), 17 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index e454f3c6aeb9..e5d59a6b6479 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1326,6 +1326,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>  	arch_enter_lazy_mmu_mode();
>  	do {
>  		pte_t ptent = *pte;
> +		struct page *page;
> +
>  		if (pte_none(ptent))
>  			continue;
>
> @@ -1333,8 +1335,6 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>  			break;
>
>  		if (pte_present(ptent)) {
> -			struct page *page;
> -
>  			page = vm_normal_page(vma, addr, ptent);
>  			if (unlikely(zap_skip_check_mapping(details, page)))
>  				continue;
> @@ -1368,32 +1368,23 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>  		entry = pte_to_swp_entry(ptent);
>  		if (is_device_private_entry(entry) ||
>  		    is_device_exclusive_entry(entry)) {
> -			struct page *page = pfn_swap_entry_to_page(entry);
> -
> +			page = pfn_swap_entry_to_page(entry);
>  			if (unlikely(zap_skip_check_mapping(details, page)))
>  				continue;
> -			pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
>  			rss[mm_counter(page)]--;
> -
>  			if (is_device_private_entry(entry))
>  				page_remove_rmap(page, false);
> -
>  			put_page(page);
> -			continue;
> -		}
> -
> -		if (!non_swap_entry(entry))
> -			rss[MM_SWAPENTS]--;
> -		else if (is_migration_entry(entry)) {
> -			struct page *page;
> -
> +		} else if (is_migration_entry(entry)) {
>  			page = pfn_swap_entry_to_page(entry);
>  			if (unlikely(zap_skip_check_mapping(details, page)))
>  				continue;
>  			rss[mm_counter(page)]--;
> +		} else if (!non_swap_entry(entry)) {
> +			rss[MM_SWAPENTS]--;
> +			if (unlikely(!free_swap_and_cache(entry)))
> +				print_bad_pte(vma, addr, ptent, NULL);
>  		}
> -		if (unlikely(!free_swap_and_cache(entry)))
> -			print_bad_pte(vma, addr, ptent, NULL);
>  		pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
>  	} while (pte++, addr += PAGE_SIZE, addr != end);
On Mon, Nov 15, 2021 at 10:21:18PM +1100, Alistair Popple wrote:
> Hi Peter,

Hi, Alistair,

> I was having trouble applying this cleanly to any of my local trees so was
> wondering which sha1 should I be applying this on top of? Thanks.

Thanks for considering trying it out. I thought it was easy to apply onto any
of the recent branches as long as -mm's rc1 is applied, and I just did it to
Linus's 5.16-rc1 in my uffd-wp rebase:

https://github.com/xzpeter/linux/commits/uffd-wp-shmem-hugetlbfs

This commit is here:

https://github.com/xzpeter/linux/commit/c32043436282bb352e6fe10eb5fa693340fe5281

It could be that "git rebase" is normally smarter, so I didn't notice it's not
applicable directly. I'll repost a new version soon; please also consider
fetching directly from the git tree before I do so.

Thanks,
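As a side note, when a mailed patch does not apply cleanly, `git apply --3way` can often recover by falling back to a three-way merge using the blob ids recorded in the patch. The following is a self-contained sketch of that workflow in a throwaway repository; the file names and commit messages are invented for the example and are not taken from this thread.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

# Create a base commit and a follow-up change.
printf 'line one\n' > memory.c
git add memory.c
git commit -qm base
printf 'line two\n' >> memory.c
git commit -qam change

# Export the top commit as a mailbox-style patch, then rewind past it,
# simulating a tree that does not yet contain the change.
git format-patch -q -1 -o "$repo/patches"
git reset -q --hard HEAD~1

# --3way falls back to a three-way merge against the preimage blobs
# recorded in the patch when a plain apply would be rejected.
git apply --3way "$repo/patches"/0001-*.patch
grep -q 'line two' memory.c && echo applied
```

This only demonstrates the mechanics; fetching the maintainer's branch directly, as suggested above, avoids the problem entirely.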
diff --git a/mm/memory.c b/mm/memory.c
index e454f3c6aeb9..e5d59a6b6479 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1326,6 +1326,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	arch_enter_lazy_mmu_mode();
 	do {
 		pte_t ptent = *pte;
+		struct page *page;
+
 		if (pte_none(ptent))
 			continue;

@@ -1333,8 +1335,6 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			break;

 		if (pte_present(ptent)) {
-			struct page *page;
-
 			page = vm_normal_page(vma, addr, ptent);
 			if (unlikely(zap_skip_check_mapping(details, page)))
 				continue;
@@ -1368,32 +1368,23 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 		entry = pte_to_swp_entry(ptent);
 		if (is_device_private_entry(entry) ||
 		    is_device_exclusive_entry(entry)) {
-			struct page *page = pfn_swap_entry_to_page(entry);
-
+			page = pfn_swap_entry_to_page(entry);
 			if (unlikely(zap_skip_check_mapping(details, page)))
 				continue;
-			pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
 			rss[mm_counter(page)]--;
-
 			if (is_device_private_entry(entry))
 				page_remove_rmap(page, false);
-
 			put_page(page);
-			continue;
-		}
-
-		if (!non_swap_entry(entry))
-			rss[MM_SWAPENTS]--;
-		else if (is_migration_entry(entry)) {
-			struct page *page;
-
+		} else if (is_migration_entry(entry)) {
 			page = pfn_swap_entry_to_page(entry);
 			if (unlikely(zap_skip_check_mapping(details, page)))
 				continue;
 			rss[mm_counter(page)]--;
+		} else if (!non_swap_entry(entry)) {
+			rss[MM_SWAPENTS]--;
+			if (unlikely(!free_swap_and_cache(entry)))
+				print_bad_pte(vma, addr, ptent, NULL);
 		}
-		if (unlikely(!free_swap_and_cache(entry)))
-			print_bad_pte(vma, addr, ptent, NULL);
 		pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
 	} while (pte++, addr += PAGE_SIZE, addr != end);
Clean the code up by merging the device private/exclusive swap entry handling
with the rest, then merge the pte clear operation too.

The "struct page *page" is defined in multiple places in the function, move
it upward.

free_swap_and_cache() is only useful for the !non_swap_entry() case, put it
into the condition.

No functional change intended.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/memory.c | 25 ++++++++-----------------
 1 file changed, 8 insertions(+), 17 deletions(-)