Message ID | 20231122162950.3854897-3-ryan.roberts@arm.com (mailing list archive) |
---|---|
State | New |
Series | Small-sized THP for anonymous memory |
On 22.11.23 17:29, Ryan Roberts wrote:
> In preparation for supporting anonymous small-sized THP, improve
> folio_add_new_anon_rmap() to allow a non-pmd-mappable, large folio to be
> passed to it. In this case, all contained pages are accounted using the
> order-0 folio (or base page) scheme.
>
> Reviewed-by: Yu Zhao <yuzhao@google.com>
> Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
>  mm/rmap.c | 28 ++++++++++++++++++++--------
>  1 file changed, 20 insertions(+), 8 deletions(-)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 49e4d86a4f70..b086dc957b0c 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1305,32 +1305,44 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
>   * This means the inc-and-test can be bypassed.
>   * The folio does not have to be locked.
>   *
> - * If the folio is large, it is accounted as a THP. As the folio
> + * If the folio is pmd-mappable, it is accounted as a THP. As the folio
>   * is new, it's assumed to be mapped exclusively by a single process.
>   */
>  void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>  		unsigned long address)
>  {
> -	int nr;
> +	int nr = folio_nr_pages(folio);
>
> -	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
> +	VM_BUG_ON_VMA(address < vma->vm_start ||
> +			address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
>  	__folio_set_swapbacked(folio);
> +	__folio_set_anon(folio, vma, address, true);

Likely the changed order doesn't matter.

LGTM

Reviewed-by: David Hildenbrand <david@redhat.com>
> void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
> 		unsigned long address)
> {
> -	int nr;
> +	int nr = folio_nr_pages(folio);
>
> -	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
> +	VM_BUG_ON_VMA(address < vma->vm_start ||
> +			address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
> 	__folio_set_swapbacked(folio);
> +	__folio_set_anon(folio, vma, address, true);
>
> -	if (likely(!folio_test_pmd_mappable(folio))) {
> +	if (likely(!folio_test_large(folio))) {
> 		/* increment count (starts at -1) */
> 		atomic_set(&folio->_mapcount, 0);
> -		nr = 1;
> +		SetPageAnonExclusive(&folio->page);
> +	} else if (!folio_test_pmd_mappable(folio)) {
> +		int i;
> +
> +		for (i = 0; i < nr; i++) {
> +			struct page *page = folio_page(folio, i);
> +
> +			/* increment count (starts at -1) */
> +			atomic_set(&page->_mapcount, 0);
> +			SetPageAnonExclusive(page);

Hi Ryan,

we are doing an entire mapping, right? what is the reason to
increase mapcount for each subpage? shouldn't we only increase
mapcount of a subpage in either the split or double-map case?

in page_add_anon_rmap(), are we also increasing the mapcount of
each subpage for the fork() case where the entire large folio
is inherited by child processes?

> +		}
> +
> +		atomic_set(&folio->_nr_pages_mapped, nr);
> 	} else {
> 		/* increment count (starts at -1) */
> 		atomic_set(&folio->_entire_mapcount, 0);
> 		atomic_set(&folio->_nr_pages_mapped, COMPOUND_MAPPED);
> -		nr = folio_nr_pages(folio);
> +		SetPageAnonExclusive(&folio->page);
> 		__lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr);
> 	}
>
> 	__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
> -	__folio_set_anon(folio, vma, address, true);
> -	SetPageAnonExclusive(&folio->page);
> }

Thanks
Barry
On 24/11/2023 17:40, David Hildenbrand wrote:
> On 22.11.23 17:29, Ryan Roberts wrote:
>> In preparation for supporting anonymous small-sized THP, improve
>> folio_add_new_anon_rmap() to allow a non-pmd-mappable, large folio to be
>> passed to it. In this case, all contained pages are accounted using the
>> order-0 folio (or base page) scheme.
>>
>> Reviewed-by: Yu Zhao <yuzhao@google.com>
>> Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>> ---
>>  mm/rmap.c | 28 ++++++++++++++++++++--------
>>  1 file changed, 20 insertions(+), 8 deletions(-)
>>
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 49e4d86a4f70..b086dc957b0c 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1305,32 +1305,44 @@ void page_add_anon_rmap(struct page *page, struct
>> vm_area_struct *vma,
>>   * This means the inc-and-test can be bypassed.
>>   * The folio does not have to be locked.
>>   *
>> - * If the folio is large, it is accounted as a THP. As the folio
>> + * If the folio is pmd-mappable, it is accounted as a THP. As the folio
>>   * is new, it's assumed to be mapped exclusively by a single process.
>>   */
>>  void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>>  		unsigned long address)
>>  {
>> -	int nr;
>> +	int nr = folio_nr_pages(folio);
>>
>> -	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
>> +	VM_BUG_ON_VMA(address < vma->vm_start ||
>> +			address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
>>  	__folio_set_swapbacked(folio);
>> +	__folio_set_anon(folio, vma, address, true);
>
> Likely the changed order doesn't matter.

Yes; the reason I moved __folio_set_anon() up here is because
SetPageAnonExclusive() asserts that the page is anon, and
SetPageAnonExclusive() has to be called differently for the 3 cases. I
couldn't see any reason why it wouldn't be safe to call __folio_set_anon()
before setting up the mapcounts.

>
> LGTM
>
> Reviewed-by: David Hildenbrand <david@redhat.com>
>

Thanks!
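For context, a minimal sketch of the ordering dependency Ryan describes. The function name is hypothetical and the helper bodies are paraphrased rather than copied from any particular kernel version; it assumes the mm/rmap.c-internal helpers are in scope:

```c
/*
 * Sketch only: illustrates why __folio_set_anon() has to run before
 * SetPageAnonExclusive() in the new-anon-rmap path (hypothetical
 * function, assumes mm/rmap.c context).
 */
static void sketch_new_anon_rmap_ordering(struct folio *folio,
					  struct vm_area_struct *vma,
					  unsigned long address)
{
	/*
	 * Installs the anon_vma pointer with the PAGE_MAPPING_ANON bit
	 * set in folio->mapping, which is what makes PageAnon() /
	 * folio_test_anon() return true.
	 */
	__folio_set_anon(folio, vma, address, true);

	/*
	 * Asserts that the page is anon before setting PG_anon_exclusive,
	 * so it must run after __folio_set_anon(). Which page(s) it is
	 * called on differs for the small, pte-mapped-large and
	 * pmd-mapped cases, hence the per-branch calls in the patch.
	 */
	SetPageAnonExclusive(&folio->page);
}
```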
On 27/11/2023 04:36, Barry Song wrote:
>> void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>> 		unsigned long address)
>> {
>> -	int nr;
>> +	int nr = folio_nr_pages(folio);
>>
>> -	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
>> +	VM_BUG_ON_VMA(address < vma->vm_start ||
>> +			address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
>> 	__folio_set_swapbacked(folio);
>> +	__folio_set_anon(folio, vma, address, true);
>>
>> -	if (likely(!folio_test_pmd_mappable(folio))) {
>> +	if (likely(!folio_test_large(folio))) {
>> 		/* increment count (starts at -1) */
>> 		atomic_set(&folio->_mapcount, 0);
>> -		nr = 1;
>> +		SetPageAnonExclusive(&folio->page);
>> +	} else if (!folio_test_pmd_mappable(folio)) {
>> +		int i;
>> +
>> +		for (i = 0; i < nr; i++) {
>> +			struct page *page = folio_page(folio, i);
>> +
>> +			/* increment count (starts at -1) */
>> +			atomic_set(&page->_mapcount, 0);
>> +			SetPageAnonExclusive(page);
>
> Hi Ryan,
>
> we are doing an entire mapping, right? what is the reason to
> increase mapcount for each subpage? shouldn't we only increase
> mapcount of a subpage in either the split or double-map case?
>
> in page_add_anon_rmap(), are we also increasing the mapcount of
> each subpage for the fork() case where the entire large folio
> is inherited by child processes?

I think this is all answered by the conversation we just had in the context
of the contpte series? Let me know if you still have concerns.

>
>> +		}
>> +
>> +		atomic_set(&folio->_nr_pages_mapped, nr);
>> 	} else {
>> 		/* increment count (starts at -1) */
>> 		atomic_set(&folio->_entire_mapcount, 0);
>> 		atomic_set(&folio->_nr_pages_mapped, COMPOUND_MAPPED);
>> -		nr = folio_nr_pages(folio);
>> +		SetPageAnonExclusive(&folio->page);
>> 		__lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr);
>> 	}
>>
>> 	__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
>> -	__folio_set_anon(folio, vma, address, true);
>> -	SetPageAnonExclusive(&folio->page);
>> }
>
> Thanks
> Barry
>
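To make the distinction concrete: a non-pmd-mappable large folio is mapped by nr individual PTEs rather than a single PMD, so each subpage is accounted with its own order-0-style _mapcount. Below is a rough sketch of the kind of fault-path caller this patch enables, loosely modelled on a do_anonymous_page()-style path; the function name is hypothetical, permission/dirty bit handling is omitted, and it is not code from this series:

```c
/*
 * Sketch only (hypothetical helper, ~v6.7-era APIs assumed): map a
 * freshly allocated anonymous large folio with nr contiguous PTEs.
 */
static void sketch_map_new_anon_large_folio(struct vm_area_struct *vma,
					    struct folio *folio,
					    unsigned long addr, pte_t *pte)
{
	int nr = folio_nr_pages(folio);
	/* Access/dirty/write bits omitted for brevity. */
	pte_t entry = mk_pte(&folio->page, vma->vm_page_prot);

	/*
	 * One rmap call for the whole folio: with this patch it bumps
	 * the per-subpage _mapcounts and sets _nr_pages_mapped to nr,
	 * mirroring the order-0 scheme rather than the PMD
	 * "entire mapping" scheme.
	 */
	folio_add_new_anon_rmap(folio, vma, addr);
	folio_add_lru_vma(folio, vma);

	/* nr contiguous PTEs, each mapping one subpage of the folio. */
	set_ptes(vma->vm_mm, addr, pte, entry, nr);
}
```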
diff --git a/mm/rmap.c b/mm/rmap.c
index 49e4d86a4f70..b086dc957b0c 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1305,32 +1305,44 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
  * This means the inc-and-test can be bypassed.
  * The folio does not have to be locked.
  *
- * If the folio is large, it is accounted as a THP. As the folio
+ * If the folio is pmd-mappable, it is accounted as a THP. As the folio
  * is new, it's assumed to be mapped exclusively by a single process.
  */
 void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 		unsigned long address)
 {
-	int nr;
+	int nr = folio_nr_pages(folio);
 
-	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
+	VM_BUG_ON_VMA(address < vma->vm_start ||
+			address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
 	__folio_set_swapbacked(folio);
+	__folio_set_anon(folio, vma, address, true);
 
-	if (likely(!folio_test_pmd_mappable(folio))) {
+	if (likely(!folio_test_large(folio))) {
 		/* increment count (starts at -1) */
 		atomic_set(&folio->_mapcount, 0);
-		nr = 1;
+		SetPageAnonExclusive(&folio->page);
+	} else if (!folio_test_pmd_mappable(folio)) {
+		int i;
+
+		for (i = 0; i < nr; i++) {
+			struct page *page = folio_page(folio, i);
+
+			/* increment count (starts at -1) */
+			atomic_set(&page->_mapcount, 0);
+			SetPageAnonExclusive(page);
+		}
+
+		atomic_set(&folio->_nr_pages_mapped, nr);
 	} else {
 		/* increment count (starts at -1) */
 		atomic_set(&folio->_entire_mapcount, 0);
 		atomic_set(&folio->_nr_pages_mapped, COMPOUND_MAPPED);
-		nr = folio_nr_pages(folio);
+		SetPageAnonExclusive(&folio->page);
 		__lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr);
 	}
 
 	__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
-	__folio_set_anon(folio, vma, address, true);
-	SetPageAnonExclusive(&folio->page);
 }
 
 /**
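As a side note on the reworked VM_BUG_ON_VMA(): for a large folio the assertion now requires the whole folio-sized range to lie inside the VMA, not just the first page, and for nr == 1 it reduces to the old page-aligned check. A minimal sketch of the predicate being asserted, with an illustrative (non-kernel) helper name:

```c
/*
 * Illustrative helper (hypothetical name, not a kernel API) expressing
 * the condition the updated VM_BUG_ON_VMA() enforces: the entire
 * [address, address + nr * PAGE_SIZE) range must sit within the VMA.
 * With nr == 1 and a page-aligned address this is equivalent to the
 * old "address >= vm_start && address < vm_end" check.
 */
static inline bool folio_range_within_vma(struct vm_area_struct *vma,
					  unsigned long address, int nr)
{
	return address >= vma->vm_start &&
	       address + ((unsigned long)nr << PAGE_SHIFT) <= vma->vm_end;
}
```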