Message ID | 20231020183331.10770-4-vishal.moola@gmail.com (mailing list archive)
---|---
State | New
Series | Some khugepaged folio conversions
On 20.10.23 20:33, Vishal Moola (Oracle) wrote:
> Both callers of is_refcount_suitable() have been converted to use
> folios, so convert it to take in a folio. Both callers only operate on
> head pages of folios so mapcount/refcount conversions here are trivial.
>
> Removes 3 calls to compound head, and removes 315 bytes of kernel text.
>
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> ---
>  mm/khugepaged.c | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
[...]

Reviewed-by: David Hildenbrand <david@redhat.com>
On Fri, Oct 20, 2023 at 11:34 AM Vishal Moola (Oracle)
<vishal.moola@gmail.com> wrote:
>
> Both callers of is_refcount_suitable() have been converted to use
> folios, so convert it to take in a folio. Both callers only operate on
> head pages of folios so mapcount/refcount conversions here are trivial.
>
> Removes 3 calls to compound head, and removes 315 bytes of kernel text.
>
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>

Reviewed-by: Yang Shi <shy828301@gmail.com>

> ---
>  mm/khugepaged.c | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
[...]
> --
> 2.40.1
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 6c4b5af43371..9efd8ff68f06 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -524,15 +524,15 @@ static void release_pte_pages(pte_t *pte, pte_t *_pte,
 	}
 }
 
-static bool is_refcount_suitable(struct page *page)
+static bool is_refcount_suitable(struct folio *folio)
 {
 	int expected_refcount;
 
-	expected_refcount = total_mapcount(page);
-	if (PageSwapCache(page))
-		expected_refcount += compound_nr(page);
+	expected_refcount = folio_mapcount(folio);
+	if (folio_test_swapcache(folio))
+		expected_refcount += folio_nr_pages(folio);
 
-	return page_count(page) == expected_refcount;
+	return folio_ref_count(folio) == expected_refcount;
 }
 
 static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
@@ -625,7 +625,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 	 * but not from this process. The other process cannot write to
 	 * the page, only trigger CoW.
 	 */
-	if (!is_refcount_suitable(&folio->page)) {
+	if (!is_refcount_suitable(folio)) {
 		folio_unlock(folio);
 		result = SCAN_PAGE_COUNT;
 		goto out;
@@ -1371,7 +1371,7 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
 	 * has excessive GUP pins (i.e. 512). Anyway the same check
 	 * will be done again later the risk seems low.
 	 */
-	if (!is_refcount_suitable(&folio->page)) {
+	if (!is_refcount_suitable(folio)) {
 		result = SCAN_PAGE_COUNT;
 		goto out_unmap;
 	}
Both callers of is_refcount_suitable() have been converted to use
folios, so convert it to take in a folio. Both callers only operate on
head pages of folios so mapcount/refcount conversions here are trivial.

Removes 3 calls to compound head, and removes 315 bytes of kernel text.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 mm/khugepaged.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)