| Message ID | 20210811101431.83940-1-songmuchun@bytedance.com (mailing list archive) |
|---|---|
| State | New |
| Series | mm: remove redundant compound_head() calling |
On 8/11/21 12:14 PM, Muchun Song wrote:
> There is a READ_ONCE() in the compound_head() macro, which prevents the
> compiler from optimizing the code when compound_head() is called more
> than once in a function. Remove the redundant calls to compound_head()
> from page_to_index() and page_add_file_rmap() for better code generation.
>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

Seems to be compatible with folio/for-next and not made redundant by that
(yet? didn't check the branches planned for future versions), so OK. But
long-term I'd expect these optimizations to be obsoleted by the folio work.

> ---
>  include/linux/pagemap.h | 7 +++----
>  mm/rmap.c | 6 ++++--
>  2 files changed, 7 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index 79ec90e97e94..03b9a957ef10 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -608,18 +608,17 @@ static inline struct page *read_mapping_page(struct address_space *mapping,
>   */
>  static inline pgoff_t page_to_index(struct page *page)
>  {
> -	pgoff_t pgoff;
> +	struct page *head;
>
>  	if (likely(!PageTransTail(page)))
>  		return page->index;
>
> +	head = compound_head(page);
>  	/*
>  	 * We don't initialize ->index for tail pages: calculate based on
>  	 * head page
>  	 */
> -	pgoff = compound_head(page)->index;
> -	pgoff += page - compound_head(page);
> -	return pgoff;
> +	return head->index + page - head;
>  }
>
>  extern pgoff_t hugetlb_basepage_index(struct page *page);
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 232494888628..2e216835f07c 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1231,11 +1231,13 @@ void page_add_file_rmap(struct page *page, bool compound)
>  							nr_pages);
>  	} else {
>  		if (PageTransCompound(page) && page_mapping(page)) {
> +			struct page *head = compound_head(page);
> +
>  			VM_WARN_ON_ONCE(!PageLocked(page));
>
> -			SetPageDoubleMap(compound_head(page));
> +			SetPageDoubleMap(head);
>  			if (PageMlocked(page))
> -				clear_page_mlock(compound_head(page));
> +				clear_page_mlock(head);
>  		}
>  		if (!atomic_inc_and_test(&page->_mapcount))
>  			goto out;
>
On Fri, Aug 13, 2021 at 04:02:33PM +0200, Vlastimil Babka wrote:
> On 8/11/21 12:14 PM, Muchun Song wrote:
> > There is a READ_ONCE() in the compound_head() macro, which prevents the
> > compiler from optimizing the code when compound_head() is called more
> > than once in a function. Remove the redundant calls to compound_head()
> > from page_to_index() and page_add_file_rmap() for better code generation.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>
> Seems to be compatible with folio/for-next and not made redundant by that
> (yet? didn't check the branches planned for future versions), so OK. But
> long-term I'd expect these optimizations to be obsoleted by the folio work.

Yes, I haven't touched page_add_file_rmap() in my tree yet. Trying to keep
my focus on page cache instead of working on more generic mm stuff.
Hopefully other people will work on those pieces once the folio work lands.

Looking at page_add_file_rmap(), it needs someone to think in detail about
what 'compound' means in the context of sub-PMD-sized compound pages. I
suspect it really means "map_as_pmd".
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 79ec90e97e94..03b9a957ef10 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -608,18 +608,17 @@ static inline struct page *read_mapping_page(struct address_space *mapping,
  */
 static inline pgoff_t page_to_index(struct page *page)
 {
-	pgoff_t pgoff;
+	struct page *head;

 	if (likely(!PageTransTail(page)))
 		return page->index;

+	head = compound_head(page);
 	/*
 	 * We don't initialize ->index for tail pages: calculate based on
 	 * head page
 	 */
-	pgoff = compound_head(page)->index;
-	pgoff += page - compound_head(page);
-	return pgoff;
+	return head->index + page - head;
 }

 extern pgoff_t hugetlb_basepage_index(struct page *page);
diff --git a/mm/rmap.c b/mm/rmap.c
index 232494888628..2e216835f07c 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1231,11 +1231,13 @@ void page_add_file_rmap(struct page *page, bool compound)
 							nr_pages);
 	} else {
 		if (PageTransCompound(page) && page_mapping(page)) {
+			struct page *head = compound_head(page);
+
 			VM_WARN_ON_ONCE(!PageLocked(page));

-			SetPageDoubleMap(compound_head(page));
+			SetPageDoubleMap(head);
 			if (PageMlocked(page))
-				clear_page_mlock(compound_head(page));
+				clear_page_mlock(head);
 		}
 		if (!atomic_inc_and_test(&page->_mapcount))
 			goto out;
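As an aside on the page_to_index() hunk above: the sketch below is purely illustrative (a standalone userspace model, not kernel code; the array size, index value, and tail offset are made up). It shows why `head->index + (page - head)` yields the page-cache index of a tail page: the struct pages of a compound page are contiguous in the page array, so the pointer difference is the tail's offset within the compound page.

```c
/*
 * Standalone model of the tail-page index arithmetic in page_to_index().
 * Not kernel code; values chosen only for illustration.
 */
#include <stdio.h>

struct page {
	unsigned long index;	/* file offset in pages; set only on the head */
};

int main(void)
{
	struct page thp[512];		/* stand-in for a PMD-sized compound page */
	struct page *head = &thp[0];
	struct page *page = &thp[3];	/* an arbitrary tail page */

	head->index = 256;		/* head maps file offset 256 * PAGE_SIZE */

	/* prints 259: the head's index plus 3 pages into the compound page */
	printf("tail index = %lu\n", head->index + (page - head));
	return 0;
}
```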
There is a READ_ONCE() in the compound_head() macro, which prevents the
compiler from optimizing the code when compound_head() is called more than
once in a function. Remove the redundant calls to compound_head() from
page_to_index() and page_add_file_rmap() for better code generation.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/pagemap.h | 7 +++----
 mm/rmap.c | 6 ++++--
 2 files changed, 7 insertions(+), 6 deletions(-)
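To illustrate the READ_ONCE() point in the commit message, here is a minimal userspace sketch. The definitions are my own simplified stand-ins, not the kernel's actual READ_ONCE() or compound_head(), and the struct layout is invented for the example. Because a volatile access may not be merged or elided, every compound_head() call forces its own load of page->compound_head; caching the result in a local, as the patch does, keeps it to a single load.

```c
#include <stdio.h>

/* simplified READ_ONCE(): a volatile read the compiler must not merge */
#define READ_ONCE(x)	(*(const volatile __typeof__(x) *)&(x))

/* toy struct page: tail pages store (head pointer | 1) in compound_head */
struct page {
	unsigned long compound_head;
	unsigned long index;
};

/* rough stand-in for the kernel's compound_head() macro */
static inline struct page *compound_head(struct page *page)
{
	unsigned long head = READ_ONCE(page->compound_head);

	return (head & 1) ? (struct page *)(head - 1) : page;
}

/* old page_to_index() shape: two calls, hence two volatile loads */
static unsigned long index_two_calls(struct page *page)
{
	return compound_head(page)->index + (page - compound_head(page));
}

/* patched shape: one call cached in a local, hence a single load */
static unsigned long index_one_call(struct page *page)
{
	struct page *head = compound_head(page);

	return head->index + (page - head);
}

int main(void)
{
	struct page pages[2];

	pages[0].compound_head = 0;				/* head page */
	pages[0].index = 42;
	pages[1].compound_head = (unsigned long)&pages[0] | 1;	/* tail page */

	/* both print 43; the difference is in the generated code, not the result */
	printf("%lu %lu\n", index_two_calls(&pages[1]), index_one_call(&pages[1]));
	return 0;
}
```

Comparing the two helpers at -O2 (e.g. with a disassembler or compiler explorer of your choice) should show the extra load in index_two_calls(), which is the codegen difference the patch is after.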