[08/11] mm/rmap: Fix assumptions of THP size

Message ID: 20200908195539.25896-9-willy@infradead.org
State: New, archived
Series: Remove assumptions of THP size

Commit Message

Matthew Wilcox Sept. 8, 2020, 7:55 p.m. UTC
Ask the page what size it is instead of assuming it's PMD size.  Do this
for anon pages as well as file pages for when someone decides to support
that.  Leave the assumption alone for pages which are PMD mapped; we
don't currently grow THPs beyond PMD size, so we don't need to change
this code yet.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/rmap.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
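
For context, thp_nr_pages() reports how many base pages a potentially-huge
page spans, so the loop bounds in this patch follow the page itself rather
than the PMD geometry.  A sketch of the helper's shape (an approximation,
not a verbatim copy of include/linux/huge_mm.h):

/*
 * Sketch of thp_nr_pages(), modelled on include/linux/huge_mm.h; this
 * is illustrative, not a verbatim copy.  Centralising the size lookup
 * in one helper means that when THPs of other orders appear, only the
 * helper has to change (e.g. to return compound_nr(page)) while
 * callers such as mm/rmap.c stay correct as written.
 */
static inline int thp_nr_pages(struct page *page)
{
	if (PageHead(page))		/* head of a compound page */
		return HPAGE_PMD_NR;	/* every THP is PMD-sized today */
	return 1;			/* ordinary small page */
}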

Comments

Kirill A. Shutemov Sept. 9, 2020, 2:47 p.m. UTC | #1
On Tue, Sep 08, 2020 at 08:55:35PM +0100, Matthew Wilcox (Oracle) wrote:
> Ask the page what size it is instead of assuming it's PMD size.  Do this
> for anon pages as well as file pages for when someone decides to support
> that.  Leave the assumption alone for pages which are PMD mapped; we
> don't currently grow THPs beyond PMD size, so we don't need to change
> this code yet.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

SeongJae Park Sept. 15, 2020, 7:27 a.m. UTC | #2
On Tue,  8 Sep 2020 20:55:35 +0100 "Matthew Wilcox (Oracle)" <willy@infradead.org> wrote:

> Ask the page what size it is instead of assuming it's PMD size.  Do this
> for anon pages as well as file pages for when someone decides to support
> that.  Leave the assumption alone for pages which are PMD mapped; we
> don't currently grow THPs beyond PMD size, so we don't need to change
> this code yet.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Reviewed-by: SeongJae Park <sjpark@amazon.de>


Thanks,
SeongJae Park

Patch

diff --git a/mm/rmap.c b/mm/rmap.c
index 83cc459edc40..10f93129648c 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1205,7 +1205,7 @@ void page_add_file_rmap(struct page *page, bool compound)
 	VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page);
 	lock_page_memcg(page);
 	if (compound && PageTransHuge(page)) {
-		for (i = 0, nr = 0; i < HPAGE_PMD_NR; i++) {
+		for (i = 0, nr = 0; i < thp_nr_pages(page); i++) {
 			if (atomic_inc_and_test(&page[i]._mapcount))
 				nr++;
 		}
@@ -1246,7 +1246,7 @@ static void page_remove_file_rmap(struct page *page, bool compound)
 
 	/* page still mapped by someone else? */
 	if (compound && PageTransHuge(page)) {
-		for (i = 0, nr = 0; i < HPAGE_PMD_NR; i++) {
+		for (i = 0, nr = 0; i < thp_nr_pages(page); i++) {
 			if (atomic_add_negative(-1, &page[i]._mapcount))
 				nr++;
 		}
@@ -1293,7 +1293,7 @@ static void page_remove_anon_compound_rmap(struct page *page)
 		 * Subpages can be mapped with PTEs too. Check how many of
 		 * them are still mapped.
 		 */
-		for (i = 0, nr = 0; i < HPAGE_PMD_NR; i++) {
+		for (i = 0, nr = 0; i < thp_nr_pages(page); i++) {
 			if (atomic_add_negative(-1, &page[i]._mapcount))
 				nr++;
 		}
@@ -1303,10 +1303,10 @@ static void page_remove_anon_compound_rmap(struct page *page)
 		 * page of the compound page is unmapped, but at least one
 		 * small page is still mapped.
 		 */
-		if (nr && nr < HPAGE_PMD_NR)
+		if (nr && nr < thp_nr_pages(page))
 			deferred_split_huge_page(page);
 	} else {
-		nr = HPAGE_PMD_NR;
+		nr = thp_nr_pages(page);
 	}
 
 	if (unlikely(PageMlocked(page)))
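
A note on scale: HPAGE_PMD_NR hard-codes the PMD-sized case; on x86-64
with 4KiB base pages it is 512, so the loops above previously walked 512
_mapcount entries regardless of the page passed in.  With thp_nr_pages()
the walk is bounded by the page's actual size, which only starts to differ
once THPs of other orders exist.  A standalone userspace illustration of
the arithmetic (the shift values are the usual x86-64 ones, assumed here
for the example rather than taken from the patch):

#include <stdio.h>

/* Assumed x86-64 geometry: 4KiB base pages, 2MiB PMD mappings. */
#define PAGE_SHIFT	12
#define PMD_SHIFT	21
#define HPAGE_PMD_NR	(1UL << (PMD_SHIFT - PAGE_SHIFT))	/* 512 */

int main(void)
{
	unsigned int order;

	/* A compound page of order N spans 1 << N base pages. */
	for (order = 0; order <= 9; order += 3)
		printf("order %2u -> %4lu base pages\n", order, 1UL << order);
	printf("HPAGE_PMD_NR = %lu base pages\n", HPAGE_PMD_NR);
	return 0;
}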