
[RFC] mm: use nth_page() for all memmap (struct page) position operations.

Message ID 20230823030622.96112-1-zi.yan@sent.com (mailing list archive)
State New
Series [RFC] mm: use nth_page() for all memmap (struct page) position operations.

Commit Message

Zi Yan Aug. 23, 2023, 3:06 a.m. UTC
From: Zi Yan <ziy@nvidia.com>

With sparsemem and without vmemmap, the memmap (struct page) array might not
always be contiguous. Thus, memmap position operations such as page + N or
page++ might not yield a valid struct page. Use nth_page() to operate on
struct page positions correctly.
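
For reference, nth_page() goes through the pfn when sparsemem is used without
vmemmap, which is what makes it safe across memmap discontiguities. Roughly
(paraphrasing include/linux/mm.h, shown here only for illustration):

#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
#define nth_page(page, n)	pfn_to_page(page_to_pfn((page)) + (n))
#else
#define nth_page(page, n)	((page) + (n))
#endif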

TODO: change arch code if this change is regarded as necessary and
sparsemem + !vmemmap can be enabled on the arch.

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 block/bio.c             |  2 +-
 block/blk-map.c         |  2 +-
 block/blk-merge.c       |  2 +-
 fs/hfs/btree.c          |  7 ++++---
 fs/hugetlbfs/inode.c    |  4 ++--
 fs/nfsd/vfs.c           |  4 ++--
 include/linux/pagemap.h |  2 +-
 kernel/kexec_core.c     |  2 +-
 lib/iov_iter.c          | 21 +++++++++++----------
 mm/cma.c                |  2 +-
 mm/compaction.c         |  8 ++++----
 mm/debug.c              |  2 +-
 mm/filemap.c            |  4 ++--
 mm/highmem.c            |  6 +++---
 mm/huge_memory.c        | 28 +++++++++++++--------------
 mm/hugetlb.c            |  2 +-
 mm/hugetlb_vmemmap.c    |  2 +-
 mm/internal.h           |  4 ++--
 mm/kasan/common.c       |  4 ++--
 mm/khugepaged.c         | 10 +++++-----
 mm/kmemleak.c           |  2 +-
 mm/memory.c             |  2 +-
 mm/mm_init.c            |  4 ++--
 mm/page_alloc.c         | 42 ++++++++++++++++++++---------------------
 mm/page_poison.c        |  4 ++--
 mm/vmalloc.c            |  2 +-
 26 files changed, 88 insertions(+), 86 deletions(-)

Comments

Matthew Wilcox Aug. 23, 2023, 3:27 a.m. UTC | #1
On Tue, Aug 22, 2023 at 11:06:22PM -0400, Zi Yan wrote:
> With sparsemem and without vmemmap, the memmap (struct page) array might not
> always be contiguous. Thus, memmap position operations such as page + N or
> page++ might not yield a valid struct page. Use nth_page() to operate on
> struct page positions correctly.

This is too big to be a single patch; you need to break it up by
subsystem at least.  And it's not against current -next; just the first
one I'm looking at is wrecked by "block: move the bi_size update out of
__bio_try_merge_page" from July 24th.

> +++ b/block/bio.c
> @@ -923,7 +923,7 @@ static inline bool page_is_mergeable(const struct bio_vec *bv,
>  		return true;
>  	else if (IS_ENABLED(CONFIG_KMSAN))
>  		return false;
> -	return (bv->bv_page + bv_end / PAGE_SIZE) == (page + off / PAGE_SIZE);
> +	return nth_page(bv->bv_page, bv_end / PAGE_SIZE) == nth_page(page, off / PAGE_SIZE);

I think this one is actually wrong.  We already checked the addresses were
physically contiguous earlier in the function:

        phys_addr_t vec_end_addr = page_to_phys(bv->bv_page) + bv_end - 1;
        phys_addr_t page_addr = page_to_phys(page);

        if (vec_end_addr + 1 != page_addr + off)
                return false;

so this line is checking whether the struct pages are virtually contiguous.

That makes me suspicious of the other changes in the block layer,
because a bvec is defined to not cross a virtual discontiguity in
memmap.
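
To make that distinction concrete, here is a rough sketch (the helper below is
hypothetical, not taken from the patch or the kernel): physical adjacency is a
statement about pfns, while memmap adjacency is a statement about struct page
pointers, and only the latter can break with sparsemem and no vmemmap.

/* Hypothetical illustration: with SPARSEMEM && !SPARSEMEM_VMEMMAP, two
 * pages can pass the first test yet fail the second when their struct
 * pages live in different sections' memmaps.
 */
static bool pages_adjacent_in_memmap(struct page *a, struct page *b)
{
	/* physically adjacent: consecutive pfns */
	bool phys = page_to_pfn(a) + 1 == page_to_pfn(b);
	/* adjacent in the memmap: consecutive struct page addresses */
	bool virt = (a + 1) == b;

	return phys && virt;
}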

> +++ b/fs/hfs/btree.c
> @@ -270,7 +270,7 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
>  	off = off16;
>  
>  	off += node->page_offset;
> -	pagep = node->page + (off >> PAGE_SHIFT);
> +	pagep = nth_page(node->page, (off >> PAGE_SHIFT));

Are normal filesystems ever going to see folios that cross memmap
discontiguities?  I think hugetlb is the only way to see such things.

> +++ b/mm/compaction.c
> @@ -362,7 +362,7 @@ __reset_isolation_pfn(struct zone *zone, unsigned long pfn, bool check_source,
>  			return true;
>  		}
>  
> -		page += (1 << PAGE_ALLOC_COSTLY_ORDER);
> +		page = nth_page(page, (1 << PAGE_ALLOC_COSTLY_ORDER));
>  	} while (page <= end_page);
>  
>  	return false;

Isn't this within a single page block?

> +++ b/mm/debug.c
> @@ -67,7 +67,7 @@ static void __dump_page(struct page *page)
>  	int mapcount;
>  	char *type = "";
>  
> -	if (page < head || (page >= head + MAX_ORDER_NR_PAGES)) {
> +	if (page < head || (page >= nth_page(head, MAX_ORDER_NR_PAGES))) {

It's kind of right there in the name.  MAX_ORDER_NR_PAGES.
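
For context, a rough sketch of the reasoning (paraphrased, not quoted from
mmzone.h): sparsemem keeps the memmap contiguous within a section, and the
allocator requires a max-order block to fit inside a single section, so
head + MAX_ORDER_NR_PAGES cannot cross a memmap discontiguity:

#if (MAX_ORDER + PAGE_SHIFT) > SECTION_SIZE_BITS
#error "a MAX_ORDER block must fit within a sparsemem section"
#endif
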
Zi Yan Aug. 23, 2023, 2:31 p.m. UTC | #2
On 22 Aug 2023, at 23:27, Matthew Wilcox wrote:

> On Tue, Aug 22, 2023 at 11:06:22PM -0400, Zi Yan wrote:
>> With sparsemem and without vmemmap, the memmap (struct page) array might not
>> always be contiguous. Thus, memmap position operations such as page + N or
>> page++ might not yield a valid struct page. Use nth_page() to operate on
>> struct page positions correctly.
>
> This is too big to be a single patch; you need to break it up by
> subsystem at least.  And it's not against current -next; just the first
> one I'm looking at is wrecked by "block: move the bi_size update out of
> __bio_try_merge_page" from July 24th.

Sure. Will break up the patch and rebase it against -next.

>
>> +++ b/block/bio.c
>> @@ -923,7 +923,7 @@ static inline bool page_is_mergeable(const struct bio_vec *bv,
>>  		return true;
>>  	else if (IS_ENABLED(CONFIG_KMSAN))
>>  		return false;
>> -	return (bv->bv_page + bv_end / PAGE_SIZE) == (page + off / PAGE_SIZE);
>> +	return nth_page(bv->bv_page, bv_end / PAGE_SIZE) == nth_page(page, off / PAGE_SIZE);
>
> I think this one is actually wrong.  We already checked the addresses were
> physically contiguous earlier in the function:
>
>         phys_addr_t vec_end_addr = page_to_phys(bv->bv_page) + bv_end - 1;
>         phys_addr_t page_addr = page_to_phys(page);
>
>         if (vec_end_addr + 1 != page_addr + off)
>                 return false;
>
> so this line is checking whether the struct pages are virtually contiguous.

Got it.

>
> That makes me suspicious of the other changes in the block layer,
> because a bvec is defined to not cross a virtual discontiguity in
> memmap.

Yes, I just checked the definition of struct bio_vec and confirmed it. I will
drop the changes to the block layer.
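
For reference, struct bio_vec describes its whole range with a single page
pointer plus a byte offset and length (sketched from memory of
include/linux/bvec.h; see the header for the authoritative definition and the
comment spelling out the contiguity requirement):

struct bio_vec {
	struct page	*bv_page;
	unsigned int	bv_len;
	unsigned int	bv_offset;
};

Since everything past bv_page is a byte offset into a range that must not
cross a memmap discontiguity, a single bvec never needs nth_page() internally.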

>> +++ b/fs/hfs/btree.c
>> @@ -270,7 +270,7 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
>>  	off = off16;
>>
>>  	off += node->page_offset;
>> -	pagep = node->page + (off >> PAGE_SHIFT);
>> +	pagep = nth_page(node->page, (off >> PAGE_SHIFT));
>
> Are normal filesystems ever going to see folios that cross memmap
> discontiguities?  I think hugetlb is the only way to see such things.

Right. So most likely only mm code that can see hugetlb pages would need
nth_page() instead of direct struct page offset operations.
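
As a back-of-the-envelope illustration (assuming x86_64 defaults of 4KB pages
and 128MB sparsemem sections):

	1GB gigantic folio:    1 GB / 4 KB = 262144 struct pages
	sparsemem section:   128 MB / 4 KB =  32768 struct pages
	=> the folio's memmap spans 262144 / 32768 = 8 sections

so a gigantic hugetlb folio's memmap cannot be assumed contiguous without
vmemmap, while ordinary folios stay within a MAX_ORDER block and thus within
a single section.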

>
>> +++ b/mm/compaction.c
>> @@ -362,7 +362,7 @@ __reset_isolation_pfn(struct zone *zone, unsigned long pfn, bool check_source,
>>  			return true;
>>  		}
>>
>> -		page += (1 << PAGE_ALLOC_COSTLY_ORDER);
>> +		page = nth_page(page, (1 << PAGE_ALLOC_COSTLY_ORDER));
>>  	} while (page <= end_page);
>>
>>  	return false;
>
> Isn't this within a single page block?
>
>> +++ b/mm/debug.c
>> @@ -67,7 +67,7 @@ static void __dump_page(struct page *page)
>>  	int mapcount;
>>  	char *type = "";
>>
>> -	if (page < head || (page >= head + MAX_ORDER_NR_PAGES)) {
>> +	if (page < head || (page >= nth_page(head, MAX_ORDER_NR_PAGES))) {
>
> It's kind of right there in the name.  MAX_ORDER_NR_PAGES.

I was trying to be on the safe side. I also get your point that I should
probably convert only where necessary. I will check my changes and drop the
unnecessary ones.

Thank you for the review.

--
Best Regards,
Yan, Zi

Patch

diff --git a/block/bio.c b/block/bio.c
index 8672179213b9..4df66b28f9bd 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -923,7 +923,7 @@  static inline bool page_is_mergeable(const struct bio_vec *bv,
 		return true;
 	else if (IS_ENABLED(CONFIG_KMSAN))
 		return false;
-	return (bv->bv_page + bv_end / PAGE_SIZE) == (page + off / PAGE_SIZE);
+	return nth_page(bv->bv_page, bv_end / PAGE_SIZE) == nth_page(page, off / PAGE_SIZE);
 }
 
 /**
diff --git a/block/blk-map.c b/block/blk-map.c
index 44d74a30ddac..21b9bdc29328 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -178,7 +178,7 @@  static int bio_copy_user_iov(struct request *rq, struct rq_map_data *map_data,
 			}
 
 			page = map_data->pages[i / nr_pages];
-			page += (i % nr_pages);
+			page = nth_page(page, (i % nr_pages));
 
 			i++;
 		} else {
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 65e75efa9bd3..26b7d0c8605f 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -480,7 +480,7 @@  static unsigned blk_bvec_map_sg(struct request_queue *q,
 		 * the block layer, but the code below should be removed once
 		 * these offenders (mostly MMC/SD drivers) are fixed.
 		 */
-		page += (offset >> PAGE_SHIFT);
+		page = nth_page(page, (offset >> PAGE_SHIFT));
 		offset &= ~PAGE_MASK;
 
 		*sg = blk_next_sg(sg, sglist);
diff --git a/fs/hfs/btree.c b/fs/hfs/btree.c
index 2fa4b1f8cc7f..acf62fba587f 100644
--- a/fs/hfs/btree.c
+++ b/fs/hfs/btree.c
@@ -270,7 +270,7 @@  struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
 	off = off16;
 
 	off += node->page_offset;
-	pagep = node->page + (off >> PAGE_SHIFT);
+	pagep = nth_page(node->page, (off >> PAGE_SHIFT));
 	data = kmap_local_page(*pagep);
 	off &= ~PAGE_MASK;
 	idx = 0;
@@ -294,7 +294,8 @@  struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
 			}
 			if (++off >= PAGE_SIZE) {
 				kunmap_local(data);
-				data = kmap_local_page(*++pagep);
+				data = kmap_local_page(nth_page(*pagep, 1));
+				*pagep = nth_page(*pagep, 1);
 				off = 0;
 			}
 			idx += 8;
@@ -315,7 +316,7 @@  struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
 		len = hfs_brec_lenoff(node, 0, &off16);
 		off = off16;
 		off += node->page_offset;
-		pagep = node->page + (off >> PAGE_SHIFT);
+		pagep = nth_page(node->page, (off >> PAGE_SHIFT));
 		data = kmap_local_page(*pagep);
 		off &= ~PAGE_MASK;
 	}
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index e7611ae1e612..65e50901c850 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -295,7 +295,7 @@  static size_t adjust_range_hwpoison(struct page *page, size_t offset, size_t byt
 	size_t res = 0;
 
 	/* First subpage to start the loop. */
-	page += offset / PAGE_SIZE;
+	page = nth_page(page, offset / PAGE_SIZE);
 	offset %= PAGE_SIZE;
 	while (1) {
 		if (is_raw_hwpoison_page_in_hugepage(page))
@@ -309,7 +309,7 @@  static size_t adjust_range_hwpoison(struct page *page, size_t offset, size_t byt
 			break;
 		offset += n;
 		if (offset == PAGE_SIZE) {
-			page++;
+			page = nth_page(page, 1);
 			offset = 0;
 		}
 	}
diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
index 8a2321d19194..61507bbbfb62 100644
--- a/fs/nfsd/vfs.c
+++ b/fs/nfsd/vfs.c
@@ -953,8 +953,8 @@  nfsd_splice_actor(struct pipe_inode_info *pipe, struct pipe_buffer *buf,
 	unsigned offset = buf->offset;
 	struct page *last_page;
 
-	last_page = page + (offset + sd->len - 1) / PAGE_SIZE;
-	for (page += offset / PAGE_SIZE; page <= last_page; page++) {
+	last_page = nth_page(page, (offset + sd->len - 1) / PAGE_SIZE);
+	for (page = nth_page(page, offset / PAGE_SIZE); page <= last_page; page = nth_page(page, 1)) {
 		/*
 		 * Skip page replacement when extending the contents
 		 * of the current page.
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index b5c4c8beefe2..2afbd063103f 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -763,7 +763,7 @@  static inline struct page *find_subpage(struct page *head, pgoff_t index)
 	if (PageHuge(head))
 		return head;
 
-	return head + (index & (thp_nr_pages(head) - 1));
+	return nth_page(head, (index & (thp_nr_pages(head) - 1)));
 }
 
 unsigned filemap_get_folios(struct address_space *mapping, pgoff_t *start,
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index e2f2574d8b74..06ef47bf012f 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -335,7 +335,7 @@  static void kimage_free_pages(struct page *page)
 	arch_kexec_pre_free_pages(page_address(page), count);
 
 	for (i = 0; i < count; i++)
-		ClearPageReserved(page + i);
+		ClearPageReserved(nth_page(page, i));
 	__free_pages(page, order);
 }
 
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index e4dc809d1075..c0a1228b6da2 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -52,8 +52,9 @@ 
 	while (n) {						\
 		unsigned offset = p->bv_offset + skip;		\
 		unsigned left;					\
-		void *kaddr = kmap_local_page(p->bv_page +	\
-					offset / PAGE_SIZE);	\
+		void *kaddr = kmap_local_page(			\
+					nth_page(p->bv_page, 	\
+					offset / PAGE_SIZE));	\
 		base = kaddr + offset % PAGE_SIZE;		\
 		len = min(min(n, (size_t)(p->bv_len - skip)),	\
 		     (size_t)(PAGE_SIZE - offset % PAGE_SIZE));	\
@@ -473,7 +474,7 @@  size_t copy_page_to_iter(struct page *page, size_t offset, size_t bytes,
 		return 0;
 	if (WARN_ON_ONCE(i->data_source))
 		return 0;
-	page += offset / PAGE_SIZE; // first subpage
+	page = nth_page(page, offset / PAGE_SIZE); // first subpage
 	offset %= PAGE_SIZE;
 	while (1) {
 		void *kaddr = kmap_local_page(page);
@@ -486,7 +487,7 @@  size_t copy_page_to_iter(struct page *page, size_t offset, size_t bytes,
 			break;
 		offset += n;
 		if (offset == PAGE_SIZE) {
-			page++;
+			page = nth_page(page, 1);
 			offset = 0;
 		}
 	}
@@ -503,7 +504,7 @@  size_t copy_page_to_iter_nofault(struct page *page, unsigned offset, size_t byte
 		return 0;
 	if (WARN_ON_ONCE(i->data_source))
 		return 0;
-	page += offset / PAGE_SIZE; // first subpage
+	page = nth_page(page, offset / PAGE_SIZE); // first subpage
 	offset %= PAGE_SIZE;
 	while (1) {
 		void *kaddr = kmap_local_page(page);
@@ -520,7 +521,7 @@  size_t copy_page_to_iter_nofault(struct page *page, unsigned offset, size_t byte
 			break;
 		offset += n;
 		if (offset == PAGE_SIZE) {
-			page++;
+			page = nth_page(page, 1);
 			offset = 0;
 		}
 	}
@@ -534,7 +535,7 @@  size_t copy_page_from_iter(struct page *page, size_t offset, size_t bytes,
 	size_t res = 0;
 	if (!page_copy_sane(page, offset, bytes))
 		return 0;
-	page += offset / PAGE_SIZE; // first subpage
+	page = nth_page(page, offset / PAGE_SIZE); // first subpage
 	offset %= PAGE_SIZE;
 	while (1) {
 		void *kaddr = kmap_local_page(page);
@@ -547,7 +548,7 @@  size_t copy_page_from_iter(struct page *page, size_t offset, size_t bytes,
 			break;
 		offset += n;
 		if (offset == PAGE_SIZE) {
-			page++;
+			page = nth_page(page, 1);
 			offset = 0;
 		}
 	}
@@ -1125,7 +1126,7 @@  static ssize_t __iov_iter_get_pages_alloc(struct iov_iter *i,
 			return -ENOMEM;
 		p = *pages;
 		for (int k = 0; k < n; k++)
-			get_page(p[k] = page + k);
+			get_page(p[k] = nth_page(page, k));
 		maxsize = min_t(size_t, maxsize, n * PAGE_SIZE - *start);
 		i->count -= maxsize;
 		i->iov_offset += maxsize;
@@ -1665,7 +1666,7 @@  static ssize_t iov_iter_extract_bvec_pages(struct iov_iter *i,
 		return -ENOMEM;
 	p = *pages;
 	for (k = 0; k < maxpages; k++)
-		p[k] = page + k;
+		p[k] = nth_page(page, k);
 
 	maxsize = min_t(size_t, maxsize, maxpages * PAGE_SIZE - offset);
 	iov_iter_advance(i, maxsize);
diff --git a/mm/cma.c b/mm/cma.c
index 4880f72102fa..98d77f8679ee 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -501,7 +501,7 @@  struct page *cma_alloc(struct cma *cma, unsigned long count,
 	 */
 	if (page) {
 		for (i = 0; i < count; i++)
-			page_kasan_tag_reset(page + i);
+			page_kasan_tag_reset(nth_page(page, i));
 	}
 
 	if (ret && !no_warn) {
diff --git a/mm/compaction.c b/mm/compaction.c
index 38c8d216c6a3..02765ceea819 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -362,7 +362,7 @@  __reset_isolation_pfn(struct zone *zone, unsigned long pfn, bool check_source,
 			return true;
 		}
 
-		page += (1 << PAGE_ALLOC_COSTLY_ORDER);
+		page = nth_page(page, (1 << PAGE_ALLOC_COSTLY_ORDER));
 	} while (page <= end_page);
 
 	return false;
@@ -602,7 +602,7 @@  static unsigned long isolate_freepages_block(struct compact_control *cc,
 	page = pfn_to_page(blockpfn);
 
 	/* Isolate free pages. */
-	for (; blockpfn < end_pfn; blockpfn += stride, page += stride) {
+	for (; blockpfn < end_pfn; blockpfn += stride, page = nth_page(page, stride)) {
 		int isolated;
 
 		/*
@@ -628,7 +628,7 @@  static unsigned long isolate_freepages_block(struct compact_control *cc,
 
 			if (likely(order <= MAX_ORDER)) {
 				blockpfn += (1UL << order) - 1;
-				page += (1UL << order) - 1;
+				page = nth_page(page, (1UL << order) - 1);
 				nr_scanned += (1UL << order) - 1;
 			}
 			goto isolate_fail;
@@ -665,7 +665,7 @@  static unsigned long isolate_freepages_block(struct compact_control *cc,
 		}
 		/* Advance to the end of split page */
 		blockpfn += isolated - 1;
-		page += isolated - 1;
+		page = nth_page(page, isolated - 1);
 		continue;
 
 isolate_fail:
diff --git a/mm/debug.c b/mm/debug.c
index ee533a5ceb79..90b6308c2143 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -67,7 +67,7 @@  static void __dump_page(struct page *page)
 	int mapcount;
 	char *type = "";
 
-	if (page < head || (page >= head + MAX_ORDER_NR_PAGES)) {
+	if (page < head || (page >= nth_page(head, MAX_ORDER_NR_PAGES))) {
 		/*
 		 * Corrupt page, so we cannot call page_mapping. Instead, do a
 		 * safe subset of the steps that page_mapping() does. Caution:
diff --git a/mm/filemap.c b/mm/filemap.c
index dfade1ef1765..39e025a4072c 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3481,7 +3481,7 @@  static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 	pte_t *old_ptep = vmf->pte;
 
 	do {
-		if (PageHWPoison(page + count))
+		if (PageHWPoison(nth_page(page, count)))
 			goto skip;
 
 		if (mmap_miss > 0)
@@ -3506,7 +3506,7 @@  static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 		}
 
 		count++;
-		page += count;
+		page = nth_page(page, count);
 		vmf->pte += count;
 		addr += count * PAGE_SIZE;
 		count = 0;
diff --git a/mm/highmem.c b/mm/highmem.c
index e19269093a93..7c7a6d4553b7 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -411,7 +411,7 @@  void zero_user_segments(struct page *page, unsigned start1, unsigned end1,
 			unsigned this_end = min_t(unsigned, end1, PAGE_SIZE);
 
 			if (end1 > start1) {
-				kaddr = kmap_local_page(page + i);
+				kaddr = kmap_local_page(nth_page(page, i));
 				memset(kaddr + start1, 0, this_end - start1);
 			}
 			end1 -= this_end;
@@ -426,7 +426,7 @@  void zero_user_segments(struct page *page, unsigned start1, unsigned end1,
 
 			if (end2 > start2) {
 				if (!kaddr)
-					kaddr = kmap_local_page(page + i);
+					kaddr = kmap_local_page(nth_page(page, i));
 				memset(kaddr + start2, 0, this_end - start2);
 			}
 			end2 -= this_end;
@@ -435,7 +435,7 @@  void zero_user_segments(struct page *page, unsigned start1, unsigned end1,
 
 		if (kaddr) {
 			kunmap_local(kaddr);
-			flush_dcache_page(page + i);
+			flush_dcache_page(nth_page(page, i));
 		}
 
 		if (!end1 && !end2)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 4465915711c3..8380924fd756 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1478,7 +1478,7 @@  struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 	if (flags & FOLL_TOUCH)
 		touch_pmd(vma, addr, pmd, flags & FOLL_WRITE);
 
-	page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT;
+	page = nth_page(page, (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT);
 	VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page);
 
 	return page;
@@ -2214,13 +2214,13 @@  static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 			swp_entry_t swp_entry;
 			if (write)
 				swp_entry = make_writable_migration_entry(
-							page_to_pfn(page + i));
+							page_to_pfn(page) + i);
 			else if (anon_exclusive)
 				swp_entry = make_readable_exclusive_migration_entry(
-							page_to_pfn(page + i));
+							page_to_pfn(page) + i);
 			else
 				swp_entry = make_readable_migration_entry(
-							page_to_pfn(page + i));
+							page_to_pfn(page) + i);
 			if (young)
 				swp_entry = make_migration_entry_young(swp_entry);
 			if (dirty)
@@ -2231,11 +2231,11 @@  static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 			if (uffd_wp)
 				entry = pte_swp_mkuffd_wp(entry);
 		} else {
-			entry = mk_pte(page + i, READ_ONCE(vma->vm_page_prot));
+			entry = mk_pte(nth_page(page, i), READ_ONCE(vma->vm_page_prot));
 			if (write)
 				entry = pte_mkwrite(entry);
 			if (anon_exclusive)
-				SetPageAnonExclusive(page + i);
+				SetPageAnonExclusive(nth_page(page, i));
 			if (!young)
 				entry = pte_mkold(entry);
 			/* NOTE: this may set soft-dirty too on some archs */
@@ -2245,7 +2245,7 @@  static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 				entry = pte_mksoft_dirty(entry);
 			if (uffd_wp)
 				entry = pte_mkuffd_wp(entry);
-			page_add_anon_rmap(page + i, vma, addr, RMAP_NONE);
+			page_add_anon_rmap(nth_page(page, i), vma, addr, RMAP_NONE);
 		}
 		VM_BUG_ON(!pte_none(ptep_get(pte)));
 		set_pte_at(mm, addr, pte, entry);
@@ -2405,7 +2405,7 @@  static void __split_huge_page_tail(struct folio *folio, int tail,
 		struct lruvec *lruvec, struct list_head *list)
 {
 	struct page *head = &folio->page;
-	struct page *page_tail = head + tail;
+	struct page *page_tail = nth_page(head, tail);
 	/*
 	 * Careful: new_folio is not a "real" folio before we cleared PageTail.
 	 * Don't pass it around before clear_compound_head().
@@ -2520,8 +2520,8 @@  static void __split_huge_page(struct page *page, struct list_head *list,
 	for (i = nr - 1; i >= 1; i--) {
 		__split_huge_page_tail(folio, i, lruvec, list);
 		/* Some pages can be beyond EOF: drop them from page cache */
-		if (head[i].index >= end) {
-			struct folio *tail = page_folio(head + i);
+		if (nth_page(head, i)->index >= end) {
+			struct folio *tail = page_folio(nth_page(head, i));
 
 			if (shmem_mapping(head->mapping))
 				shmem_uncharge(head->mapping->host, 1);
@@ -2531,11 +2531,11 @@  static void __split_huge_page(struct page *page, struct list_head *list,
 			__filemap_remove_folio(tail, NULL);
 			folio_put(tail);
 		} else if (!PageAnon(page)) {
-			__xa_store(&head->mapping->i_pages, head[i].index,
-					head + i, 0);
+			__xa_store(&head->mapping->i_pages, nth_page(head, i)->index,
+					nth_page(head, i), 0);
 		} else if (swap_cache) {
 			__xa_store(&swap_cache->i_pages, offset + i,
-					head + i, 0);
+					nth_page(head, i), 0);
 		}
 	}
 
@@ -2567,7 +2567,7 @@  static void __split_huge_page(struct page *page, struct list_head *list,
 		split_swap_cluster(folio->swap);
 
 	for (i = 0; i < nr; i++) {
-		struct page *subpage = head + i;
+		struct page *subpage = nth_page(head, i);
 		if (subpage == page)
 			continue;
 		unlock_page(subpage);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a82c3104337e..eaabc0f0bdc0 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6478,7 +6478,7 @@  struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 			}
 		}
 
-		page += ((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
+		page = nth_page(page, ((address & ~huge_page_mask(h)) >> PAGE_SHIFT));
 
 		/*
 		 * Note that page may be a sub-page, and with vmemmap
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 4b9734777f69..21137d9f2633 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -61,7 +61,7 @@  static int split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start)
 		pte_t entry, *pte;
 		pgprot_t pgprot = PAGE_KERNEL;
 
-		entry = mk_pte(head + i, pgprot);
+		entry = mk_pte(nth_page(head, i), pgprot);
 		pte = pte_offset_kernel(&__pmd, addr);
 		set_pte_at(&init_mm, addr, pte, entry);
 	}
diff --git a/mm/internal.h b/mm/internal.h
index f59a53111817..0185b515afb0 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -363,7 +363,7 @@  static inline struct page *find_buddy_page_pfn(struct page *page,
 	unsigned long __buddy_pfn = __find_buddy_pfn(pfn, order);
 	struct page *buddy;
 
-	buddy = page + (__buddy_pfn - pfn);
+	buddy = nth_page(page, (__buddy_pfn - pfn));
 	if (buddy_pfn)
 		*buddy_pfn = __buddy_pfn;
 
@@ -427,7 +427,7 @@  static inline void prep_compound_head(struct page *page, unsigned int order)
 
 static inline void prep_compound_tail(struct page *head, int tail_idx)
 {
-	struct page *p = head + tail_idx;
+	struct page *p = nth_page(head, tail_idx);
 
 	if (tail_idx > TAIL_MAPPING_REUSED_MAX)
 		p->mapping = TAIL_MAPPING;
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 256930da578a..6d9d10695037 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -110,7 +110,7 @@  bool __kasan_unpoison_pages(struct page *page, unsigned int order, bool init)
 	kasan_unpoison(set_tag(page_address(page), tag),
 		       PAGE_SIZE << order, init);
 	for (i = 0; i < (1 << order); i++)
-		page_kasan_tag_set(page + i, tag);
+		page_kasan_tag_set(nth_page(page, i), tag);
 
 	return true;
 }
@@ -128,7 +128,7 @@  void __kasan_poison_slab(struct slab *slab)
 	unsigned long i;
 
 	for (i = 0; i < compound_nr(page); i++)
-		page_kasan_tag_reset(page + i);
+		page_kasan_tag_reset(nth_page(page, i));
 	kasan_poison(page_address(page), page_size(page),
 		     KASAN_SLAB_REDZONE, false);
 }
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 40d43eccdee8..b88b6da99b8d 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1563,7 +1563,7 @@  int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 		 * Note that uprobe, debugger, or MAP_PRIVATE may change the
 		 * page table, but the new page will not be a subpage of hpage.
 		 */
-		if (hpage + i != page)
+		if (nth_page(hpage, i) != page)
 			goto abort;
 	}
 
@@ -1595,7 +1595,7 @@  int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 			goto abort;
 		}
 		page = vm_normal_page(vma, addr, ptent);
-		if (hpage + i != page)
+		if (nth_page(hpage, i) != page)
 			goto abort;
 
 		/*
@@ -2026,17 +2026,17 @@  static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	index = start;
 	list_for_each_entry(page, &pagelist, lru) {
 		while (index < page->index) {
-			clear_highpage(hpage + (index % HPAGE_PMD_NR));
+			clear_highpage(nth_page(hpage, (index % HPAGE_PMD_NR)));
 			index++;
 		}
-		if (copy_mc_highpage(hpage + (page->index % HPAGE_PMD_NR), page) > 0) {
+		if (copy_mc_highpage(nth_page(hpage, (page->index % HPAGE_PMD_NR)), page) > 0) {
 			result = SCAN_COPY_MC;
 			goto rollback;
 		}
 		index++;
 	}
 	while (index < end) {
-		clear_highpage(hpage + (index % HPAGE_PMD_NR));
+		clear_highpage(nth_page(hpage, (index % HPAGE_PMD_NR)));
 		index++;
 	}
 
diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 2918150e31bd..dda8da9fbadd 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -1593,7 +1593,7 @@  static void kmemleak_scan(void)
 			/* only scan if page is in use */
 			if (page_count(page) == 0)
 				continue;
-			scan_block(page, page + 1, NULL);
+			scan_block(page, nth_page(page, 1), NULL);
 			if (!(pfn & 63))
 				cond_resched();
 		}
diff --git a/mm/memory.c b/mm/memory.c
index 12647d139a13..1da1eb017128 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5981,7 +5981,7 @@  static int clear_subpage(unsigned long addr, int idx, void *arg)
 {
 	struct page *page = arg;
 
-	clear_user_highpage(page + idx, addr);
+	clear_user_highpage(nth_page(page, idx), addr);
 	return 0;
 }
 
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 50f2f34745af..c9aa456dcb2c 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1974,7 +1974,7 @@  static void __init deferred_free_range(unsigned long pfn,
 	/* Free a large naturally-aligned chunk if possible */
 	if (nr_pages == MAX_ORDER_NR_PAGES && IS_MAX_ORDER_ALIGNED(pfn)) {
 		for (i = 0; i < nr_pages; i += pageblock_nr_pages)
-			set_pageblock_migratetype(page + i, MIGRATE_MOVABLE);
+			set_pageblock_migratetype(nth_page(page, i), MIGRATE_MOVABLE);
 		__free_pages_core(page, MAX_ORDER);
 		return;
 	}
@@ -1982,7 +1982,7 @@  static void __init deferred_free_range(unsigned long pfn,
 	/* Accept chunks smaller than MAX_ORDER upfront */
 	accept_memory(PFN_PHYS(pfn), PFN_PHYS(pfn + nr_pages));
 
-	for (i = 0; i < nr_pages; i++, page++, pfn++) {
+	for (i = 0; i < nr_pages; i++, page = nth_page(page, 1), pfn++) {
 		if (pageblock_aligned(pfn))
 			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
 		__free_pages_core(page, 0);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 442c1b3480aa..65397117ace8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -730,7 +730,7 @@  buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
 		return false;
 
 	higher_page_pfn = buddy_pfn & pfn;
-	higher_page = page + (higher_page_pfn - pfn);
+	higher_page = nth_page(page, (higher_page_pfn - pfn));
 
 	return find_buddy_page_pfn(higher_page, higher_page_pfn, order + 1,
 			NULL) != NULL;
@@ -816,7 +816,7 @@  static inline void __free_one_page(struct page *page,
 		else
 			del_page_from_free_list(buddy, zone, order);
 		combined_pfn = buddy_pfn & pfn;
-		page = page + (combined_pfn - pfn);
+		page = nth_page(page, (combined_pfn - pfn));
 		pfn = combined_pfn;
 		order++;
 	}
@@ -968,7 +968,7 @@  static inline bool is_check_pages_enabled(void)
 static int free_tail_page_prepare(struct page *head_page, struct page *page)
 {
 	struct folio *folio = (struct folio *)head_page;
-	int ret = 1, index = page - head_page;
+	int ret = 1, index = folio_page_idx(folio, page);
 
 	/*
 	 * We rely page->lru.next never has bit 0 set, unless the page
@@ -1062,7 +1062,7 @@  static void kernel_init_pages(struct page *page, int numpages)
 	/* s390's use of memset() could override KASAN redzones. */
 	kasan_disable_current();
 	for (i = 0; i < numpages; i++)
-		clear_highpage_kasan_tagged(page + i);
+		clear_highpage_kasan_tagged(nth_page(page, i));
 	kasan_enable_current();
 }
 
@@ -1104,14 +1104,14 @@  static __always_inline bool free_pages_prepare(struct page *page,
 			page[1].flags &= ~PAGE_FLAGS_SECOND;
 		for (i = 1; i < (1 << order); i++) {
 			if (compound)
-				bad += free_tail_page_prepare(page, page + i);
+				bad += free_tail_page_prepare(page, nth_page(page, i));
 			if (is_check_pages_enabled()) {
-				if (free_page_is_bad(page + i)) {
+				if (free_page_is_bad(nth_page(page, i))) {
 					bad++;
 					continue;
 				}
 			}
-			(page + i)->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
+			nth_page(page, i)->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
 		}
 	}
 	if (PageMappingFlags(page))
@@ -1433,7 +1433,7 @@  static inline bool check_new_pages(struct page *page, unsigned int order)
 {
 	if (is_check_pages_enabled()) {
 		for (int i = 0; i < (1 << order); i++) {
-			struct page *p = page + i;
+			struct page *p = nth_page(page,  i);
 
 			if (check_new_page(p))
 				return true;
@@ -1505,7 +1505,7 @@  inline void post_alloc_hook(struct page *page, unsigned int order,
 	if (zero_tags) {
 		/* Initialize both memory and memory tags. */
 		for (i = 0; i != 1 << order; ++i)
-			tag_clear_highpage(page + i);
+			tag_clear_highpage(nth_page(page, i));
 
 		/* Take note that memory was initialized by the loop above. */
 		init = false;
@@ -1521,7 +1521,7 @@  inline void post_alloc_hook(struct page *page, unsigned int order,
 		 * tags to ensure page_address() dereferencing does not fault.
 		 */
 		for (i = 0; i != 1 << order; ++i)
-			page_kasan_tag_reset(page + i);
+			page_kasan_tag_reset(nth_page(page, i));
 	}
 	/* If memory is still not initialized, initialize it now. */
 	if (init)
@@ -1676,7 +1676,7 @@  static void change_pageblock_range(struct page *pageblock_page,
 
 	while (nr_pageblocks--) {
 		set_pageblock_migratetype(pageblock_page, migratetype);
-		pageblock_page += pageblock_nr_pages;
+		pageblock_page = nth_page(pageblock_page, pageblock_nr_pages);
 	}
 }
 
@@ -2564,7 +2564,7 @@  void split_page(struct page *page, unsigned int order)
 	VM_BUG_ON_PAGE(!page_count(page), page);
 
 	for (i = 1; i < (1 << order); i++)
-		set_page_refcounted(page + i);
+		set_page_refcounted(nth_page(page, i));
 	split_page_owner(page, 1 << order);
 	split_page_memcg(page, 1 << order);
 }
@@ -2597,8 +2597,8 @@  int __isolate_free_page(struct page *page, unsigned int order)
 	 * pageblock
 	 */
 	if (order >= pageblock_order - 1) {
-		struct page *endpage = page + (1 << order) - 1;
-		for (; page < endpage; page += pageblock_nr_pages) {
+		struct page *endpage = nth_page(page, (1 << order) - 1);
+		for (; page < endpage; page = nth_page(page, pageblock_nr_pages)) {
 			int mt = get_pageblock_migratetype(page);
 			/*
 			 * Only change normal pageblocks (i.e., they can merge
@@ -4559,7 +4559,7 @@  void __free_pages(struct page *page, unsigned int order)
 		free_the_page(page, order);
 	else if (!head)
 		while (order-- > 0)
-			free_the_page(page + (1 << order), order);
+			free_the_page(nth_page(page, (1 << order)), order);
 }
 EXPORT_SYMBOL(__free_pages);
 
@@ -4705,15 +4705,15 @@  static void *make_alloc_exact(unsigned long addr, unsigned int order,
 	if (addr) {
 		unsigned long nr = DIV_ROUND_UP(size, PAGE_SIZE);
 		struct page *page = virt_to_page((void *)addr);
-		struct page *last = page + nr;
+		struct page *last = nth_page(page, nr);
 
 		split_page_owner(page, 1 << order);
 		split_page_memcg(page, 1 << order);
 		while (page < --last)
 			set_page_refcounted(last);
 
-		last = page + (1UL << order);
-		for (page += nr; page < last; page++)
+		last = nth_page(page, (1UL << order));
+		for (page = nth_page(page, nr); page < last; page = nth_page(page, 1))
 			__free_pages_ok(page, 0, FPI_TO_TAIL);
 	}
 	return (void *)addr;
@@ -6511,12 +6511,12 @@  static void break_down_buddy_pages(struct zone *zone, struct page *page,
 		high--;
 		size >>= 1;
 
-		if (target >= &page[size]) {
-			next_page = page + size;
+		if (target >= nth_page(page, size)) {
+			next_page = nth_page(page, size);
 			current_buddy = page;
 		} else {
 			next_page = page;
-			current_buddy = page + size;
+			current_buddy = nth_page(page, size);
 		}
 
 		if (set_page_guard(zone, current_buddy, high, migratetype))
diff --git a/mm/page_poison.c b/mm/page_poison.c
index b4f456437b7e..30c8885b4990 100644
--- a/mm/page_poison.c
+++ b/mm/page_poison.c
@@ -35,7 +35,7 @@  void __kernel_poison_pages(struct page *page, int n)
 	int i;
 
 	for (i = 0; i < n; i++)
-		poison_page(page + i);
+		poison_page(nth_page(page, i));
 }
 
 static bool single_bit_flip(unsigned char a, unsigned char b)
@@ -94,7 +94,7 @@  void __kernel_unpoison_pages(struct page *page, int n)
 	int i;
 
 	for (i = 0; i < n; i++)
-		unpoison_page(page + i);
+		unpoison_page(nth_page(page, i));
 }
 
 #ifndef CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 228a4a5312f2..58158d8ada1f 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3089,7 +3089,7 @@  vm_area_alloc_pages(gfp_t gfp, int nid,
 		 * vm_struct APIs independent of the physical/mapped size.
 		 */
 		for (i = 0; i < (1U << order); i++)
-			pages[nr_allocated + i] = page + i;
+			pages[nr_allocated + i] = nth_page(page, i);
 
 		cond_resched();
 		nr_allocated += 1U << order;