[RFC,05/13] mm/rmap: remove do_page_add_anon_rmap()

Message ID: 20220224122614.94921-6-david@redhat.com
Series: mm: COW fixes part 2: reliable GUP pins of anonymous pages

Commit Message

David Hildenbrand Feb. 24, 2022, 12:26 p.m. UTC
... and instead convert page_add_anon_rmap() to accept flags.

Passing flags instead of bools is usually nicer either way, and follow-up
patches will want to pass RMAP_EXCLUSIVE more often when detecting that
an anonymous page is exclusive: for example, when restoring an anonymous
page from a writable migration entry.

This is a preparation for marking an anonymous page inside
page_add_anon_rmap() as exclusive when RMAP_EXCLUSIVE is passed.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/rmap.h |  4 +---
 mm/huge_memory.c     |  2 +-
 mm/ksm.c             |  2 +-
 mm/memory.c          |  2 +-
 mm/migrate.c         |  2 +-
 mm/rmap.c            | 17 +++--------------
 mm/swapfile.c        |  2 +-
 7 files changed, 9 insertions(+), 22 deletions(-)
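
(For context: the RMAP_xyz flags passed around here are defined in
include/linux/rmap.h, and at this point in the series they look roughly
like this:

  /* bitflags for do_page_add_anon_rmap() */
  #define RMAP_EXCLUSIVE 0x01	/* the (sub)page is exclusive to one process */
  #define RMAP_COMPOUND  0x02	/* map/account the page as a compound page */

The comment above them is exactly what the first review comment below
picks up on.)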

Comments

Linus Torvalds Feb. 24, 2022, 5:29 p.m. UTC | #1
On Thu, Feb 24, 2022 at 4:29 AM David Hildenbrand <david@redhat.com> wrote:
>
> ... and instead convert page_add_anon_rmap() to accept flags.

Can you fix the comment above the RMAP_xyz definitions? That one still says

  /* bitflags for do_page_add_anon_rmap() */

that now no longer exists.
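
Presumably the fix is just to have it name the surviving function, e.g.:

  /* bitflags for page_add_anon_rmap() */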

Also, while this kind of code isn't unusual, I think it's still confusing:

> +               page_add_anon_rmap(page, vma, addr, 0);

because when reading that, at least I go "what does 0 mean? Is it a
page offset, or what?"

It might be a good idea to simply add a

  #define RMAP_PAGE 0x00

or something like that, just to have the callers all make it obvious
that we're talking about that RMAP_xyz bits - even if some of them may
be default.

(Then using an enum of a special type is something we do if we want to
add extra clarity or sparse testing; I don't think there are enough
users for that to make sense)
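
For illustration, the call sites would then read something like

  page_add_anon_rmap(page, vma, addr, RMAP_PAGE);

instead of passing a bare 0.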

               Linus
David Hildenbrand Feb. 24, 2022, 5:41 p.m. UTC | #2
On 24.02.22 18:29, Linus Torvalds wrote:
> On Thu, Feb 24, 2022 at 4:29 AM David Hildenbrand <david@redhat.com> wrote:
>>
>> ... and instead convert page_add_anon_rmap() to accept flags.
> 
> Can you fix the comment above the RMAP_xyz definitions? That one still says
> 
>   /* bitflags for do_page_add_anon_rmap() */
> 
> that now no longer exists.

Oh, yes sure.

> 
> Also, while this kind of code isn't unusual, I think it's still confusing:
> 
>> +               page_add_anon_rmap(page, vma, addr, 0);
> 
> because when reading that, at least I go "what does 0 mean? Is it a
> page offset, or what?"

Yes, I agree.

> 
> It might be a good idea to simply add a
> 
>   #define RMAP_PAGE 0x00
> 
> or something like that, just to have the callers all make it obvious
> that we're talking about that RMAP_xyz bits - even if some of them may
> be default.
> 
> (Then using an enum of a special type is something we do if we want to
> add extra clarity or sparse testing, I don't think there are enough
> users for that to make sense)
> 

Actually, I thought about doing it similarly to what I did in
page_alloc.c with fpi_t:

typedef int __bitwise fpi_t;

#define FPI_NONE		((__force fpi_t)0)


I can do something similar here.
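
For reference, that pattern applied here could look roughly like this
(a sketch of the direction, not final code):

  typedef int __bitwise rmap_t;

  /* No special request */
  #define RMAP_NONE		((__force rmap_t)0)
  /* The (sub)page is exclusive to a single process */
  #define RMAP_EXCLUSIVE	((__force rmap_t)BIT(0))
  /* The compound page is mapped via a single PMD */
  #define RMAP_COMPOUND		((__force rmap_t)BIT(1))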
Linus Torvalds Feb. 24, 2022, 5:48 p.m. UTC | #3
On Thu, Feb 24, 2022 at 9:41 AM David Hildenbrand <david@redhat.com> wrote:
>
> Actually, I thought about doing it similarly to what I did in
> page_alloc.c with fpi_t:
>
> typedef int __bitwise fpi_t;
>
> #define FPI_NONE                ((__force fpi_t)0)
>
> I can do something similar here.

Yeah, that looks good. And then the relevant function declarations and
definitions also have that explicit type there instead of 'int', which
adds a bit more documentation for people grepping for use.
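
For example, the prototype touched by this patch would then read

  void page_add_anon_rmap(struct page *, struct vm_area_struct *,
  		unsigned long, rmap_t);

so the flag argument is self-documenting at the declaration as well.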

Thanks,
               Linus
David Hildenbrand Feb. 25, 2022, 9:01 a.m. UTC | #4
On 24.02.22 18:48, Linus Torvalds wrote:
> On Thu, Feb 24, 2022 at 9:41 AM David Hildenbrand <david@redhat.com> wrote:
>>
>> Actually, I thought about doing it similarly to what I did in
>> page_alloc.c with fpi_t:
>>
>> typedef int __bitwise fpi_t;
>>
>> #define FPI_NONE                ((__force fpi_t)0)
>>
>> I can do something similar here.
> 
> Yeah, that looks good. And then the relevant function declarations and
> definitions also have that explicit type there instead of 'int', which
> adds a bit more documentation for people grepping for use.
> 

While at it, I'll also add a patch to drop the "bool compound" parameter
from page_add_new_anon_rmap().

(description only)

Author: David Hildenbrand <david@redhat.com>
Date:   Fri Feb 25 09:38:11 2022 +0100

    mm/rmap: drop "compound" parameter from page_add_new_anon_rmap()
    
    New anonymous pages are always mapped natively: only THP/khugepaged code
    maps a new compound anonymous page and passes "true". Otherwise, we're
    just dealing with simple, non-compound pages.
    
    Let's give the interface clearer semantics and document these.
    
    Signed-off-by: David Hildenbrand <david@redhat.com>
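
A sketch of where that follow-up is headed (an illustration, not the
actual patch): the function can derive "compound" from the page itself
instead of taking a parameter:

  void page_add_new_anon_rmap(struct page *page,
  		struct vm_area_struct *vma, unsigned long address)
  {
  	/* Only THP/khugepaged code hands in compound pages here. */
  	const bool compound = PageCompound(page);

  	/* ... mapcount and accounting updates as before ... */
  }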

Patch

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 92c3585b8c6a..f230e86b4587 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -167,9 +167,7 @@  struct anon_vma *page_get_anon_vma(struct page *page);
  */
 void page_move_anon_rmap(struct page *, struct vm_area_struct *);
 void page_add_anon_rmap(struct page *, struct vm_area_struct *,
-		unsigned long, bool);
-void do_page_add_anon_rmap(struct page *, struct vm_area_struct *,
-			   unsigned long, int);
+		unsigned long, int);
 void page_add_new_anon_rmap(struct page *, struct vm_area_struct *,
 		unsigned long, bool);
 void page_add_file_rmap(struct page *, bool);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c126d728b8de..2ca137e01e84 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3115,7 +3115,7 @@  void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 
 	flush_cache_range(vma, mmun_start, mmun_start + HPAGE_PMD_SIZE);
 	if (PageAnon(new))
-		page_add_anon_rmap(new, vma, mmun_start, true);
+		page_add_anon_rmap(new, vma, mmun_start, RMAP_COMPOUND);
 	else
 		page_add_file_rmap(new, true);
 	set_pmd_at(mm, mmun_start, pvmw->pmd, pmde);
diff --git a/mm/ksm.c b/mm/ksm.c
index c20bd4d9a0d9..e897577c54c5 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1153,7 +1153,7 @@  static int replace_page(struct vm_area_struct *vma, struct page *page,
 	 */
 	if (!is_zero_pfn(page_to_pfn(kpage))) {
 		get_page(kpage);
-		page_add_anon_rmap(kpage, vma, addr, false);
+		page_add_anon_rmap(kpage, vma, addr, 0);
 		newpte = mk_pte(kpage, vma->vm_page_prot);
 	} else {
 		newpte = pte_mkspecial(pfn_pte(page_to_pfn(kpage),
diff --git a/mm/memory.c b/mm/memory.c
index b9602d41d907..c2f1c2428303 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3709,7 +3709,7 @@  vm_fault_t do_swap_page(struct vm_fault *vmf)
 		page_add_new_anon_rmap(page, vma, vmf->address, false);
 		lru_cache_add_inactive_or_unevictable(page, vma);
 	} else {
-		do_page_add_anon_rmap(page, vma, vmf->address, exclusive);
+		page_add_anon_rmap(page, vma, vmf->address, exclusive);
 	}
 
 	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
diff --git a/mm/migrate.c b/mm/migrate.c
index 524c2648ab36..d4d72a15224c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -246,7 +246,7 @@  static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 #endif
 		{
 			if (PageAnon(new))
-				page_add_anon_rmap(new, vma, pvmw.address, false);
+				page_add_anon_rmap(new, vma, pvmw.address, 0);
 			else
 				page_add_file_rmap(new, false);
 			set_pte_at(vma->vm_mm, pvmw.address, pvmw.pte, pte);
diff --git a/mm/rmap.c b/mm/rmap.c
index f825aeef61ca..902ebf99d147 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1129,26 +1129,15 @@  static void __page_check_anon_rmap(struct page *page,
  * @page:	the page to add the mapping to
  * @vma:	the vm area in which the mapping is added
  * @address:	the user virtual address mapped
- * @compound:	charge the page as compound or small page
+ * @flags:	the rmap flags
  *
  * The caller needs to hold the pte lock, and the page must be locked in
  * the anon_vma case: to serialize mapping,index checking after setting,
  * and to ensure that PageAnon is not being upgraded racily to PageKsm
  * (but PageKsm is never downgraded to PageAnon).
  */
-void page_add_anon_rmap(struct page *page,
-	struct vm_area_struct *vma, unsigned long address, bool compound)
-{
-	do_page_add_anon_rmap(page, vma, address, compound ? RMAP_COMPOUND : 0);
-}
-
-/*
- * Special version of the above for do_swap_page, which often runs
- * into pages that are exclusively owned by the current process.
- * Everybody else should continue to use page_add_anon_rmap above.
- */
-void do_page_add_anon_rmap(struct page *page,
-	struct vm_area_struct *vma, unsigned long address, int flags)
+void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
+			unsigned long address, int flags)
 {
 	bool compound = flags & RMAP_COMPOUND;
 	bool first;
diff --git a/mm/swapfile.c b/mm/swapfile.c
index a5183315dc58..eacda7ca7ac9 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1800,7 +1800,7 @@  static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 	inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
 	get_page(page);
 	if (page == swapcache) {
-		page_add_anon_rmap(page, vma, addr, false);
+		page_add_anon_rmap(page, vma, addr, 0);
 	} else { /* ksm created a completely new copy */
 		page_add_new_anon_rmap(page, vma, addr, false);
 		lru_cache_add_inactive_or_unevictable(page, vma);