
[v2,3/3] mm: remove folio_test_anon(folio)==false path in __folio_add_anon_rmap()

Message ID 20240617231137.80726-4-21cnbao@gmail.com
State New
Series mm: clarify folio_add_new_anon_rmap() and __folio_add_anon_rmap()

Commit Message

Barry Song June 17, 2024, 11:11 p.m. UTC
From: Barry Song <v-songbaohua@oppo.com>

The folio_test_anon(folio)==false case has been relocated to
folio_add_new_anon_rmap(). Additionally, the four other callers
consistently pass anonymous folios:

stack 1:
remove_migration_pmd
   -> folio_add_anon_rmap_pmd
     -> __folio_add_anon_rmap

stack 2:
__split_huge_pmd_locked
   -> folio_add_anon_rmap_ptes
      -> __folio_add_anon_rmap

stack 3:
remove_migration_pmd
   -> folio_add_anon_rmap_pmd
      -> __folio_add_anon_rmap (RMAP_LEVEL_PMD)

stack 4:
try_to_merge_one_page
   -> replace_page
     -> folio_add_anon_rmap_pte
       -> __folio_add_anon_rmap

__folio_add_anon_rmap() therefore only needs to handle the
folio_test_anon(folio)==true case now, so the !folio_test_anon(folio)
path within it can be removed.
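
For illustration only (not part of the patch), the resulting split in
responsibilities looks roughly like the sketch below. The map_one_pte()
helper and its newly_allocated parameter are hypothetical, and the sketch
assumes the folio_add_new_anon_rmap() variant that takes an rmap_t flags
argument; on kernels where it takes no flags, drop that argument:

#include <linux/mm.h>
#include <linux/rmap.h>

static void map_one_pte(struct folio *folio, struct page *page,
			struct vm_area_struct *vma, unsigned long addr,
			bool newly_allocated)
{
	if (newly_allocated) {
		/*
		 * A freshly allocated folio is not anonymous yet:
		 * folio_add_new_anon_rmap() makes it so by setting up
		 * folio->mapping and folio->index via __folio_set_anon().
		 */
		folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
	} else {
		/*
		 * Migration, THP split and KSM operate on folios that are
		 * already anonymous, so folio_add_anon_rmap_pte(), and thus
		 * __folio_add_anon_rmap(), can assume folio_test_anon(folio)
		 * and just warn otherwise.
		 */
		folio_add_anon_rmap_pte(folio, page, vma, addr, RMAP_NONE);
	}
}

With the anon-folio setup confined to folio_add_new_anon_rmap(), the old
conditional in __folio_add_anon_rmap() collapses to a single
VM_WARN_ON_FOLIO(), as the diff below shows.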

Suggested-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
Tested-by: Shuai Yuan <yuanshuai@oppo.com>
---
 mm/rmap.c | 17 +++--------------
 1 file changed, 3 insertions(+), 14 deletions(-)

Comments

David Hildenbrand June 18, 2024, 9:55 a.m. UTC | #1
On 18.06.24 01:11, Barry Song wrote:
> From: Barry Song <v-songbaohua@oppo.com>
> 
> The folio_test_anon(folio)==false cases has been relocated to
> folio_add_new_anon_rmap(). Additionally, four other callers
> consistently pass anonymous folios.
> 
> stack 1:
> remove_migration_pmd
>     -> folio_add_anon_rmap_pmd
>       -> __folio_add_anon_rmap
> 
> stack 2:
> __split_huge_pmd_locked
>     -> folio_add_anon_rmap_ptes
>        -> __folio_add_anon_rmap
> 
> stack 3:
> remove_migration_pmd
>     -> folio_add_anon_rmap_pmd
>        -> __folio_add_anon_rmap (RMAP_LEVEL_PMD)
> 
> stack 4:
> try_to_merge_one_page
>     -> replace_page
>       -> folio_add_anon_rmap_pte
>         -> __folio_add_anon_rmap
> 
> __folio_add_anon_rmap() only needs to handle the cases
> folio_test_anon(folio)==true now.
> We can remove the !folio_test_anon(folio)) path within
> __folio_add_anon_rmap() now.
> 
> Suggested-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Barry Song <v-songbaohua@oppo.com>
> Tested-by: Shuai Yuan <yuanshuai@oppo.com>
> ---
>   mm/rmap.c | 17 +++--------------
>   1 file changed, 3 insertions(+), 14 deletions(-)
> 
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 2b19bb92eda5..ddcdda752982 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1297,23 +1297,12 @@ static __always_inline void __folio_add_anon_rmap(struct folio *folio,
>   {
>   	int i, nr, nr_pmdmapped = 0;
>   
> +	VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
> +
>   	nr = __folio_add_rmap(folio, page, nr_pages, level, &nr_pmdmapped);
>   
> -	if (unlikely(!folio_test_anon(folio))) {
> -		VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
> -		/*
> -		 * For a PTE-mapped large folio, we only know that the single
> -		 * PTE is exclusive. Further, __folio_set_anon() might not get
> -		 * folio->index right when not given the address of the head
> -		 * page.
> -		 */
> -		VM_WARN_ON_FOLIO(folio_test_large(folio) &&
> -				 level != RMAP_LEVEL_PMD, folio);
> -		__folio_set_anon(folio, vma, address,
> -				 !!(flags & RMAP_EXCLUSIVE));
> -	} else if (likely(!folio_test_ksm(folio))) {
> +	if (likely(!folio_test_ksm(folio)))
>   		__page_check_anon_rmap(folio, page, vma, address);
> -	}
>   
>   	__folio_mod_stat(folio, nr, nr_pmdmapped);
>   

Lovely!

Acked-by: David Hildenbrand <david@redhat.com>

Patch

diff --git a/mm/rmap.c b/mm/rmap.c
index 2b19bb92eda5..ddcdda752982 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1297,23 +1297,12 @@  static __always_inline void __folio_add_anon_rmap(struct folio *folio,
 {
 	int i, nr, nr_pmdmapped = 0;
 
+	VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
+
 	nr = __folio_add_rmap(folio, page, nr_pages, level, &nr_pmdmapped);
 
-	if (unlikely(!folio_test_anon(folio))) {
-		VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
-		/*
-		 * For a PTE-mapped large folio, we only know that the single
-		 * PTE is exclusive. Further, __folio_set_anon() might not get
-		 * folio->index right when not given the address of the head
-		 * page.
-		 */
-		VM_WARN_ON_FOLIO(folio_test_large(folio) &&
-				 level != RMAP_LEVEL_PMD, folio);
-		__folio_set_anon(folio, vma, address,
-				 !!(flags & RMAP_EXCLUSIVE));
-	} else if (likely(!folio_test_ksm(folio))) {
+	if (likely(!folio_test_ksm(folio)))
 		__page_check_anon_rmap(folio, page, vma, address);
-	}
 
 	__folio_mod_stat(folio, nr, nr_pmdmapped);