
[1/5] mm: khugepaged: expand is_refcount_suitable() to support file folios

Message ID d6f8e4451910da1de0420eb82724dd85c368741c.1724054125.git.baolin.wang@linux.alibaba.com
State New
Series support shmem mTHP collapse

Commit Message

Baolin Wang Aug. 19, 2024, 8:14 a.m. UTC
Expand is_refcount_suitable() to support reference checks for file folios,
in preparation for supporting shmem mTHP collapse.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/khugepaged.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)
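
For context, the refcount the updated helper expects can be summarized as
follows; the shmem example and its numbers are illustrative and are not taken
from the patch itself:

	/*
	 * Expected folio refcount for a collapse candidate (per this patch):
	 *
	 *   anon, not in swapcache:  folio_mapcount()
	 *   anon, in swapcache:      folio_mapcount() + folio_nr_pages()
	 *   file (e.g. shmem):       folio_mapcount() + folio_nr_pages()
	 *                            + 1 if folio_test_private()
	 *
	 * e.g. an order-4 shmem folio (16 pages), fully PTE-mapped once and
	 * carrying no private data, is expected to hold 16 + 16 = 32
	 * references; any extra reference means someone else still holds the
	 * folio and the scan bails out with SCAN_PAGE_COUNT.
	 */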

Comments

David Hildenbrand Aug. 19, 2024, 8:36 a.m. UTC | #1
On 19.08.24 10:14, Baolin Wang wrote:
> Expand is_refcount_suitable() to support reference checks for file folios,
> in preparation for supporting shmem mTHP collapse.
> 
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
>   mm/khugepaged.c | 11 ++++++++---
>   1 file changed, 8 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index cdd1d8655a76..f11b4f172e61 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -549,8 +549,14 @@ static bool is_refcount_suitable(struct folio *folio)
>   	int expected_refcount;
>   
>   	expected_refcount = folio_mapcount(folio);
> -	if (folio_test_swapcache(folio))
> +	if (folio_test_anon(folio)) {
> +		expected_refcount += folio_test_swapcache(folio) ?
> +					folio_nr_pages(folio) : 0;
> +	} else {
>   		expected_refcount += folio_nr_pages(folio);
> +		if (folio_test_private(folio))
> +			expected_refcount++;
> +	}

Alternatively, a bit neater

if (!folio_test_anon(folio) || folio_test_swapcache(folio))
	expected_refcount += folio_nr_pages(folio);
if (folio_test_private(folio))
	expected_refcount++;

The latter check should be fine even for anon folios (although always false)
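
For reference, a sketch of how the whole helper might read with that
simplification folded in (my reconstruction, not code posted in this thread):

static bool is_refcount_suitable(struct folio *folio)
{
	int expected_refcount;

	expected_refcount = folio_mapcount(folio);
	if (!folio_test_anon(folio) || folio_test_swapcache(folio))
		expected_refcount += folio_nr_pages(folio);
	if (folio_test_private(folio))
		expected_refcount++;

	return folio_ref_count(folio) == expected_refcount;
}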


>   
>   	return folio_ref_count(folio) == expected_refcount;
>   }
> @@ -2285,8 +2291,7 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
>   			break;
>   		}
>   
> -		if (folio_ref_count(folio) !=
> -		    1 + folio_mapcount(folio) + folio_test_private(folio)) {

The "1" is due to the pagecache, right? IIUC, we don't hold a raised 
folio refcount as we do the xas_for_each().
Baolin Wang Aug. 19, 2024, 8:42 a.m. UTC | #2
On 2024/8/19 16:36, David Hildenbrand wrote:
> On 19.08.24 10:14, Baolin Wang wrote:
>> Expand is_refcount_suitable() to support reference checks for file
>> folios, in preparation for supporting shmem mTHP collapse.
>>
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> ---
>>   mm/khugepaged.c | 11 ++++++++---
>>   1 file changed, 8 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index cdd1d8655a76..f11b4f172e61 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -549,8 +549,14 @@ static bool is_refcount_suitable(struct folio 
>> *folio)
>>       int expected_refcount;
>>       expected_refcount = folio_mapcount(folio);
>> -    if (folio_test_swapcache(folio))
>> +    if (folio_test_anon(folio)) {
>> +        expected_refcount += folio_test_swapcache(folio) ?
>> +                    folio_nr_pages(folio) : 0;
>> +    } else {
>>           expected_refcount += folio_nr_pages(folio);
>> +        if (folio_test_private(folio))
>> +            expected_refcount++;
>> +    }
> 
> Alternatively, a bit neater
> 
> if (!folio_test_anon(folio) || folio_test_swapcache(folio))
>      expected_refcount += folio_nr_pages(folio);
> if (folio_test_private(folio))
>      expected_refcount++;
> 
> The latter check should be fine even for anon folios (although always 
> false)

Looks better. Will do in v2.

>>       return folio_ref_count(folio) == expected_refcount;
>>   }
>> @@ -2285,8 +2291,7 @@ static int hpage_collapse_scan_file(struct 
>> mm_struct *mm, unsigned long addr,
>>               break;
>>           }
>> -        if (folio_ref_count(folio) !=
>> -            1 + folio_mapcount(folio) + folio_test_private(folio)) {
> 
> The "1" is due to the pagecache, right? IIUC, we don't hold a raised 
> folio refcount as we do the xas_for_each().

Right.
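
To spell out why the replaced check and the helper agree (my reading of the
exchange above, not wording from the thread): inserting a folio into the page
cache raises its refcount by folio_nr_pages(), so for an order-0 folio that
contribution is exactly the "1" in the old test:

	/*
	 * Old check, order-0 page-cache folio:
	 *   refcount == 1 (pagecache) + folio_mapcount() + folio_test_private()
	 * New helper, any order (file folios):
	 *   refcount == folio_nr_pages() (pagecache) + folio_mapcount()
	 *               + folio_test_private()
	 * With folio_nr_pages() == 1 the two are identical, so switching the
	 * file scan path to is_refcount_suitable() keeps the order-0 behaviour
	 * while also covering large (mTHP) shmem folios.
	 */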

Patch

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index cdd1d8655a76..f11b4f172e61 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -549,8 +549,14 @@  static bool is_refcount_suitable(struct folio *folio)
 	int expected_refcount;
 
 	expected_refcount = folio_mapcount(folio);
-	if (folio_test_swapcache(folio))
+	if (folio_test_anon(folio)) {
+		expected_refcount += folio_test_swapcache(folio) ?
+					folio_nr_pages(folio) : 0;
+	} else {
 		expected_refcount += folio_nr_pages(folio);
+		if (folio_test_private(folio))
+			expected_refcount++;
+	}
 
 	return folio_ref_count(folio) == expected_refcount;
 }
@@ -2285,8 +2291,7 @@  static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
 			break;
 		}
 
-		if (folio_ref_count(folio) !=
-		    1 + folio_mapcount(folio) + folio_test_private(folio)) {
+		if (!is_refcount_suitable(folio)) {
 			result = SCAN_PAGE_COUNT;
 			break;
 		}