
[v2] mm:vmscan: shrink skip folio mapped by an exiting task

Message ID 20240131131244.1144-1-justinjiang@vivo.com (mailing list archive)
State New

Commit Message

zhiguojiang Jan. 31, 2024, 1:12 p.m. UTC
If a folio being shrunk by shrink_inactive_list is mapped by an exiting
task, the folio should be freed in the task exit path rather than
reclaimed in the shrink path, because the former takes less time.

When tasks exit while shrink_inactive_list is running, the lruvec folios
that shrink_inactive_list reclaims may be mapped by those exiting tasks.
This is more likely to happen when the system is low on memory, because
more background applications get killed.

shrink_inactive_list then reclaims the exiting tasks' folios from the
lruvecs and swaps out their anon folios, which increases the load on the
tasks that are currently exiting.

This patch alleviates the load of the task exit path: it lets exiting
tasks release their anon folios directly and faster, instead of having
to release the swap entries that shrink_inactive_list has just swapped
those anon folios out to.
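
Concretely, the check added by this patch (the full diff is at the bottom
of this page) sits at the top of folio_referenced_one(), before the
page_vma_mapped_walk() loop, so the rmap walk bails out as soon as it
meets a VMA whose mm is exiting. A trimmed excerpt follows; the comment
wording is only illustrative:

	static bool folio_referenced_one(struct folio *folio,
			struct vm_area_struct *vma, unsigned long address,
			void *arg)
	{
		struct folio_referenced_arg *pra = arg;
		...
		/* vma->vm_mm is exiting: no mm_users left, or already OOM-reaped */
		if (unlikely(!atomic_read(&vma->vm_mm->mm_users)) ||
			unlikely(test_bit(MMF_OOM_SKIP, &vma->vm_mm->flags))) {
			pra->referenced = -1;	/* have reclaim keep the folio */
			return false;		/* stop the rmap walk early */
		}

		while (page_vma_mapped_walk(&pvmw)) {
		...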

Signed-off-by: Zhiguo Jiang <justinjiang@vivo.com>
---

Change log:
v1->v2:
1. The VM_EXITING flag added in the v1 patch is removed, because it
fails to compile on 32-bit systems.

 mm/rmap.c | 7 +++++++
 1 file changed, 7 insertions(+)
 mode change 100644 => 100755 mm/rmap.c

Comments

Andrew Morton Feb. 2, 2024, 9:22 a.m. UTC | #1
On Wed, 31 Jan 2024 21:12:44 +0800 Zhiguo Jiang <justinjiang@vivo.com> wrote:

> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -840,6 +840,13 @@ static bool folio_referenced_one(struct folio *folio,
>  	int referenced = 0;
>  	unsigned long start = address, ptes = 0;
>  
> +	/* Skip this folio if it's mapped by an exiting task */
> +	if (unlikely(!atomic_read(&vma->vm_mm->mm_users)) ||
> +		unlikely(test_bit(MMF_OOM_SKIP, &vma->vm_mm->flags))) {
> +		pra->referenced = -1;
> +		return false;
> +	}

The code comment explains what the code does.  This is, as usual,
pretty obvious from reading the code!

A better comment is one which explains *why* the code is doing what it
does.
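
For example, a comment that carries the rationale from the changelog
might read something like this (wording purely illustrative, not part of
the posted patch):

	/*
	 * The mm mapping this folio is already being torn down (no mm_users
	 * left, or the OOM reaper has set MMF_OOM_SKIP).  The exit path will
	 * free the folio soon and more cheaply than reclaim can, so skip it
	 * here rather than waste effort on it -- or worse, swap out the
	 * exiting task's anon memory only for exit to free the swap entries.
	 */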
zhiguojiang Feb. 2, 2024, 9:36 a.m. UTC | #2
On 2024/2/2 17:22, Andrew Morton wrote:
> On Wed, 31 Jan 2024 21:12:44 +0800 Zhiguo Jiang <justinjiang@vivo.com> wrote:
>
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -840,6 +840,13 @@ static bool folio_referenced_one(struct folio *folio,
>>   	int referenced = 0;
>>   	unsigned long start = address, ptes = 0;
>>   
>> +	/* Skip this folio if it's mapped by an exiting task */
>> +	if (unlikely(!atomic_read(&vma->vm_mm->mm_users)) ||
>> +		unlikely(test_bit(MMF_OOM_SKIP, &vma->vm_mm->flags))) {
>> +		pra->referenced = -1;
>> +		return false;
>> +	}
> The code comment explains what the code does.  This is, as usual,
> pretty obvious from reading the code!
>
> A better comment is one which explains *why* the code is doing what it
> does.
Yes, thank you for the feedback, and I also appreciate Matthew Wilcox's
professional and patient guidance on patch v1. I will study this
seriously going forward.

Best Regards.

Patch

diff --git a/mm/rmap.c b/mm/rmap.c
index 1cf2bffa48ed..e6702bfafdde
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -840,6 +840,13 @@  static bool folio_referenced_one(struct folio *folio,
 	int referenced = 0;
 	unsigned long start = address, ptes = 0;
 
+	/* Skip this folio if it's mapped by an exiting task */
+	if (unlikely(!atomic_read(&vma->vm_mm->mm_users)) ||
+		unlikely(test_bit(MMF_OOM_SKIP, &vma->vm_mm->flags))) {
+		pra->referenced = -1;
+		return false;
+	}
+
 	while (page_vma_mapped_walk(&pvmw)) {
 		address = pvmw.address;