
[v2,1/4] mm/vmscan: remove the PageDirty check after MADV_FREE pages are page_ref_freezed

Message ID: 20210717065911.61497-2-linmiaohe@huawei.com
State: New
Series: Cleanups for vmscan

Commit Message

Miaohe Lin July 17, 2021, 6:59 a.m. UTC
If MADV_FREE pages are redirtied before they can be reclaimed, they are
put back on the anonymous LRU list by setting the SwapBacked flag, and
they will then be reclaimed via the normal swapout path. But as Yu Zhao
pointed out, "The page has only one reference left, which is from the
isolation. After the caller puts the page back on lru and drops the
reference, the page will be freed anyway. It doesn't matter which lru
it goes." So we don't bother checking PageDirty here.

[Yu Zhao's comment is also quoted in the code.]

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/vmscan.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)
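
For readers less familiar with MADV_FREE, here is a minimal userspace sketch
of the semantics the commit message relies on (a hypothetical illustration,
not part of the patch): pages advised with MADV_FREE may be dropped by the
kernel under memory pressure, but writing to them again before reclaim keeps
them alive as ordinary anonymous memory.

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 4096;
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;

	memset(buf, 'x', len);		/* the anon pages are now dirty */

	/* Tell the kernel it may lazily free this range under memory pressure. */
	if (madvise(buf, len, MADV_FREE))
		perror("madvise(MADV_FREE)");

	/*
	 * Writing again ("redirtying") before the kernel reclaims the pages
	 * cancels the lazy free: reclaim then treats them as ordinary
	 * anonymous pages and the contents are kept.
	 */
	buf[0] = 'y';

	munmap(buf, len);
	return 0;
}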

Comments

Yu Zhao July 18, 2021, 12:58 a.m. UTC | #1
On Sat, Jul 17, 2021 at 12:59 AM Miaohe Lin <linmiaohe@huawei.com> wrote:

> If MADV_FREE pages are redirtied before they can be reclaimed, they are
> put back on the anonymous LRU list by setting the SwapBacked flag, and
> they will then be reclaimed via the normal swapout path. But as Yu Zhao
> pointed out, "The page has only one reference left, which is from the
> isolation. After the caller puts the page back on lru and drops the
> reference, the page will be freed anyway. It doesn't matter which lru
> it goes." So we don't bother checking PageDirty here.
>
> [Yu Zhao's comment is also quoted in the code.]
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>

Reviewed-by: Yu Zhao <yuzhao@google.com>


> ---
>  mm/vmscan.c | 13 ++++++++-----
>  1 file changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index a7602f71ec04..92a515e82b1b 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1627,11 +1627,14 @@ static unsigned int shrink_page_list(struct list_head *page_list,
>                         /* follow __remove_mapping for reference */
>                         if (!page_ref_freeze(page, 1))
>                                 goto keep_locked;
> -                       if (PageDirty(page)) {
> -                               page_ref_unfreeze(page, 1);
> -                               goto keep_locked;
> -                       }
> -
> +                       /*
> +                        * The page has only one reference left, which is
> +                        * from the isolation. After the caller puts the
> +                        * page back on lru and drops the reference, the
> +                        * page will be freed anyway. It doesn't matter
> +                        * which lru it goes. So we don't bother checking
> +                        * PageDirty here.
> +                        */
>                         count_vm_event(PGLAZYFREED);
>                         count_memcg_page_event(page, PGLAZYFREED);
>                 } else if (!mapping || !__remove_mapping(mapping, page, true,
> --
> 2.23.0
>
>

Patch

diff --git a/mm/vmscan.c b/mm/vmscan.c
index a7602f71ec04..92a515e82b1b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1627,11 +1627,14 @@  static unsigned int shrink_page_list(struct list_head *page_list,
 			/* follow __remove_mapping for reference */
 			if (!page_ref_freeze(page, 1))
 				goto keep_locked;
-			if (PageDirty(page)) {
-				page_ref_unfreeze(page, 1);
-				goto keep_locked;
-			}
-
+			/*
+			 * The page has only one reference left, which is
+			 * from the isolation. After the caller puts the
+			 * page back on lru and drops the reference, the
+			 * page will be freed anyway. It doesn't matter
+			 * which lru it goes. So we don't bother checking
+			 * PageDirty here.
+			 */
 			count_vm_event(PGLAZYFREED);
 			count_memcg_page_event(page, PGLAZYFREED);
 		} else if (!mapping || !__remove_mapping(mapping, page, true,
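
To make the reasoning in the new comment easier to follow: page_ref_freeze(page, 1)
succeeds only when the isolation reference is the last one, and it atomically drops
the count to 0. Below is a small userspace analog using C11 atomics; it illustrates
the idea only and is not the kernel implementation (which lives in
include/linux/page_ref.h).

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/*
 * Userspace analog of the refcount freeze: succeed only if the count is
 * exactly 'expected_refs', and atomically drop it to 0 so no further
 * references can be taken through the normal paths.
 */
static bool ref_freeze(atomic_int *refcount, int expected_refs)
{
	int expected = expected_refs;

	return atomic_compare_exchange_strong(refcount, &expected, 0);
}

int main(void)
{
	atomic_int refcount = 1;	/* only the isolation reference is left */

	if (ref_freeze(&refcount, 1))
		printf("frozen: no other users, the page can be freed now\n");
	else
		printf("freeze failed: someone else still holds a reference\n");

	return 0;
}

In the removed branch, a dirty page was unfrozen and sent to keep_locked, i.e.
put back on an LRU list; but since the isolation reference is the only one
left, the page would be freed as soon as that reference was dropped anyway, so
the extra PageDirty check bought nothing.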