
[1/3] mm: don't call activate_page() on new ksm pages

Message ID 20200812040423.2707213-1-yuzhao@google.com (mailing list archive)
State New, archived
Series [1/3] mm: don't call activate_page() on new ksm pages

Commit Message

Yu Zhao Aug. 12, 2020, 4:04 a.m. UTC
lru_cache_add_active_or_unevictable() already adds new ksm pages to
active lru. Calling activate_page() isn't really necessary in this
case.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 mm/swapfile.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
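
For context on why the activate_page() call is redundant on the ksm branch: lru_cache_add_active_or_unevictable() sets PG_active itself before putting the page on the LRU. A simplified sketch of that helper as it looked around this series (paraphrased from mm/swap.c, mlock accounting trimmed; not a verbatim copy):

void lru_cache_add_active_or_unevictable(struct page *page,
					 struct vm_area_struct *vma)
{
	if (likely((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) != VM_LOCKED))
		SetPageActive(page);	/* new anon/ksm pages go straight to the active list */
	else if (!TestSetPageMlocked(page))
		count_vm_event(UNEVICTABLE_PGMLOCKED);	/* mlocked pages end up on the unevictable list */

	lru_cache_add(page);
}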

Comments

Yang Shi Aug. 13, 2020, 5:19 a.m. UTC | #1
On Tue, Aug 11, 2020 at 9:04 PM Yu Zhao <yuzhao@google.com> wrote:
>
> lru_cache_add_active_or_unevictable() already adds new ksm pages to
> active lru. Calling activate_page() isn't really necessary in this
> case.
>
> Signed-off-by: Yu Zhao <yuzhao@google.com>
> ---
>  mm/swapfile.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 6c26916e95fd..cf115ea26a20 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1913,16 +1913,16 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
>                    pte_mkold(mk_pte(page, vma->vm_page_prot)));
>         if (page == swapcache) {
>                 page_add_anon_rmap(page, vma, addr, false);
> +               /*
> +                * Move the page to the active list so it is not
> +                * immediately swapped out again after swapon.
> +                */
> +               activate_page(page);

Actually I think we could just remove this activate_page() call with
Joonsoo's anonymous page workingset series merged. The active bit will
be taken care of by workingset_refault().
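
For readers who haven't followed that series: eviction leaves a shadow entry behind, and on refault the page is activated only if its refault distance says it belongs to the working set. A purely illustrative sketch of that decision; the helper name here is hypothetical, the real logic lives in workingset_refault() in mm/workingset.c:

/*
 * Illustrative sketch only (hypothetical helper, not the real
 * workingset_refault()): a page that refaults within the estimated
 * working set is re-activated, which is what makes the explicit
 * activate_page() above unnecessary.
 */
static void refault_activate(struct page *page, unsigned long refault_distance,
			     unsigned long workingset_size)
{
	if (refault_distance <= workingset_size)
		SetPageActive(page);
}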

>         } else { /* ksm created a completely new copy */
>                 page_add_new_anon_rmap(page, vma, addr, false);
>                 lru_cache_add_active_or_unevictable(page, vma);

And it looks like Linus's latest tree has already changed this to
lru_cache_add_inactive_or_unevictable() in commit b518154e59
("mm/vmscan: protect the workingset on anonymous LRU").

Added Joonsoo to the loop.
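
With that commit, the new-page path no longer sets PG_active at all; roughly (simplified from mm/swap.c after b518154e59, mlock accounting trimmed):

void lru_cache_add_inactive_or_unevictable(struct page *page,
					   struct vm_area_struct *vma)
{
	if (unlikely((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) == VM_LOCKED) &&
	    !TestSetPageMlocked(page))
		count_vm_event(UNEVICTABLE_PGMLOCKED);	/* mlocked: unevictable list */

	lru_cache_add(page);	/* note: no SetPageActive() any more */
}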

>         }
>         swap_free(entry);
> -       /*
> -        * Move the page to the active list so it is not
> -        * immediately swapped out again after swapon.
> -        */
> -       activate_page(page);
>  out:
>         pte_unmap_unlock(pte, ptl);
>         if (page != swapcache) {
> --
> 2.28.0.236.gb10cc79966-goog
>
>
Yu Zhao Aug. 13, 2020, 7:34 a.m. UTC | #2
On Wed, Aug 12, 2020 at 10:19:24PM -0700, Yang Shi wrote:
> On Tue, Aug 11, 2020 at 9:04 PM Yu Zhao <yuzhao@google.com> wrote:
> >
> > lru_cache_add_active_or_unevictable() already adds new ksm pages to
> > active lru. Calling activate_page() isn't really necessary in this
> > case.
> >
> > Signed-off-by: Yu Zhao <yuzhao@google.com>
> > ---
> >  mm/swapfile.c | 10 +++++-----
> >  1 file changed, 5 insertions(+), 5 deletions(-)
> >
> > diff --git a/mm/swapfile.c b/mm/swapfile.c
> > index 6c26916e95fd..cf115ea26a20 100644
> > --- a/mm/swapfile.c
> > +++ b/mm/swapfile.c
> > @@ -1913,16 +1913,16 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
> >                    pte_mkold(mk_pte(page, vma->vm_page_prot)));
> >         if (page == swapcache) {
> >                 page_add_anon_rmap(page, vma, addr, false);
> > +               /*
> > +                * Move the page to the active list so it is not
> > +                * immediately swapped out again after swapon.
> > +                */
> > +               activate_page(page);
> 
> Actually I think we could just remove this activate_page() call with
> Joonsoo's anonymous page workingset series merged. The active bit will
> be taken care of by workingset_refault().
> 
> >         } else { /* ksm created a completely new copy */
> >                 page_add_new_anon_rmap(page, vma, addr, false);
> >                 lru_cache_add_active_or_unevictable(page, vma);
> 
> And it looks like Linus's latest tree has already changed this to
> lru_cache_add_inactive_or_unevictable() in commit b518154e59
> ("mm/vmscan: protect the workingset on anonymous LRU").

Oops, apparently my tree is out of date. I'll work on a new version
that removes the superfluous activate_page(). Meanwhile, can you
please take a look at the rest of this series and let me know if
there is anything else that we might want to change? Thank you.
Yang Shi Aug. 17, 2020, 8:48 p.m. UTC | #3
On Thu, Aug 13, 2020 at 12:34 AM Yu Zhao <yuzhao@google.com> wrote:
>
> On Wed, Aug 12, 2020 at 10:19:24PM -0700, Yang Shi wrote:
> > On Tue, Aug 11, 2020 at 9:04 PM Yu Zhao <yuzhao@google.com> wrote:
> > >
> > > lru_cache_add_active_or_unevictable() already adds new ksm pages to
> > > active lru. Calling activate_page() isn't really necessary in this
> > > case.
> > >
> > > Signed-off-by: Yu Zhao <yuzhao@google.com>
> > > ---
> > >  mm/swapfile.c | 10 +++++-----
> > >  1 file changed, 5 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/mm/swapfile.c b/mm/swapfile.c
> > > index 6c26916e95fd..cf115ea26a20 100644
> > > --- a/mm/swapfile.c
> > > +++ b/mm/swapfile.c
> > > @@ -1913,16 +1913,16 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
> > >                    pte_mkold(mk_pte(page, vma->vm_page_prot)));
> > >         if (page == swapcache) {
> > >                 page_add_anon_rmap(page, vma, addr, false);
> > > +               /*
> > > +                * Move the page to the active list so it is not
> > > +                * immediately swapped out again after swapon.
> > > +                */
> > > +               activate_page(page);
> >
> > Actually I think we could just remove this activate_page() call with
> > Joonsoo's anonymous page workingset series merged. The active bit will
> > be taken care of by workingset_refault().
> >
> > >         } else { /* ksm created a completely new copy */
> > >                 page_add_new_anon_rmap(page, vma, addr, false);
> > >                 lru_cache_add_active_or_unevictable(page, vma);
> >
> > And it looks like Linus's latest tree has already changed this to
> > lru_cache_add_inactive_or_unevictable() in commit b518154e59
> > ("mm/vmscan: protect the workingset on anonymous LRU").
>
> Oops, apparently my tree is out of date. I'll work on a new version
> that removes the superfluous activate_page(). Meanwhile, can you
> please take a look at the rest of this series and let me know if
> there is anything else that we might want to change? Thank you.

I took a look at those two patches. For #2 I didn't spot anything
wrong, but I may have missed something. For #3, TBH I don't think the
justification is strong enough, since it just moves clearing of the
PG_waiters bit to allocation time; someone could argue that may hurt
allocation latency.
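
To make that trade-off concrete, the worry is roughly the following shape (hypothetical function name, not patch 3 itself): clearing PG_waiters when the allocator hands out a page puts a test-and-clear on every allocation instead of on the release side.

/*
 * Hypothetical illustration, not the actual patch: doing the clear here
 * means every page allocation pays for the PageWaiters() check, which
 * is the latency concern mentioned above.
 */
static inline void clear_waiters_at_alloc(struct page *page)
{
	if (unlikely(PageWaiters(page)))
		ClearPageWaiters(page);
}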

Patch

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 6c26916e95fd..cf115ea26a20 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1913,16 +1913,16 @@  static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 		   pte_mkold(mk_pte(page, vma->vm_page_prot)));
 	if (page == swapcache) {
 		page_add_anon_rmap(page, vma, addr, false);
+		/*
+		 * Move the page to the active list so it is not
+		 * immediately swapped out again after swapon.
+		 */
+		activate_page(page);
 	} else { /* ksm created a completely new copy */
 		page_add_new_anon_rmap(page, vma, addr, false);
 		lru_cache_add_active_or_unevictable(page, vma);
 	}
 	swap_free(entry);
-	/*
-	 * Move the page to the active list so it is not
-	 * immediately swapped out again after swapon.
-	 */
-	activate_page(page);
 out:
 	pte_unmap_unlock(pte, ptl);
 	if (page != swapcache) {