
[RFC,v3,02/12] mm: memcontrol: introduce compact_lock_page_lruvec_irqsave

Message ID 20210421070059.69361-3-songmuchun@bytedance.com (mailing list archive)
State New, archived
Series Use obj_cgroup APIs to charge the LRU pages

Commit Message

Muchun Song April 21, 2021, 7 a.m. UTC
If we reuse the objcg APIs to charge LRU pages, page_memcg() can
change when an LRU page is reparented. In that case we need to
acquire the new lruvec lock:

    lruvec = mem_cgroup_page_lruvec(page);

    // The page is reparented.

    compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);

    // Acquired the wrong lruvec lock and need to retry.

But compact_lock_irqsave() only takes the lruvec lock as a
parameter, so it cannot notice this change. If the helper takes the
page as a parameter instead, it can use page_memcg() to detect that
the page has moved and reacquire the new lruvec lock. Therefore
compact_lock_irqsave() is not suitable here. Similar to
lock_page_lruvec_irqsave(), introduce
compact_lock_page_lruvec_irqsave() to acquire the lruvec lock in the
compaction path.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/compaction.c | 27 ++++++++++++++++++++++++---
 1 file changed, 24 insertions(+), 3 deletions(-)
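
[Editor's note] The recheck-and-retry step described in the commit
message is not implemented by this patch; it only becomes possible
once the lock helper is keyed on the page. A minimal sketch of that
pattern, with an illustrative helper name and the irq-saving variant
assumed (not code from this series):

static struct lruvec *
page_lruvec_lock_irqsave_sketch(struct page *page, unsigned long *flags)
{
	struct lruvec *lruvec;

retry:
	/* Look up the lruvec via the page's current memcg. */
	lruvec = mem_cgroup_page_lruvec(page);
	spin_lock_irqsave(&lruvec->lru_lock, *flags);

	/* The page was reparented while we were taking the lock; retry. */
	if (unlikely(lruvec_memcg(lruvec) != page_memcg(page))) {
		spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
		goto retry;
	}

	return lruvec;
}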

Comments

Roman Gushchin May 25, 2021, 5:21 p.m. UTC | #1
On Wed, Apr 21, 2021 at 03:00:49PM +0800, Muchun Song wrote:
> If we reuse the objcg APIs to charge LRU pages, page_memcg() can
> change when an LRU page is reparented. In that case we need to
> acquire the new lruvec lock:
> 
>     lruvec = mem_cgroup_page_lruvec(page);
> 
>     // The page is reparented.
> 
>     compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
> 
>     // Acquired the wrong lruvec lock and need to retry.
> 
> But compact_lock_irqsave() only takes the lruvec lock as a
> parameter, so it cannot notice this change. If the helper takes the
> page as a parameter instead, it can use page_memcg() to detect that
> the page has moved and reacquire the new lruvec lock. Therefore
> compact_lock_irqsave() is not suitable here. Similar to
> lock_page_lruvec_irqsave(), introduce
> compact_lock_page_lruvec_irqsave() to acquire the lruvec lock in the
> compaction path.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

Acked-by: Roman Gushchin <guro@fb.com>

> ---
>  mm/compaction.c | 27 ++++++++++++++++++++++++---
>  1 file changed, 24 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 1c500e697c88..082293587cc6 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -511,6 +511,29 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
>  	return true;
>  }
>  
> +static struct lruvec *
> +compact_lock_page_lruvec_irqsave(struct page *page, unsigned long *flags,
> +				 struct compact_control *cc)

Maybe compact_lock_page_irqsave() to make it more similar to
compact_lock_irqsave()? But it's up to you.

Thanks!
Muchun Song May 26, 2021, 2:49 a.m. UTC | #2
On Wed, May 26, 2021 at 1:21 AM Roman Gushchin <guro@fb.com> wrote:
>
> On Wed, Apr 21, 2021 at 03:00:49PM +0800, Muchun Song wrote:
> > If we reuse the objcg APIs to charge LRU pages, page_memcg() can
> > change when an LRU page is reparented. In that case we need to
> > acquire the new lruvec lock:
> >
> >     lruvec = mem_cgroup_page_lruvec(page);
> >
> >     // The page is reparented.
> >
> >     compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
> >
> >     // Acquired the wrong lruvec lock and need to retry.
> >
> > But compact_lock_irqsave() only takes the lruvec lock as a
> > parameter, so it cannot notice this change. If the helper takes the
> > page as a parameter instead, it can use page_memcg() to detect that
> > the page has moved and reacquire the new lruvec lock. Therefore
> > compact_lock_irqsave() is not suitable here. Similar to
> > lock_page_lruvec_irqsave(), introduce
> > compact_lock_page_lruvec_irqsave() to acquire the lruvec lock in the
> > compaction path.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>
> Acked-by: Roman Gushchin <guro@fb.com>

Thanks.

>
> > ---
> >  mm/compaction.c | 27 ++++++++++++++++++++++++---
> >  1 file changed, 24 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/compaction.c b/mm/compaction.c
> > index 1c500e697c88..082293587cc6 100644
> > --- a/mm/compaction.c
> > +++ b/mm/compaction.c
> > @@ -511,6 +511,29 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
> >       return true;
> >  }
> >
> > +static struct lruvec *
> > +compact_lock_page_lruvec_irqsave(struct page *page, unsigned long *flags,
> > +                              struct compact_control *cc)
>
> Maybe compact_lock_page_irqsave() to make it more similar to
> compact_lock_irqsave()? But it's up to you.

I prefer the shorter name; compact_lock_page_irqsave() is a good and
concise name. Thanks.

>
> Thanks!
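
[Editor's note] For reference, the rename agreed on above would leave
the helper with this signature (the body is unchanged from the patch
below):

static struct lruvec *
compact_lock_page_irqsave(struct page *page, unsigned long *flags,
			  struct compact_control *cc);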

Patch

diff --git a/mm/compaction.c b/mm/compaction.c
index 1c500e697c88..082293587cc6 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -511,6 +511,29 @@  static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
 	return true;
 }
 
+static struct lruvec *
+compact_lock_page_lruvec_irqsave(struct page *page, unsigned long *flags,
+				 struct compact_control *cc)
+{
+	struct lruvec *lruvec;
+
+	lruvec = mem_cgroup_page_lruvec(page);
+
+	/* Track if the lock is contended in async mode */
+	if (cc->mode == MIGRATE_ASYNC && !cc->contended) {
+		if (spin_trylock_irqsave(&lruvec->lru_lock, *flags))
+			goto out;
+
+		cc->contended = true;
+	}
+
+	spin_lock_irqsave(&lruvec->lru_lock, *flags);
+out:
+	lruvec_memcg_debug(lruvec, page);
+
+	return lruvec;
+}
+
 /*
  * Compaction requires the taking of some coarse locks that are potentially
  * very heavily contended. The lock should be periodically unlocked to avoid
@@ -1001,11 +1024,9 @@  isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			if (locked)
 				unlock_page_lruvec_irqrestore(locked, flags);
 
-			compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
+			lruvec = compact_lock_page_lruvec_irqsave(page, &flags, cc);
 			locked = lruvec;
 
-			lruvec_memcg_debug(lruvec, page);
-
 			/* Try get exclusive access under lock */
 			if (!skip_updated) {
 				skip_updated = true;