
[v3,1/2] mm: zswap: fix global shrinker memcg iteration

Message ID 20240720044127.508042-2-flintglass@gmail.com (mailing list archive)
State New
Series mm: zswap: fixes for global shrinker

Commit Message

Takero Funaki July 20, 2024, 4:41 a.m. UTC
This patch fixes an issue where the zswap global shrinker stopped
iterating through the memcg tree.

The problem was that shrink_worker() would stop iterating when a memcg
was being offlined and restart from the tree root.  Now, it properly
handles the offline memcg and continues shrinking with the next memcg.

To avoid holding a refcount on an offline memcg encountered during the
memcg tree walk, shrink_worker() must continue iterating, releasing the
offline memcg, so that the next memcg stored in the cursor is online.

The offline memcg cleaner has also been changed to avoid the same issue.
When the memcg following the offlined one is also offline, the refcount
stored in the iteration cursor would be held until the next
shrink_worker() run. The cleaner must therefore keep advancing the
cursor, releasing each offline memcg, until an online memcg (or the end
of the tree) is reached.

Fixes: a65b0e7607cc ("zswap: make shrinking memcg-aware")
Signed-off-by: Takero Funaki <flintglass@gmail.com>
---
 mm/zswap.c | 77 +++++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 56 insertions(+), 21 deletions(-)
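
To make the refcount reasoning above concrete, the following is a
minimal userspace mock of the cursor-advance pattern the patch adds to
shrink_worker(). It is only a sketch: struct memcg, iter_next() and
tryget_online() are hypothetical stand-ins for the kernel's struct
mem_cgroup, mem_cgroup_iter() and mem_cgroup_tryget_online(), and a
fixed array stands in for the memcg tree.

/*
 * Userspace mock of the cursor-advance pattern from this patch.
 * All names here are stand-ins, not kernel APIs: the point is only to
 * show that offline entries are skipped without ever being left
 * referenced in the cursor, while the selected online entry keeps an
 * extra reference.
 */
#include <stdbool.h>
#include <stdio.h>

struct memcg {
	const char *name;
	bool online;
	int refcount;	/* extra reference held by the "shrinker" */
};

/* a tiny "tree" walked round-robin, like mem_cgroup_iter() with a prev */
static struct memcg tree[] = {
	{ "A", true,  0 },
	{ "B", false, 0 },
	{ "C", false, 0 },
	{ "D", true,  0 },
};
#define NMEMCG (sizeof(tree) / sizeof(tree[0]))

/* plays the role of zswap_next_shrink; pretend the last walk stopped at A */
static struct memcg *cursor = &tree[0];

/* next memcg after @prev, or NULL once a full round trip is done */
static struct memcg *iter_next(struct memcg *prev)
{
	size_t i = prev ? (size_t)(prev - tree) + 1 : 0;

	return i < NMEMCG ? &tree[i] : NULL;
}

/* like mem_cgroup_tryget_online(): take a reference only if online */
static bool tryget_online(struct memcg *m)
{
	if (!m->online)
		return false;
	m->refcount++;
	return true;
}

int main(void)
{
	struct memcg *memcg;

	/* the do/while added by the patch: skip offline memcgs */
	do {
		cursor = iter_next(cursor);
		memcg = cursor;
	} while (memcg && !tryget_online(memcg));

	printf("selected: %s\n", memcg ? memcg->name : "(end of tree)");
	for (size_t i = 0; i < NMEMCG; i++)
		printf("  %s: online=%d refcount=%d\n",
		       tree[i].name, (int)tree[i].online, tree[i].refcount);
	return 0;
}

Starting from A, the walk skips the offline B and C without leaving a
reference on them and stops at D holding one extra reference, which is
the invariant the commit message relies on.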

Comments

Nhat Pham July 22, 2024, 9:39 p.m. UTC | #1
On Fri, Jul 19, 2024 at 9:41 PM Takero Funaki <flintglass@gmail.com> wrote:
>
> This patch fixes an issue where the zswap global shrinker stopped
> iterating through the memcg tree.
>
> The problem was that shrink_worker() would stop iterating when a memcg
> was being offlined and restart from the tree root.  Now, it properly
> handles the offline memcg and continues shrinking with the next memcg.
>
> To avoid holding refcount of offline memcg encountered during the memcg
> tree walking, shrink_worker() must continue iterating to release the
> offline memcg to ensure the next memcg stored in the cursor is online.
>
> The offline memcg cleaner has also been changed to avoid the same issue.
> When the next memcg of the offlined memcg is also offline, the refcount
> stored in the iteration cursor was held until the next shrink_worker()
> run. The cleaner must release the offline memcg recursively.
>
> Fixes: a65b0e7607cc ("zswap: make shrinking memcg-aware")
> Signed-off-by: Takero Funaki <flintglass@gmail.com>
Hmm LGTM for the most part - a couple nits
[...]
> +                       zswap_next_shrink = mem_cgroup_iter(NULL,
> +                                       zswap_next_shrink, NULL);
nit: this can fit in a single line right? Looks like it's exactly 80 characters.
[...]
> +                       zswap_next_shrink = mem_cgroup_iter(NULL,
> +                                               zswap_next_shrink, NULL);
Same with this.
[...]
> +               /*
> +                * We verified the memcg is online and got an extra memcg
> +                * reference.  Our memcg might be offlined concurrently but the
> +                * respective offline cleaner must be waiting for our lock.
> +                */
>                 spin_unlock(&zswap_shrink_lock);
nit: can we remove this spin_unlock() call + the one within the `if
(!memcg)` block, and just do it unconditionally outside of if
(!memcg)? Looks like we are unlocking regardless of whether memcg is
null or not.

memcg is a local variable, not protected by zswap_shrink_lock, so this
should be fine right?

Otherwise:
Reviewed-by: Nhat Pham <nphamcs@gmail.com>
Yosry Ahmed July 23, 2024, 6:30 a.m. UTC | #2
On Fri, Jul 19, 2024 at 9:41 PM Takero Funaki <flintglass@gmail.com> wrote:
>
> This patch fixes an issue where the zswap global shrinker stopped
> iterating through the memcg tree.
>
> The problem was that shrink_worker() would stop iterating when a memcg
> was being offlined and restart from the tree root.  Now, it properly
> handles the offline memcg and continues shrinking with the next memcg.
>
> To avoid holding refcount of offline memcg encountered during the memcg
> tree walking, shrink_worker() must continue iterating to release the
> offline memcg to ensure the next memcg stored in the cursor is online.
>
> The offline memcg cleaner has also been changed to avoid the same issue.
> When the next memcg of the offlined memcg is also offline, the refcount
> stored in the iteration cursor was held until the next shrink_worker()
> run. The cleaner must release the offline memcg recursively.
>
> Fixes: a65b0e7607cc ("zswap: make shrinking memcg-aware")
> Signed-off-by: Takero Funaki <flintglass@gmail.com>
> ---
>  mm/zswap.c | 77 +++++++++++++++++++++++++++++++++++++++---------------
>  1 file changed, 56 insertions(+), 21 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index a50e2986cd2f..6528668c9af3 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -775,12 +775,33 @@ void zswap_folio_swapin(struct folio *folio)
>         }
>  }
>
> +/*
> + * This function should be called when a memcg is being offlined.
> + *
> + * Since the global shrinker shrink_worker() may hold a reference
> + * of the memcg, we must check and release the reference in
> + * zswap_next_shrink.
> + *
> + * shrink_worker() must handle the case where this function releases
> + * the reference of memcg being shrunk.
> + */
>  void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg)
>  {
>         /* lock out zswap shrinker walking memcg tree */
>         spin_lock(&zswap_shrink_lock);
> -       if (zswap_next_shrink == memcg)
> -               zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
> +       if (zswap_next_shrink == memcg) {
> +               do {
> +                       zswap_next_shrink = mem_cgroup_iter(NULL,
> +                                       zswap_next_shrink, NULL);
> +               } while (zswap_next_shrink &&
> +                               !mem_cgroup_online(zswap_next_shrink));
> +               /*
> +                * We verified the next memcg is online.  Even if the next
> +                * memcg is being offlined here, another cleaner must be
> +                * waiting for our lock.  We can leave the online memcg
> +                * reference.
> +                */

I think this comment and the similar one at the end of the loop in
shrink_worker() are very similar and not necessary. The large comment
above the loop in shrink_worker() already explains how that loop and
the offline memcg cleaner interact, and I think the locking follows
naturally from there. You can explicitly mention the locking there as
well if you think it helps, but I think these comments are a little
repetitive and do not add much value.

I don't feel strongly about it tho, if Nhat feels like they add value
then I am okay with that.

Otherwise, and with Nhat's other comments addressed:
Acked-by: Yosry Ahmed <yosryahmed@google.com>

> +       }
>         spin_unlock(&zswap_shrink_lock);
>  }
>
> @@ -1319,18 +1340,38 @@ static void shrink_worker(struct work_struct *w)
>         /* Reclaim down to the accept threshold */
>         thr = zswap_accept_thr_pages();
>
> -       /* global reclaim will select cgroup in a round-robin fashion. */
> +       /* global reclaim will select cgroup in a round-robin fashion.
> +        *
> +        * We save iteration cursor memcg into zswap_next_shrink,
> +        * which can be modified by the offline memcg cleaner
> +        * zswap_memcg_offline_cleanup().
> +        *
> +        * Since the offline cleaner is called only once, we cannot leave an
> +        * offline memcg reference in zswap_next_shrink.
> +        * We can rely on the cleaner only if we get online memcg under lock.
> +        *
> +        * If we get an offline memcg, we cannot determine if the cleaner has
> +        * already been called or will be called later. We must put back the
> +        * reference before returning from this function. Otherwise, the
> +        * offline memcg left in zswap_next_shrink will hold the reference
> +        * until the next run of shrink_worker().
> +        */
>         do {
>                 spin_lock(&zswap_shrink_lock);
> -               zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
> -               memcg = zswap_next_shrink;
>
>                 /*
> -                * We need to retry if we have gone through a full round trip, or if we
> -                * got an offline memcg (or else we risk undoing the effect of the
> -                * zswap memcg offlining cleanup callback). This is not catastrophic
> -                * per se, but it will keep the now offlined memcg hostage for a while.
> -                *
> +                * Start shrinking from the next memcg after zswap_next_shrink.
> +                * When the offline cleaner has already advanced the cursor,
> +                * advancing the cursor here overlooks one memcg, but this
> +                * should be negligibly rare.
> +                */
> +               do {
> +                       zswap_next_shrink = mem_cgroup_iter(NULL,
> +                                               zswap_next_shrink, NULL);
> +                       memcg = zswap_next_shrink;
> +               } while (memcg && !mem_cgroup_tryget_online(memcg));
> +
> +               /*
>                  * Note that if we got an online memcg, we will keep the extra
>                  * reference in case the original reference obtained by mem_cgroup_iter
>                  * is dropped by the zswap memcg offlining callback, ensuring that the
> @@ -1344,17 +1385,11 @@ static void shrink_worker(struct work_struct *w)
>                         goto resched;
>                 }
>
> -               if (!mem_cgroup_tryget_online(memcg)) {
> -                       /* drop the reference from mem_cgroup_iter() */
> -                       mem_cgroup_iter_break(NULL, memcg);
> -                       zswap_next_shrink = NULL;
> -                       spin_unlock(&zswap_shrink_lock);
> -
> -                       if (++failures == MAX_RECLAIM_RETRIES)
> -                               break;
> -
> -                       goto resched;
> -               }
> +               /*
> +                * We verified the memcg is online and got an extra memcg
> +                * reference.  Our memcg might be offlined concurrently but the
> +                * respective offline cleaner must be waiting for our lock.
> +                */
>                 spin_unlock(&zswap_shrink_lock);
>
>                 ret = shrink_memcg(memcg);
> --
> 2.43.0
>
Yosry Ahmed July 23, 2024, 6:37 a.m. UTC | #3
On Fri, Jul 19, 2024 at 9:41 PM Takero Funaki <flintglass@gmail.com> wrote:
>
> This patch fixes an issue where the zswap global shrinker stopped
> iterating through the memcg tree.
>
> The problem was that shrink_worker() would stop iterating when a memcg
> was being offlined and restart from the tree root.  Now, it properly
> handles the offline memcg and continues shrinking with the next memcg.

It is probably worth explicitly calling out that before this change,
the shrinker would count an offline memcg as a failure and stop after
hitting 16 failures, but after this change, a failure means hitting
the end of the tree. This means that cgroup trees with a lot of
offline cgroups will now observe significantly higher zswap writeback
activity.

Similarly, in the next patch commit log, please explicitly call out
the expected behavioral change, that hitting an empty memcg or
reaching the end of a tree is no longer considered a failure if there
is progress, which means that trees with a few cgroups using zswap
will now observe significantly higher zswap writeback activity.

>
> To avoid holding refcount of offline memcg encountered during the memcg
> tree walking, shrink_worker() must continue iterating to release the
> offline memcg to ensure the next memcg stored in the cursor is online.
>
> The offline memcg cleaner has also been changed to avoid the same issue.
> When the next memcg of the offlined memcg is also offline, the refcount
> stored in the iteration cursor was held until the next shrink_worker()
> run. The cleaner must release the offline memcg recursively.
>
> Fixes: a65b0e7607cc ("zswap: make shrinking memcg-aware")
> Signed-off-by: Takero Funaki <flintglass@gmail.com>
> ---
>  mm/zswap.c | 77 +++++++++++++++++++++++++++++++++++++++---------------
>  1 file changed, 56 insertions(+), 21 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index a50e2986cd2f..6528668c9af3 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -775,12 +775,33 @@ void zswap_folio_swapin(struct folio *folio)
>         }
>  }
>
> +/*
> + * This function should be called when a memcg is being offlined.
> + *
> + * Since the global shrinker shrink_worker() may hold a reference
> + * of the memcg, we must check and release the reference in
> + * zswap_next_shrink.
> + *
> + * shrink_worker() must handle the case where this function releases
> + * the reference of memcg being shrunk.
> + */
>  void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg)
>  {
>         /* lock out zswap shrinker walking memcg tree */
>         spin_lock(&zswap_shrink_lock);
> -       if (zswap_next_shrink == memcg)
> -               zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
> +       if (zswap_next_shrink == memcg) {
> +               do {
> +                       zswap_next_shrink = mem_cgroup_iter(NULL,
> +                                       zswap_next_shrink, NULL);
> +               } while (zswap_next_shrink &&
> +                               !mem_cgroup_online(zswap_next_shrink));
> +               /*
> +                * We verified the next memcg is online.  Even if the next
> +                * memcg is being offlined here, another cleaner must be
> +                * waiting for our lock.  We can leave the online memcg
> +                * reference.
> +                */
> +       }
>         spin_unlock(&zswap_shrink_lock);
>  }
>
> @@ -1319,18 +1340,38 @@ static void shrink_worker(struct work_struct *w)
>         /* Reclaim down to the accept threshold */
>         thr = zswap_accept_thr_pages();
>
> -       /* global reclaim will select cgroup in a round-robin fashion. */
> +       /* global reclaim will select cgroup in a round-robin fashion.
> +        *
> +        * We save iteration cursor memcg into zswap_next_shrink,
> +        * which can be modified by the offline memcg cleaner
> +        * zswap_memcg_offline_cleanup().
> +        *
> +        * Since the offline cleaner is called only once, we cannot leave an
> +        * offline memcg reference in zswap_next_shrink.
> +        * We can rely on the cleaner only if we get online memcg under lock.
> +        *
> +        * If we get an offline memcg, we cannot determine if the cleaner has
> +        * already been called or will be called later. We must put back the
> +        * reference before returning from this function. Otherwise, the
> +        * offline memcg left in zswap_next_shrink will hold the reference
> +        * until the next run of shrink_worker().
> +        */
>         do {
>                 spin_lock(&zswap_shrink_lock);
> -               zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
> -               memcg = zswap_next_shrink;
>
>                 /*
> -                * We need to retry if we have gone through a full round trip, or if we
> -                * got an offline memcg (or else we risk undoing the effect of the
> -                * zswap memcg offlining cleanup callback). This is not catastrophic
> -                * per se, but it will keep the now offlined memcg hostage for a while.
> -                *
> +                * Start shrinking from the next memcg after zswap_next_shrink.
> +                * When the offline cleaner has already advanced the cursor,
> +                * advancing the cursor here overlooks one memcg, but this
> +                * should be negligibly rare.
> +                */
> +               do {
> +                       zswap_next_shrink = mem_cgroup_iter(NULL,
> +                                               zswap_next_shrink, NULL);
> +                       memcg = zswap_next_shrink;
> +               } while (memcg && !mem_cgroup_tryget_online(memcg));
> +
> +               /*
>                  * Note that if we got an online memcg, we will keep the extra
>                  * reference in case the original reference obtained by mem_cgroup_iter
>                  * is dropped by the zswap memcg offlining callback, ensuring that the
> @@ -1344,17 +1385,11 @@ static void shrink_worker(struct work_struct *w)
>                         goto resched;
>                 }
>
> -               if (!mem_cgroup_tryget_online(memcg)) {
> -                       /* drop the reference from mem_cgroup_iter() */
> -                       mem_cgroup_iter_break(NULL, memcg);
> -                       zswap_next_shrink = NULL;
> -                       spin_unlock(&zswap_shrink_lock);
> -
> -                       if (++failures == MAX_RECLAIM_RETRIES)
> -                               break;
> -
> -                       goto resched;
> -               }
> +               /*
> +                * We verified the memcg is online and got an extra memcg
> +                * reference.  Our memcg might be offlined concurrently but the
> +                * respective offline cleaner must be waiting for our lock.
> +                */
>                 spin_unlock(&zswap_shrink_lock);
>
>                 ret = shrink_memcg(memcg);
> --
> 2.43.0
>
Takero Funaki July 23, 2024, 3:35 p.m. UTC | #4
On Tue, Jul 23, 2024 at 6:39 AM Nhat Pham <nphamcs@gmail.com> wrote:
>
> On Fri, Jul 19, 2024 at 9:41 PM Takero Funaki <flintglass@gmail.com> wrote:
> >
> > This patch fixes an issue where the zswap global shrinker stopped
> > iterating through the memcg tree.
> >
> > The problem was that shrink_worker() would stop iterating when a memcg
> > was being offlined and restart from the tree root.  Now, it properly
> > handles the offline memcg and continues shrinking with the next memcg.
> >
> > To avoid holding refcount of offline memcg encountered during the memcg
> > tree walking, shrink_worker() must continue iterating to release the
> > offline memcg to ensure the next memcg stored in the cursor is online.
> >
> > The offline memcg cleaner has also been changed to avoid the same issue.
> > When the next memcg of the offlined memcg is also offline, the refcount
> > stored in the iteration cursor was held until the next shrink_worker()
> > run. The cleaner must release the offline memcg recursively.
> >
> > Fixes: a65b0e7607cc ("zswap: make shrinking memcg-aware")
> > Signed-off-by: Takero Funaki <flintglass@gmail.com>
> Hmm LGTM for the most part - a couple nits
> [...]
> > +                       zswap_next_shrink = mem_cgroup_iter(NULL,
> > +                                       zswap_next_shrink, NULL);
> nit: this can fit in a single line right? Looks like it's exactly 80 characters.

Isn't that over 90 chars? But yes, we can reduce the line breaks by
using memcg as a temporary, like:
-       if (zswap_next_shrink == memcg)
-               zswap_next_shrink = mem_cgroup_iter(NULL,
zswap_next_shrink, NULL);
+       if (zswap_next_shrink == memcg) {
+               do {
+                       memcg = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
+                       zswap_next_shrink = memcg;
+               } while (memcg && !mem_cgroup_online(memcg));
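
Applied to the v3 hunk, the whole cleaner would then read roughly as
below. This is only a sketch of the suggestion, not the code actually
resubmitted:

void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg)
{
	/* lock out zswap shrinker walking memcg tree */
	spin_lock(&zswap_shrink_lock);
	if (zswap_next_shrink == memcg) {
		/* advance the cursor to the next online memcg, if any */
		do {
			memcg = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
			zswap_next_shrink = memcg;
		} while (memcg && !mem_cgroup_online(memcg));
	}
	spin_unlock(&zswap_shrink_lock);
}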


> [...]
> > +                       zswap_next_shrink = mem_cgroup_iter(NULL,
> > +                                               zswap_next_shrink, NULL);
> Same with this.
> [...]
> > +               /*
> > +                * We verified the memcg is online and got an extra memcg
> > +                * reference.  Our memcg might be offlined concurrently but the
> > +                * respective offline cleaner must be waiting for our lock.
> > +                */
> >                 spin_unlock(&zswap_shrink_lock);
> nit: can we remove this spin_unlock() call + the one within the `if
> (!memcg)` block, and just do it unconditionally outside of if
> (!memcg)? Looks like we are unlocking regardless of whether memcg is
> null or not.
>
> memcg is a local variable, not protected by zswap_shrink_lock, so this
> should be fine right?
>
> Otherwise:
> Reviewed-by: Nhat Pham <nphamcs@gmail.com>

Ah that's right. We no longer modify zswap_next_shrink in the if
branches. Merging the two spin_unlock() calls.
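
With the two unlocks merged, the head of the reclaim loop in
shrink_worker() would look roughly like the fragment below; a sketch
assuming the only further change from v3 is dropping the per-branch
spin_unlock() calls, with the rest of the loop body elided:

	do {
		spin_lock(&zswap_shrink_lock);

		/* advance the cursor to the next online memcg (or NULL) */
		do {
			memcg = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
			zswap_next_shrink = memcg;
		} while (memcg && !mem_cgroup_tryget_online(memcg));

		/*
		 * memcg is a local copy, not protected by zswap_shrink_lock,
		 * so the lock can be dropped whether memcg is NULL or not.
		 */
		spin_unlock(&zswap_shrink_lock);

		if (!memcg) {
			/* end of a full round trip counts as a failure */
			if (++failures == MAX_RECLAIM_RETRIES)
				break;
			goto resched;
		}

		ret = shrink_memcg(memcg);
		/* remainder of the loop body and loop condition unchanged */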
Nhat Pham July 23, 2024, 3:55 p.m. UTC | #5
On Tue, Jul 23, 2024 at 8:35 AM Takero Funaki <flintglass@gmail.com> wrote:
>
> On Tue, Jul 23, 2024 at 6:39 AM Nhat Pham <nphamcs@gmail.com> wrote:
> >
> > On Fri, Jul 19, 2024 at 9:41 PM Takero Funaki <flintglass@gmail.com> wrote:
> > >
> > > This patch fixes an issue where the zswap global shrinker stopped
> > > iterating through the memcg tree.
> > >
> > > The problem was that shrink_worker() would stop iterating when a memcg
> > > was being offlined and restart from the tree root.  Now, it properly
> > > handles the offline memcg and continues shrinking with the next memcg.
> > >
> > > To avoid holding refcount of offline memcg encountered during the memcg
> > > tree walking, shrink_worker() must continue iterating to release the
> > > offline memcg to ensure the next memcg stored in the cursor is online.
> > >
> > > The offline memcg cleaner has also been changed to avoid the same issue.
> > > When the next memcg of the offlined memcg is also offline, the refcount
> > > stored in the iteration cursor was held until the next shrink_worker()
> > > run. The cleaner must release the offline memcg recursively.
> > >
> > > Fixes: a65b0e7607cc ("zswap: make shrinking memcg-aware")
> > > Signed-off-by: Takero Funaki <flintglass@gmail.com>
> > Hmm LGTM for the most part - a couple nits
> > [...]
> > > +                       zswap_next_shrink = mem_cgroup_iter(NULL,
> > > +                                       zswap_next_shrink, NULL);
> > nit: this can fit in a single line right? Looks like it's exactly 80 characters.
>
> Isn't that over 90 chars? But yes, we can reduce line breaks using
> memcg as temporary, like:

Huh. Weird. I applied the patch locally, and it looked 80 chars to me ha.

Anyway - just some nits. If checkpatch complains then yeah no need to fix this.
Takero Funaki July 23, 2024, 3:56 p.m. UTC | #6
On Tue, Jul 23, 2024 at 3:37 PM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> On Fri, Jul 19, 2024 at 9:41 PM Takero Funaki <flintglass@gmail.com> wrote:
> >
> > This patch fixes an issue where the zswap global shrinker stopped
> > iterating through the memcg tree.
> >
> > The problem was that shrink_worker() would stop iterating when a memcg
> > was being offlined and restart from the tree root.  Now, it properly
> > handles the offline memcg and continues shrinking with the next memcg.
>
> It is probably worth explicitly calling out that before this change,
> the shrinker would stop considering an offline memcg as a failure and
> stop after hitting 16 failures, but after this change, a failure is
> hitting the end of the tree. This means that cgroup trees with a lot
> of offline cgroups will now observe significantly higher zswap
> writeback activity.
>
> Similarly, in the next patch commit log, please explicitly call out
> the expected behavioral change, that hitting an empty memcg or
> reaching the end of a tree is no longer considered a failure if there
> is progress, which means that trees with a few cgroups using zswap
> will now observe significantly higher zswap writeback activity.
>

Thanks for the comments.  Dropping the comments and changing the
commit message to:
    The problem was that shrink_worker() would restart iterating the memcg
    tree from the root, considering an offline memcg as a failure, and
    abort shrinking after encountering the offline memcg 16 times even if
    there is only one offline memcg. After this change, an offline memcg in
    the tree is no longer considered a failure. This allows the shrinker to
    continue shrinking the other online memcgs regardless of whether an
    offline memcg exists, which gives higher zswap writeback activity.

These issues do not require many offline memcgs or empty memcgs.
Without these patches, the shrinker would abort shrinking too early
even if there is only one offline memcg or only one empty memcg. The
shrinker counted the same memcg as another failure on every tree walk,
and the failures limited writeback to up to 16 pages * memcgs.
Chengming Zhou July 26, 2024, 2:47 a.m. UTC | #7
On 2024/7/20 12:41, Takero Funaki wrote:
> This patch fixes an issue where the zswap global shrinker stopped
> iterating through the memcg tree.
> 
> The problem was that shrink_worker() would stop iterating when a memcg
> was being offlined and restart from the tree root.  Now, it properly
> handles the offline memcg and continues shrinking with the next memcg.
> 
> To avoid holding refcount of offline memcg encountered during the memcg
> tree walking, shrink_worker() must continue iterating to release the
> offline memcg to ensure the next memcg stored in the cursor is online.
> 
> The offline memcg cleaner has also been changed to avoid the same issue.
> When the next memcg of the offlined memcg is also offline, the refcount
> stored in the iteration cursor was held until the next shrink_worker()
> run. The cleaner must release the offline memcg recursively.
> 
> Fixes: a65b0e7607cc ("zswap: make shrinking memcg-aware")
> Signed-off-by: Takero Funaki <flintglass@gmail.com>

Looks good to me! With other comments addressed:

Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>

Thanks.

> ---
>   mm/zswap.c | 77 +++++++++++++++++++++++++++++++++++++++---------------
>   1 file changed, 56 insertions(+), 21 deletions(-)
> 
> diff --git a/mm/zswap.c b/mm/zswap.c
> index a50e2986cd2f..6528668c9af3 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -775,12 +775,33 @@ void zswap_folio_swapin(struct folio *folio)
>   	}
>   }
>   
> +/*
> + * This function should be called when a memcg is being offlined.
> + *
> + * Since the global shrinker shrink_worker() may hold a reference
> + * of the memcg, we must check and release the reference in
> + * zswap_next_shrink.
> + *
> + * shrink_worker() must handle the case where this function releases
> + * the reference of memcg being shrunk.
> + */
>   void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg)
>   {
>   	/* lock out zswap shrinker walking memcg tree */
>   	spin_lock(&zswap_shrink_lock);
> -	if (zswap_next_shrink == memcg)
> -		zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
> +	if (zswap_next_shrink == memcg) {
> +		do {
> +			zswap_next_shrink = mem_cgroup_iter(NULL,
> +					zswap_next_shrink, NULL);
> +		} while (zswap_next_shrink &&
> +				!mem_cgroup_online(zswap_next_shrink));
> +		/*
> +		 * We verified the next memcg is online.  Even if the next
> +		 * memcg is being offlined here, another cleaner must be
> +		 * waiting for our lock.  We can leave the online memcg
> +		 * reference.
> +		 */
> +	}
>   	spin_unlock(&zswap_shrink_lock);
>   }
>   
> @@ -1319,18 +1340,38 @@ static void shrink_worker(struct work_struct *w)
>   	/* Reclaim down to the accept threshold */
>   	thr = zswap_accept_thr_pages();
>   
> -	/* global reclaim will select cgroup in a round-robin fashion. */
> +	/* global reclaim will select cgroup in a round-robin fashion.
> +	 *
> +	 * We save iteration cursor memcg into zswap_next_shrink,
> +	 * which can be modified by the offline memcg cleaner
> +	 * zswap_memcg_offline_cleanup().
> +	 *
> +	 * Since the offline cleaner is called only once, we cannot leave an
> +	 * offline memcg reference in zswap_next_shrink.
> +	 * We can rely on the cleaner only if we get online memcg under lock.
> +	 *
> +	 * If we get an offline memcg, we cannot determine if the cleaner has
> +	 * already been called or will be called later. We must put back the
> +	 * reference before returning from this function. Otherwise, the
> +	 * offline memcg left in zswap_next_shrink will hold the reference
> +	 * until the next run of shrink_worker().
> +	 */
>   	do {
>   		spin_lock(&zswap_shrink_lock);
> -		zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
> -		memcg = zswap_next_shrink;
>   
>   		/*
> -		 * We need to retry if we have gone through a full round trip, or if we
> -		 * got an offline memcg (or else we risk undoing the effect of the
> -		 * zswap memcg offlining cleanup callback). This is not catastrophic
> -		 * per se, but it will keep the now offlined memcg hostage for a while.
> -		 *
> +		 * Start shrinking from the next memcg after zswap_next_shrink.
> +		 * When the offline cleaner has already advanced the cursor,
> +		 * advancing the cursor here overlooks one memcg, but this
> +		 * should be negligibly rare.
> +		 */
> +		do {
> +			zswap_next_shrink = mem_cgroup_iter(NULL,
> +						zswap_next_shrink, NULL);
> +			memcg = zswap_next_shrink;
> +		} while (memcg && !mem_cgroup_tryget_online(memcg));
> +
> +		/*
>   		 * Note that if we got an online memcg, we will keep the extra
>   		 * reference in case the original reference obtained by mem_cgroup_iter
>   		 * is dropped by the zswap memcg offlining callback, ensuring that the
> @@ -1344,17 +1385,11 @@ static void shrink_worker(struct work_struct *w)
>   			goto resched;
>   		}
>   
> -		if (!mem_cgroup_tryget_online(memcg)) {
> -			/* drop the reference from mem_cgroup_iter() */
> -			mem_cgroup_iter_break(NULL, memcg);
> -			zswap_next_shrink = NULL;
> -			spin_unlock(&zswap_shrink_lock);
> -
> -			if (++failures == MAX_RECLAIM_RETRIES)
> -				break;
> -
> -			goto resched;
> -		}
> +		/*
> +		 * We verified the memcg is online and got an extra memcg
> +		 * reference.  Our memcg might be offlined concurrently but the
> +		 * respective offline cleaner must be waiting for our lock.
> +		 */
>   		spin_unlock(&zswap_shrink_lock);
>   
>   		ret = shrink_memcg(memcg);

Patch

diff --git a/mm/zswap.c b/mm/zswap.c
index a50e2986cd2f..6528668c9af3 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -775,12 +775,33 @@  void zswap_folio_swapin(struct folio *folio)
 	}
 }
 
+/*
+ * This function should be called when a memcg is being offlined.
+ *
+ * Since the global shrinker shrink_worker() may hold a reference
+ * of the memcg, we must check and release the reference in
+ * zswap_next_shrink.
+ *
+ * shrink_worker() must handle the case where this function releases
+ * the reference of memcg being shrunk.
+ */
 void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg)
 {
 	/* lock out zswap shrinker walking memcg tree */
 	spin_lock(&zswap_shrink_lock);
-	if (zswap_next_shrink == memcg)
-		zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
+	if (zswap_next_shrink == memcg) {
+		do {
+			zswap_next_shrink = mem_cgroup_iter(NULL,
+					zswap_next_shrink, NULL);
+		} while (zswap_next_shrink &&
+				!mem_cgroup_online(zswap_next_shrink));
+		/*
+		 * We verified the next memcg is online.  Even if the next
+		 * memcg is being offlined here, another cleaner must be
+		 * waiting for our lock.  We can leave the online memcg
+		 * reference.
+		 */
+	}
 	spin_unlock(&zswap_shrink_lock);
 }
 
@@ -1319,18 +1340,38 @@  static void shrink_worker(struct work_struct *w)
 	/* Reclaim down to the accept threshold */
 	thr = zswap_accept_thr_pages();
 
-	/* global reclaim will select cgroup in a round-robin fashion. */
+	/* global reclaim will select cgroup in a round-robin fashion.
+	 *
+	 * We save iteration cursor memcg into zswap_next_shrink,
+	 * which can be modified by the offline memcg cleaner
+	 * zswap_memcg_offline_cleanup().
+	 *
+	 * Since the offline cleaner is called only once, we cannot leave an
+	 * offline memcg reference in zswap_next_shrink.
+	 * We can rely on the cleaner only if we get online memcg under lock.
+	 *
+	 * If we get an offline memcg, we cannot determine if the cleaner has
+	 * already been called or will be called later. We must put back the
+	 * reference before returning from this function. Otherwise, the
+	 * offline memcg left in zswap_next_shrink will hold the reference
+	 * until the next run of shrink_worker().
+	 */
 	do {
 		spin_lock(&zswap_shrink_lock);
-		zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
-		memcg = zswap_next_shrink;
 
 		/*
-		 * We need to retry if we have gone through a full round trip, or if we
-		 * got an offline memcg (or else we risk undoing the effect of the
-		 * zswap memcg offlining cleanup callback). This is not catastrophic
-		 * per se, but it will keep the now offlined memcg hostage for a while.
-		 *
+		 * Start shrinking from the next memcg after zswap_next_shrink.
+		 * When the offline cleaner has already advanced the cursor,
+		 * advancing the cursor here overlooks one memcg, but this
+		 * should be negligibly rare.
+		 */
+		do {
+			zswap_next_shrink = mem_cgroup_iter(NULL,
+						zswap_next_shrink, NULL);
+			memcg = zswap_next_shrink;
+		} while (memcg && !mem_cgroup_tryget_online(memcg));
+
+		/*
 		 * Note that if we got an online memcg, we will keep the extra
 		 * reference in case the original reference obtained by mem_cgroup_iter
 		 * is dropped by the zswap memcg offlining callback, ensuring that the
@@ -1344,17 +1385,11 @@  static void shrink_worker(struct work_struct *w)
 			goto resched;
 		}
 
-		if (!mem_cgroup_tryget_online(memcg)) {
-			/* drop the reference from mem_cgroup_iter() */
-			mem_cgroup_iter_break(NULL, memcg);
-			zswap_next_shrink = NULL;
-			spin_unlock(&zswap_shrink_lock);
-
-			if (++failures == MAX_RECLAIM_RETRIES)
-				break;
-
-			goto resched;
-		}
+		/*
+		 * We verified the memcg is online and got an extra memcg
+		 * reference.  Our memcg might be offlined concurrently but the
+		 * respective offline cleaner must be waiting for our lock.
+		 */
 		spin_unlock(&zswap_shrink_lock);
 
 		ret = shrink_memcg(memcg);