mm: zswap: make the lock critical section obvious in shrink_worker()

Message ID 20240803053306.2685541-1-yosryahmed@google.com (mailing list archive)
State New
Series mm: zswap: make the lock critical section obvious in shrink_worker()

Commit Message

Yosry Ahmed Aug. 3, 2024, 5:33 a.m. UTC
Move the comments and spin_{lock/unlock}() calls around in
shrink_worker() to make it obvious the lock is protecting the loop
updating zswap_next_shrink.

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
---

This is intended to be squashed into "mm: zswap: fix global shrinker
memcg iteration".

---
 mm/zswap.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)
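
For quick reference, this is roughly how the critical section reads once the comment and the spin_{lock/unlock}() calls are moved (condensed from the hunk below; the full comment and the rest of the loop body are elided):

	do {
		/* full comment in the hunk below */
		spin_lock(&zswap_shrink_lock);
		do {
			memcg = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
			zswap_next_shrink = memcg;
		} while (memcg && !mem_cgroup_tryget_online(memcg));
		spin_unlock(&zswap_shrink_lock);
		...
	} while (...);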

Comments

Nhat Pham Aug. 3, 2024, 9:35 p.m. UTC | #1
On Fri, Aug 2, 2024 at 10:33 PM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> Move the comments and spin_{lock/unlock}() calls around in
> shrink_worker() to make it obvious the lock is protecting the loop
> updating zswap_next_shrink.
>
> Signed-off-by: Yosry Ahmed <yosryahmed@google.com>

Thanks, it looks cleaner to me too.

Reviewed-by: Nhat Pham <nphamcs@gmail.com>
Chengming Zhou Aug. 5, 2024, 2:06 a.m. UTC | #2
On 2024/8/3 13:33, Yosry Ahmed wrote:
> Move the comments and spin_{lock/unlock}() calls around in
> shrink_worker() to make it obvious the lock is protecting the loop
> updating zswap_next_shrink.
> 
> Signed-off-by: Yosry Ahmed <yosryahmed@google.com>

Yeah, it's clearer.

Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>

Thanks.

> ---
> 
> This is intended to be squashed into "mm: zswap: fix global shrinker
> memcg iteration".
> 
> ---
>   mm/zswap.c | 14 ++++++--------
>   1 file changed, 6 insertions(+), 8 deletions(-)
> 
> diff --git a/mm/zswap.c b/mm/zswap.c
> index babf0abbcc765..df620eacd1d11 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -1364,24 +1364,22 @@ static void shrink_worker(struct work_struct *w)
>   	 * until the next run of shrink_worker().
>   	 */
>   	do {
> -		spin_lock(&zswap_shrink_lock);
> -
>   		/*
>   		 * Start shrinking from the next memcg after zswap_next_shrink.
>   		 * When the offline cleaner has already advanced the cursor,
>   		 * advancing the cursor here overlooks one memcg, but this
>   		 * should be negligibly rare.
> +		 *
> +		 * If we get an online memcg, keep the extra reference in case
> +		 * the original one obtained by mem_cgroup_iter() is dropped by
> +		 * zswap_memcg_offline_cleanup() while we are shrinking the
> +		 * memcg.
>   		 */
> +		spin_lock(&zswap_shrink_lock);
>   		do {
>   			memcg = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
>   			zswap_next_shrink = memcg;
>   		} while (memcg && !mem_cgroup_tryget_online(memcg));
> -		/*
> -		 * Note that if we got an online memcg, we will keep the extra
> -		 * reference in case the original reference obtained by mem_cgroup_iter
> -		 * is dropped by the zswap memcg offlining callback, ensuring that the
> -		 * memcg is not killed when we are reclaiming.
> -		 */
>   		spin_unlock(&zswap_shrink_lock);
>   
>   		if (!memcg) {
Johannes Weiner Aug. 5, 2024, 5:06 p.m. UTC | #3
On Sat, Aug 03, 2024 at 05:33:06AM +0000, Yosry Ahmed wrote:
> Move the comments and spin_{lock/unlock}() calls around in
> shrink_worker() to make it obvious the lock is protecting the loop
> updating zswap_next_shrink.
> 
> Signed-off-by: Yosry Ahmed <yosryahmed@google.com>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>

Patch

diff --git a/mm/zswap.c b/mm/zswap.c
index babf0abbcc765..df620eacd1d11 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1364,24 +1364,22 @@ static void shrink_worker(struct work_struct *w)
 	 * until the next run of shrink_worker().
 	 */
 	do {
-		spin_lock(&zswap_shrink_lock);
-
 		/*
 		 * Start shrinking from the next memcg after zswap_next_shrink.
 		 * When the offline cleaner has already advanced the cursor,
 		 * advancing the cursor here overlooks one memcg, but this
 		 * should be negligibly rare.
+		 *
+		 * If we get an online memcg, keep the extra reference in case
+		 * the original one obtained by mem_cgroup_iter() is dropped by
+		 * zswap_memcg_offline_cleanup() while we are shrinking the
+		 * memcg.
 		 */
+		spin_lock(&zswap_shrink_lock);
 		do {
 			memcg = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
 			zswap_next_shrink = memcg;
 		} while (memcg && !mem_cgroup_tryget_online(memcg));
-		/*
-		 * Note that if we got an online memcg, we will keep the extra
-		 * reference in case the original reference obtained by mem_cgroup_iter
-		 * is dropped by the zswap memcg offlining callback, ensuring that the
-		 * memcg is not killed when we are reclaiming.
-		 */
 		spin_unlock(&zswap_shrink_lock);
 
 		if (!memcg) {
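
The lock matters because the loop above is not the only writer of zswap_next_shrink: the offline cleaner named in the comment, zswap_memcg_offline_cleanup(), advances the same cursor when the memcg it points at goes offline. A rough sketch of that side, reconstructed from the comment and the "mm: zswap: fix global shrinker memcg iteration" patch this is meant to be squashed into (the exact code lives in that series and may differ):

	/* hypothetical reconstruction, not the code from the series */
	void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg)
	{
		/* serialize against the cursor-advancing loop in shrink_worker() */
		spin_lock(&zswap_shrink_lock);
		if (zswap_next_shrink == memcg)
			zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
		spin_unlock(&zswap_shrink_lock);
	}

Both sides read and update zswap_next_shrink under zswap_shrink_lock, which is what the relocated lock calls are meant to make obvious.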