From patchwork Thu Jun 22 08:53:34 2023
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, david@fromorbit.com, tkhai@ya.ru,
    vbabka@suse.cz, roman.gushchin@linux.dev, djwong@kernel.org,
    brauner@kernel.org, paulmck@kernel.org, tytso@mit.edu
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
    linux-arm-msm@vger.kernel.org, dm-devel@redhat.com,
    linux-raid@vger.kernel.org, linux-bcache@vger.kernel.org,
    virtualization@lists.linux-foundation.org, linux-fsdevel@vger.kernel.org,
    linux-ext4@vger.kernel.org, linux-nfs@vger.kernel.org,
    linux-xfs@vger.kernel.org, linux-btrfs@vger.kernel.org,
    Qi Zheng <zhengqi.arch@bytedance.com>
Subject: [PATCH 28/29] mm: shrinkers: convert shrinker_rwsem to mutex
Date: Thu, 22 Jun 2023 16:53:34 +0800
Message-Id: <20230622085335.77010-29-zhengqi.arch@bytedance.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20230622085335.77010-1-zhengqi.arch@bytedance.com>
References: <20230622085335.77010-1-zhengqi.arch@bytedance.com>
MIME-Version: 1.0

Now that there are no readers of shrinker_rwsem, we can simply replace it with a
mutex lock.

Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
 drivers/md/dm-cache-metadata.c |  2 +-
 drivers/md/dm-thin-metadata.c  |  2 +-
 fs/super.c                     |  2 +-
 mm/shrinker_debug.c            | 14 +++++++-------
 mm/vmscan.c                    | 34 +++++++++++++++++-----------------
 5 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/drivers/md/dm-cache-metadata.c b/drivers/md/dm-cache-metadata.c
index acffed750e3e..9e0c69958587 100644
--- a/drivers/md/dm-cache-metadata.c
+++ b/drivers/md/dm-cache-metadata.c
@@ -1828,7 +1828,7 @@ int dm_cache_metadata_abort(struct dm_cache_metadata *cmd)
	 * Replacement block manager (new_bm) is created and old_bm destroyed outside of
	 * cmd root_lock to avoid ABBA deadlock that would result (due to life-cycle of
	 * shrinker associated with the block manager's bufio client vs cmd root_lock).
-	 * - must take shrinker_rwsem without holding cmd->root_lock
+	 * - must take shrinker_mutex without holding cmd->root_lock
	 */
	new_bm = dm_block_manager_create(cmd->bdev, DM_CACHE_METADATA_BLOCK_SIZE << SECTOR_SHIFT,
					 CACHE_MAX_CONCURRENT_LOCKS);

diff --git a/drivers/md/dm-thin-metadata.c b/drivers/md/dm-thin-metadata.c
index fd464fb024c3..9f5cb52c5763 100644
--- a/drivers/md/dm-thin-metadata.c
+++ b/drivers/md/dm-thin-metadata.c
@@ -1887,7 +1887,7 @@ int dm_pool_abort_metadata(struct dm_pool_metadata *pmd)
	 * Replacement block manager (new_bm) is created and old_bm destroyed outside of
	 * pmd root_lock to avoid ABBA deadlock that would result (due to life-cycle of
	 * shrinker associated with the block manager's bufio client vs pmd root_lock).
-	 * - must take shrinker_rwsem without holding pmd->root_lock
+	 * - must take shrinker_mutex without holding pmd->root_lock
	 */
	new_bm = dm_block_manager_create(pmd->bdev, THIN_METADATA_BLOCK_SIZE << SECTOR_SHIFT,
					 THIN_MAX_CONCURRENT_LOCKS);

diff --git a/fs/super.c b/fs/super.c
index 791342bb8ac9..471800ff793a 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -54,7 +54,7 @@ static char *sb_writers_name[SB_FREEZE_LEVELS] = {
 * One thing we have to be careful of with a per-sb shrinker is that we don't
 * drop the last active reference to the superblock from within the shrinker.
 * If that happens we could trigger unregistering the shrinker from within the
- * shrinker path and that leads to deadlock on the shrinker_rwsem. Hence we
+ * shrinker path and that leads to deadlock on the shrinker_mutex. Hence we
 * take a passive reference to the superblock to avoid this from occurring.
 */
 static unsigned long super_cache_scan(struct shrinker *shrink,

diff --git a/mm/shrinker_debug.c b/mm/shrinker_debug.c
index c18fa9b6b7f0..7ad903f84463 100644
--- a/mm/shrinker_debug.c
+++ b/mm/shrinker_debug.c
@@ -7,7 +7,7 @@
 #include

 /* defined in vmscan.c */
-extern struct rw_semaphore shrinker_rwsem;
+extern struct mutex shrinker_mutex;
 extern struct list_head shrinker_list;

 static DEFINE_IDA(shrinker_debugfs_ida);
@@ -177,7 +177,7 @@ int shrinker_debugfs_add(struct shrinker *shrinker)
	char buf[128];
	int id;

-	lockdep_assert_held(&shrinker_rwsem);
+	lockdep_assert_held(&shrinker_mutex);

	/* debugfs isn't initialized yet, add debugfs entries later. */
	if (!shrinker_debugfs_root)
@@ -220,7 +220,7 @@ int shrinker_debugfs_rename(struct shrinker *shrinker, const char *fmt, ...)
	if (!new)
		return -ENOMEM;

-	down_write(&shrinker_rwsem);
+	mutex_lock(&shrinker_mutex);
	old = shrinker->name;
	shrinker->name = new;
@@ -238,7 +238,7 @@ int shrinker_debugfs_rename(struct shrinker *shrinker, const char *fmt, ...)
		shrinker->debugfs_entry = entry;
	}

-	up_write(&shrinker_rwsem);
+	mutex_unlock(&shrinker_mutex);

	kfree_const(old);
@@ -251,7 +251,7 @@ struct dentry *shrinker_debugfs_detach(struct shrinker *shrinker,
	struct dentry *entry = shrinker->debugfs_entry;

-	lockdep_assert_held(&shrinker_rwsem);
+	lockdep_assert_held(&shrinker_mutex);

	kfree_const(shrinker->name);
	shrinker->name = NULL;
@@ -280,14 +280,14 @@ static int __init shrinker_debugfs_init(void)
	shrinker_debugfs_root = dentry;

	/* Create debugfs entries for shrinkers registered at boot */
-	down_write(&shrinker_rwsem);
+	mutex_lock(&shrinker_mutex);
	list_for_each_entry(shrinker, &shrinker_list, list)
		if (!shrinker->debugfs_entry) {
			ret = shrinker_debugfs_add(shrinker);
			if (ret)
				break;
		}
-	up_write(&shrinker_rwsem);
+	mutex_unlock(&shrinker_mutex);

	return ret;
 }

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0711b63e68d9..bcdd97caa403 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -35,7 +35,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 #include
@@ -190,7 +190,7 @@ struct scan_control {
 int vm_swappiness = 60;

 LIST_HEAD(shrinker_list);
-DECLARE_RWSEM(shrinker_rwsem);
+DEFINE_MUTEX(shrinker_mutex);

 #ifdef CONFIG_MEMCG
 static int shrinker_nr_max;
@@ -210,7 +210,7 @@ static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg,
						     int nid)
 {
	return rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_info,
-					 lockdep_is_held(&shrinker_rwsem));
+					 lockdep_is_held(&shrinker_mutex));
 }

 static struct shrinker_info *shrinker_info_rcu(struct mem_cgroup *memcg,
@@ -283,7 +283,7 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
	int nid, size, ret = 0;
	int map_size, defer_size = 0;

-	down_write(&shrinker_rwsem);
+	mutex_lock(&shrinker_mutex);
	map_size = shrinker_map_size(shrinker_nr_max);
	defer_size = shrinker_defer_size(shrinker_nr_max);
	size = map_size + defer_size;
@@ -299,7 +299,7 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
		info->map_nr_max = shrinker_nr_max;
		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info);
	}
-	up_write(&shrinker_rwsem);
+	mutex_unlock(&shrinker_mutex);

	return ret;
 }
@@ -315,7 +315,7 @@ static int expand_shrinker_info(int new_id)
	if (!root_mem_cgroup)
		goto out;

-	lockdep_assert_held(&shrinker_rwsem);
+	lockdep_assert_held(&shrinker_mutex);

	map_size = shrinker_map_size(new_nr_max);
	defer_size = shrinker_defer_size(new_nr_max);
@@ -364,7 +364,7 @@ static int prealloc_memcg_shrinker(struct shrinker *shrinker)
	if (mem_cgroup_disabled())
		return -ENOSYS;

-	down_write(&shrinker_rwsem);
+	mutex_lock(&shrinker_mutex);
	id = idr_alloc(&shrinker_idr, shrinker, 0, 0, GFP_KERNEL);
	if (id < 0)
		goto unlock;
@@ -378,7 +378,7 @@ static int prealloc_memcg_shrinker(struct shrinker *shrinker)
	shrinker->id = id;
	ret = 0;
 unlock:
-	up_write(&shrinker_rwsem);
+	mutex_unlock(&shrinker_mutex);
	return ret;
 }
@@ -388,7 +388,7 @@ static void unregister_memcg_shrinker(struct shrinker *shrinker)
	BUG_ON(id < 0);

-	lockdep_assert_held(&shrinker_rwsem);
+	lockdep_assert_held(&shrinker_mutex);

	idr_remove(&shrinker_idr, id);
 }
@@ -433,7 +433,7 @@ void reparent_shrinker_deferred(struct mem_cgroup *memcg)
		parent = root_mem_cgroup;

	/* Prevent from concurrent shrinker_info expand */
-	down_write(&shrinker_rwsem);
+	mutex_lock(&shrinker_mutex);
	for_each_node(nid) {
		child_info = shrinker_info_protected(memcg, nid);
		parent_info = shrinker_info_protected(parent, nid);
@@ -442,7 +442,7 @@ void reparent_shrinker_deferred(struct mem_cgroup *memcg)
			atomic_long_add(nr, &parent_info->nr_deferred[i]);
		}
	}
-	up_write(&shrinker_rwsem);
+	mutex_unlock(&shrinker_mutex);
 }

 static bool cgroup_reclaim(struct scan_control *sc)
@@ -743,9 +743,9 @@ void free_prealloced_shrinker(struct shrinker *shrinker)
	shrinker->name = NULL;
 #endif
	if (shrinker->flags & SHRINKER_MEMCG_AWARE) {
-		down_write(&shrinker_rwsem);
+		mutex_lock(&shrinker_mutex);
		unregister_memcg_shrinker(shrinker);
-		up_write(&shrinker_rwsem);
+		mutex_unlock(&shrinker_mutex);
		return;
	}
@@ -755,13 +755,13 @@ void free_prealloced_shrinker(struct shrinker *shrinker)

 void register_shrinker_prepared(struct shrinker *shrinker)
 {
-	down_write(&shrinker_rwsem);
+	mutex_lock(&shrinker_mutex);
	refcount_set(&shrinker->refcount, 1);
	init_completion(&shrinker->completion_wait);
	list_add_tail_rcu(&shrinker->list, &shrinker_list);
	shrinker->flags |= SHRINKER_REGISTERED;
	shrinker_debugfs_add(shrinker);
-	up_write(&shrinker_rwsem);
+	mutex_unlock(&shrinker_mutex);
 }

 static int __register_shrinker(struct shrinker *shrinker)
@@ -815,13 +815,13 @@ void unregister_shrinker(struct shrinker *shrinker)
	shrinker_put(shrinker);
	wait_for_completion(&shrinker->completion_wait);

-	down_write(&shrinker_rwsem);
+	mutex_lock(&shrinker_mutex);
	list_del_rcu(&shrinker->list);
	shrinker->flags &= ~SHRINKER_REGISTERED;
	if (shrinker->flags & SHRINKER_MEMCG_AWARE)
		unregister_memcg_shrinker(shrinker);
	debugfs_entry = shrinker_debugfs_detach(shrinker, &debugfs_id);
-	up_write(&shrinker_rwsem);
+	mutex_unlock(&shrinker_mutex);

	shrinker_debugfs_remove(debugfs_entry, debugfs_id);