From patchwork Mon Jun 24 17:53:13 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kairui Song
X-Patchwork-Id: 13709942
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Matthew Wilcox, Johannes Weiner, Roman Gushchin,
	Waiman Long, Shakeel Butt, Nhat Pham, Michal Hocko,
	Chengming Zhou, Qi Zheng, Muchun Song, Chris Li, Yosry Ahmed,
	"Huang, Ying", Kairui Song
Subject: [PATCH 7/7] mm/list_lru: Simplify the list_lru walk callback function
Date: Tue, 25 Jun 2024 01:53:13 +0800
Message-ID: <20240624175313.47329-8-ryncsn@gmail.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20240624175313.47329-1-ryncsn@gmail.com>
References: <20240624175313.47329-1-ryncsn@gmail.com>
Reply-To: Kairui Song
MIME-Version: 1.0

From: Kairui Song

Now isolation no longer takes the list_lru global node lock; only the
per-cgroup lock is used. Since that lock is embedded in the list_lru_one
being walked, there is no longer any need to pass the lock to the walk
callback explicitly.

Signed-off-by: Kairui Song
---
 drivers/android/binder_alloc.c |  5 ++---
 drivers/android/binder_alloc.h |  2 +-
 fs/dcache.c                    |  4 ++--
 fs/gfs2/quota.c                |  2 +-
 fs/inode.c                     |  4 ++--
 fs/nfs/nfs42xattr.c            |  4 ++--
 fs/nfsd/filecache.c            |  5 +----
 fs/xfs/xfs_buf.c               |  2 --
 fs/xfs/xfs_qm.c                |  5 ++---
 include/linux/list_lru.h       |  2 +-
 mm/list_lru.c                  |  2 +-
 mm/workingset.c                | 15 +++++++--------
 mm/zswap.c                     |  4 ++--
 13 files changed, 24 insertions(+), 32 deletions(-)
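
A note for reviewers, not part of the change itself: with this series a
list_lru_walk_cb receives only the item, the list_lru_one being walked,
and the opaque argument; the lock to drop is the lru's own embedded
&lru->lock. Below is a minimal sketch of a callback under the new
contract. The function name "my_lru_isolate" and its dispose-list
argument are illustrative only, not code from this series:

static enum lru_status my_lru_isolate(struct list_head *item,
				      struct list_lru_one *lru, void *cb_arg)
{
	struct list_head *dispose = cb_arg;

	/*
	 * The per-cgroup lock is reachable through @lru, so no separate
	 * spinlock_t * parameter is needed anymore. A callback that drops
	 * &lru->lock must tell the walker by returning LRU_REMOVED_RETRY
	 * (or LRU_RETRY) so the lock gets retaken.
	 */
	list_lru_isolate_move(lru, item, dispose);
	spin_unlock(&lru->lock);
	return LRU_REMOVED_RETRY;
}
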
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index dd47d621e561..c55cce54f20c 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -1055,9 +1055,8 @@ void binder_alloc_vma_close(struct binder_alloc *alloc)
  */
 enum lru_status binder_alloc_free_page(struct list_head *item,
 				       struct list_lru_one *lru,
-				       spinlock_t *lock,
 				       void *cb_arg)
-	__must_hold(lock)
+	__must_hold(&lru->lock)
 {
 	struct binder_lru_page *page = container_of(item, typeof(*page), lru);
 	struct binder_alloc *alloc = page->alloc;
@@ -1092,7 +1091,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 
 	list_lru_isolate(lru, item);
 	spin_unlock(&alloc->lock);
-	spin_unlock(lock);
+	spin_unlock(&lru->lock);
 
 	if (vma) {
 		trace_binder_unmap_user_start(alloc, index);
diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
index 70387234477e..c02c8ebcb466 100644
--- a/drivers/android/binder_alloc.h
+++ b/drivers/android/binder_alloc.h
@@ -118,7 +118,7 @@ static inline void binder_selftest_alloc(struct binder_alloc *alloc) {}
 #endif
 enum lru_status binder_alloc_free_page(struct list_head *item,
 				       struct list_lru_one *lru,
-				       spinlock_t *lock, void *cb_arg);
+				       void *cb_arg);
 struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
 					   size_t data_size,
 					   size_t offsets_size,
diff --git a/fs/dcache.c b/fs/dcache.c
index 407095188f83..4e5f8382ee3f 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -1077,7 +1077,7 @@ void shrink_dentry_list(struct list_head *list)
 }
 
 static enum lru_status dentry_lru_isolate(struct list_head *item,
-		struct list_lru_one *lru, spinlock_t *lru_lock, void *arg)
+		struct list_lru_one *lru, void *arg)
 {
 	struct list_head *freeable = arg;
 	struct dentry	*dentry = container_of(item, struct dentry, d_lru);
@@ -1158,7 +1158,7 @@ long prune_dcache_sb(struct super_block *sb, struct shrink_control *sc)
 }
 
 static enum lru_status dentry_lru_isolate_shrink(struct list_head *item,
-		struct list_lru_one *lru, spinlock_t *lru_lock, void *arg)
+		struct list_lru_one *lru, void *arg)
 {
 	struct list_head *freeable = arg;
 	struct dentry	*dentry = container_of(item, struct dentry, d_lru);
diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
index aa9cf0102848..31aece125a75 100644
--- a/fs/gfs2/quota.c
+++ b/fs/gfs2/quota.c
@@ -152,7 +152,7 @@ static void gfs2_qd_list_dispose(struct list_head *list)
 }
 
 static enum lru_status gfs2_qd_isolate(struct list_head *item,
-		struct list_lru_one *lru, spinlock_t *lru_lock, void *arg)
+		struct list_lru_one *lru, void *arg)
 {
 	struct list_head *dispose = arg;
 	struct gfs2_quota_data *qd =
diff --git a/fs/inode.c b/fs/inode.c
index 35da4e54e365..1fb52253a843 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -803,7 +803,7 @@ void invalidate_inodes(struct super_block *sb)
  * with this flag set because they are the inodes that are out of order.
  */
 static enum lru_status inode_lru_isolate(struct list_head *item,
-		struct list_lru_one *lru, spinlock_t *lru_lock, void *arg)
+		struct list_lru_one *lru, void *arg)
 {
 	struct list_head *freeable = arg;
 	struct inode	*inode = container_of(item, struct inode, i_lru);
@@ -845,7 +845,7 @@ static enum lru_status inode_lru_isolate(struct list_head *item,
 	if (inode_has_buffers(inode) || !mapping_empty(&inode->i_data)) {
 		__iget(inode);
 		spin_unlock(&inode->i_lock);
-		spin_unlock(lru_lock);
+		spin_unlock(&lru->lock);
 		if (remove_inode_buffers(inode)) {
 			unsigned long reap;
 			reap = invalidate_mapping_pages(&inode->i_data, 0, -1);
diff --git a/fs/nfs/nfs42xattr.c b/fs/nfs/nfs42xattr.c
index b6e3d8f77b91..37d79400e5f4 100644
--- a/fs/nfs/nfs42xattr.c
+++ b/fs/nfs/nfs42xattr.c
@@ -802,7 +802,7 @@ static struct shrinker *nfs4_xattr_large_entry_shrinker;
 
 static enum lru_status
 cache_lru_isolate(struct list_head *item,
-	struct list_lru_one *lru, spinlock_t *lru_lock, void *arg)
+	struct list_lru_one *lru, void *arg)
 {
 	struct list_head *dispose = arg;
 	struct inode *inode;
@@ -867,7 +867,7 @@ nfs4_xattr_cache_count(struct shrinker *shrink, struct shrink_control *sc)
 
 static enum lru_status
 entry_lru_isolate(struct list_head *item,
-	struct list_lru_one *lru, spinlock_t *lru_lock, void *arg)
+	struct list_lru_one *lru, void *arg)
 {
 	struct list_head *dispose = arg;
 	struct nfs4_xattr_bucket *bucket;
diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
index ad9083ca144b..f68c4a1c529f 100644
--- a/fs/nfsd/filecache.c
+++ b/fs/nfsd/filecache.c
@@ -456,7 +456,6 @@ void nfsd_file_net_dispose(struct nfsd_net *nn)
  * nfsd_file_lru_cb - Examine an entry on the LRU list
  * @item: LRU entry to examine
  * @lru: controlling LRU
- * @lock: LRU list lock (unused)
  * @arg: dispose list
  *
  * Return values:
@@ -466,9 +465,7 @@ void nfsd_file_net_dispose(struct nfsd_net *nn)
  */
 static enum lru_status
 nfsd_file_lru_cb(struct list_head *item, struct list_lru_one *lru,
-		 spinlock_t *lock, void *arg)
-	__releases(lock)
-	__acquires(lock)
+		 void *arg)
 {
 	struct list_head *head = arg;
 	struct nfsd_file *nf = list_entry(item, struct nfsd_file, nf_lru);
diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index aa4dbda7b536..43b914c1f621 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -1857,7 +1857,6 @@ static enum lru_status
 xfs_buftarg_drain_rele(
 	struct list_head	*item,
 	struct list_lru_one	*lru,
-	spinlock_t		*lru_lock,
 	void			*arg)
 
 {
@@ -1956,7 +1955,6 @@ static enum lru_status
 xfs_buftarg_isolate(
 	struct list_head	*item,
 	struct list_lru_one	*lru,
-	spinlock_t		*lru_lock,
 	void			*arg)
 {
 	struct xfs_buf		*bp = container_of(item, struct xfs_buf, b_lru);
diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c
index 8d17099765ae..f1b6e73c0e68 100644
--- a/fs/xfs/xfs_qm.c
+++ b/fs/xfs/xfs_qm.c
@@ -412,9 +412,8 @@ static enum lru_status
 xfs_qm_dquot_isolate(
 	struct list_head	*item,
 	struct list_lru_one	*lru,
-	spinlock_t		*lru_lock,
 	void			*arg)
-		__releases(lru_lock) __acquires(lru_lock)
+		__releases(&lru->lock) __acquires(&lru->lock)
 {
 	struct xfs_dquot	*dqp = container_of(item,
 						struct xfs_dquot, q_lru);
@@ -460,7 +459,7 @@ xfs_qm_dquot_isolate(
 		trace_xfs_dqreclaim_dirty(dqp);
 
 		/* we have to drop the LRU lock to flush the dquot */
-		spin_unlock(lru_lock);
+		spin_unlock(&lru->lock);
 
 		error = xfs_qm_dqflush(dqp, &bp);
 		if (error)
diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index b84483ef93a7..df6b9374ca68 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -184,7 +184,7 @@ void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
 			   struct list_head *head);
 
 typedef enum lru_status (*list_lru_walk_cb)(struct list_head *item,
-		struct list_lru_one *list, spinlock_t *lock, void *cb_arg);
+		struct list_lru_one *list, void *cb_arg);
 
 /**
  * list_lru_walk_one: walk a @lru, isolating and disposing freeable items.
diff --git a/mm/list_lru.c b/mm/list_lru.c
index c503921cbb13..d8d653317c2c 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -279,7 +279,7 @@ __list_lru_walk_one(struct list_lru *lru, int nid, struct mem_cgroup *memcg,
 			break;
 		--*nr_to_walk;
 
-		ret = isolate(item, l, &l->lock, cb_arg);
+		ret = isolate(item, l, cb_arg);
 		switch (ret) {
 		/*
 		 * LRU_RETRY and LRU_REMOVED_RETRY will drop the lru lock,
diff --git a/mm/workingset.c b/mm/workingset.c
index 947423c3e719..e3552e7318a5 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -704,8 +704,7 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
 
 static enum lru_status shadow_lru_isolate(struct list_head *item,
 					  struct list_lru_one *lru,
-					  spinlock_t *lru_lock,
-					  void *arg) __must_hold(lru_lock)
+					  void *arg) __must_hold(lru->lock)
 {
 	struct xa_node *node = container_of(item, struct xa_node, private_list);
 	struct address_space *mapping;
@@ -714,20 +713,20 @@ static enum lru_status shadow_lru_isolate(struct list_head *item,
 	/*
 	 * Page cache insertions and deletions synchronously maintain
 	 * the shadow node LRU under the i_pages lock and the
-	 * lru_lock. Because the page cache tree is emptied before
-	 * the inode can be destroyed, holding the lru_lock pins any
+	 * &lru->lock. Because the page cache tree is emptied before
+	 * the inode can be destroyed, holding the &lru->lock pins any
 	 * address_space that has nodes on the LRU.
 	 *
 	 * We can then safely transition to the i_pages lock to
 	 * pin only the address_space of the particular node we want
-	 * to reclaim, take the node off-LRU, and drop the lru_lock.
+	 * to reclaim, take the node off-LRU, and drop the &lru->lock.
 	 */
 
 	mapping = container_of(node->array, struct address_space, i_pages);
 
 	/* Coming from the list, invert the lock order */
 	if (!xa_trylock(&mapping->i_pages)) {
-		spin_unlock_irq(lru_lock);
+		spin_unlock_irq(&lru->lock);
 		ret = LRU_RETRY;
 		goto out;
 	}
@@ -736,7 +735,7 @@ static enum lru_status shadow_lru_isolate(struct list_head *item,
 	if (mapping->host != NULL) {
 		if (!spin_trylock(&mapping->host->i_lock)) {
 			xa_unlock(&mapping->i_pages);
-			spin_unlock_irq(lru_lock);
+			spin_unlock_irq(&lru->lock);
 			ret = LRU_RETRY;
 			goto out;
 		}
@@ -745,7 +744,7 @@ static enum lru_status shadow_lru_isolate(struct list_head *item,
 	list_lru_isolate(lru, item);
 	__dec_node_page_state(virt_to_page(node), WORKINGSET_NODES);
 
-	spin_unlock(lru_lock);
+	spin_unlock(&lru->lock);
 
 	/*
 	 * The nodes should only contain one or more shadow entries,
diff --git a/mm/zswap.c b/mm/zswap.c
index f7a2afaeea53..24e1e0c87172 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1097,7 +1097,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 * shrinker functions
 **********************************/
 static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
-				       spinlock_t *lock, void *arg)
+				       void *arg)
 {
 	struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
 	bool *encountered_page_in_swapcache = (bool *)arg;
@@ -1143,7 +1143,7 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
 	/*
 	 * It's safe to drop the lock here because we return either
	 * LRU_REMOVED_RETRY or LRU_RETRY.
 	 */
-	spin_unlock(lock);
+	spin_unlock(&l->lock);
 
 	writeback_result = zswap_writeback_entry(entry, swpentry);
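
(Reviewer aid, also not part of the patch: callers are untouched by this
series. Assuming the hypothetical my_lru_isolate callback sketched after
the diffstat above, a walk would still look like this:

static void my_shrink_one(struct list_lru *lru, int nid,
			  struct mem_cgroup *memcg)
{
	unsigned long nr_to_walk = 32;	/* illustrative batch size */
	LIST_HEAD(dispose);

	/* list_lru_walk_one() keeps its pre-series signature. */
	list_lru_walk_one(lru, nid, memcg, my_lru_isolate, &dispose,
			  &nr_to_walk);
	/* ... dispose of the isolated entries here ... */
}

Only the callback type changes; the walker simply drops the lock argument
when invoking it, as seen in the mm/list_lru.c hunk.)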