From patchwork Thu Nov 30 19:40:18 2023
X-Patchwork-Submitter: Nhat Pham
X-Patchwork-Id: 13474955
From: Nhat Pham <nphamcs@gmail.com>
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com,
 sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com,
 mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
 muchun.song@linux.dev, chrisl@kernel.org, linux-mm@kvack.org,
 kernel-team@meta.com, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: [PATCH v8 1/6] list_lru: allows explicit memcg and NUMA node selection
Date: Thu, 30 Nov 2023 11:40:18 -0800
Message-Id: <20231130194023.4102148-2-nphamcs@gmail.com>
In-Reply-To: <20231130194023.4102148-1-nphamcs@gmail.com>
References: <20231130194023.4102148-1-nphamcs@gmail.com>
The interface of list_lru is based on the assumption that the list node
and the data it represents belong to the same object, allocated on the
correct node/memcg. While this assumption is valid for existing slab
object LRUs such as dentries and inodes, it is undocumented, and rather
inflexible for certain potential list_lru users (such as the upcoming
zswap shrinker and the THP shrinker). It has caused us a lot of issues
during our development.

This patch changes the list_lru interface so that the caller must
explicitly specify the NUMA node and memcg when adding and removing
objects. The old list_lru_add() and list_lru_del() are renamed to
list_lru_add_obj() and list_lru_del_obj(), respectively.

It also extends the list_lru API with a new function, list_lru_putback,
which undoes a previous list_lru_isolate call. Unlike list_lru_add, it
does not increment the LRU node count (as list_lru_isolate does not
decrement the node count). list_lru_putback also allows for explicit
memcg and NUMA node selection.
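To make the split concrete, here is a minimal sketch (not part of the
patch; "struct foo" and both helpers are made-up names) of how a caller
uses the new explicit-argument interface versus the renamed _obj variant:

#include <linux/list_lru.h>

/* Illustrative only: "struct foo" and these helpers are hypothetical. */
struct foo {
	struct list_head lru;		/* list node tracked by the list_lru */
};

/* New interface: the caller states which NUMA node/memcg sublist to use. */
static void foo_track(struct list_lru *lru, struct foo *f,
		      int nid, struct mem_cgroup *memcg)
{
	list_lru_add(lru, &f->lru, nid, memcg);
}

/*
 * Old behavior, now spelled list_lru_add_obj(): the node and memcg are
 * derived from the slab allocation backing &f->lru itself.
 */
static void foo_track_slab(struct list_lru *lru, struct foo *f)
{
	list_lru_add_obj(lru, &f->lru);
}

list_lru_del() and list_lru_del_obj() mirror the same split on removal.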
Suggested-by: Johannes Weiner Signed-off-by: Nhat Pham Acked-by: Johannes Weiner --- drivers/android/binder_alloc.c | 7 ++--- fs/dcache.c | 8 +++-- fs/gfs2/quota.c | 6 ++-- fs/inode.c | 4 +-- fs/nfs/nfs42xattr.c | 8 ++--- fs/nfsd/filecache.c | 4 +-- fs/xfs/xfs_buf.c | 6 ++-- fs/xfs/xfs_dquot.c | 2 +- fs/xfs/xfs_qm.c | 2 +- include/linux/list_lru.h | 54 ++++++++++++++++++++++++++++++++-- mm/list_lru.c | 48 +++++++++++++++++++++++++----- mm/workingset.c | 4 +-- 12 files changed, 117 insertions(+), 36 deletions(-) diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c index 138f6d43d13b..f69d30c9f50f 100644 --- a/drivers/android/binder_alloc.c +++ b/drivers/android/binder_alloc.c @@ -234,7 +234,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate, if (page->page_ptr) { trace_binder_alloc_lru_start(alloc, index); - on_lru = list_lru_del(&binder_alloc_lru, &page->lru); + on_lru = list_lru_del_obj(&binder_alloc_lru, &page->lru); WARN_ON(!on_lru); trace_binder_alloc_lru_end(alloc, index); @@ -285,7 +285,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate, trace_binder_free_lru_start(alloc, index); - ret = list_lru_add(&binder_alloc_lru, &page->lru); + ret = list_lru_add_obj(&binder_alloc_lru, &page->lru); WARN_ON(!ret); trace_binder_free_lru_end(alloc, index); @@ -848,7 +848,7 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc) if (!alloc->pages[i].page_ptr) continue; - on_lru = list_lru_del(&binder_alloc_lru, + on_lru = list_lru_del_obj(&binder_alloc_lru, &alloc->pages[i].lru); page_addr = alloc->buffer + i * PAGE_SIZE; binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC, @@ -1287,4 +1287,3 @@ int binder_alloc_copy_from_buffer(struct binder_alloc *alloc, return binder_alloc_do_buffer_copy(alloc, false, buffer, buffer_offset, dest, bytes); } - diff --git a/fs/dcache.c b/fs/dcache.c index c82ae731df9a..2ba37643b9c5 100644 --- a/fs/dcache.c +++ b/fs/dcache.c @@ -428,7 +428,8 @@ static void d_lru_add(struct dentry *dentry) this_cpu_inc(nr_dentry_unused); if (d_is_negative(dentry)) this_cpu_inc(nr_dentry_negative); - WARN_ON_ONCE(!list_lru_add(&dentry->d_sb->s_dentry_lru, &dentry->d_lru)); + WARN_ON_ONCE(!list_lru_add_obj( + &dentry->d_sb->s_dentry_lru, &dentry->d_lru)); } static void d_lru_del(struct dentry *dentry) @@ -438,7 +439,8 @@ static void d_lru_del(struct dentry *dentry) this_cpu_dec(nr_dentry_unused); if (d_is_negative(dentry)) this_cpu_dec(nr_dentry_negative); - WARN_ON_ONCE(!list_lru_del(&dentry->d_sb->s_dentry_lru, &dentry->d_lru)); + WARN_ON_ONCE(!list_lru_del_obj( + &dentry->d_sb->s_dentry_lru, &dentry->d_lru)); } static void d_shrink_del(struct dentry *dentry) @@ -1240,7 +1242,7 @@ static enum lru_status dentry_lru_isolate(struct list_head *item, * * This is guaranteed by the fact that all LRU management * functions are intermediated by the LRU API calls like - * list_lru_add and list_lru_del. List movement in this file + * list_lru_add_obj and list_lru_del_obj. List movement in this file * only ever occur through this functions or through callbacks * like this one, that are called from the LRU API. 
* diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c index 95dae7838b4e..b57f8c7b35be 100644 --- a/fs/gfs2/quota.c +++ b/fs/gfs2/quota.c @@ -271,7 +271,7 @@ static struct gfs2_quota_data *gfs2_qd_search_bucket(unsigned int hash, if (qd->qd_sbd != sdp) continue; if (lockref_get_not_dead(&qd->qd_lockref)) { - list_lru_del(&gfs2_qd_lru, &qd->qd_lru); + list_lru_del_obj(&gfs2_qd_lru, &qd->qd_lru); return qd; } } @@ -344,7 +344,7 @@ static void qd_put(struct gfs2_quota_data *qd) } qd->qd_lockref.count = 0; - list_lru_add(&gfs2_qd_lru, &qd->qd_lru); + list_lru_add_obj(&gfs2_qd_lru, &qd->qd_lru); spin_unlock(&qd->qd_lockref.lock); } @@ -1517,7 +1517,7 @@ void gfs2_quota_cleanup(struct gfs2_sbd *sdp) lockref_mark_dead(&qd->qd_lockref); spin_unlock(&qd->qd_lockref.lock); - list_lru_del(&gfs2_qd_lru, &qd->qd_lru); + list_lru_del_obj(&gfs2_qd_lru, &qd->qd_lru); list_add(&qd->qd_lru, &dispose); } spin_unlock(&qd_lock); diff --git a/fs/inode.c b/fs/inode.c index f238d987dec9..ef2034a985e0 100644 --- a/fs/inode.c +++ b/fs/inode.c @@ -464,7 +464,7 @@ static void __inode_add_lru(struct inode *inode, bool rotate) if (!mapping_shrinkable(&inode->i_data)) return; - if (list_lru_add(&inode->i_sb->s_inode_lru, &inode->i_lru)) + if (list_lru_add_obj(&inode->i_sb->s_inode_lru, &inode->i_lru)) this_cpu_inc(nr_unused); else if (rotate) inode->i_state |= I_REFERENCED; @@ -482,7 +482,7 @@ void inode_add_lru(struct inode *inode) static void inode_lru_list_del(struct inode *inode) { - if (list_lru_del(&inode->i_sb->s_inode_lru, &inode->i_lru)) + if (list_lru_del_obj(&inode->i_sb->s_inode_lru, &inode->i_lru)) this_cpu_dec(nr_unused); } diff --git a/fs/nfs/nfs42xattr.c b/fs/nfs/nfs42xattr.c index 2ad66a8922f4..49aaf28a6950 100644 --- a/fs/nfs/nfs42xattr.c +++ b/fs/nfs/nfs42xattr.c @@ -132,7 +132,7 @@ nfs4_xattr_entry_lru_add(struct nfs4_xattr_entry *entry) lru = (entry->flags & NFS4_XATTR_ENTRY_EXTVAL) ? &nfs4_xattr_large_entry_lru : &nfs4_xattr_entry_lru; - return list_lru_add(lru, &entry->lru); + return list_lru_add_obj(lru, &entry->lru); } static bool @@ -143,7 +143,7 @@ nfs4_xattr_entry_lru_del(struct nfs4_xattr_entry *entry) lru = (entry->flags & NFS4_XATTR_ENTRY_EXTVAL) ? 
&nfs4_xattr_large_entry_lru : &nfs4_xattr_entry_lru; - return list_lru_del(lru, &entry->lru); + return list_lru_del_obj(lru, &entry->lru); } /* @@ -349,7 +349,7 @@ nfs4_xattr_cache_unlink(struct inode *inode) oldcache = nfsi->xattr_cache; if (oldcache != NULL) { - list_lru_del(&nfs4_xattr_cache_lru, &oldcache->lru); + list_lru_del_obj(&nfs4_xattr_cache_lru, &oldcache->lru); oldcache->inode = NULL; } nfsi->xattr_cache = NULL; @@ -474,7 +474,7 @@ nfs4_xattr_get_cache(struct inode *inode, int add) kref_get(&cache->ref); nfsi->xattr_cache = cache; cache->inode = inode; - list_lru_add(&nfs4_xattr_cache_lru, &cache->lru); + list_lru_add_obj(&nfs4_xattr_cache_lru, &cache->lru); } spin_unlock(&inode->i_lock); diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c index ef063f93fde9..6c2decfdeb4b 100644 --- a/fs/nfsd/filecache.c +++ b/fs/nfsd/filecache.c @@ -322,7 +322,7 @@ nfsd_file_check_writeback(struct nfsd_file *nf) static bool nfsd_file_lru_add(struct nfsd_file *nf) { set_bit(NFSD_FILE_REFERENCED, &nf->nf_flags); - if (list_lru_add(&nfsd_file_lru, &nf->nf_lru)) { + if (list_lru_add_obj(&nfsd_file_lru, &nf->nf_lru)) { trace_nfsd_file_lru_add(nf); return true; } @@ -331,7 +331,7 @@ static bool nfsd_file_lru_add(struct nfsd_file *nf) static bool nfsd_file_lru_remove(struct nfsd_file *nf) { - if (list_lru_del(&nfsd_file_lru, &nf->nf_lru)) { + if (list_lru_del_obj(&nfsd_file_lru, &nf->nf_lru)) { trace_nfsd_file_lru_del(nf); return true; } diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c index 545c7991b9b5..669332849680 100644 --- a/fs/xfs/xfs_buf.c +++ b/fs/xfs/xfs_buf.c @@ -169,7 +169,7 @@ xfs_buf_stale( atomic_set(&bp->b_lru_ref, 0); if (!(bp->b_state & XFS_BSTATE_DISPOSE) && - (list_lru_del(&bp->b_target->bt_lru, &bp->b_lru))) + (list_lru_del_obj(&bp->b_target->bt_lru, &bp->b_lru))) atomic_dec(&bp->b_hold); ASSERT(atomic_read(&bp->b_hold) >= 1); @@ -1047,7 +1047,7 @@ xfs_buf_rele( * buffer for the LRU and clear the (now stale) dispose list * state flag */ - if (list_lru_add(&bp->b_target->bt_lru, &bp->b_lru)) { + if (list_lru_add_obj(&bp->b_target->bt_lru, &bp->b_lru)) { bp->b_state &= ~XFS_BSTATE_DISPOSE; atomic_inc(&bp->b_hold); } @@ -1060,7 +1060,7 @@ xfs_buf_rele( * was on was the disposal list */ if (!(bp->b_state & XFS_BSTATE_DISPOSE)) { - list_lru_del(&bp->b_target->bt_lru, &bp->b_lru); + list_lru_del_obj(&bp->b_target->bt_lru, &bp->b_lru); } else { ASSERT(list_empty(&bp->b_lru)); } diff --git a/fs/xfs/xfs_dquot.c b/fs/xfs/xfs_dquot.c index ac6ba646624d..49f619f5aa96 100644 --- a/fs/xfs/xfs_dquot.c +++ b/fs/xfs/xfs_dquot.c @@ -1064,7 +1064,7 @@ xfs_qm_dqput( struct xfs_quotainfo *qi = dqp->q_mount->m_quotainfo; trace_xfs_dqput_free(dqp); - if (list_lru_add(&qi->qi_lru, &dqp->q_lru)) + if (list_lru_add_obj(&qi->qi_lru, &dqp->q_lru)) XFS_STATS_INC(dqp->q_mount, xs_qm_dquot_unused); } xfs_dqunlock(dqp); diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c index 94a7932ac570..67d0a8564ff3 100644 --- a/fs/xfs/xfs_qm.c +++ b/fs/xfs/xfs_qm.c @@ -171,7 +171,7 @@ xfs_qm_dqpurge( * hits zero, so it really should be on the freelist here. 
*/ ASSERT(!list_empty(&dqp->q_lru)); - list_lru_del(&qi->qi_lru, &dqp->q_lru); + list_lru_del_obj(&qi->qi_lru, &dqp->q_lru); XFS_STATS_DEC(dqp->q_mount, xs_qm_dquot_unused); xfs_qm_dqdestroy(dqp); diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h index db86ad78d428..7675a48a0701 100644 --- a/include/linux/list_lru.h +++ b/include/linux/list_lru.h @@ -75,6 +75,8 @@ void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *paren * list_lru_add: add an element to the lru list's tail * @lru: the lru pointer * @item: the item to be added. + * @nid: the node id of the sublist to add the item to. + * @memcg: the cgroup of the sublist to add the item to. * * If the element is already part of a list, this function returns doing * nothing. Therefore the caller does not need to keep state about whether or @@ -87,12 +89,28 @@ void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *paren * * Return: true if the list was updated, false otherwise */ -bool list_lru_add(struct list_lru *lru, struct list_head *item); +bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid, + struct mem_cgroup *memcg); /** - * list_lru_del: delete an element to the lru list + * list_lru_add_obj: add an element to the lru list's tail + * @lru: the lru pointer + * @item: the item to be added. + * + * This function is similar to list_lru_add(), but the NUMA node and the + * memcg of the sublist is determined by @item list_head. This assumption is + * valid for slab objects LRU such as dentries, inodes, etc. + * + * Return value: true if the list was updated, false otherwise + */ +bool list_lru_add_obj(struct list_lru *lru, struct list_head *item); + +/** + * list_lru_del: delete an element from the lru list * @lru: the lru pointer * @item: the item to be deleted. + * @nid: the node id of the sublist to delete the item from. + * @memcg: the cgroup of the sublist to delete the item from. * * This function works analogously as list_lru_add() in terms of list * manipulation. The comments about an element already pertaining to @@ -100,7 +118,21 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item); * * Return: true if the list was updated, false otherwise */ -bool list_lru_del(struct list_lru *lru, struct list_head *item); +bool list_lru_del(struct list_lru *lru, struct list_head *item, int nid, + struct mem_cgroup *memcg); + +/** + * list_lru_del_obj: delete an element from the lru list + * @lru: the lru pointer + * @item: the item to be deleted. + * + * This function is similar to list_lru_del(), but the NUMA node and the + * memcg of the sublist is determined by @item list_head. This assumption is + * valid for slab objects LRU such as dentries, inodes, etc. + * + * Return value: true if the list was updated, false otherwise. + */ +bool list_lru_del_obj(struct list_lru *lru, struct list_head *item); /** * list_lru_count_one: return the number of objects currently held by @lru @@ -138,6 +170,22 @@ static inline unsigned long list_lru_count(struct list_lru *lru) void list_lru_isolate(struct list_lru_one *list, struct list_head *item); void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item, struct list_head *head); +/** + * list_lru_putback: undo list_lru_isolate + * @lru: the lru pointer. + * @item: the item to put back. + * @nid: the node id of the sublist to put the item back to. + * @memcg: the cgroup of the sublist to put the item back to. + * + * Put back an isolated item into its original LRU. 
Note that unlike + * list_lru_add, this does not increment the node LRU count (as + * list_lru_isolate does not originally decrement this count). + * + * Since we might have dropped the LRU lock in between, recompute list_lru_one + * from the node's id and memcg. + */ +void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid, + struct mem_cgroup *memcg); typedef enum lru_status (*list_lru_walk_cb)(struct list_head *item, struct list_lru_one *list, spinlock_t *lock, void *cb_arg); diff --git a/mm/list_lru.c b/mm/list_lru.c index a05e5bef3b40..fcca67ac26ec 100644 --- a/mm/list_lru.c +++ b/mm/list_lru.c @@ -116,21 +116,19 @@ list_lru_from_kmem(struct list_lru *lru, int nid, void *ptr, } #endif /* CONFIG_MEMCG_KMEM */ -bool list_lru_add(struct list_lru *lru, struct list_head *item) +bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid, + struct mem_cgroup *memcg) { - int nid = page_to_nid(virt_to_page(item)); struct list_lru_node *nlru = &lru->node[nid]; - struct mem_cgroup *memcg; struct list_lru_one *l; spin_lock(&nlru->lock); if (list_empty(item)) { - l = list_lru_from_kmem(lru, nid, item, &memcg); + l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg)); list_add_tail(item, &l->list); /* Set shrinker bit if the first element was added */ if (!l->nr_items++) - set_shrinker_bit(memcg, nid, - lru_shrinker_id(lru)); + set_shrinker_bit(memcg, nid, lru_shrinker_id(lru)); nlru->nr_items++; spin_unlock(&nlru->lock); return true; @@ -140,15 +138,25 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item) } EXPORT_SYMBOL_GPL(list_lru_add); -bool list_lru_del(struct list_lru *lru, struct list_head *item) +bool list_lru_add_obj(struct list_lru *lru, struct list_head *item) { int nid = page_to_nid(virt_to_page(item)); + struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ? + mem_cgroup_from_slab_obj(item) : NULL; + + return list_lru_add(lru, item, nid, memcg); +} +EXPORT_SYMBOL_GPL(list_lru_add_obj); + +bool list_lru_del(struct list_lru *lru, struct list_head *item, int nid, + struct mem_cgroup *memcg) +{ struct list_lru_node *nlru = &lru->node[nid]; struct list_lru_one *l; spin_lock(&nlru->lock); if (!list_empty(item)) { - l = list_lru_from_kmem(lru, nid, item, NULL); + l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg)); list_del_init(item); l->nr_items--; nlru->nr_items--; @@ -160,6 +168,16 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item) } EXPORT_SYMBOL_GPL(list_lru_del); +bool list_lru_del_obj(struct list_lru *lru, struct list_head *item) +{ + int nid = page_to_nid(virt_to_page(item)); + struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ? 
+ mem_cgroup_from_slab_obj(item) : NULL; + + return list_lru_del(lru, item, nid, memcg); +} +EXPORT_SYMBOL_GPL(list_lru_del_obj); + void list_lru_isolate(struct list_lru_one *list, struct list_head *item) { list_del_init(item); @@ -175,6 +193,20 @@ void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item, } EXPORT_SYMBOL_GPL(list_lru_isolate_move); +void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid, + struct mem_cgroup *memcg) +{ + struct list_lru_one *list = + list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg)); + + if (list_empty(item)) { + list_add_tail(item, &list->list); + if (!list->nr_items++) + set_shrinker_bit(memcg, nid, lru_shrinker_id(lru)); + } +} +EXPORT_SYMBOL_GPL(list_lru_putback); + unsigned long list_lru_count_one(struct list_lru *lru, int nid, struct mem_cgroup *memcg) { diff --git a/mm/workingset.c b/mm/workingset.c index b192e44a0e7c..c17d45c6f29b 100644 --- a/mm/workingset.c +++ b/mm/workingset.c @@ -615,12 +615,12 @@ void workingset_update_node(struct xa_node *node) if (node->count && node->count == node->nr_values) { if (list_empty(&node->private_list)) { - list_lru_add(&shadow_nodes, &node->private_list); + list_lru_add_obj(&shadow_nodes, &node->private_list); __inc_lruvec_kmem_state(node, WORKINGSET_NODES); } } else { if (!list_empty(&node->private_list)) { - list_lru_del(&shadow_nodes, &node->private_list); + list_lru_del_obj(&shadow_nodes, &node->private_list); __dec_lruvec_kmem_state(node, WORKINGSET_NODES); } }
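As a usage sketch of how list_lru_isolate() and the new
list_lru_putback() pair up inside a walk callback (illustrative only,
not part of the patch; demo_try_reclaim() and struct demo_walk_arg are
hypothetical, and the real user is the zswap shrinker later in this
series):

#include <linux/list_lru.h>
#include <linux/memcontrol.h>

struct demo_walk_arg {
	struct list_lru *lru;
	int nid;
	struct mem_cgroup *memcg;
};

static bool demo_try_reclaim(struct list_head *item);	/* hypothetical */

static enum lru_status demo_walk_cb(struct list_head *item,
				    struct list_lru_one *l,
				    spinlock_t *lock, void *cb_arg)
{
	struct demo_walk_arg *arg = cb_arg;

	/*
	 * Take the item off its sublist. This does not touch the per-node
	 * count, which is why a later putback must not bump it either.
	 */
	list_lru_isolate(l, item);

	if (!demo_try_reclaim(item)) {
		/*
		 * Reclaim failed: undo the isolation. The lru lock passed
		 * to the callback is still held here, protecting the list.
		 */
		list_lru_putback(arg->lru, item, arg->nid, arg->memcg);
		return LRU_RETRY;
	}

	/* Item is gone from the LRU; the walker adjusts the node count. */
	return LRU_REMOVED;
}

A caller would drive this with something like
list_lru_walk_one(lru, nid, memcg, demo_walk_cb, &arg, &nr_to_walk),
the same entry point the zswap shrinker uses in patch 3 of this series.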
From patchwork Thu Nov 30 19:40:19 2023
X-Patchwork-Submitter: Nhat Pham
X-Patchwork-Id: 13474956
From: Nhat Pham <nphamcs@gmail.com>
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com,
 sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com,
 mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
 muchun.song@linux.dev, chrisl@kernel.org, linux-mm@kvack.org,
 kernel-team@meta.com, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: [PATCH v8 2/6] memcontrol: implement mem_cgroup_tryget_online()
Date: Thu, 30 Nov 2023 11:40:19 -0800
Message-Id: <20231130194023.4102148-3-nphamcs@gmail.com>
In-Reply-To: <20231130194023.4102148-1-nphamcs@gmail.com>
References: <20231130194023.4102148-1-nphamcs@gmail.com>

This patch implements a helper function that tries to get a reference
to a memcg's css, as well as checking if it is online. This new
function is almost exactly the same as the existing mem_cgroup_tryget(),
except for the onlineness check. In the !CONFIG_MEMCG case, it always
returns true, analogous to mem_cgroup_tryget().

This is useful, e.g., for the new zswap writeback scheme, where we need
to select the next online memcg as a candidate for the global limit
reclaim.
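As a rough illustration of the intended use (a simplified sketch, not
the code added by this series; reclaim_from() is a hypothetical
placeholder, and the iterator/reference juggling the real zswap code
does under its own lock is omitted):

#include <linux/memcontrol.h>

static void reclaim_from(struct mem_cgroup *memcg);	/* hypothetical */

static bool try_reclaim_candidate(struct mem_cgroup *candidate)
{
	if (!mem_cgroup_tryget_online(candidate))
		return false;	/* offline (or gone): skip this memcg */

	/* We hold a css reference here, so the memcg cannot be freed. */
	reclaim_from(candidate);

	mem_cgroup_put(candidate);
	return true;
}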
Signed-off-by: Nhat Pham Reviewed-by: Yosry Ahmed --- include/linux/memcontrol.h | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 7bdcf3020d7a..2bd7d14ace78 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -821,6 +821,11 @@ static inline bool mem_cgroup_tryget(struct mem_cgroup *memcg) return !memcg || css_tryget(&memcg->css); } +static inline bool mem_cgroup_tryget_online(struct mem_cgroup *memcg) +{ + return !memcg || css_tryget_online(&memcg->css); +} + static inline void mem_cgroup_put(struct mem_cgroup *memcg) { if (memcg) @@ -1349,6 +1354,11 @@ static inline bool mem_cgroup_tryget(struct mem_cgroup *memcg) return true; } +static inline bool mem_cgroup_tryget_online(struct mem_cgroup *memcg) +{ + return true; +} + static inline void mem_cgroup_put(struct mem_cgroup *memcg) { }
From patchwork Thu Nov 30 19:40:20 2023
X-Patchwork-Submitter: Nhat Pham
X-Patchwork-Id: 13474957
From: Nhat Pham <nphamcs@gmail.com>
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com,
 sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com,
 mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
 muchun.song@linux.dev, chrisl@kernel.org, linux-mm@kvack.org,
 kernel-team@meta.com, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: [PATCH v8 3/6] zswap: make shrinking memcg-aware
Date: Thu, 30 Nov 2023 11:40:20 -0800
Message-Id: <20231130194023.4102148-4-nphamcs@gmail.com>
In-Reply-To: <20231130194023.4102148-1-nphamcs@gmail.com>
References: <20231130194023.4102148-1-nphamcs@gmail.com>

From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>

Currently, we only have a single global LRU for zswap. This makes it
impossible to perform workload-specific shrinking - a memcg cannot
determine which pages in the pool it owns, and often ends up writing
pages from other memcgs.
This issue has been previously observed in practice and mitigated by simply disabling memcg-initiated shrinking: https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u This patch fully resolves the issue by replacing the global zswap LRU with memcg- and NUMA-specific LRUs, and modify the reclaim logic: a) When a store attempt hits an memcg limit, it now triggers a synchronous reclaim attempt that, if successful, allows the new hotter page to be accepted by zswap. b) If the store attempt instead hits the global zswap limit, it will trigger an asynchronous reclaim attempt, in which an memcg is selected for reclaim in a round-robin-like fashion. Signed-off-by: Domenico Cerasuolo Co-developed-by: Nhat Pham Signed-off-by: Nhat Pham --- include/linux/memcontrol.h | 5 + include/linux/zswap.h | 2 + mm/memcontrol.c | 2 + mm/swap.h | 3 +- mm/swap_state.c | 24 +++- mm/zswap.c | 269 +++++++++++++++++++++++++++++-------- 6 files changed, 245 insertions(+), 60 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 2bd7d14ace78..a308c8eacf20 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -1192,6 +1192,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page) return NULL; } +static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg) +{ + return NULL; +} + static inline bool folio_memcg_kmem(struct folio *folio) { return false; diff --git a/include/linux/zswap.h b/include/linux/zswap.h index 2a60ce39cfde..e571e393669b 100644 --- a/include/linux/zswap.h +++ b/include/linux/zswap.h @@ -15,6 +15,7 @@ bool zswap_load(struct folio *folio); void zswap_invalidate(int type, pgoff_t offset); void zswap_swapon(int type); void zswap_swapoff(int type); +void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg); #else @@ -31,6 +32,7 @@ static inline bool zswap_load(struct folio *folio) static inline void zswap_invalidate(int type, pgoff_t offset) {} static inline void zswap_swapon(int type) {} static inline void zswap_swapoff(int type) {} +static inline void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg) {} #endif diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 470821d1ba1a..792ca21c5815 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -5614,6 +5614,8 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css) page_counter_set_min(&memcg->memory, 0); page_counter_set_low(&memcg->memory, 0); + zswap_memcg_offline_cleanup(memcg); + memcg_offline_kmem(memcg); reparent_shrinker_deferred(memcg); wb_memcg_offline(memcg); diff --git a/mm/swap.h b/mm/swap.h index 73c332ee4d91..c0dc73e10e91 100644 --- a/mm/swap.h +++ b/mm/swap.h @@ -51,7 +51,8 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, struct swap_iocb **plug); struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, struct mempolicy *mpol, pgoff_t ilx, - bool *new_page_allocated); + bool *new_page_allocated, + bool skip_if_exists); struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag, struct mempolicy *mpol, pgoff_t ilx); struct page *swapin_readahead(swp_entry_t entry, gfp_t flag, diff --git a/mm/swap_state.c b/mm/swap_state.c index 85d9e5806a6a..6c84236382f3 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -412,7 +412,8 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping, struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, struct mempolicy *mpol, pgoff_t ilx, - bool *new_page_allocated) + bool *new_page_allocated, + bool 
skip_if_exists) { struct swap_info_struct *si; struct folio *folio; @@ -470,6 +471,17 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, if (err != -EEXIST) goto fail_put_swap; + /* + * Protect against a recursive call to __read_swap_cache_async() + * on the same entry waiting forever here because SWAP_HAS_CACHE + * is set but the folio is not the swap cache yet. This can + * happen today if mem_cgroup_swapin_charge_folio() below + * triggers reclaim through zswap, which may call + * __read_swap_cache_async() in the writeback path. + */ + if (skip_if_exists) + goto fail_put_swap; + /* * We might race against __delete_from_swap_cache(), and * stumble across a swap_map entry whose SWAP_HAS_CACHE @@ -537,7 +549,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, mpol = get_vma_policy(vma, addr, 0, &ilx); page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx, - &page_allocated); + &page_allocated, false); mpol_cond_put(mpol); if (page_allocated) @@ -654,7 +666,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask, /* Ok, do the async read-ahead now */ page = __read_swap_cache_async( swp_entry(swp_type(entry), offset), - gfp_mask, mpol, ilx, &page_allocated); + gfp_mask, mpol, ilx, &page_allocated, false); if (!page) continue; if (page_allocated) { @@ -672,7 +684,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask, skip: /* The page was likely read above, so no need for plugging here */ page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx, - &page_allocated); + &page_allocated, false); if (unlikely(page_allocated)) swap_readpage(page, false, NULL); return page; @@ -827,7 +839,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask, pte_unmap(pte); pte = NULL; page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx, - &page_allocated); + &page_allocated, false); if (!page) continue; if (page_allocated) { @@ -847,7 +859,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask, skip: /* The page was likely read above, so no need for plugging here */ page = __read_swap_cache_async(targ_entry, gfp_mask, mpol, targ_ilx, - &page_allocated); + &page_allocated, false); if (unlikely(page_allocated)) swap_readpage(page, false, NULL); return page; diff --git a/mm/zswap.c b/mm/zswap.c index 4bdb2d83bb0d..f323e45cbdc7 100644 --- a/mm/zswap.c +++ b/mm/zswap.c @@ -35,6 +35,7 @@ #include #include #include +#include #include "swap.h" #include "internal.h" @@ -174,8 +175,8 @@ struct zswap_pool { struct work_struct shrink_work; struct hlist_node node; char tfm_name[CRYPTO_MAX_ALG_NAME]; - struct list_head lru; - spinlock_t lru_lock; + struct list_lru list_lru; + struct mem_cgroup *next_shrink; }; /* @@ -291,15 +292,46 @@ static void zswap_update_total_size(void) zswap_pool_total_size = total; } +/* should be called under RCU */ +#ifdef CONFIG_MEMCG +static inline struct mem_cgroup *mem_cgroup_from_entry(struct zswap_entry *entry) +{ + return entry->objcg ? 
obj_cgroup_memcg(entry->objcg) : NULL; +} +#else +static inline struct mem_cgroup *mem_cgroup_from_entry(struct zswap_entry *entry) +{ + return NULL; +} +#endif + +static inline int entry_to_nid(struct zswap_entry *entry) +{ + return page_to_nid(virt_to_page(entry)); +} + +void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg) +{ + struct zswap_pool *pool; + + /* lock out zswap pools list modification */ + spin_lock(&zswap_pools_lock); + list_for_each_entry(pool, &zswap_pools, list) { + if (pool->next_shrink == memcg) + pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL); + } + spin_unlock(&zswap_pools_lock); +} + /********************************* * zswap entry functions **********************************/ static struct kmem_cache *zswap_entry_cache; -static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp) +static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp, int nid) { struct zswap_entry *entry; - entry = kmem_cache_alloc(zswap_entry_cache, gfp); + entry = kmem_cache_alloc_node(zswap_entry_cache, gfp, nid); if (!entry) return NULL; entry->refcount = 1; @@ -312,6 +344,61 @@ static void zswap_entry_cache_free(struct zswap_entry *entry) kmem_cache_free(zswap_entry_cache, entry); } +/********************************* +* lru functions +**********************************/ +static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry) +{ + int nid = entry_to_nid(entry); + struct mem_cgroup *memcg; + + /* + * Note that it is safe to use rcu_read_lock() here, even in the face of + * concurrent memcg offlining. Thanks to the memcg->kmemcg_id indirection + * used in list_lru lookup, only two scenarios are possible: + * + * 1. list_lru_add() is called before memcg->kmemcg_id is updated. The + * new entry will be reparented to memcg's parent's list_lru. + * 2. list_lru_add() is called after memcg->kmemcg_id is updated. The + * new entry will be added directly to memcg's parent's list_lru. + * + * Similar reasoning holds for list_lru_del() and list_lru_putback(). 
+ */ + rcu_read_lock(); + memcg = mem_cgroup_from_entry(entry); + /* will always succeed */ + list_lru_add(list_lru, &entry->lru, nid, memcg); + rcu_read_unlock(); +} + +static void zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry) +{ + int nid = entry_to_nid(entry); + struct mem_cgroup *memcg; + + rcu_read_lock(); + memcg = mem_cgroup_from_entry(entry); + /* will always succeed */ + list_lru_del(list_lru, &entry->lru, nid, memcg); + rcu_read_unlock(); +} + +static void zswap_lru_putback(struct list_lru *list_lru, + struct zswap_entry *entry) +{ + int nid = entry_to_nid(entry); + spinlock_t *lock = &list_lru->node[nid].lock; + struct mem_cgroup *memcg; + + rcu_read_lock(); + memcg = mem_cgroup_from_entry(entry); + spin_lock(lock); + /* we cannot use list_lru_add here, because it increments node's lru count */ + list_lru_putback(list_lru, &entry->lru, nid, memcg); + spin_unlock(lock); + rcu_read_unlock(); +} + /********************************* * rbtree functions **********************************/ @@ -396,9 +483,7 @@ static void zswap_free_entry(struct zswap_entry *entry) if (!entry->length) atomic_dec(&zswap_same_filled_pages); else { - spin_lock(&entry->pool->lru_lock); - list_del(&entry->lru); - spin_unlock(&entry->pool->lru_lock); + zswap_lru_del(&entry->pool->list_lru, entry); zpool_free(zswap_find_zpool(entry), entry->handle); zswap_pool_put(entry->pool); } @@ -632,21 +717,15 @@ static void zswap_invalidate_entry(struct zswap_tree *tree, zswap_entry_put(tree, entry); } -static int zswap_reclaim_entry(struct zswap_pool *pool) +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l, + spinlock_t *lock, void *arg) { - struct zswap_entry *entry; + struct zswap_entry *entry = container_of(item, struct zswap_entry, lru); struct zswap_tree *tree; pgoff_t swpoffset; - int ret; + enum lru_status ret = LRU_REMOVED_RETRY; + int writeback_result; - /* Get an entry off the LRU */ - spin_lock(&pool->lru_lock); - if (list_empty(&pool->lru)) { - spin_unlock(&pool->lru_lock); - return -EINVAL; - } - entry = list_last_entry(&pool->lru, struct zswap_entry, lru); - list_del_init(&entry->lru); /* * Once the lru lock is dropped, the entry might get freed. The * swpoffset is copied to the stack, and entry isn't deref'd again @@ -654,28 +733,32 @@ static int zswap_reclaim_entry(struct zswap_pool *pool) */ swpoffset = swp_offset(entry->swpentry); tree = zswap_trees[swp_type(entry->swpentry)]; - spin_unlock(&pool->lru_lock); + list_lru_isolate(l, item); + /* + * It's safe to drop the lock here because we return either + * LRU_REMOVED_RETRY or LRU_RETRY. 
+ */ + spin_unlock(lock); /* Check for invalidate() race */ spin_lock(&tree->lock); - if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) { - ret = -EAGAIN; + if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) goto unlock; - } + /* Hold a reference to prevent a free during writeback */ zswap_entry_get(entry); spin_unlock(&tree->lock); - ret = zswap_writeback_entry(entry, tree); + writeback_result = zswap_writeback_entry(entry, tree); spin_lock(&tree->lock); - if (ret) { - /* Writeback failed, put entry back on LRU */ - spin_lock(&pool->lru_lock); - list_move(&entry->lru, &pool->lru); - spin_unlock(&pool->lru_lock); + if (writeback_result) { + zswap_reject_reclaim_fail++; + zswap_lru_putback(&entry->pool->list_lru, entry); + ret = LRU_RETRY; goto put_unlock; } + zswap_written_back_pages++; /* * Writeback started successfully, the page now belongs to the @@ -689,27 +772,93 @@ static int zswap_reclaim_entry(struct zswap_pool *pool) zswap_entry_put(tree, entry); unlock: spin_unlock(&tree->lock); - return ret ? -EAGAIN : 0; + spin_lock(lock); + return ret; +} + +static int shrink_memcg(struct mem_cgroup *memcg) +{ + struct zswap_pool *pool; + int nid, shrunk = 0; + + /* + * Skip zombies because their LRUs are reparented and we would be + * reclaiming from the parent instead of the dead memcg. + */ + if (memcg && !mem_cgroup_online(memcg)) + return -ENOENT; + + pool = zswap_pool_current_get(); + if (!pool) + return -EINVAL; + + for_each_node_state(nid, N_NORMAL_MEMORY) { + unsigned long nr_to_walk = 1; + + shrunk += list_lru_walk_one(&pool->list_lru, nid, memcg, + &shrink_memcg_cb, NULL, &nr_to_walk); + } + zswap_pool_put(pool); + return shrunk ? 0 : -EAGAIN; } static void shrink_worker(struct work_struct *w) { struct zswap_pool *pool = container_of(w, typeof(*pool), shrink_work); + struct mem_cgroup *memcg; int ret, failures = 0; + /* global reclaim will select cgroup in a round-robin fashion. */ do { - ret = zswap_reclaim_entry(pool); - if (ret) { - zswap_reject_reclaim_fail++; - if (ret != -EAGAIN) + spin_lock(&zswap_pools_lock); + pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL); + memcg = pool->next_shrink; + + /* + * We need to retry if we have gone through a full round trip, or if we + * got an offline memcg (or else we risk undoing the effect of the + * zswap memcg offlining cleanup callback). This is not catastrophic + * per se, but it will keep the now offlined memcg hostage for a while. + * + * Note that if we got an online memcg, we will keep the extra + * reference in case the original reference obtained by mem_cgroup_iter + * is dropped by the zswap memcg offlining callback, ensuring that the + * memcg is not killed when we are reclaiming. 
+ */ + if (!memcg) { + spin_unlock(&zswap_pools_lock); + if (++failures == MAX_RECLAIM_RETRIES) break; + + goto resched; + } + + if (!mem_cgroup_online(memcg)) { + /* drop the reference from mem_cgroup_iter() */ + mem_cgroup_put(memcg); + pool->next_shrink = NULL; + spin_unlock(&zswap_pools_lock); + if (++failures == MAX_RECLAIM_RETRIES) break; + + goto resched; } + spin_unlock(&zswap_pools_lock); + + ret = shrink_memcg(memcg); + /* drop the extra reference */ + mem_cgroup_put(memcg); + + if (ret == -EINVAL) + break; + if (ret && ++failures == MAX_RECLAIM_RETRIES) + break; + +resched: cond_resched(); } while (!zswap_can_accept()); - zswap_pool_put(pool); } static struct zswap_pool *zswap_pool_create(char *type, char *compressor) @@ -767,8 +916,7 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor) */ kref_init(&pool->kref); INIT_LIST_HEAD(&pool->list); - INIT_LIST_HEAD(&pool->lru); - spin_lock_init(&pool->lru_lock); + list_lru_init_memcg(&pool->list_lru, NULL); INIT_WORK(&pool->shrink_work, shrink_worker); zswap_pool_debug("created", pool); @@ -834,6 +982,13 @@ static void zswap_pool_destroy(struct zswap_pool *pool) cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node); free_percpu(pool->acomp_ctx); + list_lru_destroy(&pool->list_lru); + + spin_lock(&zswap_pools_lock); + mem_cgroup_put(pool->next_shrink); + pool->next_shrink = NULL; + spin_unlock(&zswap_pools_lock); + for (i = 0; i < ZSWAP_NR_ZPOOLS; i++) zpool_destroy_pool(pool->zpools[i]); kfree(pool); @@ -1081,7 +1236,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry, /* try to allocate swap cache page */ mpol = get_task_policy(current); page = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol, - NO_INTERLEAVE_INDEX, &page_was_allocated); + NO_INTERLEAVE_INDEX, &page_was_allocated, true); if (!page) { ret = -ENOMEM; goto fail; @@ -1152,7 +1307,6 @@ static int zswap_writeback_entry(struct zswap_entry *entry, /* start writeback */ __swap_writepage(page, &wbc); put_page(page); - zswap_written_back_pages++; return ret; @@ -1209,6 +1363,7 @@ bool zswap_store(struct folio *folio) struct scatterlist input, output; struct crypto_acomp_ctx *acomp_ctx; struct obj_cgroup *objcg = NULL; + struct mem_cgroup *memcg = NULL; struct zswap_pool *pool; struct zpool *zpool; unsigned int dlen = PAGE_SIZE; @@ -1240,15 +1395,15 @@ bool zswap_store(struct folio *folio) zswap_invalidate_entry(tree, dupentry); } spin_unlock(&tree->lock); - - /* - * XXX: zswap reclaim does not work with cgroups yet. Without a - * cgroup-aware entry LRU, we will push out entries system-wide based on - * local cgroup limits. 
- */ objcg = get_obj_cgroup_from_folio(folio); - if (objcg && !obj_cgroup_may_zswap(objcg)) - goto reject; + if (objcg && !obj_cgroup_may_zswap(objcg)) { + memcg = get_mem_cgroup_from_objcg(objcg); + if (shrink_memcg(memcg)) { + mem_cgroup_put(memcg); + goto reject; + } + mem_cgroup_put(memcg); + } /* reclaim space if needed */ if (zswap_is_full()) { @@ -1265,7 +1420,7 @@ bool zswap_store(struct folio *folio) } /* allocate entry */ - entry = zswap_entry_cache_alloc(GFP_KERNEL); + entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page)); if (!entry) { zswap_reject_kmemcache_fail++; goto reject; @@ -1292,6 +1447,15 @@ bool zswap_store(struct folio *folio) if (!entry->pool) goto freepage; + if (objcg) { + memcg = get_mem_cgroup_from_objcg(objcg); + if (memcg_list_lru_alloc(memcg, &entry->pool->list_lru, GFP_KERNEL)) { + mem_cgroup_put(memcg); + goto put_pool; + } + mem_cgroup_put(memcg); + } + /* compress */ acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx); @@ -1370,9 +1534,8 @@ bool zswap_store(struct folio *folio) zswap_invalidate_entry(tree, dupentry); } if (entry->length) { - spin_lock(&entry->pool->lru_lock); - list_add(&entry->lru, &entry->pool->lru); - spin_unlock(&entry->pool->lru_lock); + INIT_LIST_HEAD(&entry->lru); + zswap_lru_add(&entry->pool->list_lru, entry); } spin_unlock(&tree->lock); @@ -1385,6 +1548,7 @@ bool zswap_store(struct folio *folio) put_dstmem: mutex_unlock(acomp_ctx->mutex); +put_pool: zswap_pool_put(entry->pool); freepage: zswap_entry_cache_free(entry); @@ -1479,9 +1643,8 @@ bool zswap_load(struct folio *folio) zswap_invalidate_entry(tree, entry); folio_mark_dirty(folio); } else if (entry->length) { - spin_lock(&entry->pool->lru_lock); - list_move(&entry->lru, &entry->pool->lru); - spin_unlock(&entry->pool->lru_lock); + zswap_lru_del(&entry->pool->list_lru, entry); + zswap_lru_add(&entry->pool->list_lru, entry); } zswap_entry_put(tree, entry); spin_unlock(&tree->lock);
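As the comments in the shrink_worker() hunk above describe, global shrinking now walks the memcg hierarchy in a round-robin fashion instead of reclaiming from a single global LRU: offline memcgs are skipped, a wrapped walk or a failed reclaim counts as a failure, and the worker stops after a run of failures or once the pool can accept pages again. The following userspace model is an illustration only of that selection policy; the names, the retry limit value, and the "reclaim step" are simplified stand-ins, not kernel code:

/* Userspace model of the round-robin memcg selection policy (illustrative). */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define MAX_RECLAIM_RETRIES 5   /* illustrative value, not the kernel's */

struct model_memcg {
    const char *name;
    bool online;
    int reclaimable;            /* entries on this memcg's zswap LRU */
};

/* Persists across calls, like pool->next_shrink in the patch. */
static size_t next_shrink;

static struct model_memcg *memcg_iter(struct model_memcg *memcgs, size_t n)
{
    struct model_memcg *memcg = NULL;

    if (next_shrink < n)
        memcg = &memcgs[next_shrink];
    next_shrink = (next_shrink + 1) % (n + 1);  /* index n models "wrapped" */
    return memcg;                               /* NULL == full round trip */
}

int main(void)
{
    struct model_memcg memcgs[] = {
        { "A", true, 3 }, { "B", false, 2 }, { "C", true, 0 },
    };
    size_t n = sizeof(memcgs) / sizeof(memcgs[0]);
    int failures = 0, pool_used = 5, pool_limit = 3;

    while (pool_used > pool_limit) {            /* "!zswap_can_accept()" */
        struct model_memcg *memcg = memcg_iter(memcgs, n);

        if (!memcg || !memcg->online) {         /* wrapped, or offline memcg */
            if (++failures == MAX_RECLAIM_RETRIES)
                break;
            continue;
        }
        if (memcg->reclaimable) {               /* "shrink_memcg() succeeded" */
            memcg->reclaimable--;
            pool_used--;
            printf("reclaimed one entry from %s\n", memcg->name);
        } else if (++failures == MAX_RECLAIM_RETRIES) {
            break;
        }
    }
    return 0;
}

The real worker additionally serializes pool->next_shrink updates with zswap_pools_lock and holds an extra reference on the selected memcg across the reclaim step, as the comments in the diff above explain.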
From patchwork Thu Nov 30 19:40:21 2023
X-Patchwork-Submitter: Nhat Pham
X-Patchwork-Id: 13474958
From: Nhat Pham
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com, sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, muchun.song@linux.dev, chrisl@kernel.org, linux-mm@kvack.org, kernel-team@meta.com, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: [PATCH v8 4/6] mm: memcg: add per-memcg zswap writeback stat
Date: Thu, 30 Nov 2023 11:40:21 -0800
Message-Id: <20231130194023.4102148-5-nphamcs@gmail.com>
In-Reply-To: <20231130194023.4102148-1-nphamcs@gmail.com>
References: <20231130194023.4102148-1-nphamcs@gmail.com>

From: Domenico Cerasuolo

Since zswap now writes back pages from memcg-specific LRUs, we need a new stat to show the writeback count for each memcg.
Suggested-by: Nhat Pham Signed-off-by: Domenico Cerasuolo Signed-off-by: Nhat Pham --- include/linux/vm_event_item.h | 1 + mm/memcontrol.c | 1 + mm/vmstat.c | 1 + mm/zswap.c | 4 ++++ 4 files changed, 7 insertions(+) diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h index d1b847502f09..f4569ad98edf 100644 --- a/include/linux/vm_event_item.h +++ b/include/linux/vm_event_item.h @@ -142,6 +142,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT, #ifdef CONFIG_ZSWAP ZSWPIN, ZSWPOUT, + ZSWP_WB, #endif #ifdef CONFIG_X86 DIRECT_MAP_LEVEL2_SPLIT, diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 792ca21c5815..21d79249c8b4 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -703,6 +703,7 @@ static const unsigned int memcg_vm_event_stat[] = { #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP) ZSWPIN, ZSWPOUT, + ZSWP_WB, #endif #ifdef CONFIG_TRANSPARENT_HUGEPAGE THP_FAULT_ALLOC, diff --git a/mm/vmstat.c b/mm/vmstat.c index afa5a38fcc9c..2249f85e4a87 100644 --- a/mm/vmstat.c +++ b/mm/vmstat.c @@ -1401,6 +1401,7 @@ const char * const vmstat_text[] = { #ifdef CONFIG_ZSWAP "zswpin", "zswpout", + "zswp_wb", #endif #ifdef CONFIG_X86 "direct_map_level2_splits", diff --git a/mm/zswap.c b/mm/zswap.c index f323e45cbdc7..49b79393e472 100644 --- a/mm/zswap.c +++ b/mm/zswap.c @@ -760,6 +760,10 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o } zswap_written_back_pages++; + if (entry->objcg) + count_objcg_event(entry->objcg, ZSWP_WB); + + count_vm_event(ZSWP_WB); /* * Writeback started successfully, the page now belongs to the * swapcache. Drop the entry from zswap - unless invalidate already
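With this patch applied, the per-memcg counter shows up as the "zswp_wb" key in each cgroup's memory.stat (when CONFIG_MEMCG_KMEM and CONFIG_ZSWAP are enabled, per the memcontrol.c hunk above), alongside the global count in /proc/vmstat. A minimal userspace sketch for reading it, shown here as an illustration rather than as part of the patch; it assumes a cgroup v2 hierarchy and takes the cgroup directory as an argument:

#include <stdio.h>
#include <string.h>

static long read_zswp_wb(const char *cgroup_dir)
{
    char path[4096];
    char key[64];
    long value;
    FILE *f;

    /* memory.stat is a flat "key value" list in cgroup v2 */
    snprintf(path, sizeof(path), "%s/memory.stat", cgroup_dir);
    f = fopen(path, "r");
    if (!f)
        return -1;
    while (fscanf(f, "%63s %ld", key, &value) == 2) {
        if (!strcmp(key, "zswp_wb")) {
            fclose(f);
            return value;
        }
    }
    fclose(f);
    return -1;
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <cgroup v2 directory>\n", argv[0]);
        return 1;
    }
    printf("zswp_wb: %ld\n", read_zswp_wb(argv[1]));
    return 0;
}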
From patchwork Thu Nov 30 19:40:22 2023
X-Patchwork-Submitter: Nhat Pham
X-Patchwork-Id: 13474959
From: Nhat Pham
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com, sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, muchun.song@linux.dev, chrisl@kernel.org, linux-mm@kvack.org, kernel-team@meta.com, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: [PATCH v8 5/6] selftests: cgroup: update per-memcg zswap writeback selftest
Date: Thu, 30 Nov 2023 11:40:22 -0800
Message-Id: <20231130194023.4102148-6-nphamcs@gmail.com>
In-Reply-To: <20231130194023.4102148-1-nphamcs@gmail.com>
References: <20231130194023.4102148-1-nphamcs@gmail.com>

From: Domenico Cerasuolo

The memcg-zswap selftest is updated to adjust to the behavior change implemented by commit 87730b165089 ("zswap: make shrinking memcg-aware"), where zswap now performs writeback for a specific memcg.
Signed-off-by: Domenico Cerasuolo Signed-off-by: Nhat Pham Acked-by: Chris Li (Google) --- tools/testing/selftests/cgroup/test_zswap.c | 74 ++++++++++++++------- 1 file changed, 50 insertions(+), 24 deletions(-) diff --git a/tools/testing/selftests/cgroup/test_zswap.c b/tools/testing/selftests/cgroup/test_zswap.c index c99d2adaca3f..47fdaa146443 100644 --- a/tools/testing/selftests/cgroup/test_zswap.c +++ b/tools/testing/selftests/cgroup/test_zswap.c @@ -50,9 +50,9 @@ static int get_zswap_stored_pages(size_t *value) return read_int("/sys/kernel/debug/zswap/stored_pages", value); } -static int get_zswap_written_back_pages(size_t *value) +static int get_cg_wb_count(const char *cg) { - return read_int("/sys/kernel/debug/zswap/written_back_pages", value); + return cg_read_key_long(cg, "memory.stat", "zswp_wb"); } static long get_zswpout(const char *cgroup) @@ -73,6 +73,24 @@ static int allocate_bytes(const char *cgroup, void *arg) return 0; } +static char *setup_test_group_1M(const char *root, const char *name) +{ + char *group_name = cg_name(root, name); + + if (!group_name) + return NULL; + if (cg_create(group_name)) + goto fail; + if (cg_write(group_name, "memory.max", "1M")) { + cg_destroy(group_name); + goto fail; + } + return group_name; +fail: + free(group_name); + return NULL; +} + /* * Sanity test to check that pages are written into zswap. */ @@ -117,43 +135,51 @@ static int test_zswap_usage(const char *root) /* * When trying to store a memcg page in zswap, if the memcg hits its memory - * limit in zswap, writeback should not be triggered. - * - * This was fixed with commit 0bdf0efa180a("zswap: do not shrink if cgroup may - * not zswap"). Needs to be revised when a per memcg writeback mechanism is - * implemented. + * limit in zswap, writeback should affect only the zswapped pages of that + * memcg. 
*/ static int test_no_invasive_cgroup_shrink(const char *root) { - size_t written_back_before, written_back_after; int ret = KSFT_FAIL; - char *test_group; + size_t control_allocation_size = MB(10); + char *control_allocation = NULL, *wb_group = NULL, *control_group = NULL; /* Set up */ - test_group = cg_name(root, "no_shrink_test"); - if (!test_group) - goto out; - if (cg_create(test_group)) + wb_group = setup_test_group_1M(root, "per_memcg_wb_test1"); + if (!wb_group) + return KSFT_FAIL; + if (cg_write(wb_group, "memory.zswap.max", "10K")) goto out; - if (cg_write(test_group, "memory.max", "1M")) + control_group = setup_test_group_1M(root, "per_memcg_wb_test2"); + if (!control_group) goto out; - if (cg_write(test_group, "memory.zswap.max", "10K")) + + /* Push some control_group memory into zswap */ + if (cg_enter_current(control_group)) goto out; - if (get_zswap_written_back_pages(&written_back_before)) + control_allocation = malloc(control_allocation_size); + for (int i = 0; i < control_allocation_size; i += 4095) + control_allocation[i] = 'a'; + if (cg_read_key_long(control_group, "memory.stat", "zswapped") < 1) goto out; - /* Allocate 10x memory.max to push memory into zswap */ - if (cg_run(test_group, allocate_bytes, (void *)MB(10))) + /* Allocate 10x memory.max to push wb_group memory into zswap and trigger wb */ + if (cg_run(wb_group, allocate_bytes, (void *)MB(10))) goto out; - /* Verify that no writeback happened because of the memcg allocation */ - if (get_zswap_written_back_pages(&written_back_after)) - goto out; - if (written_back_after == written_back_before) + /* Verify that only zswapped memory from wb_group has been written back */ + if (get_cg_wb_count(wb_group) > 0 && get_cg_wb_count(control_group) == 0) ret = KSFT_PASS; out: - cg_destroy(test_group); - free(test_group); + cg_enter_current(root); + if (control_group) { + cg_destroy(control_group); + free(control_group); + } + cg_destroy(wb_group); + free(wb_group); + if (control_allocation) + free(control_allocation); return ret; }
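To summarize the updated test above: it creates two sibling cgroups capped at 1M, gives the writeback group a tight 10K memory.zswap.max, parks some control-group memory in zswap, then overflows the writeback group by a factor of ten. It passes only if the per-memcg zswp_wb counter rises for the writeback group while staying at zero for the control group. Assuming the usual kselftest flow, it can be run with something along the lines of "make -C tools/testing/selftests TARGETS=cgroup run_tests" (invocation shown for illustration; see the kselftest documentation for the exact command on your tree).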
From patchwork Thu Nov 30 19:40:23 2023
X-Patchwork-Submitter: Nhat Pham
X-Patchwork-Id: 13474960
From: Nhat Pham
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com, sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, muchun.song@linux.dev, chrisl@kernel.org, linux-mm@kvack.org, kernel-team@meta.com, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: [PATCH v8 6/6] zswap: shrinks zswap pool based on memory pressure
Date: Thu, 30 Nov 2023 11:40:23 -0800
Message-Id: <20231130194023.4102148-7-nphamcs@gmail.com>
In-Reply-To: <20231130194023.4102148-1-nphamcs@gmail.com>
References: <20231130194023.4102148-1-nphamcs@gmail.com>

Currently, we only shrink the zswap pool when the user-defined limit is hit. This means that if we set the limit too high, cold data that are unlikely to be used again will reside in the pool, wasting precious memory. It is hard to predict how much zswap space will be needed ahead of time, as this depends on the workload (specifically, on factors such as memory access patterns and the compressibility of the memory pages).

This patch implements a memcg- and NUMA-aware shrinker for zswap that is initiated when there is memory pressure. The shrinker does not have any parameter that must be tuned by the user, and can be opted in or out on a per-memcg basis.
Furthermore, to make it more robust for many workloads and prevent overshrinking (i.e. evicting warm pages that might be refaulted into memory), we build in the following heuristics:

* Estimate the number of warm pages residing in zswap, and attempt to protect this region of the zswap LRU.

* Scale the number of freeable objects by an estimate of the memory saving factor. The better zswap compresses the data, the fewer pages we will evict to swap (as we will otherwise incur IO for relatively small memory saving).

* During reclaim, if the shrinker encounters a page that is also being brought into memory, the shrinker will cautiously terminate its shrinking action, as this is a sign that it is touching the warmer region of the zswap LRU.

As a proof of concept, we ran the following synthetic benchmark: build the Linux kernel in a memory-limited cgroup, and allocate some cold data in tmpfs to see if the shrinker could write them out and improve the overall performance. Depending on the amount of cold data generated, we observe a 14% to 35% reduction in kernel CPU time used in the kernel builds.

Signed-off-by: Nhat Pham Acked-by: Johannes Weiner --- Documentation/admin-guide/mm/zswap.rst | 10 ++ include/linux/mmzone.h | 2 + include/linux/zswap.h | 25 +++- mm/Kconfig | 14 ++ mm/mmzone.c | 1 + mm/swap_state.c | 2 + mm/zswap.c | 185 ++++++++++++++++++++++++- 7 files changed, 233 insertions(+), 6 deletions(-) diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst index 45b98390e938..62fc244ec702 100644 --- a/Documentation/admin-guide/mm/zswap.rst +++ b/Documentation/admin-guide/mm/zswap.rst @@ -153,6 +153,16 @@ attribute, e. g.:: Setting this parameter to 100 will disable the hysteresis. +When there is a sizable amount of cold memory residing in the zswap pool, it +can be advantageous to proactively write these cold pages to swap and reclaim +the memory for other use cases. By default, the zswap shrinker is disabled. +User can enable it as follows: + + echo Y > /sys/module/zswap/parameters/shrinker_enabled + +This can be enabled at the boot time if ``CONFIG_ZSWAP_SHRINKER_DEFAULT_ON`` is +selected. + A debugfs interface is provided for various statistic about pool size, number of pages stored, same-value filled pages and various counters for the reasons pages are rejected. diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index 7b1816450bfc..b23bc5390240 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -22,6 +22,7 @@ #include #include #include +#include #include /* Free memory management - zoned buddy allocator. */ @@ -641,6 +642,7 @@ struct lruvec { #ifdef CONFIG_MEMCG struct pglist_data *pgdat; #endif + struct zswap_lruvec_state zswap_lruvec_state; }; /* Isolate for asynchronous migration */ diff --git a/include/linux/zswap.h b/include/linux/zswap.h index e571e393669b..08c240e16a01 100644 --- a/include/linux/zswap.h +++ b/include/linux/zswap.h @@ -5,20 +5,40 @@ #include #include +struct lruvec; + extern u64 zswap_pool_total_size; extern atomic_t zswap_stored_pages; #ifdef CONFIG_ZSWAP +struct zswap_lruvec_state { + /* + * Number of pages in zswap that should be protected from the shrinker. + * This number is an estimate of the following counts: + * + * a) Recent page faults. + * b) Recent insertion to the zswap LRU. This includes new zswap stores, + * as well as recent zswap LRU rotations. + * + * These pages are likely to be warm, and might incur IO if they are written + * to swap.
+ */ + atomic_long_t nr_zswap_protected; +}; + bool zswap_store(struct folio *folio); bool zswap_load(struct folio *folio); void zswap_invalidate(int type, pgoff_t offset); void zswap_swapon(int type); void zswap_swapoff(int type); void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg); - +void zswap_lruvec_state_init(struct lruvec *lruvec); +void zswap_page_swapin(struct page *page); #else +struct zswap_lruvec_state {}; + static inline bool zswap_store(struct folio *folio) { return false; @@ -33,7 +53,8 @@ static inline void zswap_invalidate(int type, pgoff_t offset) {} static inline void zswap_swapon(int type) {} static inline void zswap_swapoff(int type) {} static inline void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg) {} - +static inline void zswap_lruvec_state_init(struct lruvec *lruvec) {} +static inline void zswap_page_swapin(struct page *page) {} #endif #endif /* _LINUX_ZSWAP_H */ diff --git a/mm/Kconfig b/mm/Kconfig index 57cd378c73d6..ca87cdb72f11 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -61,6 +61,20 @@ config ZSWAP_EXCLUSIVE_LOADS_DEFAULT_ON The cost is that if the page was never dirtied and needs to be swapped out again, it will be re-compressed. +config ZSWAP_SHRINKER_DEFAULT_ON + bool "Shrink the zswap pool on memory pressure" + depends on ZSWAP + default n + help + If selected, the zswap shrinker will be enabled, and the pages + stored in the zswap pool will become available for reclaim (i.e + written back to the backing swap device) on memory pressure. + + This means that zswap writeback could happen even if the pool is + not yet full, or the cgroup zswap limit has not been reached, + reducing the chance that cold pages will reside in the zswap pool + and consume memory indefinitely. + choice prompt "Default compressor" depends on ZSWAP diff --git a/mm/mmzone.c b/mm/mmzone.c index b594d3f268fe..c01896eca736 100644 --- a/mm/mmzone.c +++ b/mm/mmzone.c @@ -78,6 +78,7 @@ void lruvec_init(struct lruvec *lruvec) memset(lruvec, 0, sizeof(struct lruvec)); spin_lock_init(&lruvec->lru_lock); + zswap_lruvec_state_init(lruvec); for_each_lru(lru) INIT_LIST_HEAD(&lruvec->lists[lru]); diff --git a/mm/swap_state.c b/mm/swap_state.c index 6c84236382f3..c597cec606e4 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -687,6 +687,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask, &page_allocated, false); if (unlikely(page_allocated)) swap_readpage(page, false, NULL); + zswap_page_swapin(page); return page; } @@ -862,6 +863,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask, &page_allocated, false); if (unlikely(page_allocated)) swap_readpage(page, false, NULL); + zswap_page_swapin(page); return page; } diff --git a/mm/zswap.c b/mm/zswap.c index 49b79393e472..0f086ffd7b6a 100644 --- a/mm/zswap.c +++ b/mm/zswap.c @@ -148,6 +148,11 @@ module_param_named(exclusive_loads, zswap_exclusive_loads_enabled, bool, 0644); /* Number of zpools in zswap_pool (empirically determined for scalability) */ #define ZSWAP_NR_ZPOOLS 32 +/* Enable/disable memory pressure-based shrinker. 
*/ +static bool zswap_shrinker_enabled = IS_ENABLED( + CONFIG_ZSWAP_SHRINKER_DEFAULT_ON); +module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644); + /********************************* * data structures **********************************/ @@ -177,6 +182,8 @@ struct zswap_pool { char tfm_name[CRYPTO_MAX_ALG_NAME]; struct list_lru list_lru; struct mem_cgroup *next_shrink; + struct shrinker *shrinker; + atomic_t nr_stored; }; /* @@ -275,17 +282,26 @@ static bool zswap_can_accept(void) DIV_ROUND_UP(zswap_pool_total_size, PAGE_SIZE); } +static u64 get_zswap_pool_size(struct zswap_pool *pool) +{ + u64 pool_size = 0; + int i; + + for (i = 0; i < ZSWAP_NR_ZPOOLS; i++) + pool_size += zpool_get_total_size(pool->zpools[i]); + + return pool_size; +} + static void zswap_update_total_size(void) { struct zswap_pool *pool; u64 total = 0; - int i; rcu_read_lock(); list_for_each_entry_rcu(pool, &zswap_pools, list) - for (i = 0; i < ZSWAP_NR_ZPOOLS; i++) - total += zpool_get_total_size(pool->zpools[i]); + total += get_zswap_pool_size(pool); rcu_read_unlock(); @@ -344,13 +360,34 @@ static void zswap_entry_cache_free(struct zswap_entry *entry) kmem_cache_free(zswap_entry_cache, entry); } +/********************************* +* zswap lruvec functions +**********************************/ +void zswap_lruvec_state_init(struct lruvec *lruvec) +{ + atomic_long_set(&lruvec->zswap_lruvec_state.nr_zswap_protected, 0); +} + +void zswap_page_swapin(struct page *page) +{ + struct lruvec *lruvec; + + if (page) { + lruvec = folio_lruvec(page_folio(page)); + atomic_long_inc(&lruvec->zswap_lruvec_state.nr_zswap_protected); + } +} + /********************************* * lru functions **********************************/ static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry) { + atomic_long_t *nr_zswap_protected; + unsigned long lru_size, old, new; int nid = entry_to_nid(entry); struct mem_cgroup *memcg; + struct lruvec *lruvec; /* * Note that it is safe to use rcu_read_lock() here, even in the face of @@ -368,6 +405,19 @@ static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry) memcg = mem_cgroup_from_entry(entry); /* will always succeed */ list_lru_add(list_lru, &entry->lru, nid, memcg); + + /* Update the protection area */ + lru_size = list_lru_count_one(list_lru, nid, memcg); + lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid)); + nr_zswap_protected = &lruvec->zswap_lruvec_state.nr_zswap_protected; + old = atomic_long_inc_return(nr_zswap_protected); + /* + * Decay to avoid overflow and adapt to changing workloads. + * This is based on LRU reclaim cost decaying heuristics. + */ + do { + new = old > lru_size / 4 ? old / 2 : old; + } while (!atomic_long_try_cmpxchg(nr_zswap_protected, &old, new)); rcu_read_unlock(); } @@ -389,6 +439,7 @@ static void zswap_lru_putback(struct list_lru *list_lru, int nid = entry_to_nid(entry); spinlock_t *lock = &list_lru->node[nid].lock; struct mem_cgroup *memcg; + struct lruvec *lruvec; rcu_read_lock(); memcg = mem_cgroup_from_entry(entry); @@ -396,6 +447,10 @@ static void zswap_lru_putback(struct list_lru *list_lru, /* we cannot use list_lru_add here, because it increments node's lru count */ list_lru_putback(list_lru, &entry->lru, nid, memcg); spin_unlock(lock); + + lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry_to_nid(entry))); + /* increment the protection area to account for the LRU rotation. 
*/ + atomic_long_inc(&lruvec->zswap_lruvec_state.nr_zswap_protected); rcu_read_unlock(); } @@ -485,6 +540,7 @@ static void zswap_free_entry(struct zswap_entry *entry) else { zswap_lru_del(&entry->pool->list_lru, entry); zpool_free(zswap_find_zpool(entry), entry->handle); + atomic_dec(&entry->pool->nr_stored); zswap_pool_put(entry->pool); } zswap_entry_cache_free(entry); @@ -526,6 +582,102 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root, return entry; } +/********************************* +* shrinker functions +**********************************/ +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l, + spinlock_t *lock, void *arg); + +static unsigned long zswap_shrinker_scan(struct shrinker *shrinker, + struct shrink_control *sc) +{ + struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid)); + unsigned long shrink_ret, nr_protected, lru_size; + struct zswap_pool *pool = shrinker->private_data; + bool encountered_page_in_swapcache = false; + + nr_protected = + atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected); + lru_size = list_lru_shrink_count(&pool->list_lru, sc); + + /* + * Abort if the shrinker is disabled or if we are shrinking into the + * protected region. + * + * This short-circuiting is necessary because if we have too many multiple + * concurrent reclaimers getting the freeable zswap object counts at the + * same time (before any of them made reasonable progress), the total + * number of reclaimed objects might be more than the number of unprotected + * objects (i.e the reclaimers will reclaim into the protected area of the + * zswap LRU). + */ + if (!zswap_shrinker_enabled || nr_protected >= lru_size - sc->nr_to_scan) { + sc->nr_scanned = 0; + return SHRINK_STOP; + } + + shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb, + &encountered_page_in_swapcache); + + if (encountered_page_in_swapcache) + return SHRINK_STOP; + + return shrink_ret ? shrink_ret : SHRINK_STOP; +} + +static unsigned long zswap_shrinker_count(struct shrinker *shrinker, + struct shrink_control *sc) +{ + struct zswap_pool *pool = shrinker->private_data; + struct mem_cgroup *memcg = sc->memcg; + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid)); + unsigned long nr_backing, nr_stored, nr_freeable, nr_protected; + +#ifdef CONFIG_MEMCG_KMEM + cgroup_rstat_flush(memcg->css.cgroup); + nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT; + nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED); +#else + /* use pool stats instead of memcg stats */ + nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT; + nr_stored = atomic_read(&pool->nr_stored); +#endif + + if (!zswap_shrinker_enabled || !nr_stored) + return 0; + + nr_protected = + atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected); + nr_freeable = list_lru_shrink_count(&pool->list_lru, sc); + /* + * Subtract the lru size by an estimate of the number of pages + * that should be protected. + */ + nr_freeable = nr_freeable > nr_protected ? nr_freeable - nr_protected : 0; + + /* + * Scale the number of freeable pages by the memory saving factor. + * This ensures that the better zswap compresses memory, the fewer + * pages we will evict to swap (as it will otherwise incur IO for + * relatively small memory saving). 
+ */ + return mult_frac(nr_freeable, nr_backing, nr_stored); +} + +static void zswap_alloc_shrinker(struct zswap_pool *pool) +{ + pool->shrinker = + shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap"); + if (!pool->shrinker) + return; + + pool->shrinker->private_data = pool; + pool->shrinker->scan_objects = zswap_shrinker_scan; + pool->shrinker->count_objects = zswap_shrinker_count; + pool->shrinker->batch = 0; + pool->shrinker->seeks = DEFAULT_SEEKS; +} + /********************************* * per-cpu code **********************************/ @@ -721,6 +873,7 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o spinlock_t *lock, void *arg) { struct zswap_entry *entry = container_of(item, struct zswap_entry, lru); + bool *encountered_page_in_swapcache = (bool *)arg; struct zswap_tree *tree; pgoff_t swpoffset; enum lru_status ret = LRU_REMOVED_RETRY; @@ -756,6 +909,17 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o zswap_reject_reclaim_fail++; zswap_lru_putback(&entry->pool->list_lru, entry); ret = LRU_RETRY; + + /* + * Encountering a page already in swap cache is a sign that we are shrinking + * into the warmer region. We should terminate shrinking (if we're in the dynamic + * shrinker context). + */ + if (writeback_result == -EEXIST && encountered_page_in_swapcache) { + ret = LRU_SKIP; + *encountered_page_in_swapcache = true; + } + goto put_unlock; } zswap_written_back_pages++; @@ -913,6 +1077,11 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor) &pool->node); if (ret) goto error; + + zswap_alloc_shrinker(pool); + if (!pool->shrinker) + goto error; + pr_debug("using %s compressor\n", pool->tfm_name); /* being the current pool takes 1 ref; this func expects the @@ -920,13 +1089,19 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor) */ kref_init(&pool->kref); INIT_LIST_HEAD(&pool->list); - list_lru_init_memcg(&pool->list_lru, NULL); + if (list_lru_init_memcg(&pool->list_lru, pool->shrinker)) + goto lru_fail; + shrinker_register(pool->shrinker); INIT_WORK(&pool->shrink_work, shrink_worker); + atomic_set(&pool->nr_stored, 0); zswap_pool_debug("created", pool); return pool; +lru_fail: + list_lru_destroy(&pool->list_lru); + shrinker_free(pool->shrinker); error: if (pool->acomp_ctx) free_percpu(pool->acomp_ctx); @@ -984,6 +1159,7 @@ static void zswap_pool_destroy(struct zswap_pool *pool) zswap_pool_debug("destroying", pool); + shrinker_free(pool->shrinker); cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node); free_percpu(pool->acomp_ctx); list_lru_destroy(&pool->list_lru); @@ -1540,6 +1716,7 @@ bool zswap_store(struct folio *folio) if (entry->length) { INIT_LIST_HEAD(&entry->lru); zswap_lru_add(&entry->pool->list_lru, entry); + atomic_inc(&entry->pool->nr_stored); } spin_unlock(&tree->lock);