From patchwork Tue Oct 17 23:21:48 2023
From: Nhat Pham <nphamcs@gmail.com>
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com,
 sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com,
 mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
 muchun.song@linux.dev, linux-mm@kvack.org, kernel-team@meta.com,
 linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: [PATCH v3 1/5] mm: list_lru: allow external numa node and cgroup tracking
Date: Tue, 17 Oct 2023 16:21:48 -0700
Message-Id: <20231017232152.2605440-2-nphamcs@gmail.com>
In-Reply-To: <20231017232152.2605440-1-nphamcs@gmail.com>
References: <20231017232152.2605440-1-nphamcs@gmail.com>
The list_lru interface is based on the assumption that objects are
allocated on the correct node/memcg. This change introduces the ability
to explicitly specify the NUMA node and memcg when adding and removing
objects, so that users of list_lru can track the node/memcg of items
outside of the list_lru, as in zswap, where allocations can be made by
kswapd for data that is charged to a different cgroup.

Signed-off-by: Nhat Pham
---
 include/linux/list_lru.h | 38 +++++++++++++++++++++++++++++++++++
 mm/list_lru.c            | 43 +++++++++++++++++++++++++++++++++++-----
 2 files changed, 76 insertions(+), 5 deletions(-)

diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index b35968ee9fb5..0f5f39cacbbb 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -89,6 +89,24 @@ void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *paren
  */
 bool list_lru_add(struct list_lru *lru, struct list_head *item);
 
+/**
+ * __list_lru_add: add an element to a specific sublist.
+ * @list_lru: the lru pointer
+ * @item: the item to be added.
+ * @memcg: the cgroup of the sublist to add the item to.
+ * @nid: the node id of the sublist to add the item to.
+ *
+ * This function is similar to list_lru_add(), but it allows the caller to
+ * specify the sublist to which the item should be added. This can be useful
+ * when the list_head node is not necessarily in the same cgroup and NUMA node
+ * as the data it represents, such as zswap, where the list_head node could be
+ * from kswapd and the data from a different cgroup altogether.
+ *
+ * Return value: true if the list was updated, false otherwise
+ */
+bool __list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
+        struct mem_cgroup *memcg);
+
 /**
  * list_lru_del: delete an element to the lru list
  * @list_lru: the lru pointer
@@ -102,6 +120,18 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item);
  */
 bool list_lru_del(struct list_lru *lru, struct list_head *item);
 
+/**
+ * __list_lru_del: delete an element from a specific sublist.
+ * @list_lru: the lru pointer
+ * @item: the item to be deleted.
+ * @memcg: the cgroup of the sublist to delete the item from.
+ * @nid: the node id of the sublist to delete the item from.
+ *
+ * Return value: true if the list was updated, false otherwise.
+ */
+bool __list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
+        struct mem_cgroup *memcg);
+
 /**
  * list_lru_count_one: return the number of objects currently held by @lru
  * @lru: the lru pointer.
@@ -136,6 +166,14 @@ static inline unsigned long list_lru_count(struct list_lru *lru)
 void list_lru_isolate(struct list_lru_one *list, struct list_head *item);
 void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
         struct list_head *head);
+/*
+ * list_lru_putback: undo list_lru_isolate.
+ *
+ * Since we might have dropped the LRU lock in between, recompute list_lru_one
+ * from the node's id and memcg.
+ */
+void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid,
+        struct mem_cgroup *memcg);
 
 typedef enum lru_status (*list_lru_walk_cb)(struct list_head *item,
         struct list_lru_one *list, spinlock_t *lock, void *cb_arg);
diff --git a/mm/list_lru.c b/mm/list_lru.c
index a05e5bef3b40..63b75163c6ad 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -119,13 +119,22 @@ list_lru_from_kmem(struct list_lru *lru, int nid, void *ptr,
 bool list_lru_add(struct list_lru *lru, struct list_head *item)
 {
     int nid = page_to_nid(virt_to_page(item));
+    struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
+        mem_cgroup_from_slab_obj(item) : NULL;
+
+    return __list_lru_add(lru, item, nid, memcg);
+}
+EXPORT_SYMBOL_GPL(list_lru_add);
+
+bool __list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
+        struct mem_cgroup *memcg)
+{
     struct list_lru_node *nlru = &lru->node[nid];
-    struct mem_cgroup *memcg;
     struct list_lru_one *l;
 
     spin_lock(&nlru->lock);
     if (list_empty(item)) {
-        l = list_lru_from_kmem(lru, nid, item, &memcg);
+        l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
         list_add_tail(item, &l->list);
         /* Set shrinker bit if the first element was added */
         if (!l->nr_items++)
@@ -138,17 +147,27 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item)
     spin_unlock(&nlru->lock);
     return false;
 }
-EXPORT_SYMBOL_GPL(list_lru_add);
+EXPORT_SYMBOL_GPL(__list_lru_add);
 
 bool list_lru_del(struct list_lru *lru, struct list_head *item)
 {
     int nid = page_to_nid(virt_to_page(item));
+    struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
+        mem_cgroup_from_slab_obj(item) : NULL;
+
+    return __list_lru_del(lru, item, nid, memcg);
+}
+EXPORT_SYMBOL_GPL(list_lru_del);
+
+bool __list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
+        struct mem_cgroup *memcg)
+{
     struct list_lru_node *nlru = &lru->node[nid];
     struct list_lru_one *l;
 
     spin_lock(&nlru->lock);
     if (!list_empty(item)) {
-        l = list_lru_from_kmem(lru, nid, item, NULL);
+        l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
         list_del_init(item);
         l->nr_items--;
         nlru->nr_items--;
@@ -158,7 +177,7 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item)
     spin_unlock(&nlru->lock);
     return false;
 }
-EXPORT_SYMBOL_GPL(list_lru_del);
+EXPORT_SYMBOL_GPL(__list_lru_del);
 
 void list_lru_isolate(struct list_lru_one *list, struct list_head *item)
 {
@@ -175,6 +194,20 @@ void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
 }
 EXPORT_SYMBOL_GPL(list_lru_isolate_move);
 
+void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid,
+        struct mem_cgroup *memcg)
+{
+    struct list_lru_one *list =
+        list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
+
+    if (list_empty(item)) {
+        list_add_tail(item, &list->list);
+        if (!list->nr_items++)
+            set_shrinker_bit(memcg, nid, lru_shrinker_id(lru));
+    }
+}
+EXPORT_SYMBOL_GPL(list_lru_putback);
+
 unsigned long list_lru_count_one(struct list_lru *lru, int nid,
         struct mem_cgroup *memcg)
 {
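As an illustration of the new interface (not part of the patch; the
struct and helper names below are hypothetical), a caller that knows
the node and memcg of the data it tracks could use the variants like
this:

    /*
     * Sketch: track an object on the sublist for a caller-chosen
     * node/memcg pair, rather than the one the list_head itself was
     * allocated on.
     */
    struct my_object {
        struct list_head lru;    /* linkage into the list_lru */
    };

    static void my_track(struct list_lru *lru, struct my_object *obj,
                         int nid, struct mem_cgroup *memcg)
    {
        INIT_LIST_HEAD(&obj->lru);
        /* nid/memcg describe the data, not the obj allocation itself */
        __list_lru_add(lru, &obj->lru, nid, memcg);
    }

    static void my_untrack(struct list_lru *lru, struct my_object *obj,
                           int nid, struct mem_cgroup *memcg)
    {
        /* must pass the same nid/memcg that were used at add time */
        __list_lru_del(lru, &obj->lru, nid, memcg);
    }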
From patchwork Tue Oct 17 23:21:49 2023
From: Nhat Pham <nphamcs@gmail.com>
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com,
 sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com,
 mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
 muchun.song@linux.dev, linux-mm@kvack.org, kernel-team@meta.com,
 linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: [PATCH v3 2/5] zswap: make shrinking memcg-aware
Date: Tue, 17 Oct 2023 16:21:49 -0700
Message-Id: <20231017232152.2605440-3-nphamcs@gmail.com>
In-Reply-To: <20231017232152.2605440-1-nphamcs@gmail.com>
References: <20231017232152.2605440-1-nphamcs@gmail.com>
From: Domenico Cerasuolo

Currently, we only have a single global LRU for zswap. This makes it
impossible to perform workload-specific shrinking: a memcg cannot
determine which pages in the pool it owns, and often ends up writing
back pages from other memcgs. This issue has been previously observed
in practice and mitigated by simply disabling memcg-initiated
shrinking:

https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u

This patch fully resolves the issue by replacing the global zswap LRU
with memcg- and NUMA-specific LRUs, and modifying the reclaim logic:

a) When a store attempt hits a memcg limit, it now triggers a
   synchronous reclaim attempt that, if successful, allows the new
   hotter page to be accepted by zswap.
b) If the store attempt instead hits the global zswap limit, it will
   trigger an asynchronous reclaim attempt, in which a memcg is
   selected for reclaim in a round-robin-like fashion.

Signed-off-by: Domenico Cerasuolo
Co-developed-by: Nhat Pham
Signed-off-by: Nhat Pham
---
 include/linux/memcontrol.h |   5 ++
 mm/swap.h                  |   3 +-
 mm/swap_state.c            |  17 +++-
 mm/zswap.c                 | 179 ++++++++++++++++++++++++++-----------
 4 files changed, 147 insertions(+), 57 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 031102ac9311..3de10fabea0f 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1179,6 +1179,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
     return NULL;
 }
 
+static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
+{
+    return NULL;
+}
+
 static inline bool folio_memcg_kmem(struct folio *folio)
 {
     return false;
diff --git a/mm/swap.h b/mm/swap.h
index 8a3c7a0ace4f..bbd6ce661a20 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -50,7 +50,8 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
         struct vm_area_struct *vma,
         unsigned long addr,
-        bool *new_page_allocated);
+        bool *new_page_allocated,
+        bool fail_if_exists);
 struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
         struct vm_fault *vmf);
 struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
diff --git a/mm/swap_state.c b/mm/swap_state.c
index b3b14bd0dd64..0356df52b06a 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -411,7 +411,7 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
 
 struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
         struct vm_area_struct *vma, unsigned long addr,
-        bool *new_page_allocated)
+        bool *new_page_allocated, bool fail_if_exists)
 {
     struct swap_info_struct *si;
     struct folio *folio;
@@ -468,6 +468,15 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
         if (err != -EEXIST)
             goto fail_put_swap;
 
+        /*
+         * This check guards against a state that happens if a call
+         * to __read_swap_cache_async triggers a reclaim, if the
+         * reclaimer (zswap's writeback as of now) then decides to
+         * reclaim that same entry, then the subsequent call to
+         * __read_swap_cache_async would get stuck in this loop.
+         */
+        if (fail_if_exists && err == -EEXIST)
+            goto fail_put_swap;
         /*
          * We might race against __delete_from_swap_cache(), and
          * stumble across a swap_map entry whose SWAP_HAS_CACHE
@@ -530,7 +539,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 {
     bool page_was_allocated;
     struct page *retpage = __read_swap_cache_async(entry, gfp_mask,
-            vma, addr, &page_was_allocated);
+            vma, addr, &page_was_allocated, false);
 
     if (page_was_allocated)
         swap_readpage(retpage, false, plug);
@@ -649,7 +658,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
         /* Ok, do the async read-ahead now */
         page = __read_swap_cache_async(
             swp_entry(swp_type(entry), offset),
-            gfp_mask, vma, addr, &page_allocated);
+            gfp_mask, vma, addr, &page_allocated, false);
         if (!page)
             continue;
         if (page_allocated) {
@@ -815,7 +824,7 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
         pte_unmap(pte);
         pte = NULL;
         page = __read_swap_cache_async(entry, gfp_mask, vma,
-                addr, &page_allocated);
+                addr, &page_allocated, false);
         if (!page)
             continue;
         if (page_allocated) {
diff --git a/mm/zswap.c b/mm/zswap.c
index 083c693602b8..d2989ad11814 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -34,6 +34,7 @@
 #include
 #include
 #include
+#include
 
 #include "swap.h"
 #include "internal.h"
@@ -171,8 +172,8 @@ struct zswap_pool {
     struct work_struct shrink_work;
     struct hlist_node node;
     char tfm_name[CRYPTO_MAX_ALG_NAME];
-    struct list_head lru;
-    spinlock_t lru_lock;
+    struct list_lru list_lru;
+    struct mem_cgroup *next_shrink;
 };
 
 /*
@@ -288,15 +289,25 @@ static void zswap_update_total_size(void)
     zswap_pool_total_size = total;
 }
 
+static inline struct mem_cgroup *get_mem_cgroup_from_entry(struct zswap_entry *entry)
+{
+    return entry->objcg ? get_mem_cgroup_from_objcg(entry->objcg) : NULL;
+}
+
+static inline int entry_to_nid(struct zswap_entry *entry)
+{
+    return page_to_nid(virt_to_page(entry));
+}
+
 /*********************************
 * zswap entry functions
 **********************************/
 static struct kmem_cache *zswap_entry_cache;
 
-static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
+static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp, int nid)
 {
     struct zswap_entry *entry;
-    entry = kmem_cache_alloc(zswap_entry_cache, gfp);
+    entry = kmem_cache_alloc_node(zswap_entry_cache, gfp, nid);
     if (!entry)
         return NULL;
     entry->refcount = 1;
@@ -309,6 +320,27 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
     kmem_cache_free(zswap_entry_cache, entry);
 }
 
+/*********************************
+* lru functions
+**********************************/
+static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
+{
+    struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
+    bool added = __list_lru_add(list_lru, &entry->lru, entry_to_nid(entry), memcg);
+
+    mem_cgroup_put(memcg);
+    return added;
+}
+
+static bool zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry)
+{
+    struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
+    bool removed = __list_lru_del(list_lru, &entry->lru, entry_to_nid(entry), memcg);
+
+    mem_cgroup_put(memcg);
+    return removed;
+}
+
 /*********************************
 * rbtree functions
 **********************************/
@@ -393,9 +425,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
     if (!entry->length)
         atomic_dec(&zswap_same_filled_pages);
     else {
-        spin_lock(&entry->pool->lru_lock);
-        list_del(&entry->lru);
-        spin_unlock(&entry->pool->lru_lock);
+        zswap_lru_del(&entry->pool->list_lru, entry);
         zpool_free(zswap_find_zpool(entry), entry->handle);
         zswap_pool_put(entry->pool);
     }
@@ -629,21 +659,16 @@ static void zswap_invalidate_entry(struct zswap_tree *tree,
     zswap_entry_put(tree, entry);
 }
 
-static int zswap_reclaim_entry(struct zswap_pool *pool)
+static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
+        spinlock_t *lock, void *arg)
 {
-    struct zswap_entry *entry;
+    struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
+    struct mem_cgroup *memcg;
     struct zswap_tree *tree;
     pgoff_t swpoffset;
-    int ret;
+    enum lru_status ret = LRU_REMOVED_RETRY;
+    int writeback_result;
 
-    /* Get an entry off the LRU */
-    spin_lock(&pool->lru_lock);
-    if (list_empty(&pool->lru)) {
-        spin_unlock(&pool->lru_lock);
-        return -EINVAL;
-    }
-    entry = list_last_entry(&pool->lru, struct zswap_entry, lru);
-    list_del_init(&entry->lru);
     /*
      * Once the lru lock is dropped, the entry might get freed. The
      * swpoffset is copied to the stack, and entry isn't deref'd again
@@ -651,28 +676,33 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
      */
     swpoffset = swp_offset(entry->swpentry);
     tree = zswap_trees[swp_type(entry->swpentry)];
-    spin_unlock(&pool->lru_lock);
+    list_lru_isolate(l, item);
+    spin_unlock(lock);
 
     /* Check for invalidate() race */
     spin_lock(&tree->lock);
     if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
-        ret = -EAGAIN;
         goto unlock;
     }
     /* Hold a reference to prevent a free during writeback */
     zswap_entry_get(entry);
     spin_unlock(&tree->lock);
 
-    ret = zswap_writeback_entry(entry, tree);
+    writeback_result = zswap_writeback_entry(entry, tree);
 
     spin_lock(&tree->lock);
-    if (ret) {
-        /* Writeback failed, put entry back on LRU */
-        spin_lock(&pool->lru_lock);
-        list_move(&entry->lru, &pool->lru);
-        spin_unlock(&pool->lru_lock);
+    if (writeback_result) {
+        zswap_reject_reclaim_fail++;
+        memcg = get_mem_cgroup_from_entry(entry);
+        spin_lock(lock);
+        /* we cannot use zswap_lru_add here, because it increments node's lru count */
+        list_lru_putback(&entry->pool->list_lru, item, entry_to_nid(entry), memcg);
+        spin_unlock(lock);
+        mem_cgroup_put(memcg);
+        ret = LRU_RETRY;
         goto put_unlock;
     }
+    zswap_written_back_pages++;
 
     /*
      * Writeback started successfully, the page now belongs to the
@@ -686,7 +716,36 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
     zswap_entry_put(tree, entry);
 unlock:
     spin_unlock(&tree->lock);
-    return ret ? -EAGAIN : 0;
+    spin_lock(lock);
+    return ret;
+}
+
+static int shrink_memcg(struct mem_cgroup *memcg)
+{
+    struct zswap_pool *pool;
+    int nid, shrunk = 0;
+
+    pool = zswap_pool_current_get();
+    if (!pool)
+        return -EINVAL;
+
+    /*
+     * Skip zombies because their LRUs are reparented and we would be
+     * reclaiming from the parent instead of the dead memcgroup.
+     */
+    if (memcg && !mem_cgroup_online(memcg))
+        goto out;
+
+    for_each_node_state(nid, N_NORMAL_MEMORY) {
+        unsigned long nr_to_walk = 1;
+
+        if (list_lru_walk_one(&pool->list_lru, nid, memcg, &shrink_memcg_cb,
+                NULL, &nr_to_walk))
+            shrunk++;
+    }
+out:
+    zswap_pool_put(pool);
+    return shrunk ? 0 : -EAGAIN;
 }
 
 static void shrink_worker(struct work_struct *w)
@@ -695,10 +754,13 @@ static void shrink_worker(struct work_struct *w)
             shrink_work);
     int ret, failures = 0;
 
+    /* global reclaim will select cgroup in a round-robin fashion. */
     do {
-        ret = zswap_reclaim_entry(pool);
+        pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
+
+        ret = shrink_memcg(pool->next_shrink);
+
         if (ret) {
-            zswap_reject_reclaim_fail++;
             if (ret != -EAGAIN)
                 break;
             if (++failures == MAX_RECLAIM_RETRIES)
@@ -764,8 +826,7 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
      */
     kref_init(&pool->kref);
     INIT_LIST_HEAD(&pool->list);
-    INIT_LIST_HEAD(&pool->lru);
-    spin_lock_init(&pool->lru_lock);
+    list_lru_init_memcg(&pool->list_lru, NULL);
     INIT_WORK(&pool->shrink_work, shrink_worker);
 
     zswap_pool_debug("created", pool);
@@ -831,6 +892,9 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
 
     cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
     free_percpu(pool->acomp_ctx);
+    list_lru_destroy(&pool->list_lru);
+    if (pool->next_shrink)
+        mem_cgroup_put(pool->next_shrink);
     for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
         zpool_destroy_pool(pool->zpools[i]);
     kfree(pool);
@@ -1076,7 +1140,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 
     /* try to allocate swap cache page */
     page = __read_swap_cache_async(swpentry, GFP_KERNEL, NULL, 0,
-            &page_was_allocated);
+            &page_was_allocated, true);
     if (!page) {
         ret = -ENOMEM;
         goto fail;
@@ -1142,7 +1206,6 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
     /* start writeback */
     __swap_writepage(page, &wbc);
     put_page(page);
-    zswap_written_back_pages++;
 
     return ret;
 
@@ -1199,8 +1262,10 @@ bool zswap_store(struct folio *folio)
     struct scatterlist input, output;
     struct crypto_acomp_ctx *acomp_ctx;
     struct obj_cgroup *objcg = NULL;
+    struct mem_cgroup *memcg = NULL;
     struct zswap_pool *pool;
     struct zpool *zpool;
+    int lru_alloc_ret;
     unsigned int dlen = PAGE_SIZE;
     unsigned long handle, value;
     char *buf;
@@ -1230,15 +1295,15 @@ bool zswap_store(struct folio *folio)
         zswap_invalidate_entry(tree, dupentry);
     }
     spin_unlock(&tree->lock);
-
-    /*
-     * XXX: zswap reclaim does not work with cgroups yet. Without a
-     * cgroup-aware entry LRU, we will push out entries system-wide based on
-     * local cgroup limits.
-     */
     objcg = get_obj_cgroup_from_folio(folio);
-    if (objcg && !obj_cgroup_may_zswap(objcg))
-        goto reject;
+    if (objcg && !obj_cgroup_may_zswap(objcg)) {
+        memcg = get_mem_cgroup_from_objcg(objcg);
+        if (shrink_memcg(memcg)) {
+            mem_cgroup_put(memcg);
+            goto reject;
+        }
+        mem_cgroup_put(memcg);
+    }
 
     /* reclaim space if needed */
     if (zswap_is_full()) {
@@ -1254,10 +1319,15 @@ bool zswap_store(struct folio *folio)
         zswap_pool_reached_full = false;
     }
 
+    pool = zswap_pool_current_get();
+    if (!pool)
+        goto reject;
+
     /* allocate entry */
-    entry = zswap_entry_cache_alloc(GFP_KERNEL);
+    entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
     if (!entry) {
         zswap_reject_kmemcache_fail++;
+        zswap_pool_put(pool);
         goto reject;
     }
 
@@ -1269,6 +1339,7 @@ bool zswap_store(struct folio *folio)
             entry->length = 0;
             entry->value = value;
             atomic_inc(&zswap_same_filled_pages);
+            zswap_pool_put(pool);
             goto insert_entry;
         }
         kunmap_atomic(src);
@@ -1278,9 +1349,15 @@ bool zswap_store(struct folio *folio)
         goto freepage;
 
     /* if entry is successfully added, it keeps the reference */
-    entry->pool = zswap_pool_current_get();
-    if (!entry->pool)
-        goto freepage;
+    entry->pool = pool;
+    if (objcg) {
+        memcg = get_mem_cgroup_from_objcg(objcg);
+        lru_alloc_ret = memcg_list_lru_alloc(memcg, &pool->list_lru, GFP_KERNEL);
+        mem_cgroup_put(memcg);
+
+        if (lru_alloc_ret)
+            goto freepage;
+    }
 
     /* compress */
     acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
@@ -1358,9 +1435,8 @@ bool zswap_store(struct folio *folio)
         zswap_invalidate_entry(tree, dupentry);
     }
     if (entry->length) {
-        spin_lock(&entry->pool->lru_lock);
-        list_add(&entry->lru, &entry->pool->lru);
-        spin_unlock(&entry->pool->lru_lock);
+        INIT_LIST_HEAD(&entry->lru);
+        zswap_lru_add(&pool->list_lru, entry);
     }
     spin_unlock(&tree->lock);
 
@@ -1373,8 +1449,8 @@ bool zswap_store(struct folio *folio)
 
 put_dstmem:
     mutex_unlock(acomp_ctx->mutex);
-    zswap_pool_put(entry->pool);
 freepage:
+    zswap_pool_put(entry->pool);
     zswap_entry_cache_free(entry);
 reject:
     if (objcg)
@@ -1467,9 +1543,8 @@ bool zswap_load(struct folio *folio)
         zswap_invalidate_entry(tree, entry);
         folio_mark_dirty(folio);
     } else if (entry->length) {
-        spin_lock(&entry->pool->lru_lock);
-        list_move(&entry->lru, &entry->pool->lru);
-        spin_unlock(&entry->pool->lru_lock);
+        zswap_lru_del(&entry->pool->list_lru, entry);
+        zswap_lru_add(&entry->pool->list_lru, entry);
     }
     zswap_entry_put(tree, entry);
     spin_unlock(&tree->lock);
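For reference, the round-robin selection above leans on
mem_cgroup_iter() remembering the previous position. A minimal sketch
of that iteration pattern (illustrative only; pick_next_memcg is a
made-up name, and the reference handling mirrors what shrink_worker()
does with pool->next_shrink):

    /* Sketch: visit memcgs one at a time, round-robin across calls. */
    static struct mem_cgroup *pick_next_memcg(struct mem_cgroup *prev)
    {
        /*
         * With a NULL root, mem_cgroup_iter() walks the entire
         * hierarchy; passing the previous position makes successive
         * calls resume where the last one stopped. It returns with a
         * reference held on the new memcg and drops the one on @prev.
         */
        return mem_cgroup_iter(NULL, prev, NULL);
    }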
From patchwork Tue Oct 17 23:21:50 2023
From: Nhat Pham <nphamcs@gmail.com>
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com,
 sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com,
 mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
 muchun.song@linux.dev, linux-mm@kvack.org, kernel-team@meta.com,
 linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: [PATCH v3 3/5] mm: memcg: add per-memcg zswap writeback stat
Date: Tue, 17 Oct 2023 16:21:50 -0700
Message-Id: <20231017232152.2605440-4-nphamcs@gmail.com>
In-Reply-To: <20231017232152.2605440-1-nphamcs@gmail.com>
References: <20231017232152.2605440-1-nphamcs@gmail.com>

From: Domenico Cerasuolo

Since zswap now writes back pages from memcg-specific LRUs, we need a
new stat to show the writeback count for each memcg.
Suggested-by: Nhat Pham
Signed-off-by: Domenico Cerasuolo
Signed-off-by: Nhat Pham
Acked-by: Nhat Pham
---
 include/linux/memcontrol.h |  2 ++
 mm/memcontrol.c            | 15 +++++++++++++++
 mm/zswap.c                 |  3 +++
 3 files changed, 20 insertions(+)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 3de10fabea0f..7868b1e00bf5 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -38,6 +38,7 @@ enum memcg_stat_item {
     MEMCG_KMEM,
     MEMCG_ZSWAP_B,
     MEMCG_ZSWAPPED,
+    MEMCG_ZSWAP_WB,
     MEMCG_NR_STAT,
 };
 
@@ -1884,6 +1885,7 @@ static inline void count_objcg_event(struct obj_cgroup *objcg,
 bool obj_cgroup_may_zswap(struct obj_cgroup *objcg);
 void obj_cgroup_charge_zswap(struct obj_cgroup *objcg, size_t size);
 void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size);
+void obj_cgroup_report_zswap_wb(struct obj_cgroup *objcg);
 #else
 static inline bool obj_cgroup_may_zswap(struct obj_cgroup *objcg)
 {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 1bde67b29287..a9118871e5a6 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1505,6 +1505,7 @@ static const struct memory_stat memory_stats[] = {
 #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
     { "zswap",            MEMCG_ZSWAP_B },
     { "zswapped",         MEMCG_ZSWAPPED },
+    { "zswap_wb",         MEMCG_ZSWAP_WB },
 #endif
     { "file_mapped",      NR_FILE_MAPPED },
     { "file_dirty",       NR_FILE_DIRTY },
@@ -1541,6 +1542,7 @@ static int memcg_page_state_unit(int item)
     switch (item) {
     case MEMCG_PERCPU_B:
     case MEMCG_ZSWAP_B:
+    case MEMCG_ZSWAP_WB:
     case NR_SLAB_RECLAIMABLE_B:
     case NR_SLAB_UNRECLAIMABLE_B:
     case WORKINGSET_REFAULT_ANON:
@@ -7861,6 +7863,19 @@ void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size)
     rcu_read_unlock();
 }
 
+void obj_cgroup_report_zswap_wb(struct obj_cgroup *objcg)
+{
+    struct mem_cgroup *memcg;
+
+    if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
+        return;
+
+    rcu_read_lock();
+    memcg = obj_cgroup_memcg(objcg);
+    mod_memcg_state(memcg, MEMCG_ZSWAP_WB, 1);
+    rcu_read_unlock();
+}
+
 static u64 zswap_current_read(struct cgroup_subsys_state *css,
         struct cftype *cft)
 {
diff --git a/mm/zswap.c b/mm/zswap.c
index d2989ad11814..15485427e3fa 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -704,6 +704,9 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
     }
     zswap_written_back_pages++;
 
+    if (entry->objcg)
+        obj_cgroup_report_zswap_wb(entry->objcg);
+
     /*
      * Writeback started successfully, the page now belongs to the
      * swapcache. Drop the entry from zswap - unless invalidate already
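With this patch, the counter appears as a "zswap_wb" line in each
cgroup's memory.stat file. A minimal userspace sketch for reading it
(illustrative only; the helper name and parsing are not taken from the
patch):

    #include <stdio.h>
    #include <string.h>

    /* Sketch: return the zswap_wb count for one cgroup, or -1 on error. */
    static long read_zswap_wb(const char *cgroup_path)
    {
        char path[4096], key[64];
        long val;
        FILE *f;

        snprintf(path, sizeof(path), "%s/memory.stat", cgroup_path);
        f = fopen(path, "r");
        if (!f)
            return -1;
        while (fscanf(f, "%63s %ld", key, &val) == 2) {
            if (!strcmp(key, "zswap_wb")) {
                fclose(f);
                return val;
            }
        }
        fclose(f);
        return -1;
    }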
From patchwork Tue Oct 17 23:21:51 2023
From: Nhat Pham <nphamcs@gmail.com>
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com,
 sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com,
 mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
 muchun.song@linux.dev, linux-mm@kvack.org, kernel-team@meta.com,
 linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: [PATCH v3 4/5] selftests: cgroup: update per-memcg zswap writeback selftest
Date: Tue, 17 Oct 2023 16:21:51 -0700
Message-Id: <20231017232152.2605440-5-nphamcs@gmail.com>
In-Reply-To: <20231017232152.2605440-1-nphamcs@gmail.com>
References: <20231017232152.2605440-1-nphamcs@gmail.com>
From: Domenico Cerasuolo

The memcg-zswap selftest is updated to adjust to the behavior change
implemented by commit 87730b165089 ("zswap: make shrinking
memcg-aware"), where zswap performs writeback for a specific memcg.

Signed-off-by: Domenico Cerasuolo
Signed-off-by: Nhat Pham
Acked-by: Nhat Pham
---
 tools/testing/selftests/cgroup/test_zswap.c | 74 ++++++++++++++-------
 1 file changed, 50 insertions(+), 24 deletions(-)

diff --git a/tools/testing/selftests/cgroup/test_zswap.c b/tools/testing/selftests/cgroup/test_zswap.c
index 49def87a909b..11271fabeffc 100644
--- a/tools/testing/selftests/cgroup/test_zswap.c
+++ b/tools/testing/selftests/cgroup/test_zswap.c
@@ -50,9 +50,9 @@ static int get_zswap_stored_pages(size_t *value)
     return read_int("/sys/kernel/debug/zswap/stored_pages", value);
 }
 
-static int get_zswap_written_back_pages(size_t *value)
+static int get_cg_wb_count(const char *cg)
 {
-    return read_int("/sys/kernel/debug/zswap/written_back_pages", value);
+    return cg_read_key_long(cg, "memory.stat", "zswap_wb");
 }
 
 static int allocate_bytes(const char *cgroup, void *arg)
@@ -68,45 +68,71 @@ static int allocate_bytes(const char *cgroup, void *arg)
     return 0;
 }
 
+static char *setup_test_group_1M(const char *root, const char *name)
+{
+    char *group_name = cg_name(root, name);
+
+    if (!group_name)
+        return NULL;
+    if (cg_create(group_name))
+        goto fail;
+    if (cg_write(group_name, "memory.max", "1M")) {
+        cg_destroy(group_name);
+        goto fail;
+    }
+    return group_name;
+fail:
+    free(group_name);
+    return NULL;
+}
+
 /*
  * When trying to store a memcg page in zswap, if the memcg hits its memory
- * limit in zswap, writeback should not be triggered.
- *
- * This was fixed with commit 0bdf0efa180a("zswap: do not shrink if cgroup may
- * not zswap"). Needs to be revised when a per memcg writeback mechanism is
- * implemented.
+ * limit in zswap, writeback should affect only the zswapped pages of that
+ * memcg.
  */
 static int test_no_invasive_cgroup_shrink(const char *root)
 {
-    size_t written_back_before, written_back_after;
     int ret = KSFT_FAIL;
-    char *test_group;
+    size_t control_allocation_size = MB(10);
+    char *control_allocation, *wb_group = NULL, *control_group = NULL;
 
     /* Set up */
-    test_group = cg_name(root, "no_shrink_test");
-    if (!test_group)
-        goto out;
-    if (cg_create(test_group))
+    wb_group = setup_test_group_1M(root, "per_memcg_wb_test1");
+    if (!wb_group)
+        return KSFT_FAIL;
+    if (cg_write(wb_group, "memory.zswap.max", "10K"))
         goto out;
-    if (cg_write(test_group, "memory.max", "1M"))
+    control_group = setup_test_group_1M(root, "per_memcg_wb_test2");
+    if (!control_group)
         goto out;
-    if (cg_write(test_group, "memory.zswap.max", "10K"))
+
+    /* Push some test_group2 memory into zswap */
+    if (cg_enter_current(control_group))
         goto out;
-    if (get_zswap_written_back_pages(&written_back_before))
+    control_allocation = malloc(control_allocation_size);
+    for (int i = 0; i < control_allocation_size; i += 4095)
+        control_allocation[i] = 'a';
+    if (cg_read_key_long(control_group, "memory.stat", "zswapped") < 1)
         goto out;
 
-    /* Allocate 10x memory.max to push memory into zswap */
-    if (cg_run(test_group, allocate_bytes, (void *)MB(10)))
+    /* Allocate 10x memory.max to push wb_group memory into zswap and trigger wb */
+    if (cg_run(wb_group, allocate_bytes, (void *)MB(10)))
         goto out;
 
-    /* Verify that no writeback happened because of the memcg allocation */
-    if (get_zswap_written_back_pages(&written_back_after))
-        goto out;
-    if (written_back_after == written_back_before)
+    /* Verify that only zswapped memory from gwb_group has been written back */
+    if (get_cg_wb_count(wb_group) > 0 && get_cg_wb_count(control_group) == 0)
         ret = KSFT_PASS;
 out:
-    cg_destroy(test_group);
-    free(test_group);
+    cg_enter_current(root);
+    if (control_group) {
+        cg_destroy(control_group);
+        free(control_group);
+    }
+    cg_destroy(wb_group);
+    free(wb_group);
+    if (control_allocation)
+        free(control_allocation);
     return ret;
 }
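The property the updated test checks reduces to a two-sided assertion:
writeback must show up in the pressured group and stay at zero in the
idle control group. A condensed sketch of the flow, reusing the names
from the diff above:

    /*
     * Sketch: the shape of the rewritten test.
     *  1. control_group: allocate and touch ~10MB so it lands in zswap.
     *  2. wb_group: allocate 10x memory.max to force zswap writeback.
     *  3. Assert writeback is attributed only to wb_group.
     */
    if (get_cg_wb_count(wb_group) > 0 &&
        get_cg_wb_count(control_group) == 0)
        ret = KSFT_PASS;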
From patchwork Tue Oct 17 23:21:52 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Nhat Pham
X-Patchwork-Id: 13426196
From: Nhat Pham
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com,
 sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com,
 mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
 muchun.song@linux.dev, linux-mm@kvack.org, kernel-team@meta.com,
 linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: [PATCH v3 5/5] zswap: shrinks zswap pool based on memory pressure
Date: Tue, 17 Oct 2023 16:21:52 -0700
Message-Id: <20231017232152.2605440-6-nphamcs@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231017232152.2605440-1-nphamcs@gmail.com>
References: <20231017232152.2605440-1-nphamcs@gmail.com>
MIME-Version: 1.0
Furthermore, to make it more robust for many workloads and prevent overshrinking (i.e evicting warm pages that might be refaulted into memory), we build in the following heuristics: * Estimate the number of warm pages residing in zswap, and attempt to protect this region of the zswap LRU. * Scale the number of freeable objects by an estimate of the memory saving factor. The better zswap compresses the data, the fewer pages we will evict to swap (as we will otherwise incur IO for relatively small memory saving). * During reclaim, if the shrinker encounters a page that is also being brought into memory, the shrinker will cautiously terminate its shrinking action, as this is a sign that it is touching the warmer region of the zswap LRU. As a proof of concept, we ran the following synthetic benchmark: build the linux kernel in a memory-limited cgroup, and allocate some cold data in tmpfs to see if the shrinker could write them out and improved the overall performance. Depending on the amount of cold data generated, we observe from 14% to 35% reduction in kernel CPU time used in the kernel builds. Signed-off-by: Nhat Pham --- Documentation/admin-guide/mm/zswap.rst | 7 ++ include/linux/mmzone.h | 14 +++ mm/mmzone.c | 3 + mm/swap_state.c | 21 +++- mm/zswap.c | 161 +++++++++++++++++++++++-- 5 files changed, 196 insertions(+), 10 deletions(-) diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst index 45b98390e938..522ae22ccb84 100644 --- a/Documentation/admin-guide/mm/zswap.rst +++ b/Documentation/admin-guide/mm/zswap.rst @@ -153,6 +153,13 @@ attribute, e. g.:: Setting this parameter to 100 will disable the hysteresis. +When there is a sizable amount of cold memory residing in the zswap pool, it +can be advantageous to proactively write these cold pages to swap and reclaim +the memory for other use cases. By default, the zswap shrinker is disabled. +User can enable it as follows: + + echo Y > /sys/module/zswap/parameters/shrinker_enabled + A debugfs interface is provided for various statistic about pool size, number of pages stored, same-value filled pages and various counters for the reasons pages are rejected. diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index 486587fcd27f..8947a1bfbe9c 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -637,6 +637,20 @@ struct lruvec { #ifdef CONFIG_MEMCG struct pglist_data *pgdat; #endif +#ifdef CONFIG_ZSWAP + /* + * Number of pages in zswap that should be protected from the shrinker. + * This number is an estimate of the following counts: + * + * a) Recent page faults. + * b) Recent insertion to the zswap LRU. This includes new zswap stores, + * as well as recent zswap LRU rotations. + * + * These pages are likely to be warm, and might incur IO if the are written + * to swap. 
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 486587fcd27f..8947a1bfbe9c 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -637,6 +637,20 @@ struct lruvec {
 #ifdef CONFIG_MEMCG
 	struct pglist_data *pgdat;
 #endif
+#ifdef CONFIG_ZSWAP
+	/*
+	 * Number of pages in zswap that should be protected from the shrinker.
+	 * This number is an estimate of the following counts:
+	 *
+	 * a) Recent page faults.
+	 * b) Recent insertion to the zswap LRU. This includes new zswap stores,
+	 *    as well as recent zswap LRU rotations.
+	 *
+	 * These pages are likely to be warm, and might incur IO if they are
+	 * written to swap.
+	 */
+	atomic_long_t nr_zswap_protected;
+#endif
 };
 
 /* Isolate for asynchronous migration */

diff --git a/mm/mmzone.c b/mm/mmzone.c
index 68e1511be12d..4137f3ac42cd 100644
--- a/mm/mmzone.c
+++ b/mm/mmzone.c
@@ -78,6 +78,9 @@ void lruvec_init(struct lruvec *lruvec)
 	memset(lruvec, 0, sizeof(struct lruvec));
 	spin_lock_init(&lruvec->lru_lock);
+#ifdef CONFIG_ZSWAP
+	atomic_long_set(&lruvec->nr_zswap_protected, 0);
+#endif
 
 	for_each_lru(lru)
 		INIT_LIST_HEAD(&lruvec->lists[lru]);

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 0356df52b06a..a60197b55a28 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -676,7 +676,15 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	lru_add_drain();	/* Push any new pages onto the LRU now */
 skip:
 	/* The page was likely read above, so no need for plugging here */
-	return read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
+	page = read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
+#ifdef CONFIG_ZSWAP
+	if (page) {
+		struct lruvec *lruvec = folio_lruvec(page_folio(page));
+
+		atomic_long_inc(&lruvec->nr_zswap_protected);
+	}
+#endif
+	return page;
 }
 
 int init_swap_address_space(unsigned int type, unsigned long nr_pages)
@@ -843,8 +851,15 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
 	lru_add_drain();
 skip:
 	/* The page was likely read above, so no need for plugging here */
-	return read_swap_cache_async(fentry, gfp_mask, vma, vmf->address,
-				     NULL);
+	page = read_swap_cache_async(fentry, gfp_mask, vma, vmf->address, NULL);
+#ifdef CONFIG_ZSWAP
+	if (page) {
+		struct lruvec *lruvec = folio_lruvec(page_folio(page));
+
+		atomic_long_inc(&lruvec->nr_zswap_protected);
+	}
+#endif
+	return page;
 }
 
 /**

diff --git a/mm/zswap.c b/mm/zswap.c
index 15485427e3fa..1d1fe75a5237 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -145,6 +145,10 @@ module_param_named(exclusive_loads, zswap_exclusive_loads_enabled, bool, 0644);
 /* Number of zpools in zswap_pool (empirically determined for scalability) */
 #define ZSWAP_NR_ZPOOLS 32
 
+/* Enable/disable memory pressure-based shrinker. */
+static bool zswap_shrinker_enabled;
+module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644);
+
 /*********************************
 * data structures
 **********************************/
@@ -174,6 +178,8 @@ struct zswap_pool {
 	char tfm_name[CRYPTO_MAX_ALG_NAME];
 	struct list_lru list_lru;
 	struct mem_cgroup *next_shrink;
+	struct shrinker *shrinker;
+	atomic_t nr_stored;
 };
 
 /*
@@ -272,17 +278,26 @@ static bool zswap_can_accept(void)
 		DIV_ROUND_UP(zswap_pool_total_size, PAGE_SIZE);
 }
 
+static u64 get_zswap_pool_size(struct zswap_pool *pool)
+{
+	u64 pool_size = 0;
+	int i;
+
+	for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
+		pool_size += zpool_get_total_size(pool->zpools[i]);
+
+	return pool_size;
+}
+
 static void zswap_update_total_size(void)
 {
 	struct zswap_pool *pool;
 	u64 total = 0;
-	int i;
 
 	rcu_read_lock();
 
 	list_for_each_entry_rcu(pool, &zswap_pools, list)
-		for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
-			total += zpool_get_total_size(pool->zpools[i]);
+		total += get_zswap_pool_size(pool);
 
 	rcu_read_unlock();
 
@@ -326,8 +341,24 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
 static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
 {
 	struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
-	bool added = __list_lru_add(list_lru, &entry->lru, entry_to_nid(entry), memcg);
-
+	int nid = entry_to_nid(entry);
+	struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
+	bool added = __list_lru_add(list_lru, &entry->lru, nid, memcg);
+	unsigned long lru_size, old, new;
+
+	if (added) {
+		lru_size = list_lru_count_one(list_lru, entry_to_nid(entry), memcg);
+		old = atomic_long_inc_return(&lruvec->nr_zswap_protected);
+
+		/*
+		 * Decay to avoid overflow and adapt to changing workloads.
+		 * This is based on LRU reclaim cost decaying heuristics.
+		 */
+		do {
+			new = old > lru_size / 4 ? old / 2 : old;
+		} while (!atomic_long_try_cmpxchg(&lruvec->nr_zswap_protected, &old, new));
+	}
 	mem_cgroup_put(memcg);
 	return added;
 }
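The cmpxchg loop in zswap_lru_add() keeps nr_zswap_protected bounded: every insertion bumps it, but once it grows past a quarter of the LRU size it is halved, so protection acquired by an old working set decays away. Below is a single-threaded userspace sketch of the same update rule, with C11 atomics standing in for atomic_long_try_cmpxchg() and all numbers invented:

  #include <stdatomic.h>
  #include <stdio.h>

  static atomic_long nr_protected;

  /* Mirror the update in zswap_lru_add(): increment, then halve the
   * estimate whenever it exceeds a quarter of the LRU size. */
  static void protect_one(long lru_size)
  {
      long old = atomic_fetch_add(&nr_protected, 1) + 1;
      long new;

      do {
          new = old > lru_size / 4 ? old / 2 : old;
      } while (!atomic_compare_exchange_weak(&nr_protected, &old, new));
  }

  int main(void)
  {
      long lru_size = 1000;  /* invented LRU size */

      for (int i = 0; i < 600; i++)
          protect_one(lru_size);

      /* Oscillates roughly between lru_size/8 and lru_size/4 instead
       * of growing without bound. */
      printf("protected estimate: %ld\n", atomic_load(&nr_protected));
      return 0;
  }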
@@ -427,6 +458,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
 	else {
 		zswap_lru_del(&entry->pool->list_lru, entry);
 		zpool_free(zswap_find_zpool(entry), entry->handle);
+		atomic_dec(&entry->pool->nr_stored);
 		zswap_pool_put(entry->pool);
 	}
 	zswap_entry_cache_free(entry);
@@ -468,6 +500,93 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
 	return entry;
 }
 
+/*********************************
+* shrinker functions
+**********************************/
+static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
+				       spinlock_t *lock, void *arg);
+
+static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
+					 struct shrink_control *sc)
+{
+	struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
+	unsigned long shrink_ret, nr_protected, lru_size;
+	struct zswap_pool *pool = shrinker->private_data;
+	bool encountered_page_in_swapcache = false;
+
+	nr_protected = atomic_long_read(&lruvec->nr_zswap_protected);
+	lru_size = list_lru_shrink_count(&pool->list_lru, sc);
+
+	/*
+	 * Abort if the shrinker is disabled or if we are shrinking into the
+	 * protected region.
+	 */
+	if (!zswap_shrinker_enabled || nr_protected >= lru_size - sc->nr_to_scan) {
+		sc->nr_scanned = 0;
+		return SHRINK_STOP;
+	}
+
+	shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
+					  &encountered_page_in_swapcache);
+
+	if (encountered_page_in_swapcache)
+		return SHRINK_STOP;
+
+	return shrink_ret ? shrink_ret : SHRINK_STOP;
+}
+
+static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
+					  struct shrink_control *sc)
+{
+	struct zswap_pool *pool = shrinker->private_data;
+	struct mem_cgroup *memcg = sc->memcg;
+	struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
+	unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
+
+#ifdef CONFIG_MEMCG_KMEM
+	cgroup_rstat_flush(memcg->css.cgroup);
+	nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
+	nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
+#else
+	/* use pool stats instead of memcg stats */
+	nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
+	nr_stored = atomic_read(&pool->nr_stored);
+#endif
+
+	if (!zswap_shrinker_enabled || !nr_stored)
+		return 0;
+
+	nr_protected = atomic_long_read(&lruvec->nr_zswap_protected);
+	nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
+	/*
+	 * Subtract from the LRU size an estimate of the number of pages
+	 * that should be protected.
+	 */
+	nr_freeable = nr_freeable > nr_protected ? nr_freeable - nr_protected : 0;
+
+	/*
+	 * Scale the number of freeable pages by the memory saving factor.
+	 * This ensures that the better zswap compresses memory, the fewer
+	 * pages we will evict to swap (as it would otherwise incur IO for
+	 * relatively little memory saving).
+	 */
+	return mult_frac(nr_freeable, nr_backing, nr_stored);
+}
+
+static void zswap_alloc_shrinker(struct zswap_pool *pool)
+{
+	pool->shrinker =
+		shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap");
+	if (!pool->shrinker)
+		return;
+
+	pool->shrinker->private_data = pool;
+	pool->shrinker->scan_objects = zswap_shrinker_scan;
+	pool->shrinker->count_objects = zswap_shrinker_count;
+	pool->shrinker->batch = 0;
+	pool->shrinker->seeks = DEFAULT_SEEKS;
+}
+
 /*********************************
 * per-cpu code
 **********************************/
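Taken together, the two callbacks implement a simple policy: count_objects() advertises the protection-adjusted, compression-scaled LRU size, while scan_objects() refuses to walk a batch that would reach into the protected tail. A toy single-node model of that decision logic, with invented numbers, no locking, and 0 standing in for SHRINK_STOP, might look like this:

  #include <stdio.h>

  struct zswap_model {
      unsigned long lru_size;     /* objects on the zswap LRU */
      unsigned long nr_protected; /* estimated warm region */
      unsigned long nr_backing;   /* compressed backing pages */
      unsigned long nr_stored;    /* stored pages */
      int enabled;
  };

  /* Model of count_objects(): protection-adjusted, compression-scaled. */
  static unsigned long model_count(const struct zswap_model *m)
  {
      unsigned long freeable;

      if (!m->enabled || !m->nr_stored)
          return 0;
      freeable = m->lru_size > m->nr_protected ?
                 m->lru_size - m->nr_protected : 0;
      return freeable * m->nr_backing / m->nr_stored;
  }

  /* Model of scan_objects(): stop rather than shrink the warm region. */
  static unsigned long model_scan(const struct zswap_model *m,
                                  unsigned long nr_to_scan)
  {
      if (!m->enabled || m->nr_protected >= m->lru_size - nr_to_scan)
          return 0;
      return nr_to_scan;  /* pretend every walked object was freed */
  }

  int main(void)
  {
      struct zswap_model m = { 1000, 200, 250, 1000, 1 };

      printf("count = %lu\n", model_count(&m));          /* 200 */
      printf("scan(128) = %lu\n", model_scan(&m, 128));  /* 128 */
      return 0;
  }

Note the asymmetry: the count side subtracts the protected estimate, while the scan side aborts outright once a batch would reach into it, matching the cautious termination described in the commit message.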
@@ -663,8 +782,10 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
 			       spinlock_t *lock, void *arg)
 {
 	struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
+	bool *encountered_page_in_swapcache = (bool *)arg;
 	struct mem_cgroup *memcg;
 	struct zswap_tree *tree;
+	struct lruvec *lruvec;
 	pgoff_t swpoffset;
 	enum lru_status ret = LRU_REMOVED_RETRY;
 	int writeback_result;
@@ -698,8 +819,22 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
 		/* we cannot use zswap_lru_add here, because it increments node's lru count */
 		list_lru_putback(&entry->pool->list_lru, item, entry_to_nid(entry), memcg);
 		spin_unlock(lock);
-		mem_cgroup_put(memcg);
 		ret = LRU_RETRY;
+
+		/*
+		 * Encountering a page already in swap cache is a sign that we are shrinking
+		 * into the warmer region. We should terminate shrinking (if we're in the dynamic
+		 * shrinker context).
+		 */
+		if (writeback_result == -EEXIST && encountered_page_in_swapcache) {
+			ret = LRU_SKIP;
+			*encountered_page_in_swapcache = true;
+		}
+		lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry_to_nid(entry)));
+		/* Increment the protection area to account for the LRU rotation. */
+		atomic_long_inc(&lruvec->nr_zswap_protected);
+
+		mem_cgroup_put(memcg);
 		goto put_unlock;
 	}
 	zswap_written_back_pages++;
@@ -822,6 +957,11 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
 				       &pool->node);
 	if (ret)
 		goto error;
+
+	zswap_alloc_shrinker(pool);
+	if (!pool->shrinker)
+		goto error;
+
 	pr_debug("using %s compressor\n", pool->tfm_name);
 
 	/* being the current pool takes 1 ref; this func expects the
@@ -829,13 +969,18 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
 	 */
 	kref_init(&pool->kref);
 	INIT_LIST_HEAD(&pool->list);
-	list_lru_init_memcg(&pool->list_lru, NULL);
+	if (list_lru_init_memcg(&pool->list_lru, pool->shrinker))
+		goto lru_fail;
+	shrinker_register(pool->shrinker);
 	INIT_WORK(&pool->shrink_work, shrink_worker);
 
 	zswap_pool_debug("created", pool);
 
 	return pool;
 
+lru_fail:
+	list_lru_destroy(&pool->list_lru);
+	shrinker_free(pool->shrinker);
 error:
 	if (pool->acomp_ctx)
 		free_percpu(pool->acomp_ctx);
@@ -893,6 +1038,7 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
 	zswap_pool_debug("destroying", pool);
 
+	shrinker_free(pool->shrinker);
 	cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
 	free_percpu(pool->acomp_ctx);
 	list_lru_destroy(&pool->list_lru);
@@ -1440,6 +1586,7 @@ bool zswap_store(struct folio *folio)
 	if (entry->length) {
 		INIT_LIST_HEAD(&entry->lru);
 		zswap_lru_add(&pool->list_lru, entry);
+		atomic_inc(&pool->nr_stored);
 	}
 	spin_unlock(&tree->lock);