From patchwork Tue May 30 21:02:51 2023
X-Patchwork-Submitter: Yosry Ahmed
X-Patchwork-Id: 13261154
Date: Tue, 30 May 2023 21:02:51 +0000
Message-ID: <20230530210251.493194-1-yosryahmed@google.com>
X-Mailer: git-send-email 2.41.0.rc0.172.g3f132b7071-goog
Subject: [PATCH] mm: zswap: support exclusive loads
From: Yosry Ahmed
To: Konrad Rzeszutek Wilk, Andrew Morton, Seth Jennings, Dan Streetman,
 Vitaly Wool
Cc: Johannes Weiner, Nhat Pham, Domenico Cerasuolo, Yu Zhao,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yosry Ahmed
Commit 71024cb4a0bf ("frontswap: remove frontswap_tmem_exclusive_gets")
removed support for exclusive loads from frontswap as it was not used.
Bring back exclusive loads support to frontswap by adding an
exclusive_loads argument to frontswap_ops. Add support for exclusive
loads to zswap behind CONFIG_ZSWAP_EXCLUSIVE_LOADS. Refactor zswap entry
invalidation in zswap_frontswap_invalidate_page() into
zswap_invalidate_entry() to reuse it in zswap_frontswap_load().

With exclusive loads, we avoid having two copies of the same page in
memory (compressed & uncompressed) after faulting it in from zswap. On
the other hand, if the page is to be reclaimed again without being
dirtied, it will be re-compressed. Compression is not usually slow, and
a page that was just faulted in is less likely to be reclaimed again
soon.

Suggested-by: Yu Zhao
Signed-off-by: Yosry Ahmed
---
 include/linux/frontswap.h |  1 +
 mm/Kconfig                | 13 +++++++++++++
 mm/frontswap.c            |  7 ++++++-
 mm/zswap.c                | 23 +++++++++++++++--------
 4 files changed, 35 insertions(+), 9 deletions(-)

diff --git a/include/linux/frontswap.h b/include/linux/frontswap.h
index a631bac12220..289561e12cad 100644
--- a/include/linux/frontswap.h
+++ b/include/linux/frontswap.h
@@ -13,6 +13,7 @@ struct frontswap_ops {
 	int (*load)(unsigned, pgoff_t, struct page *); /* load a page */
 	void (*invalidate_page)(unsigned, pgoff_t); /* page no longer needed */
 	void (*invalidate_area)(unsigned); /* swap type just swapoff'ed */
+	bool exclusive_loads; /* pages are invalidated after being loaded */
 };
 
 int frontswap_register_ops(const struct frontswap_ops *ops);
diff --git a/mm/Kconfig b/mm/Kconfig
index 7672a22647b4..92c30879bf67 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -46,6 +46,19 @@ config ZSWAP_DEFAULT_ON
 	  The selection made here can be overridden by using the kernel
 	  command line 'zswap.enabled=' option.
 
+config ZSWAP_EXCLUSIVE_LOADS
+	bool "Invalidate zswap entries when pages are loaded"
+	depends on ZSWAP
+	help
+	  If selected, when a page is loaded from zswap, the zswap entry is
+	  invalidated at once, as opposed to leaving it in zswap until the
+	  swap entry is freed.
+
+	  This avoids having two copies of the same page in memory
+	  (compressed and uncompressed) after faulting in a page from zswap.
+	  The cost is that if the page was never dirtied and needs to be
+	  swapped out again, it will be re-compressed.
+
 choice
 	prompt "Default compressor"
 	depends on ZSWAP
diff --git a/mm/frontswap.c b/mm/frontswap.c
index 279e55b4ed87..e5d6825110f4 100644
--- a/mm/frontswap.c
+++ b/mm/frontswap.c
@@ -216,8 +216,13 @@ int __frontswap_load(struct page *page)
 
 	/* Try loading from each implementation, until one succeeds. */
 	ret = frontswap_ops->load(type, offset, page);
-	if (ret == 0)
+	if (ret == 0) {
 		inc_frontswap_loads();
+		if (frontswap_ops->exclusive_loads) {
+			SetPageDirty(page);
+			__frontswap_clear(sis, offset);
+		}
+	}
 	return ret;
 }
 
diff --git a/mm/zswap.c b/mm/zswap.c
index 59da2a415fbb..fba80330afd1 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1329,6 +1329,16 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
 	goto reject;
 }
 
+static void zswap_invalidate_entry(struct zswap_tree *tree,
+				   struct zswap_entry *entry)
+{
+	/* remove from rbtree */
+	zswap_rb_erase(&tree->rbroot, entry);
+
+	/* drop the initial reference from entry creation */
+	zswap_entry_put(tree, entry);
+}
+
 /*
  * returns 0 if the page was successfully decompressed
  * return -1 on entry not found or error
@@ -1403,6 +1413,8 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset,
 	count_objcg_event(entry->objcg, ZSWPIN);
 freeentry:
 	spin_lock(&tree->lock);
+	if (!ret && IS_ENABLED(CONFIG_ZSWAP_EXCLUSIVE_LOADS))
+		zswap_invalidate_entry(tree, entry);
 	zswap_entry_put(tree, entry);
 	spin_unlock(&tree->lock);
 
@@ -1423,13 +1435,7 @@ static void zswap_frontswap_invalidate_page(unsigned type, pgoff_t offset)
 		spin_unlock(&tree->lock);
 		return;
 	}
-
-	/* remove from rbtree */
-	zswap_rb_erase(&tree->rbroot, entry);
-
-	/* drop the initial reference from entry creation */
-	zswap_entry_put(tree, entry);
-
+	zswap_invalidate_entry(tree, entry);
 	spin_unlock(&tree->lock);
 }
 
@@ -1472,7 +1478,8 @@ static const struct frontswap_ops zswap_frontswap_ops = {
 	.load = zswap_frontswap_load,
 	.invalidate_page = zswap_frontswap_invalidate_page,
 	.invalidate_area = zswap_frontswap_invalidate_area,
-	.init = zswap_frontswap_init
+	.init = zswap_frontswap_init,
+	.exclusive_loads = IS_ENABLED(CONFIG_ZSWAP_EXCLUSIVE_LOADS),
 };
 
 /*********************************
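[Not part of the patch: as an illustration of the semantics described above, here is a toy userspace model of exclusive loads. All names (toy_cache, toy_store, toy_load) are invented for this sketch and are not kernel APIs; the point is only that a successful load drops the compressed copy and dirties the page, so a later reclaim must re-compress it.]

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Toy one-slot "compressed cache" standing in for zswap (hypothetical,
 * not kernel code). */
struct toy_cache {
	bool occupied;        /* is an entry present? */
	long offset;          /* swap offset of the cached page */
	char data[64];        /* stand-in for the compressed page */
	bool exclusive_loads; /* analogue of frontswap_ops->exclusive_loads */
};

struct toy_page {
	char data[64];
	bool dirty;           /* analogue of SetPageDirty() */
};

static void toy_store(struct toy_cache *c, long offset, const char *src)
{
	c->occupied = true;
	c->offset = offset;
	strncpy(c->data, src, sizeof(c->data) - 1);
	c->data[sizeof(c->data) - 1] = '\0';
}

/* Returns 0 on hit, -1 on miss.  With exclusive_loads, a successful
 * load invalidates the cached entry and dirties the page: only one copy
 * of the data remains, and reclaiming the page again re-"compresses" it. */
static int toy_load(struct toy_cache *c, long offset, struct toy_page *p)
{
	if (!c->occupied || c->offset != offset)
		return -1;
	memcpy(p->data, c->data, sizeof(p->data));
	if (c->exclusive_loads) {
		p->dirty = true;     /* must be written back if reclaimed */
		c->occupied = false; /* drop the compressed copy */
	}
	return 0;
}
```

After an exclusive load, a second load of the same offset misses, mirroring how the zswap entry is invalidated at once rather than lingering until the swap entry is freed.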