From patchwork Wed Dec  6 09:46:25 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Chengming Zhou
X-Patchwork-Id: 13481319
From: Chengming Zhou
Date: Wed, 06 Dec 2023 09:46:25 +0000
Subject: [PATCH 2/7] mm/zswap: split zswap rb-tree
Message-Id: <20231206-zswap-lock-optimize-v1-2-e25b059f9c3a@bytedance.com>
References: <20231206-zswap-lock-optimize-v1-0-e25b059f9c3a@bytedance.com>
In-Reply-To: <20231206-zswap-lock-optimize-v1-0-e25b059f9c3a@bytedance.com>
To: Vitaly Wool, Nhat Pham, Johannes Weiner, Michal Hocko, Seth Jennings,
    Dan Streetman, Andrew Morton, Yosry Ahmed
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Chengming Zhou

Each swapfile has one rb-tree that maps swp_entry_t to zswap_entry,
protected by a single spinlock, which can cause heavy lock contention
when multiple tasks zswap_store/load concurrently.

Optimize the scalability problem by splitting the zswap rb-tree into
multiple rb-trees, each covering SWAP_ADDRESS_SPACE_PAGES (64M) of swap
space, just as we did when splitting the swap cache address_space.
Signed-off-by: Chengming Zhou
---
 include/linux/zswap.h |  4 +--
 mm/swapfile.c         |  2 +-
 mm/zswap.c            | 69 ++++++++++++++++++++++++++++++++-------------------
 3 files changed, 47 insertions(+), 28 deletions(-)

diff --git a/include/linux/zswap.h b/include/linux/zswap.h
index 7cccc02cb9e9..d3a8bc300b70 100644
--- a/include/linux/zswap.h
+++ b/include/linux/zswap.h
@@ -30,7 +30,7 @@ struct zswap_lruvec_state {
 bool zswap_store(struct folio *folio);
 bool zswap_load(struct folio *folio);
 void zswap_invalidate(int type, pgoff_t offset);
-int zswap_swapon(int type);
+int zswap_swapon(int type, unsigned long nr_pages);
 void zswap_swapoff(int type);
 void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg);
 void zswap_lruvec_state_init(struct lruvec *lruvec);
@@ -50,7 +50,7 @@ static inline bool zswap_load(struct folio *folio)
 }
 
 static inline void zswap_invalidate(int type, pgoff_t offset) {}
-static inline int zswap_swapon(int type) {}
+static inline int zswap_swapon(int type, unsigned long nr_pages) {}
 static inline void zswap_swapoff(int type) {}
 static inline void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg) {}
 static inline void zswap_lruvec_state_init(struct lruvec *lruvec) {}
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 939e7590feda..da8367a3e076 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3163,7 +3163,7 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 	if (error)
 		goto bad_swap_unlock_inode;
 
-	error = zswap_swapon(p->type);
+	error = zswap_swapon(p->type, maxpages);
 	if (error)
 		goto free_swap_address_space;
 
diff --git a/mm/zswap.c b/mm/zswap.c
index 5e2b8d5ee33b..a6b4859a0164 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -234,6 +234,7 @@ struct zswap_tree {
 };
 
 static struct zswap_tree *zswap_trees[MAX_SWAPFILES];
+static unsigned int nr_zswap_trees[MAX_SWAPFILES];
 
 /* RCU-protected iteration */
 static LIST_HEAD(zswap_pools);
@@ -260,6 +261,10 @@ static bool zswap_has_pool;
 * helpers and fwd declarations
 **********************************/
 
+#define swap_zswap_tree(entry)					\
+	(&zswap_trees[swp_type(entry)][swp_offset(entry)	\
+		>> SWAP_ADDRESS_SPACE_SHIFT])
+
 #define zswap_pool_debug(msg, p)				\
 	pr_debug("%s pool %s/%s\n", msg, (p)->tfm_name,		\
 		 zpool_get_type((p)->zpools[0]))
@@ -885,7 +890,7 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
 	 * until the entry is verified to still be alive in the tree.
 	 */
 	swpoffset = swp_offset(entry->swpentry);
-	tree = zswap_trees[swp_type(entry->swpentry)];
+	tree = swap_zswap_tree(entry->swpentry);
 	list_lru_isolate(l, item);
 	/*
 	 * It's safe to drop the lock here because we return either
@@ -1535,10 +1540,9 @@ static void zswap_fill_page(void *ptr, unsigned long value)
 bool zswap_store(struct folio *folio)
 {
 	swp_entry_t swp = folio->swap;
-	int type = swp_type(swp);
 	pgoff_t offset = swp_offset(swp);
 	struct page *page = &folio->page;
-	struct zswap_tree *tree = zswap_trees[type];
+	struct zswap_tree *tree = swap_zswap_tree(swp);
 	struct zswap_entry *entry, *dupentry;
 	struct scatterlist input, output;
 	struct crypto_acomp_ctx *acomp_ctx;
@@ -1610,7 +1614,7 @@ bool zswap_store(struct folio *folio)
 		src = kmap_local_page(page);
 		if (zswap_is_page_same_filled(src, &value)) {
 			kunmap_local(src);
-			entry->swpentry = swp_entry(type, offset);
+			entry->swpentry = swp;
 			entry->length = 0;
 			entry->value = value;
 			atomic_inc(&zswap_same_filled_pages);
@@ -1688,7 +1692,7 @@ bool zswap_store(struct folio *folio)
 	mutex_unlock(acomp_ctx->mutex);
 
 	/* populate entry */
-	entry->swpentry = swp_entry(type, offset);
+	entry->swpentry = swp;
 	entry->handle = handle;
 	entry->length = dlen;
 
@@ -1748,10 +1752,9 @@ bool zswap_store(struct folio *folio)
 bool zswap_load(struct folio *folio)
 {
 	swp_entry_t swp = folio->swap;
-	int type = swp_type(swp);
 	pgoff_t offset = swp_offset(swp);
 	struct page *page = &folio->page;
-	struct zswap_tree *tree = zswap_trees[type];
+	struct zswap_tree *tree = swap_zswap_tree(swp);
 	struct zswap_entry *entry;
 	struct scatterlist input, output;
 	struct crypto_acomp_ctx *acomp_ctx;
@@ -1835,7 +1838,7 @@ bool zswap_load(struct folio *folio)
 
 void zswap_invalidate(int type, pgoff_t offset)
 {
-	struct zswap_tree *tree = zswap_trees[type];
+	struct zswap_tree *tree = swap_zswap_tree(swp_entry(type, offset));
 	struct zswap_entry *entry;
 
 	/* find */
@@ -1850,37 +1853,53 @@ void zswap_invalidate(int type, pgoff_t offset)
 	spin_unlock(&tree->lock);
 }
 
-int zswap_swapon(int type)
+int zswap_swapon(int type, unsigned long nr_pages)
 {
-	struct zswap_tree *tree;
+	struct zswap_tree *trees, *tree;
+	unsigned int nr, i;
 
-	tree = kzalloc(sizeof(*tree), GFP_KERNEL);
-	if (!tree) {
+	nr = DIV_ROUND_UP(nr_pages, SWAP_ADDRESS_SPACE_PAGES);
+	trees = kvcalloc(nr, sizeof(*tree), GFP_KERNEL);
+	if (!trees) {
 		pr_err("alloc failed, zswap disabled for swap type %d\n", type);
 		return -ENOMEM;
 	}
 
-	tree->rbroot = RB_ROOT;
-	spin_lock_init(&tree->lock);
-	zswap_trees[type] = tree;
+	for (i = 0; i < nr; i++) {
+		tree = trees + i;
+		tree->rbroot = RB_ROOT;
+		spin_lock_init(&tree->lock);
+	}
+
+	nr_zswap_trees[type] = nr;
+	zswap_trees[type] = trees;
 	return 0;
 }
 
 void zswap_swapoff(int type)
 {
-	struct zswap_tree *tree = zswap_trees[type];
-	struct zswap_entry *entry, *n;
+	struct zswap_tree *trees = zswap_trees[type];
+	unsigned int i;
 
-	if (!tree)
+	if (!trees)
 		return;
 
-	/* walk the tree and free everything */
-	spin_lock(&tree->lock);
-	rbtree_postorder_for_each_entry_safe(entry, n, &tree->rbroot, rbnode)
-		zswap_free_entry(entry);
-	tree->rbroot = RB_ROOT;
-	spin_unlock(&tree->lock);
-	kfree(tree);
+	for (i = 0; i < nr_zswap_trees[type]; i++) {
+		struct zswap_tree *tree = trees + i;
+		struct zswap_entry *entry, *n;
+
+		/* walk the tree and free everything */
+		spin_lock(&tree->lock);
+		rbtree_postorder_for_each_entry_safe(entry, n,
+						     &tree->rbroot,
+						     rbnode)
+			zswap_free_entry(entry);
+		tree->rbroot = RB_ROOT;
+		spin_unlock(&tree->lock);
+	}
+
+	kvfree(trees);
+	nr_zswap_trees[type] = 0;
 	zswap_trees[type] = NULL;
 }
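
For reference, the tree selection in this patch boils down to simple index
arithmetic. The following standalone userspace sketch (not part of the patch)
mirrors what swap_zswap_tree() and zswap_swapon() do, assuming the kernel's
SWAP_ADDRESS_SPACE_SHIFT of 14, i.e. each tree covers 1 << 14 = 16384 swap
slots (64M of 4K pages); the swapfile size below is an arbitrary example.

#include <stdio.h>

/* Constants assumed to mirror the kernel's values (4K pages). */
#define SWAP_ADDRESS_SPACE_SHIFT 14
#define SWAP_ADDRESS_SPACE_PAGES (1UL << SWAP_ADDRESS_SPACE_SHIFT)

/* How many trees zswap_swapon() would allocate for a swapfile (DIV_ROUND_UP). */
static unsigned int nr_trees_for(unsigned long nr_pages)
{
	return (nr_pages + SWAP_ADDRESS_SPACE_PAGES - 1) / SWAP_ADDRESS_SPACE_PAGES;
}

/* Which tree owns a given swap offset, as in swap_zswap_tree(). */
static unsigned int tree_index(unsigned long offset)
{
	return offset >> SWAP_ADDRESS_SPACE_SHIFT;
}

int main(void)
{
	unsigned long nr_pages = 2097152;	/* hypothetical 8G swapfile of 4K pages */

	printf("trees allocated: %u\n", nr_trees_for(nr_pages));	/* 128 */
	printf("offset 100000 maps to tree %u\n", tree_index(100000));	/* 6 */
	return 0;
}

With this split, a store/load only contends on the spinlock of the tree
covering its 64M region rather than on one lock for the whole swapfile.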