From patchwork Mon Apr 8 18:39:40 2024
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13621502
From: Ryan Roberts
To: Andrew Morton, David Hildenbrand, Matthew Wilcox, Huang Ying,
    Gao Xiang, Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang,
    Barry Song <21cnbao@gmail.com>, Chris Li, Lance Yang
Cc: Ryan Roberts, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 1/7] mm: swap: Remove CLUSTER_FLAG_HUGE from swap_cluster_info:flags
Date: Mon, 8 Apr 2024 19:39:40 +0100
Message-Id: <20240408183946.2991168-2-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240408183946.2991168-1-ryan.roberts@arm.com>
References: <20240408183946.2991168-1-ryan.roberts@arm.com>
MIME-Version: 1.0

As preparation for supporting small-sized THP in the swap-out path,
without first needing to split to order-0, remove CLUSTER_FLAG_HUGE,
which, when present, always implies PMD-sized THP, which is the same as
the cluster size.

The only use of the flag was to determine whether a swap entry refers to
a single page or a PMD-sized THP in swap_page_trans_huge_swapped().
Instead of relying on the flag, we now pass in order, which originates
from the folio's order. This allows the logic to work for folios of any
order.

The one snag is that one of the swap_page_trans_huge_swapped() call
sites does not have the folio. But it was only being called there to
shortcut a call to __try_to_reclaim_swap() in some cases.
__try_to_reclaim_swap() gets the folio and (via some other functions)
calls swap_page_trans_huge_swapped(). So I've removed the problematic
call site and believe the new logic should be functionally equivalent.

That said, removing the fast path means that we will take a reference
and trylock a large folio much more often, which we would like to
avoid. The next patch will solve this.

Removing CLUSTER_FLAG_HUGE also means we can remove split_swap_cluster(),
which used to be called during folio splitting, since
split_swap_cluster()'s only job was to remove the flag.
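To illustrate the idea behind passing the order (this is only a
userspace sketch of the offset arithmetic; swap_map and the helper name
are toy stand-ins, not the kernel's data structures): because a folio's
swap entries are naturally aligned, rounding any offset down to the
folio size finds the first entry, and scanning 1 << order entries
answers "is any subpage still swapped" without needing a cluster flag.

/* Minimal userspace model of the per-order check (illustrative only). */
#include <stdbool.h>
#include <stdio.h>

#define NR_SLOTS 512

static unsigned char swap_map[NR_SLOTS];	/* toy per-entry counts */

static bool any_entry_swapped(unsigned long offset, int order)
{
	unsigned long nr_pages = 1UL << order;
	unsigned long first = offset & ~(nr_pages - 1);	/* round_down */
	unsigned long i;

	for (i = 0; i < nr_pages; i++) {
		if (swap_map[first + i])
			return true;
	}
	return false;
}

int main(void)
{
	swap_map[17] = 1;	/* one subpage still referenced */
	printf("order-0 at 17: %d\n", any_entry_swapped(17, 0));	/* 1 */
	printf("order-4 at 20: %d\n", any_entry_swapped(20, 4));	/* 1: range 16..31 */
	printf("order-4 at 40: %d\n", any_entry_swapped(40, 4));	/* 0: range 32..47 */
	return 0;
}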
Reviewed-by: "Huang, Ying" Acked-by: Chris Li Acked-by: David Hildenbrand Signed-off-by: Ryan Roberts --- include/linux/swap.h | 10 ---------- mm/huge_memory.c | 3 --- mm/swapfile.c | 47 ++++++++------------------------------------ 3 files changed, 8 insertions(+), 52 deletions(-) diff --git a/include/linux/swap.h b/include/linux/swap.h index a211a0383425..f6f78198f000 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -259,7 +259,6 @@ struct swap_cluster_info { }; #define CLUSTER_FLAG_FREE 1 /* This cluster is free */ #define CLUSTER_FLAG_NEXT_NULL 2 /* This cluster has no next cluster */ -#define CLUSTER_FLAG_HUGE 4 /* This cluster is backing a transparent huge page */ /* * We assign a cluster to each CPU, so each CPU can allocate swap entry from @@ -590,15 +589,6 @@ static inline int add_swap_extent(struct swap_info_struct *sis, } #endif /* CONFIG_SWAP */ -#ifdef CONFIG_THP_SWAP -extern int split_swap_cluster(swp_entry_t entry); -#else -static inline int split_swap_cluster(swp_entry_t entry) -{ - return 0; -} -#endif - #ifdef CONFIG_MEMCG static inline int mem_cgroup_swappiness(struct mem_cgroup *memcg) { diff --git a/mm/huge_memory.c b/mm/huge_memory.c index b106baec7260..5b875f0fc923 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2892,9 +2892,6 @@ static void __split_huge_page(struct page *page, struct list_head *list, shmem_uncharge(folio->mapping->host, nr_dropped); remap_page(folio, nr); - if (folio_test_swapcache(folio)) - split_swap_cluster(folio->swap); - /* * set page to its compound_head when split to non order-0 pages, so * we can skip unlocking it below, since PG_locked is transferred to diff --git a/mm/swapfile.c b/mm/swapfile.c index 5e6d2304a2a4..1ded6d1dcab4 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -343,18 +343,6 @@ static inline void cluster_set_null(struct swap_cluster_info *info) info->data = 0; } -static inline bool cluster_is_huge(struct swap_cluster_info *info) -{ - if (IS_ENABLED(CONFIG_THP_SWAP)) - return info->flags & CLUSTER_FLAG_HUGE; - return false; -} - -static inline void cluster_clear_huge(struct swap_cluster_info *info) -{ - info->flags &= ~CLUSTER_FLAG_HUGE; -} - static inline struct swap_cluster_info *lock_cluster(struct swap_info_struct *si, unsigned long offset) { @@ -1027,7 +1015,7 @@ static int swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot) offset = idx * SWAPFILE_CLUSTER; ci = lock_cluster(si, offset); alloc_cluster(si, idx); - cluster_set_count_flag(ci, SWAPFILE_CLUSTER, CLUSTER_FLAG_HUGE); + cluster_set_count(ci, SWAPFILE_CLUSTER); memset(si->swap_map + offset, SWAP_HAS_CACHE, SWAPFILE_CLUSTER); unlock_cluster(ci); @@ -1365,7 +1353,6 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry) ci = lock_cluster_or_swap_info(si, offset); if (size == SWAPFILE_CLUSTER) { - VM_BUG_ON(!cluster_is_huge(ci)); map = si->swap_map + offset; for (i = 0; i < SWAPFILE_CLUSTER; i++) { val = map[i]; @@ -1373,7 +1360,6 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry) if (val == SWAP_HAS_CACHE) free_entries++; } - cluster_clear_huge(ci); if (free_entries == SWAPFILE_CLUSTER) { unlock_cluster_or_swap_info(si, ci); spin_lock(&si->lock); @@ -1395,23 +1381,6 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry) unlock_cluster_or_swap_info(si, ci); } -#ifdef CONFIG_THP_SWAP -int split_swap_cluster(swp_entry_t entry) -{ - struct swap_info_struct *si; - struct swap_cluster_info *ci; - unsigned long offset = swp_offset(entry); - - si = _swap_info_get(entry); - if (!si) - return -EBUSY; - ci = 
lock_cluster(si, offset); - cluster_clear_huge(ci); - unlock_cluster(ci); - return 0; -} -#endif - static int swp_entry_cmp(const void *ent1, const void *ent2) { const swp_entry_t *e1 = ent1, *e2 = ent2; @@ -1519,22 +1488,23 @@ int swp_swapcount(swp_entry_t entry) } static bool swap_page_trans_huge_swapped(struct swap_info_struct *si, - swp_entry_t entry) + swp_entry_t entry, int order) { struct swap_cluster_info *ci; unsigned char *map = si->swap_map; + unsigned int nr_pages = 1 << order; unsigned long roffset = swp_offset(entry); - unsigned long offset = round_down(roffset, SWAPFILE_CLUSTER); + unsigned long offset = round_down(roffset, nr_pages); int i; bool ret = false; ci = lock_cluster_or_swap_info(si, offset); - if (!ci || !cluster_is_huge(ci)) { + if (!ci || nr_pages == 1) { if (swap_count(map[roffset])) ret = true; goto unlock_out; } - for (i = 0; i < SWAPFILE_CLUSTER; i++) { + for (i = 0; i < nr_pages; i++) { if (swap_count(map[offset + i])) { ret = true; break; @@ -1556,7 +1526,7 @@ static bool folio_swapped(struct folio *folio) if (!IS_ENABLED(CONFIG_THP_SWAP) || likely(!folio_test_large(folio))) return swap_swapcount(si, entry) != 0; - return swap_page_trans_huge_swapped(si, entry); + return swap_page_trans_huge_swapped(si, entry, folio_order(folio)); } /** @@ -1622,8 +1592,7 @@ int free_swap_and_cache(swp_entry_t entry) } count = __swap_entry_free(p, entry); - if (count == SWAP_HAS_CACHE && - !swap_page_trans_huge_swapped(p, entry)) + if (count == SWAP_HAS_CACHE) __try_to_reclaim_swap(p, swp_offset(entry), TTRS_UNMAPPED | TTRS_FULL); put_swap_device(p); From patchwork Mon Apr 8 18:39:41 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ryan Roberts X-Patchwork-Id: 13621503 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6FB27CD129A for ; Mon, 8 Apr 2024 18:40:09 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id E7ADA6B0088; Mon, 8 Apr 2024 14:40:08 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id E2A6E6B0089; Mon, 8 Apr 2024 14:40:08 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id CA4DC6B008A; Mon, 8 Apr 2024 14:40:08 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id AE9AD6B0088 for ; Mon, 8 Apr 2024 14:40:08 -0400 (EDT) Received: from smtpin23.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay04.hostedemail.com (Postfix) with ESMTP id 6E9511A0116 for ; Mon, 8 Apr 2024 18:40:08 +0000 (UTC) X-FDA: 81987229296.23.81B9CED Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf03.hostedemail.com (Postfix) with ESMTP id BE11D20016 for ; Mon, 8 Apr 2024 18:40:06 +0000 (UTC) Authentication-Results: imf03.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf03.hostedemail.com: domain of ryan.roberts@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=ryan.roberts@arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1712601606; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: 
From: Ryan Roberts
To: Andrew Morton, David Hildenbrand, Matthew Wilcox, Huang Ying,
    Gao Xiang, Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang,
    Barry Song <21cnbao@gmail.com>, Chris Li, Lance Yang
Cc: Ryan Roberts, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 2/7] mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache()
Date: Mon, 8 Apr 2024 19:39:41 +0100
Message-Id: <20240408183946.2991168-3-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240408183946.2991168-1-ryan.roberts@arm.com>
References: <20240408183946.2991168-1-ryan.roberts@arm.com>
MIME-Version: 1.0

Now that we no longer have a convenient flag in the cluster to determine
if a folio is large, free_swap_and_cache() will
take a reference and lock a large folio much more often, which could lead to contention and (e.g.) failure to split large folios, etc. Let's solve that problem by batch freeing swap and cache with a new function, free_swap_and_cache_nr(), to free a contiguous range of swap entries together. This allows us to first drop a reference to each swap slot before we try to release the cache folio. This means we only try to release the folio once, only taking the reference and lock once - much better than the previous 512 times for the 2M THP case. Contiguous swap entries are gathered in zap_pte_range() and madvise_free_pte_range() in a similar way to how present ptes are already gathered in zap_pte_range(). While we are at it, let's simplify by converting the return type of both functions to void. The return value was used only by zap_pte_range() to print a bad pte, and was ignored by everyone else, so the extra reporting wasn't exactly guaranteed. We will still get the warning with most of the information from get_swap_device(). With the batch version, we wouldn't know which pte was bad anyway so could print the wrong one. Signed-off-by: Ryan Roberts Acked-by: David Hildenbrand --- include/linux/pgtable.h | 29 ++++++++++++ include/linux/swap.h | 12 +++-- mm/internal.h | 63 ++++++++++++++++++++++++++ mm/madvise.c | 12 +++-- mm/memory.c | 13 +++--- mm/swapfile.c | 97 +++++++++++++++++++++++++++++++++-------- 6 files changed, 195 insertions(+), 31 deletions(-) diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index a3fc8150b047..75096025fe52 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -708,6 +708,35 @@ static inline void pte_clear_not_present_full(struct mm_struct *mm, } #endif +#ifndef clear_not_present_full_ptes +/** + * clear_not_present_full_ptes - Clear multiple not present PTEs which are + * consecutive in the pgtable. + * @mm: Address space the ptes represent. + * @addr: Address of the first pte. + * @ptep: Page table pointer for the first entry. + * @nr: Number of entries to clear. + * @full: Whether we are clearing a full mm. + * + * May be overridden by the architecture; otherwise, implemented as a simple + * loop over pte_clear_not_present_full(). + * + * Context: The caller holds the page table lock. The PTEs are all not present. + * The PTEs are all in the same PMD. 
+ */ +static inline void clear_not_present_full_ptes(struct mm_struct *mm, + unsigned long addr, pte_t *ptep, unsigned int nr, int full) +{ + for (;;) { + pte_clear_not_present_full(mm, addr, ptep, full); + if (--nr == 0) + break; + ptep++; + addr += PAGE_SIZE; + } +} +#endif + #ifndef __HAVE_ARCH_PTEP_CLEAR_FLUSH extern pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long address, diff --git a/include/linux/swap.h b/include/linux/swap.h index f6f78198f000..5737236dc3ce 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -471,7 +471,7 @@ extern int swap_duplicate(swp_entry_t); extern int swapcache_prepare(swp_entry_t); extern void swap_free(swp_entry_t); extern void swapcache_free_entries(swp_entry_t *entries, int n); -extern int free_swap_and_cache(swp_entry_t); +extern void free_swap_and_cache_nr(swp_entry_t entry, int nr); int swap_type_of(dev_t device, sector_t offset); int find_first_swap(dev_t *device); extern unsigned int count_swap_pages(int, int); @@ -520,8 +520,9 @@ static inline void put_swap_device(struct swap_info_struct *si) #define free_pages_and_swap_cache(pages, nr) \ release_pages((pages), (nr)); -/* used to sanity check ptes in zap_pte_range when CONFIG_SWAP=0 */ -#define free_swap_and_cache(e) is_pfn_swap_entry(e) +static inline void free_swap_and_cache_nr(swp_entry_t entry, int nr) +{ +} static inline void free_swap_cache(struct folio *folio) { @@ -589,6 +590,11 @@ static inline int add_swap_extent(struct swap_info_struct *sis, } #endif /* CONFIG_SWAP */ +static inline void free_swap_and_cache(swp_entry_t entry) +{ + free_swap_and_cache_nr(entry, 1); +} + #ifdef CONFIG_MEMCG static inline int mem_cgroup_swappiness(struct mem_cgroup *memcg) { diff --git a/mm/internal.h b/mm/internal.h index 3bdc8693b54f..de68705624b0 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -11,6 +11,8 @@ #include #include #include +#include +#include #include struct folio_batch; @@ -189,6 +191,67 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr, return min(ptep - start_ptep, max_nr); } + +/** + * pte_next_swp_offset - Increment the swap entry offset field of a swap pte. + * @pte: The initial pte state; is_swap_pte(pte) must be true. + * + * Increments the swap offset, while maintaining all other fields, including + * swap type, and any swp pte bits. The resulting pte is returned. + */ +static inline pte_t pte_next_swp_offset(pte_t pte) +{ + swp_entry_t entry = pte_to_swp_entry(pte); + pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry), + swp_offset(entry) + 1)); + + if (pte_swp_soft_dirty(pte)) + new = pte_swp_mksoft_dirty(new); + if (pte_swp_exclusive(pte)) + new = pte_swp_mkexclusive(new); + if (pte_swp_uffd_wp(pte)) + new = pte_swp_mkuffd_wp(new); + + return new; +} + +/** + * swap_pte_batch - detect a PTE batch for a set of contiguous swap entries + * @start_ptep: Page table pointer for the first entry. + * @max_nr: The maximum number of table entries to consider. + * @pte: Page table entry for the first entry. + * + * Detect a batch of contiguous swap entries: consecutive (non-present) PTEs + * containing swap entries all with consecutive offsets and targeting the same + * swap type, all with matching swp pte bits. + * + * max_nr must be at least one and must be limited by the caller so scanning + * cannot exceed a single page table. + * + * Return: the number of table entries in the batch. 
+ */ +static inline int swap_pte_batch(pte_t *start_ptep, int max_nr, pte_t pte) +{ + pte_t expected_pte = pte_next_swp_offset(pte); + const pte_t *end_ptep = start_ptep + max_nr; + pte_t *ptep = start_ptep + 1; + + VM_WARN_ON(max_nr < 1); + VM_WARN_ON(!is_swap_pte(pte)); + VM_WARN_ON(non_swap_entry(pte_to_swp_entry(pte))); + + while (ptep < end_ptep) { + pte = ptep_get(ptep); + + if (!pte_same(pte, expected_pte)) + break; + + expected_pte = pte_next_swp_offset(expected_pte); + ptep++; + } + + return ptep - start_ptep; +} #endif /* CONFIG_MMU */ void __acct_reclaim_writeback(pg_data_t *pgdat, struct folio *folio, diff --git a/mm/madvise.c b/mm/madvise.c index 1f77a51baaac..5011ecb24344 100644 --- a/mm/madvise.c +++ b/mm/madvise.c @@ -628,6 +628,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr, struct folio *folio; int nr_swap = 0; unsigned long next; + int nr, max_nr; next = pmd_addr_end(addr, end); if (pmd_trans_huge(*pmd)) @@ -640,7 +641,8 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr, return 0; flush_tlb_batched_pending(mm); arch_enter_lazy_mmu_mode(); - for (; addr != end; pte++, addr += PAGE_SIZE) { + for (; addr != end; pte += nr, addr += PAGE_SIZE * nr) { + nr = 1; ptent = ptep_get(pte); if (pte_none(ptent)) @@ -655,9 +657,11 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr, entry = pte_to_swp_entry(ptent); if (!non_swap_entry(entry)) { - nr_swap--; - free_swap_and_cache(entry); - pte_clear_not_present_full(mm, addr, pte, tlb->fullmm); + max_nr = (end - addr) / PAGE_SIZE; + nr = swap_pte_batch(pte, max_nr, ptent); + nr_swap -= nr; + free_swap_and_cache_nr(entry, nr); + clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm); } else if (is_hwpoison_entry(entry) || is_poisoned_swp_entry(entry)) { pte_clear_not_present_full(mm, addr, pte, tlb->fullmm); diff --git a/mm/memory.c b/mm/memory.c index b98e4d907a14..0db2aa066a5a 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -1637,12 +1637,13 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb, folio_remove_rmap_pte(folio, page, vma); folio_put(folio); } else if (!non_swap_entry(entry)) { - /* Genuine swap entry, hence a private anon page */ + max_nr = (end - addr) / PAGE_SIZE; + nr = swap_pte_batch(pte, max_nr, ptent); + /* Genuine swap entries, hence a private anon pages */ if (!should_zap_cows(details)) continue; - rss[MM_SWAPENTS]--; - if (unlikely(!free_swap_and_cache(entry))) - print_bad_pte(vma, addr, ptent, NULL); + rss[MM_SWAPENTS] -= nr; + free_swap_and_cache_nr(entry, nr); } else if (is_migration_entry(entry)) { folio = pfn_swap_entry_folio(entry); if (!should_zap_folio(details, folio)) @@ -1665,8 +1666,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb, pr_alert("unrecognized swap entry 0x%lx\n", entry.val); WARN_ON_ONCE(1); } - pte_clear_not_present_full(mm, addr, pte, tlb->fullmm); - zap_install_uffd_wp_if_needed(vma, addr, pte, 1, details, ptent); + clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm); + zap_install_uffd_wp_if_needed(vma, addr, pte, nr, details, ptent); } while (pte += nr, addr += PAGE_SIZE * nr, addr != end); add_mm_rss_vec(mm, rss); diff --git a/mm/swapfile.c b/mm/swapfile.c index 1ded6d1dcab4..20c45757f2b2 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -130,7 +130,11 @@ static inline unsigned char swap_count(unsigned char ent) /* Reclaim the swap entry if swap is getting full*/ #define TTRS_FULL 0x4 -/* returns 1 if swap entry is freed */ +/* + * returns number of pages in the folio that backs the swap entry. 
If positive, + * the folio was reclaimed. If negative, the folio was not reclaimed. If 0, no + * folio was associated with the swap entry. + */ static int __try_to_reclaim_swap(struct swap_info_struct *si, unsigned long offset, unsigned long flags) { @@ -155,6 +159,7 @@ static int __try_to_reclaim_swap(struct swap_info_struct *si, ret = folio_free_swap(folio); folio_unlock(folio); } + ret = ret ? folio_nr_pages(folio) : -folio_nr_pages(folio); folio_put(folio); return ret; } @@ -895,7 +900,7 @@ static int scan_swap_map_slots(struct swap_info_struct *si, swap_was_freed = __try_to_reclaim_swap(si, offset, TTRS_ANYWAY); spin_lock(&si->lock); /* entry was freed successfully, try to use this again */ - if (swap_was_freed) + if (swap_was_freed > 0) goto checks; goto scan; /* check next one */ } @@ -1572,32 +1577,88 @@ bool folio_free_swap(struct folio *folio) return true; } -/* - * Free the swap entry like above, but also try to - * free the page cache entry if it is the last user. +/** + * free_swap_and_cache_nr() - Release reference on range of swap entries and + * reclaim their cache if no more references remain. + * @entry: First entry of range. + * @nr: Number of entries in range. + * + * For each swap entry in the contiguous range, release a reference. If any swap + * entries become free, try to reclaim their underlying folios, if present. The + * offset range is defined by [entry.offset, entry.offset + nr). */ -int free_swap_and_cache(swp_entry_t entry) +void free_swap_and_cache_nr(swp_entry_t entry, int nr) { - struct swap_info_struct *p; + const unsigned long start_offset = swp_offset(entry); + const unsigned long end_offset = start_offset + nr; + unsigned int type = swp_type(entry); + struct swap_info_struct *si; + bool any_only_cache = false; + unsigned long offset; unsigned char count; if (non_swap_entry(entry)) - return 1; + return; - p = get_swap_device(entry); - if (p) { - if (WARN_ON(data_race(!p->swap_map[swp_offset(entry)]))) { - put_swap_device(p); - return 0; + si = get_swap_device(entry); + if (!si) + return; + + if (WARN_ON(end_offset > si->max)) + goto out; + + /* + * First free all entries in the range. + */ + for (offset = start_offset; offset < end_offset; offset++) { + if (data_race(si->swap_map[offset])) { + count = __swap_entry_free(si, swp_entry(type, offset)); + if (count == SWAP_HAS_CACHE) + any_only_cache = true; + } else { + WARN_ON_ONCE(1); } + } + + /* + * Short-circuit the below loop if none of the entries had their + * reference drop to zero. + */ + if (!any_only_cache) + goto out; - count = __swap_entry_free(p, entry); - if (count == SWAP_HAS_CACHE) - __try_to_reclaim_swap(p, swp_offset(entry), + /* + * Now go back over the range trying to reclaim the swap cache. This is + * more efficient for large folios because we will only try to reclaim + * the swap once per folio in the common case. If we do + * __swap_entry_free() and __try_to_reclaim_swap() in the same loop, the + * latter will get a reference and lock the folio for every individual + * page but will only succeed once the swap slot for every subpage is + * zero. + */ + for (offset = start_offset; offset < end_offset; offset += nr) { + nr = 1; + if (READ_ONCE(si->swap_map[offset]) == SWAP_HAS_CACHE) { + /* + * Folios are always naturally aligned in swap so + * advance forward to the next boundary. Zero means no + * folio was found for the swap entry, so advance by 1 + * in this case. Negative value means folio was found + * but could not be reclaimed. 
Here we can still advance + * to the next boundary. + */ + nr = __try_to_reclaim_swap(si, offset, TTRS_UNMAPPED | TTRS_FULL); - put_swap_device(p); + if (nr == 0) + nr = 1; + else if (nr < 0) + nr = -nr; + nr = ALIGN(offset + 1, nr) - offset; + } } - return p != NULL; + +out: + put_swap_device(si); } #ifdef CONFIG_HIBERNATION From patchwork Mon Apr 8 18:39:42 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ryan Roberts X-Patchwork-Id: 13621504 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id C4E55C67861 for ; Mon, 8 Apr 2024 18:40:11 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id DB6DE6B0089; Mon, 8 Apr 2024 14:40:10 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D68B96B008A; Mon, 8 Apr 2024 14:40:10 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C07836B008C; Mon, 8 Apr 2024 14:40:10 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id A63B66B0089 for ; Mon, 8 Apr 2024 14:40:10 -0400 (EDT) Received: from smtpin24.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id 5BEAA160135 for ; Mon, 8 Apr 2024 18:40:10 +0000 (UTC) X-FDA: 81987229380.24.F6664D5 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf20.hostedemail.com (Postfix) with ESMTP id B1D3C1C0012 for ; Mon, 8 Apr 2024 18:40:08 +0000 (UTC) Authentication-Results: imf20.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf20.hostedemail.com: domain of ryan.roberts@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=ryan.roberts@arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1712601608; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=CPvydlf1x8WC7f0KwckZbYpQubLZswQl7673V7csftE=; b=umqfIxwNF1yUnNvjA9HeXoFVUjLO5hkyMmbu7u5prEre7bGC7Daqqx4IfWRcsLd58pW9sJ Z8udknZ66ZiNoNGwURDsa20lORQKBZms5PEqEjEZvDB8hCsr/IcE9vjGwCRGMzNFnSWoZE BUhpFsdY7xLxOPKL5tLRQxNyVsV/mc8= ARC-Authentication-Results: i=1; imf20.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf20.hostedemail.com: domain of ryan.roberts@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=ryan.roberts@arm.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1712601608; a=rsa-sha256; cv=none; b=3XgWUVEIGoZv0+GeObZ6P7VKtrIIc2v9XhqfrB5wp/8OJADipzduYgFiySRCmVtf3gc4w7 511/fe3XvZhK5w5GXJPINWWQauPAr0ykc0NPjq4PVbhPslVlPSgEUIVH61j7S6Z6w6Fglm trwj51GvRLCUN/SQgl/vrH4QFs/kIPQ= Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 86AF0139F; Mon, 8 Apr 2024 11:40:38 -0700 (PDT) Received: from e125769.cambridge.arm.com (e125769.cambridge.arm.com [10.1.196.27]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 36AA53F766; Mon, 8 Apr 2024 11:40:06 -0700 (PDT) From: Ryan Roberts To: Andrew Morton , David Hildenbrand , Matthew Wilcox , Huang Ying 
, Gao Xiang, Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang,
    Barry Song <21cnbao@gmail.com>, Chris Li, Lance Yang
Cc: Ryan Roberts, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 3/7] mm: swap: Simplify struct percpu_cluster
Date: Mon, 8 Apr 2024 19:39:42 +0100
Message-Id: <20240408183946.2991168-4-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240408183946.2991168-1-ryan.roberts@arm.com>
References: <20240408183946.2991168-1-ryan.roberts@arm.com>
MIME-Version: 1.0

struct percpu_cluster stores the index of the cpu's current cluster and
the offset of the next entry that will be allocated for that cpu. These
two pieces of information are redundant because the cluster index is
just (offset / SWAPFILE_CLUSTER).

The only reason for explicitly keeping the cluster index is that the
structure used for it also has a flag to indicate "no cluster". However,
this data structure also contains a spin lock, which is never used in
this context; as a side effect the code copies the spinlock_t structure,
which is questionable coding practice in my view.

So let's clean this up and store only the next offset, and use a
sentinel value (SWAP_NEXT_INVALID) to indicate "no cluster".
SWAP_NEXT_INVALID is chosen to be 0, because 0 will never be seen
legitimately; the first page in the swap file is the swap header, which
is always marked bad to prevent it from being allocated as an entry.
This also prevents the cluster to which it belongs being marked free,
so it will never appear on the free list.

This change saves 16 bytes per cpu. And given we are shortly going to
extend this mechanism to be per-cpu-AND-per-order, we will end up saving
16 * 9 = 144 bytes per cpu, which adds up if you have 256 cpus in the
system.
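As a small illustration of why the next offset alone is sufficient
(a userspace sketch; the constants and struct here are simplified
stand-ins, not the kernel definitions): the cluster index is recoverable
by division, and offset 0 can double as the "no current cluster"
sentinel because the swap header page is never handed out.

/* Userspace sketch of the offset-only per-cpu cluster state. */
#include <stdio.h>

#define SWAPFILE_CLUSTER	256
#define SWAP_NEXT_INVALID	0

struct percpu_cluster {
	unsigned int next;	/* likely next allocation offset */
};

static unsigned int cluster_index(unsigned int offset)
{
	return offset / SWAPFILE_CLUSTER;
}

int main(void)
{
	struct percpu_cluster c = { .next = SWAP_NEXT_INVALID };

	if (c.next == SWAP_NEXT_INVALID)
		printf("no current cluster; take one from the free list\n");

	c.next = 3 * SWAPFILE_CLUSTER + 42;	/* somewhere inside cluster 3 */
	printf("next offset %u lives in cluster %u\n",
	       c.next, cluster_index(c.next));
	return 0;
}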
Reviewed-by: "Huang, Ying" Signed-off-by: Ryan Roberts --- include/linux/swap.h | 9 ++++++++- mm/swapfile.c | 22 +++++++++++----------- 2 files changed, 19 insertions(+), 12 deletions(-) diff --git a/include/linux/swap.h b/include/linux/swap.h index 5737236dc3ce..5e1e4f5bf0cb 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -260,13 +260,20 @@ struct swap_cluster_info { #define CLUSTER_FLAG_FREE 1 /* This cluster is free */ #define CLUSTER_FLAG_NEXT_NULL 2 /* This cluster has no next cluster */ +/* + * The first page in the swap file is the swap header, which is always marked + * bad to prevent it from being allocated as an entry. This also prevents the + * cluster to which it belongs being marked free. Therefore 0 is safe to use as + * a sentinel to indicate next is not valid in percpu_cluster. + */ +#define SWAP_NEXT_INVALID 0 + /* * We assign a cluster to each CPU, so each CPU can allocate swap entry from * its own cluster and swapout sequentially. The purpose is to optimize swapout * throughput. */ struct percpu_cluster { - struct swap_cluster_info index; /* Current cluster index */ unsigned int next; /* Likely next allocation offset */ }; diff --git a/mm/swapfile.c b/mm/swapfile.c index 20c45757f2b2..e3f855475278 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -609,7 +609,7 @@ scan_swap_map_ssd_cluster_conflict(struct swap_info_struct *si, return false; percpu_cluster = this_cpu_ptr(si->percpu_cluster); - cluster_set_null(&percpu_cluster->index); + percpu_cluster->next = SWAP_NEXT_INVALID; return true; } @@ -622,14 +622,14 @@ static bool scan_swap_map_try_ssd_cluster(struct swap_info_struct *si, { struct percpu_cluster *cluster; struct swap_cluster_info *ci; - unsigned long tmp, max; + unsigned int tmp, max; new_cluster: cluster = this_cpu_ptr(si->percpu_cluster); - if (cluster_is_null(&cluster->index)) { + tmp = cluster->next; + if (tmp == SWAP_NEXT_INVALID) { if (!cluster_list_empty(&si->free_clusters)) { - cluster->index = si->free_clusters.head; - cluster->next = cluster_next(&cluster->index) * + tmp = cluster_next(&si->free_clusters.head) * SWAPFILE_CLUSTER; } else if (!cluster_list_empty(&si->discard_clusters)) { /* @@ -649,9 +649,7 @@ static bool scan_swap_map_try_ssd_cluster(struct swap_info_struct *si, * Other CPUs can use our cluster if they can't find a free cluster, * check if there is still free entry in the cluster */ - tmp = cluster->next; - max = min_t(unsigned long, si->max, - (cluster_next(&cluster->index) + 1) * SWAPFILE_CLUSTER); + max = min_t(unsigned long, si->max, ALIGN(tmp + 1, SWAPFILE_CLUSTER)); if (tmp < max) { ci = lock_cluster(si, tmp); while (tmp < max) { @@ -662,12 +660,13 @@ static bool scan_swap_map_try_ssd_cluster(struct swap_info_struct *si, unlock_cluster(ci); } if (tmp >= max) { - cluster_set_null(&cluster->index); + cluster->next = SWAP_NEXT_INVALID; goto new_cluster; } - cluster->next = tmp + 1; *offset = tmp; *scan_base = tmp; + tmp += 1; + cluster->next = tmp < max ? 
tmp : SWAP_NEXT_INVALID; return true; } @@ -3163,8 +3162,9 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags) } for_each_possible_cpu(cpu) { struct percpu_cluster *cluster; + cluster = per_cpu_ptr(p->percpu_cluster, cpu); - cluster_set_null(&cluster->index); + cluster->next = SWAP_NEXT_INVALID; } } else { atomic_inc(&nr_rotate_swap); From patchwork Mon Apr 8 18:39:43 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ryan Roberts X-Patchwork-Id: 13621505 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2A270CD1296 for ; Mon, 8 Apr 2024 18:40:14 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id AAA5A6B008A; Mon, 8 Apr 2024 14:40:12 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id A5A356B008C; Mon, 8 Apr 2024 14:40:12 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 8AE4C6B0092; Mon, 8 Apr 2024 14:40:12 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 67DDD6B008A for ; Mon, 8 Apr 2024 14:40:12 -0400 (EDT) Received: from smtpin12.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id 352D3120146 for ; Mon, 8 Apr 2024 18:40:12 +0000 (UTC) X-FDA: 81987229464.12.213B1D1 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf20.hostedemail.com (Postfix) with ESMTP id 91FFF1C001D for ; Mon, 8 Apr 2024 18:40:10 +0000 (UTC) Authentication-Results: imf20.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf20.hostedemail.com: domain of ryan.roberts@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=ryan.roberts@arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1712601610; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=bLoGWHtb5wa44C9WqM71mAr6Jt41pD2YDY/H0/kn8tY=; b=PcM6/eqcuwMSqTztWxuI/jNEt5dsLAGPin/w5EID3kUKwNQpj0ELaJ/0DD7kuDMVGYPjQt +TKaMYPREt95jKZIwxyz3L/YTJZiHcE0/mQ8Q4rVNzjJ4eHFr3XL4JbeCVC1LLMJ5xOoNz Bdfvec+Yb4YZy0x0N5iLvJ3IQJG+vC8= ARC-Authentication-Results: i=1; imf20.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf20.hostedemail.com: domain of ryan.roberts@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=ryan.roberts@arm.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1712601610; a=rsa-sha256; cv=none; b=LO1Z+mt50lr2GyWrpQzWv19tsfP76gl8fyBgCEx8KrfmP0PVojoc/QuEDMVqqaf1J85HXc oTrXm6s2kwASsrXMxZUPfznYiE77Cq6H3lc44Uywr1wi5WMuNeozfbzvL+ny7MLXr91Ttz XC8qlmalzRkxwKYDEqzMbCGXBHbVJjQ= Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9D363DA7; Mon, 8 Apr 2024 11:40:40 -0700 (PDT) Received: from e125769.cambridge.arm.com (e125769.cambridge.arm.com [10.1.196.27]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 4CC963F766; Mon, 8 Apr 2024 11:40:08 -0700 (PDT) From: Ryan Roberts To: Andrew Morton , David 
Hildenbrand, Matthew Wilcox, Huang Ying, Gao Xiang, Yu Zhao, Yang Shi,
    Michal Hocko, Kefeng Wang, Barry Song <21cnbao@gmail.com>, Chris Li,
    Lance Yang
Cc: Ryan Roberts, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 4/7] mm: swap: Update get_swap_pages() to take folio order
Date: Mon, 8 Apr 2024 19:39:43 +0100
Message-Id: <20240408183946.2991168-5-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240408183946.2991168-1-ryan.roberts@arm.com>
References: <20240408183946.2991168-1-ryan.roberts@arm.com>
MIME-Version: 1.0

We are about to allow swap storage of any mTHP size. To prepare for
that, let's change get_swap_pages() to take a folio order parameter
instead of nr_pages. This makes the interface self-documenting; a
power-of-2 number of pages must be provided. We will also need the order
internally so this simplifies accessing it.
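For illustration of the order-based interface (a userspace sketch with
made-up helper names, not the kernel API beyond what the diff below
shows): an order can only express power-of-2 sizes, and converting
between order and entry count is trivial in both directions.

/* Order <-> entry-count conversions (GCC/Clang __builtin_ctz used). */
#include <stdio.h>

static unsigned int order_to_nr(int order)	{ return 1u << order; }
static int nr_to_order(unsigned int nr)		{ return __builtin_ctz(nr); } /* nr must be a power of 2 */

int main(void)
{
	printf("order 0 -> %u entry\n",   order_to_nr(0));	/* small folio */
	printf("order 4 -> %u entries\n", order_to_nr(4));	/* 64K mTHP on 4K pages */
	printf("order 9 -> %u entries\n", order_to_nr(9));	/* PMD-sized THP on 4K pages */
	printf("512 entries -> order %d\n", nr_to_order(512));
	return 0;
}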
Reviewed-by: "Huang, Ying" Signed-off-by: Ryan Roberts Reviewed-by: David Hildenbrand --- include/linux/swap.h | 2 +- mm/swap_slots.c | 6 +++--- mm/swapfile.c | 13 +++++++------ 3 files changed, 11 insertions(+), 10 deletions(-) diff --git a/include/linux/swap.h b/include/linux/swap.h index 5e1e4f5bf0cb..b888e1080a94 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -471,7 +471,7 @@ swp_entry_t folio_alloc_swap(struct folio *folio); bool folio_free_swap(struct folio *folio); void put_swap_folio(struct folio *folio, swp_entry_t entry); extern swp_entry_t get_swap_page_of_type(int); -extern int get_swap_pages(int n, swp_entry_t swp_entries[], int entry_size); +extern int get_swap_pages(int n, swp_entry_t swp_entries[], int order); extern int add_swap_count_continuation(swp_entry_t, gfp_t); extern void swap_shmem_alloc(swp_entry_t); extern int swap_duplicate(swp_entry_t); diff --git a/mm/swap_slots.c b/mm/swap_slots.c index 53abeaf1371d..13ab3b771409 100644 --- a/mm/swap_slots.c +++ b/mm/swap_slots.c @@ -264,7 +264,7 @@ static int refill_swap_slots_cache(struct swap_slots_cache *cache) cache->cur = 0; if (swap_slot_cache_active) cache->nr = get_swap_pages(SWAP_SLOTS_CACHE_SIZE, - cache->slots, 1); + cache->slots, 0); return cache->nr; } @@ -311,7 +311,7 @@ swp_entry_t folio_alloc_swap(struct folio *folio) if (folio_test_large(folio)) { if (IS_ENABLED(CONFIG_THP_SWAP)) - get_swap_pages(1, &entry, folio_nr_pages(folio)); + get_swap_pages(1, &entry, folio_order(folio)); goto out; } @@ -343,7 +343,7 @@ swp_entry_t folio_alloc_swap(struct folio *folio) goto out; } - get_swap_pages(1, &entry, 1); + get_swap_pages(1, &entry, 0); out: if (mem_cgroup_try_charge_swap(folio, entry)) { put_swap_folio(folio, entry); diff --git a/mm/swapfile.c b/mm/swapfile.c index e3f855475278..d2e3d3cd439f 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -278,15 +278,15 @@ static void discard_swap_cluster(struct swap_info_struct *si, #ifdef CONFIG_THP_SWAP #define SWAPFILE_CLUSTER HPAGE_PMD_NR -#define swap_entry_size(size) (size) +#define swap_entry_order(order) (order) #else #define SWAPFILE_CLUSTER 256 /* - * Define swap_entry_size() as constant to let compiler to optimize + * Define swap_entry_order() as constant to let compiler to optimize * out some code if !CONFIG_THP_SWAP */ -#define swap_entry_size(size) 1 +#define swap_entry_order(order) 0 #endif #define LATENCY_LIMIT 256 @@ -1042,9 +1042,10 @@ static void swap_free_cluster(struct swap_info_struct *si, unsigned long idx) swap_range_free(si, offset, SWAPFILE_CLUSTER); } -int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_size) +int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_order) { - unsigned long size = swap_entry_size(entry_size); + int order = swap_entry_order(entry_order); + unsigned long size = 1 << order; struct swap_info_struct *si, *next; long avail_pgs; int n_ret = 0; @@ -1349,7 +1350,7 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry) unsigned char *map; unsigned int i, free_entries = 0; unsigned char val; - int size = swap_entry_size(folio_nr_pages(folio)); + int size = 1 << swap_entry_order(folio_order(folio)); si = _swap_info_get(entry); if (!si) From patchwork Mon Apr 8 18:39:44 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ryan Roberts X-Patchwork-Id: 13621506 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org 
From: Ryan Roberts
To: Andrew Morton, David Hildenbrand, Matthew Wilcox, Huang Ying,
    Gao Xiang, Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang,
    Barry Song <21cnbao@gmail.com>, Chris Li, Lance Yang
Cc: Ryan Roberts, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 5/7] mm: swap: Allow storage of all mTHP orders
Date: Mon, 8 Apr 2024 19:39:44 +0100
Message-Id: <20240408183946.2991168-6-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240408183946.2991168-1-ryan.roberts@arm.com>
References: <20240408183946.2991168-1-ryan.roberts@arm.com>
MIME-Version: 1.0
Multi-size THP enables performance improvements by allocating large,
pte-mapped folios for anonymous memory. However, I've observed that on
an arm64 system running a parallel workload (e.g. kernel compilation)
across many cores, under high memory pressure, performance regresses.
This is due to bottlenecking on the increased number of TLBIs caused by
all the extra folio splitting when the large folios are swapped out.

Therefore, solve this regression by adding support for swapping out mTHP
without needing to split the folio, just as is already done for
PMD-sized THP. This change only applies when CONFIG_THP_SWAP is enabled,
and when the swap backing store is a non-rotating block device. These
are the same constraints as for the existing PMD-sized THP swap-out
support.

Note that no attempt is made to swap-in (m)THP here - this is still done
page-by-page, like for PMD-sized THP. But swapping-out mTHP is a
prerequisite for swapping-in mTHP.

The main change here is to improve the swap entry allocator so that it
can allocate any power-of-2 number of contiguous entries between
[1, (1 << PMD_ORDER)]. This is done by allocating a cluster for each
distinct order and allocating sequentially from it until the cluster is
full. This ensures that we don't need to search the map and we get no
fragmentation due to alignment padding for different orders in the
cluster. If there is no current cluster for a given order, we attempt to
allocate a free cluster from the list. If there are no free clusters, we
fail the allocation and the caller can fall back to splitting the folio
and allocating individual entries (as per the existing PMD-sized THP
fallback).

The per-order current clusters are maintained per-cpu using the existing
infrastructure. This is done to avoid interleaving pages from different
tasks, which would prevent IO being batched. This is already done for
the order-0 allocations, so we follow the same pattern.

As is done for order-0 per-cpu clusters, the scanner can now steal
order-0 entries from any per-cpu-per-order reserved cluster. This
ensures that when the swap file is getting full, space doesn't get tied
up in the per-cpu reserves.
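As a concrete illustration of the per-order scan within a single cluster
(a userspace sketch only: swap_map is a toy array and the helper names
are made up, not the kernel implementation): starting from an aligned
per-cpu hint, step in strides of 1 << order so candidates stay naturally
aligned, and take the first fully free run.

/* Userspace model of scanning one cluster for an aligned free run. */
#include <stdio.h>

#define SWAPFILE_CLUSTER 256

static unsigned char swap_map[SWAPFILE_CLUSTER];	/* 0 == free */

static int range_empty(unsigned int start, unsigned int nr)
{
	for (unsigned int i = 0; i < nr; i++)
		if (swap_map[start + i])
			return 0;
	return 1;
}

/* Returns the chosen offset, or -1 if the cluster has no aligned free run.
 * Assumes 'next' is already aligned to 1 << order, as described above. */
static int alloc_in_cluster(unsigned int next, int order)
{
	unsigned int nr = 1u << order;

	for (unsigned int off = next; off + nr <= SWAPFILE_CLUSTER; off += nr)
		if (range_empty(off, nr))
			return (int)off;
	return -1;
}

int main(void)
{
	swap_map[0] = swap_map[17] = 1;		/* a couple of busy entries */
	printf("order-4 slot: %d\n", alloc_in_cluster(0, 4));	/* 32 */
	printf("order-0 slot: %d\n", alloc_in_cluster(0, 0));	/* 1 */
	return 0;
}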
This change only modifies swap to be able to accept any order mTHP. It doesn't change the callers to elide doing the actual split. That will be done in separate changes. Reviewed-by: "Huang, Ying" Signed-off-by: Ryan Roberts --- include/linux/swap.h | 8 ++- mm/swapfile.c | 162 ++++++++++++++++++++++++------------------- 2 files changed, 98 insertions(+), 72 deletions(-) diff --git a/include/linux/swap.h b/include/linux/swap.h index b888e1080a94..11c53692f65f 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -268,13 +268,19 @@ struct swap_cluster_info { */ #define SWAP_NEXT_INVALID 0 +#ifdef CONFIG_THP_SWAP +#define SWAP_NR_ORDERS (PMD_ORDER + 1) +#else +#define SWAP_NR_ORDERS 1 +#endif + /* * We assign a cluster to each CPU, so each CPU can allocate swap entry from * its own cluster and swapout sequentially. The purpose is to optimize swapout * throughput. */ struct percpu_cluster { - unsigned int next; /* Likely next allocation offset */ + unsigned int next[SWAP_NR_ORDERS]; /* Likely next allocation offset */ }; struct swap_cluster_list { diff --git a/mm/swapfile.c b/mm/swapfile.c index d2e3d3cd439f..148ef08f19dd 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -551,10 +551,12 @@ static void free_cluster(struct swap_info_struct *si, unsigned long idx) /* * The cluster corresponding to page_nr will be used. The cluster will be - * removed from free cluster list and its usage counter will be increased. + * removed from free cluster list and its usage counter will be increased by + * count. */ -static void inc_cluster_info_page(struct swap_info_struct *p, - struct swap_cluster_info *cluster_info, unsigned long page_nr) +static void add_cluster_info_page(struct swap_info_struct *p, + struct swap_cluster_info *cluster_info, unsigned long page_nr, + unsigned long count) { unsigned long idx = page_nr / SWAPFILE_CLUSTER; @@ -563,9 +565,19 @@ static void inc_cluster_info_page(struct swap_info_struct *p, if (cluster_is_free(&cluster_info[idx])) alloc_cluster(p, idx); - VM_BUG_ON(cluster_count(&cluster_info[idx]) >= SWAPFILE_CLUSTER); + VM_BUG_ON(cluster_count(&cluster_info[idx]) + count > SWAPFILE_CLUSTER); cluster_set_count(&cluster_info[idx], - cluster_count(&cluster_info[idx]) + 1); + cluster_count(&cluster_info[idx]) + count); +} + +/* + * The cluster corresponding to page_nr will be used. The cluster will be + * removed from free cluster list and its usage counter will be increased by 1. + */ +static void inc_cluster_info_page(struct swap_info_struct *p, + struct swap_cluster_info *cluster_info, unsigned long page_nr) +{ + add_cluster_info_page(p, cluster_info, page_nr, 1); } /* @@ -595,7 +607,7 @@ static void dec_cluster_info_page(struct swap_info_struct *p, */ static bool scan_swap_map_ssd_cluster_conflict(struct swap_info_struct *si, - unsigned long offset) + unsigned long offset, int order) { struct percpu_cluster *percpu_cluster; bool conflict; @@ -609,24 +621,39 @@ scan_swap_map_ssd_cluster_conflict(struct swap_info_struct *si, return false; percpu_cluster = this_cpu_ptr(si->percpu_cluster); - percpu_cluster->next = SWAP_NEXT_INVALID; + percpu_cluster->next[order] = SWAP_NEXT_INVALID; + return true; +} + +static inline bool swap_range_empty(char *swap_map, unsigned int start, + unsigned int nr_pages) +{ + unsigned int i; + + for (i = 0; i < nr_pages; i++) { + if (swap_map[start + i]) + return false; + } + return true; } /* - * Try to get a swap entry from current cpu's swap entry pool (a cluster). This - * might involve allocating a new cluster for current CPU too. 
+ * Try to get swap entries with specified order from current cpu's swap entry + * pool (a cluster). This might involve allocating a new cluster for current CPU + * too. */ static bool scan_swap_map_try_ssd_cluster(struct swap_info_struct *si, - unsigned long *offset, unsigned long *scan_base) + unsigned long *offset, unsigned long *scan_base, int order) { + unsigned int nr_pages = 1 << order; struct percpu_cluster *cluster; struct swap_cluster_info *ci; unsigned int tmp, max; new_cluster: cluster = this_cpu_ptr(si->percpu_cluster); - tmp = cluster->next; + tmp = cluster->next[order]; if (tmp == SWAP_NEXT_INVALID) { if (!cluster_list_empty(&si->free_clusters)) { tmp = cluster_next(&si->free_clusters.head) * @@ -647,26 +674,27 @@ static bool scan_swap_map_try_ssd_cluster(struct swap_info_struct *si, /* * Other CPUs can use our cluster if they can't find a free cluster, - * check if there is still free entry in the cluster + * check if there is still free entry in the cluster, maintaining + * natural alignment. */ max = min_t(unsigned long, si->max, ALIGN(tmp + 1, SWAPFILE_CLUSTER)); if (tmp < max) { ci = lock_cluster(si, tmp); while (tmp < max) { - if (!si->swap_map[tmp]) + if (swap_range_empty(si->swap_map, tmp, nr_pages)) break; - tmp++; + tmp += nr_pages; } unlock_cluster(ci); } if (tmp >= max) { - cluster->next = SWAP_NEXT_INVALID; + cluster->next[order] = SWAP_NEXT_INVALID; goto new_cluster; } *offset = tmp; *scan_base = tmp; - tmp += 1; - cluster->next = tmp < max ? tmp : SWAP_NEXT_INVALID; + tmp += nr_pages; + cluster->next[order] = tmp < max ? tmp : SWAP_NEXT_INVALID; return true; } @@ -796,13 +824,14 @@ static bool swap_offset_available_and_locked(struct swap_info_struct *si, static int scan_swap_map_slots(struct swap_info_struct *si, unsigned char usage, int nr, - swp_entry_t slots[]) + swp_entry_t slots[], int order) { struct swap_cluster_info *ci; unsigned long offset; unsigned long scan_base; unsigned long last_in_cluster = 0; int latency_ration = LATENCY_LIMIT; + unsigned int nr_pages = 1 << order; int n_ret = 0; bool scanned_many = false; @@ -817,6 +846,25 @@ static int scan_swap_map_slots(struct swap_info_struct *si, * And we let swap pages go all over an SSD partition. Hugh */ + if (order > 0) { + /* + * Should not even be attempting large allocations when huge + * page swap is disabled. Warn and fail the allocation. + */ + if (!IS_ENABLED(CONFIG_THP_SWAP) || + nr_pages > SWAPFILE_CLUSTER) { + VM_WARN_ON_ONCE(1); + return 0; + } + + /* + * Swapfile is not block device or not using clusters so unable + * to allocate large entries. 
+ */ + if (!(si->flags & SWP_BLKDEV) || !si->cluster_info) + return 0; + } + si->flags += SWP_SCANNING; /* * Use percpu scan base for SSD to reduce lock contention on @@ -831,8 +879,11 @@ static int scan_swap_map_slots(struct swap_info_struct *si, /* SSD algorithm */ if (si->cluster_info) { - if (!scan_swap_map_try_ssd_cluster(si, &offset, &scan_base)) + if (!scan_swap_map_try_ssd_cluster(si, &offset, &scan_base, order)) { + if (order > 0) + goto no_page; goto scan; + } } else if (unlikely(!si->cluster_nr--)) { if (si->pages - si->inuse_pages < SWAPFILE_CLUSTER) { si->cluster_nr = SWAPFILE_CLUSTER - 1; @@ -874,13 +925,16 @@ static int scan_swap_map_slots(struct swap_info_struct *si, checks: if (si->cluster_info) { - while (scan_swap_map_ssd_cluster_conflict(si, offset)) { + while (scan_swap_map_ssd_cluster_conflict(si, offset, order)) { /* take a break if we already got some slots */ if (n_ret) goto done; if (!scan_swap_map_try_ssd_cluster(si, &offset, - &scan_base)) + &scan_base, order)) { + if (order > 0) + goto no_page; goto scan; + } } } if (!(si->flags & SWP_WRITEOK)) @@ -911,11 +965,11 @@ static int scan_swap_map_slots(struct swap_info_struct *si, else goto done; } - WRITE_ONCE(si->swap_map[offset], usage); - inc_cluster_info_page(si, si->cluster_info, offset); + memset(si->swap_map + offset, usage, nr_pages); + add_cluster_info_page(si, si->cluster_info, offset, nr_pages); unlock_cluster(ci); - swap_range_alloc(si, offset, 1); + swap_range_alloc(si, offset, nr_pages); slots[n_ret++] = swp_entry(si->type, offset); /* got enough slots or reach max slots? */ @@ -936,8 +990,10 @@ static int scan_swap_map_slots(struct swap_info_struct *si, /* try to get more slots in cluster */ if (si->cluster_info) { - if (scan_swap_map_try_ssd_cluster(si, &offset, &scan_base)) + if (scan_swap_map_try_ssd_cluster(si, &offset, &scan_base, order)) goto checks; + if (order > 0) + goto done; } else if (si->cluster_nr && !si->swap_map[++offset]) { /* non-ssd case, still more slots in cluster? */ --si->cluster_nr; @@ -964,11 +1020,13 @@ static int scan_swap_map_slots(struct swap_info_struct *si, } done: - set_cluster_next(si, offset + 1); + if (order == 0) + set_cluster_next(si, offset + 1); si->flags -= SWP_SCANNING; return n_ret; scan: + VM_WARN_ON(order > 0); spin_unlock(&si->lock); while (++offset <= READ_ONCE(si->highest_bit)) { if (unlikely(--latency_ration < 0)) { @@ -997,38 +1055,6 @@ static int scan_swap_map_slots(struct swap_info_struct *si, return n_ret; } -static int swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot) -{ - unsigned long idx; - struct swap_cluster_info *ci; - unsigned long offset; - - /* - * Should not even be attempting cluster allocations when huge - * page swap is disabled. Warn and fail the allocation. 
- */ - if (!IS_ENABLED(CONFIG_THP_SWAP)) { - VM_WARN_ON_ONCE(1); - return 0; - } - - if (cluster_list_empty(&si->free_clusters)) - return 0; - - idx = cluster_list_first(&si->free_clusters); - offset = idx * SWAPFILE_CLUSTER; - ci = lock_cluster(si, offset); - alloc_cluster(si, idx); - cluster_set_count(ci, SWAPFILE_CLUSTER); - - memset(si->swap_map + offset, SWAP_HAS_CACHE, SWAPFILE_CLUSTER); - unlock_cluster(ci); - swap_range_alloc(si, offset, SWAPFILE_CLUSTER); - *slot = swp_entry(si->type, offset); - - return 1; -} - static void swap_free_cluster(struct swap_info_struct *si, unsigned long idx) { unsigned long offset = idx * SWAPFILE_CLUSTER; @@ -1051,9 +1077,6 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_order) int n_ret = 0; int node; - /* Only single cluster request supported */ - WARN_ON_ONCE(n_goal > 1 && size == SWAPFILE_CLUSTER); - spin_lock(&swap_avail_lock); avail_pgs = atomic_long_read(&nr_swap_pages) / size; @@ -1089,14 +1112,10 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_order) spin_unlock(&si->lock); goto nextsi; } - if (size == SWAPFILE_CLUSTER) { - if (si->flags & SWP_BLKDEV) - n_ret = swap_alloc_cluster(si, swp_entries); - } else - n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE, - n_goal, swp_entries); + n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE, + n_goal, swp_entries, order); spin_unlock(&si->lock); - if (n_ret || size == SWAPFILE_CLUSTER) + if (n_ret || size > 1) goto check_out; cond_resched(); @@ -1673,7 +1692,7 @@ swp_entry_t get_swap_page_of_type(int type) /* This is called for allocating swap entry, not cache */ spin_lock(&si->lock); - if ((si->flags & SWP_WRITEOK) && scan_swap_map_slots(si, 1, 1, &entry)) + if ((si->flags & SWP_WRITEOK) && scan_swap_map_slots(si, 1, 1, &entry, 0)) atomic_long_dec(&nr_swap_pages); spin_unlock(&si->lock); fail: @@ -3127,7 +3146,7 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags) p->flags |= SWP_SYNCHRONOUS_IO; if (p->bdev && bdev_nonrot(p->bdev)) { - int cpu; + int cpu, i; unsigned long ci, nr_cluster; p->flags |= SWP_SOLIDSTATE; @@ -3165,7 +3184,8 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags) struct percpu_cluster *cluster; cluster = per_cpu_ptr(p->percpu_cluster, cpu); - cluster->next = SWAP_NEXT_INVALID; + for (i = 0; i < SWAP_NR_ORDERS; i++) + cluster->next[i] = SWAP_NEXT_INVALID; } } else { atomic_inc(&nr_rotate_swap); From patchwork Mon Apr 8 18:39:45 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ryan Roberts X-Patchwork-Id: 13621507 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 64536C67861 for ; Mon, 8 Apr 2024 18:40:18 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 2E3EB6B0092; Mon, 8 Apr 2024 14:40:17 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 2438D6B0093; Mon, 8 Apr 2024 14:40:17 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 132FD6B0095; Mon, 8 Apr 2024 14:40:17 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id E84776B0092 for ; Mon, 8 Apr 2024 14:40:16 -0400 (EDT) Received: from smtpin11.hostedemail.com (a10.router.float.18 
From: Ryan Roberts
To: Andrew Morton , David Hildenbrand , Matthew Wilcox , Huang Ying , Gao Xiang , Yu Zhao , Yang Shi , Michal Hocko , Kefeng Wang , Barry Song <21cnbao@gmail.com>, Chris Li , Lance Yang
Cc: Ryan Roberts , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Barry Song
Subject: [PATCH v7 6/7] mm: vmscan: Avoid split during shrink_folio_list()
Date: Mon, 8 Apr 2024 19:39:45 +0100
Message-Id: <20240408183946.2991168-7-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240408183946.2991168-1-ryan.roberts@arm.com>
References: <20240408183946.2991168-1-ryan.roberts@arm.com>
MIME-Version: 1.0
Now that swap supports storing all mTHP sizes, avoid splitting large folios before swap-out. This benefits performance of the swap-out path by eliding split_folio_to_list(), which is expensive, and also sets us up for swapping in large folios in a future series.

If the folio is partially mapped, we continue to split it since we want to avoid the extra IO overhead and storage of writing out pages unnecessarily.

THP_SWPOUT and THP_SWPOUT_FALLBACK counters should continue to count events only for PMD-mappable folios to avoid user confusion. THP_SWPOUT already has the appropriate guard. Add a guard for THP_SWPOUT_FALLBACK. It may be appropriate to add per-size counters in future.

Reviewed-by: David Hildenbrand
Reviewed-by: Barry Song
Signed-off-by: Ryan Roberts
---
 mm/vmscan.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c index 00adaf1cb2c3..bca2d9981c95 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1223,25 +1223,25 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, if (!can_split_folio(folio, NULL)) goto activate_locked; /* - * Split folios without a PMD map right - * away. Chances are some or all of the - * tail pages can be freed without IO. + * Split partially mapped folios right away. + * We can free the unmapped pages without IO.
*/ - if (!folio_entire_mapcount(folio) && - split_folio_to_list(folio, - folio_list)) + if (data_race(!list_empty(&folio->_deferred_list)) && + split_folio_to_list(folio, folio_list)) goto activate_locked; } if (!add_to_swap(folio)) { if (!folio_test_large(folio)) goto activate_locked_split; /* Fallback to swap normal pages */ - if (split_folio_to_list(folio, - folio_list)) + if (split_folio_to_list(folio, folio_list)) goto activate_locked; #ifdef CONFIG_TRANSPARENT_HUGEPAGE - count_memcg_folio_events(folio, THP_SWPOUT_FALLBACK, 1); - count_vm_event(THP_SWPOUT_FALLBACK); + if (nr_pages >= HPAGE_PMD_NR) { + count_memcg_folio_events(folio, + THP_SWPOUT_FALLBACK, 1); + count_vm_event(THP_SWPOUT_FALLBACK); + } #endif if (!add_to_swap(folio)) goto activate_locked_split; From patchwork Mon Apr 8 18:39:46 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ryan Roberts X-Patchwork-Id: 13621508 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8694CCD1292 for ; Mon, 8 Apr 2024 18:40:20 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id A5D276B0093; Mon, 8 Apr 2024 14:40:19 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id A0F926B0095; Mon, 8 Apr 2024 14:40:19 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 836CB6B0096; Mon, 8 Apr 2024 14:40:19 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 688246B0093 for ; Mon, 8 Apr 2024 14:40:19 -0400 (EDT) Received: from smtpin08.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 2946F80113 for ; Mon, 8 Apr 2024 18:40:19 +0000 (UTC) X-FDA: 81987229758.08.71C02F2 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf29.hostedemail.com (Postfix) with ESMTP id 73DD8120022 for ; Mon, 8 Apr 2024 18:40:17 +0000 (UTC) Authentication-Results: imf29.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf29.hostedemail.com: domain of ryan.roberts@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=ryan.roberts@arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1712601617; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=O5YvrPE3fLl3521L/ebm3T/m0HKaJXdJeqvtpmc5Eq0=; b=WSAgpAIxdzcNLfTNBWklaakAigztYuOyjzhokKcz/rFtSeG9hTsy4puh6bJFywX16XHCaE hsf3laRmfg153gWI0JZXknmmjfPZ/Jtv8Ap0zMWcxTwOr3vA7F9A1c0TfniaFRLfdzMK5H avQWfiNK0Ua9s2ZoQpSFyj2ll2dSWgk= ARC-Authentication-Results: i=1; imf29.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf29.hostedemail.com: domain of ryan.roberts@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=ryan.roberts@arm.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1712601617; a=rsa-sha256; cv=none; b=rC2ZX975okEFZTm/c4Me88EVzRgwHN0ShdWBuPXuzzth33spJZpgX4z7Ttl/pkpiPAHQ7k TI4j4KJ/VfmesaHsWenVt0tAze8dF/UYney23VkLi+CcwXFb2A3+N4ytouzv/WRiax1AGm PafdFKRc//GI5t46yvnnftKwRk22Ezw= Received: 
From: Ryan Roberts
To: Andrew Morton , David Hildenbrand , Matthew Wilcox , Huang Ying , Gao Xiang , Yu Zhao , Yang Shi , Michal Hocko , Kefeng Wang , Barry Song <21cnbao@gmail.com>, Chris Li , Lance Yang
Cc: Ryan Roberts , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Barry Song
Subject: [PATCH v7 7/7] mm: madvise: Avoid split during MADV_PAGEOUT and MADV_COLD
Date: Mon, 8 Apr 2024 19:39:46 +0100
Message-Id: <20240408183946.2991168-8-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240408183946.2991168-1-ryan.roberts@arm.com>
References: <20240408183946.2991168-1-ryan.roberts@arm.com>
MIME-Version: 1.0

Rework madvise_cold_or_pageout_pte_range() to avoid splitting any large folio that is fully and contiguously mapped in the pageout/cold vm range. This change means that large folios will be maintained all the way to swap storage. This both improves performance during swap-out, by eliding the cost of splitting the folio, and sets us up nicely for maintaining the large folio when it is swapped back in (to be covered in a separate series).

Folios that are not fully mapped in the target range are still split, but note that the behavior is changed so that if the split fails for any reason (folio locked, shared, etc.), we now leave it as is, move to the next pte in the range, and continue work on the remaining folios. Previously any failure of this sort would cause the entire operation to give up and no folios mapped at higher addresses were paged out or made cold.
Given large folios are becoming more common, this old behavior would have likely lead to wasted opportunities. While we are at it, change the code that clears young from the ptes to use ptep_test_and_clear_young(), via the new mkold_ptes() batch helper function. This is more efficent than get_and_clear/modify/set, especially for contpte mappings on arm64, where the old approach would require unfolding/refolding and the new approach can be done in place. Reviewed-by: Barry Song Signed-off-by: Ryan Roberts --- include/linux/pgtable.h | 30 ++++++++++++++ mm/internal.h | 12 +++++- mm/madvise.c | 87 +++++++++++++++++++++++------------------ mm/memory.c | 4 +- 4 files changed, 92 insertions(+), 41 deletions(-) diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index 75096025fe52..e2f45e22a6d1 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -361,6 +361,36 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma, } #endif +#ifndef mkold_ptes +/** + * mkold_ptes - Mark PTEs that map consecutive pages of the same folio as old. + * @vma: VMA the pages are mapped into. + * @addr: Address the first page is mapped at. + * @ptep: Page table pointer for the first entry. + * @nr: Number of entries to mark old. + * + * May be overridden by the architecture; otherwise, implemented as a simple + * loop over ptep_test_and_clear_young(). + * + * Note that PTE bits in the PTE range besides the PFN can differ. For example, + * some PTEs might be write-protected. + * + * Context: The caller holds the page table lock. The PTEs map consecutive + * pages that belong to the same folio. The PTEs are all in the same PMD. + */ +static inline void mkold_ptes(struct vm_area_struct *vma, unsigned long addr, + pte_t *ptep, unsigned int nr) +{ + for (;;) { + ptep_test_and_clear_young(vma, addr, ptep); + if (--nr == 0) + break; + ptep++; + addr += PAGE_SIZE; + } +} +#endif + #ifndef __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG) static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma, diff --git a/mm/internal.h b/mm/internal.h index de68705624b0..9d3250b4a08a 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -130,6 +130,8 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags) * @flags: Flags to modify the PTE batch semantics. * @any_writable: Optional pointer to indicate whether any entry except the * first one is writable. + * @any_young: Optional pointer to indicate whether any entry except the + * first one is young. * * Detect a PTE batch: consecutive (present) PTEs that map consecutive * pages of the same large folio. 
@@ -145,16 +147,18 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags) */ static inline int folio_pte_batch(struct folio *folio, unsigned long addr, pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags, - bool *any_writable) + bool *any_writable, bool *any_young) { unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio); const pte_t *end_ptep = start_ptep + max_nr; pte_t expected_pte, *ptep; - bool writable; + bool writable, young; int nr; if (any_writable) *any_writable = false; + if (any_young) + *any_young = false; VM_WARN_ON_FOLIO(!pte_present(pte), folio); VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio); @@ -168,6 +172,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr, pte = ptep_get(ptep); if (any_writable) writable = !!pte_write(pte); + if (any_young) + young = !!pte_young(pte); pte = __pte_batch_clear_ignored(pte, flags); if (!pte_same(pte, expected_pte)) @@ -183,6 +189,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr, if (any_writable) *any_writable |= writable; + if (any_young) + *any_young |= young; nr = pte_batch_hint(ptep, pte); expected_pte = pte_advance_pfn(expected_pte, nr); diff --git a/mm/madvise.c b/mm/madvise.c index 5011ecb24344..f59169888b8e 100644 --- a/mm/madvise.c +++ b/mm/madvise.c @@ -336,6 +336,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd, LIST_HEAD(folio_list); bool pageout_anon_only_filter; unsigned int batch_count = 0; + int nr; if (fatal_signal_pending(current)) return -EINTR; @@ -423,7 +424,8 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd, return 0; flush_tlb_batched_pending(mm); arch_enter_lazy_mmu_mode(); - for (; addr < end; pte++, addr += PAGE_SIZE) { + for (; addr < end; pte += nr, addr += nr * PAGE_SIZE) { + nr = 1; ptent = ptep_get(pte); if (++batch_count == SWAP_CLUSTER_MAX) { @@ -447,55 +449,66 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd, continue; /* - * Creating a THP page is expensive so split it only if we - * are sure it's worth. Split it if we are only owner. + * If we encounter a large folio, only split it if it is not + * fully mapped within the range we are operating on. Otherwise + * leave it as is so that it can be swapped out whole. If we + * fail to split a folio, leave it in place and advance to the + * next pte in the range. 
*/ if (folio_test_large(folio)) { - int err; - - if (folio_likely_mapped_shared(folio)) - break; - if (pageout_anon_only_filter && !folio_test_anon(folio)) - break; - if (!folio_trylock(folio)) - break; - folio_get(folio); - arch_leave_lazy_mmu_mode(); - pte_unmap_unlock(start_pte, ptl); - start_pte = NULL; - err = split_folio(folio); - folio_unlock(folio); - folio_put(folio); - if (err) - break; - start_pte = pte = - pte_offset_map_lock(mm, pmd, addr, &ptl); - if (!start_pte) - break; - arch_enter_lazy_mmu_mode(); - pte--; - addr -= PAGE_SIZE; - continue; + const fpb_t fpb_flags = FPB_IGNORE_DIRTY | + FPB_IGNORE_SOFT_DIRTY; + int max_nr = (end - addr) / PAGE_SIZE; + bool any_young; + + nr = folio_pte_batch(folio, addr, pte, ptent, max_nr, + fpb_flags, NULL, &any_young); + if (any_young) + ptent = pte_mkyoung(ptent); + + if (nr < folio_nr_pages(folio)) { + int err; + + if (folio_likely_mapped_shared(folio)) + continue; + if (pageout_anon_only_filter && !folio_test_anon(folio)) + continue; + if (!folio_trylock(folio)) + continue; + folio_get(folio); + arch_leave_lazy_mmu_mode(); + pte_unmap_unlock(start_pte, ptl); + start_pte = NULL; + err = split_folio(folio); + folio_unlock(folio); + folio_put(folio); + start_pte = pte = + pte_offset_map_lock(mm, pmd, addr, &ptl); + if (!start_pte) + break; + arch_enter_lazy_mmu_mode(); + if (!err) + nr = 0; + continue; + } } /* * Do not interfere with other mappings of this folio and - * non-LRU folio. + * non-LRU folio. If we have a large folio at this point, we + * know it is fully mapped so if its mapcount is the same as its + * number of pages, it must be exclusive. */ - if (!folio_test_lru(folio) || folio_mapcount(folio) != 1) + if (!folio_test_lru(folio) || + folio_mapcount(folio) != folio_nr_pages(folio)) continue; if (pageout_anon_only_filter && !folio_test_anon(folio)) continue; - VM_BUG_ON_FOLIO(folio_test_large(folio), folio); - if (!pageout && pte_young(ptent)) { - ptent = ptep_get_and_clear_full(mm, addr, pte, - tlb->fullmm); - ptent = pte_mkold(ptent); - set_pte_at(mm, addr, pte, ptent); - tlb_remove_tlb_entry(tlb, pte, addr); + mkold_ptes(vma, addr, pte, nr); + tlb_remove_tlb_entries(tlb, pte, nr, addr); } /* diff --git a/mm/memory.c b/mm/memory.c index 0db2aa066a5a..78422d1c7381 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -989,7 +989,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma flags |= FPB_IGNORE_SOFT_DIRTY; nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr, flags, - &any_writable); + &any_writable, NULL); folio_ref_add(folio, nr); if (folio_test_anon(folio)) { if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page, @@ -1559,7 +1559,7 @@ static inline int zap_present_ptes(struct mmu_gather *tlb, */ if (unlikely(folio_test_large(folio) && max_nr != 1)) { nr = folio_pte_batch(folio, addr, pte, ptent, max_nr, fpb_flags, - NULL); + NULL, NULL); zap_present_folio_ptes(tlb, vma, folio, page, pte, ptent, nr, addr, details, rss, force_flush,