From patchwork Tue Oct 17 16:13:02 2023
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13425598
From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton, David Hildenbrand, Matthew Wilcox, Huang Ying,
	Gao Xiang, Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang
Cc: Ryan Roberts, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 2/2] mm: swap: Swap-out small-sized THP without splitting
Date: Tue, 17 Oct 2023 17:13:02 +0100
Message-Id: <20231017161302.2518826-3-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20231017161302.2518826-1-ryan.roberts@arm.com>
References: <20231017161302.2518826-1-ryan.roberts@arm.com>

The upcoming anonymous small-sized THP feature enables performance
improvements by allocating large folios for anonymous memory. However, I've
observed that on an arm64 system running a parallel workload (e.g. kernel
compilation) across many cores, under high memory pressure, the speed
regresses. This is due to bottlenecking on the increased number of TLBIs
added due to all the extra folio splitting.

Therefore, solve this regression by adding support for swapping out
small-sized THP without needing to split the folio, just as is already done
for PMD-sized THP. This change only applies when CONFIG_THP_SWAP is
enabled, and when the swap backing store is a non-rotating block device.
These are the same constraints as for the existing PMD-sized THP swap-out
support.

Note that no attempt is made to swap-in THP here - this is still done
page-by-page, like for PMD-sized THP.

The main change here is to improve the swap entry allocator so that it can
allocate any power-of-2 number of contiguous entries between
[4, (1 << PMD_ORDER)] (THP cannot support order-1 folios). This is done by
allocating a cluster for each distinct order and allocating sequentially
from it until the cluster is full. This ensures that we don't need to
search the map and we get no fragmentation due to alignment padding for
different orders in the cluster. If there is no current cluster for a given
order, we attempt to allocate a free cluster from the list. If there are no
free clusters, we fail the allocation and the caller falls back to
splitting the folio and allocating individual entries (as per the existing
PMD-sized THP fallback).

The per-order current clusters are maintained per-cpu using the existing
percpu_cluster infrastructure. This is done to avoid interleaving pages
from different tasks, which would prevent IO from being batched. This is
already done for the order-0 allocations, so we follow the same pattern.
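
To illustrate the scheme, here is a minimal userspace sketch of the
per-order bump allocation described above. It is illustrative only: it
models a single cpu, assumes PMD_ORDER=9 (so a 512-entry SWAPFILE_CLUSTER
and orders 2..9), and ignores locking, the swap_map and the real per-cpu
machinery; none of the toy names below are kernel APIs.

#include <stdint.h>
#include <stdio.h>

#define CLUSTER_PAGES	512U		/* SWAPFILE_CLUSTER with 4K pages (assumed) */
#define NO_CLUSTER	UINT32_MAX	/* "no current cluster" marker */

/* One bump pointer per order; only a single cpu is modelled here. */
static uint32_t large_next[8];		/* orders 2..9 map to index 0..7 */
static uint32_t next_free_cluster;	/* toy stand-in for the free cluster list */

static uint32_t alloc_large(unsigned int nr_pages)	/* power of 2, 4..512 */
{
	unsigned int order_idx = __builtin_ctz(nr_pages) - 2;
	uint32_t offset = large_next[order_idx];

	if (offset == NO_CLUSTER) {
		/* No current cluster for this order: take a fresh one. */
		offset = next_free_cluster++ * CLUSTER_PAGES;
	}

	/* Hand out nr_pages contiguous entries and bump for the next caller. */
	large_next[order_idx] = offset + nr_pages;

	/* Retire the cluster once the bump pointer crosses its boundary. */
	if (large_next[order_idx] / CLUSTER_PAGES != offset / CLUSTER_PAGES)
		large_next[order_idx] = NO_CLUSTER;

	return offset;
}

int main(void)
{
	for (int i = 0; i < 8; i++)
		large_next[i] = NO_CLUSTER;

	/* Two 64K (16 page) folios, then one 2M (512 page) folio. */
	printf("16 pages  -> offset %u\n", alloc_large(16));
	printf("16 pages  -> offset %u\n", alloc_large(16));
	printf("512 pages -> offset %u\n", alloc_large(512));
	return 0;
}

The real implementation in the diff below additionally marks the unused
remainder of a freshly taken cluster SWAP_MAP_BAD so that
scan_swap_map_slots() cannot steal it, and keeps the bump pointers in the
per-cpu percpu_cluster, manipulated under si->lock.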
As far as I can tell, this should not cause any extra fragmentation
concerns, given how similar it is to the existing PMD-sized THP allocation
mechanism. There could be up to (PMD_ORDER-2) * nr_cpus clusters in
concurrent use, though, which in a pathological case (a cluster set aside
for every order on every cpu, with only one large entry allocated from
each) would tie up ~12MiB of unused swap entries for these high orders
(assuming PMD_ORDER=9). In practice, the number of orders in use will be
small and the amount of swap space reserved is very small compared to a
typical swap file.

Note that PMD_ORDER is not a compile-time constant on powerpc, so we have
to allocate the large_next[] array at runtime.

I've run the tests on Ampere Altra (arm64), set up with a 35G block ram
device as the swap device and from inside a memcg limited to 40G memory.
I've then run `usemem` from vm-scalability with 70 processes (each has its
own core), each allocating and writing 1G of memory. I've repeated
everything 5 times and taken the mean and stdev:

Mean Performance Improvement vs 4K/baseline

| alloc size |            baseline |       + this series |
|            |  v6.6-rc4+anonfolio |                     |
|:-----------|--------------------:|--------------------:|
| 4K Page    |                0.0% |                1.1% |
| 64K THP    |              -44.1% |                0.9% |
| 2M THP     |               56.0% |               56.4% |

So with this change, the regression for 64K swap performance goes away.
Both the 4K and 64K benchmarks are now bottlenecked on TLBI performance
from try_to_unmap_flush_dirty(), on arm64 at least. When using fewer cpus
in the test, I see up to 2x the performance for 64K THP swapping compared
to 4K.

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 include/linux/swap.h |  6 ++++
 mm/swapfile.c        | 74 +++++++++++++++++++++++++++++++++++---------
 mm/vmscan.c          | 10 +++---
 3 files changed, 71 insertions(+), 19 deletions(-)

--
2.25.1

diff --git a/include/linux/swap.h b/include/linux/swap.h
index a073366a227c..35cbbe6509a9 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -268,6 +268,12 @@ struct swap_cluster_info {
 struct percpu_cluster {
 	struct swap_cluster_info index; /* Current cluster index */
 	unsigned int next; /* Likely next allocation offset */
+	unsigned int large_next[];	/*
+					 * next free offset within current
+					 * allocation cluster for large folios,
+					 * or UINT_MAX if no current cluster.
+					 * Index is (order - 1).
+					 */
 };
 
 struct swap_cluster_list {
diff --git a/mm/swapfile.c b/mm/swapfile.c
index b83ad77e04c0..625964e53c22 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -987,35 +987,70 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
 	return n_ret;
 }
 
-static int swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot)
+static int swap_alloc_large(struct swap_info_struct *si, swp_entry_t *slot,
+			    unsigned int nr_pages)
 {
+	int order_idx;
 	unsigned long idx;
 	struct swap_cluster_info *ci;
+	struct percpu_cluster *cluster;
 	unsigned long offset;
 
 	/*
 	 * Should not even be attempting cluster allocations when huge
 	 * page swap is disabled. Warn and fail the allocation.
 	 */
-	if (!IS_ENABLED(CONFIG_THP_SWAP)) {
+	if (!IS_ENABLED(CONFIG_THP_SWAP) ||
+	    nr_pages < 4 || nr_pages > SWAPFILE_CLUSTER ||
+	    !is_power_of_2(nr_pages)) {
 		VM_WARN_ON_ONCE(1);
 		return 0;
 	}
 
-	if (cluster_list_empty(&si->free_clusters))
+	/*
+	 * Not using clusters so unable to allocate large entries.
+	 */
+	if (!si->cluster_info)
 		return 0;
 
-	idx = cluster_list_first(&si->free_clusters);
-	offset = idx * SWAPFILE_CLUSTER;
-	ci = lock_cluster(si, offset);
-	alloc_cluster(si, idx);
-	cluster_set_count(ci, SWAPFILE_CLUSTER);
+	order_idx = ilog2(nr_pages) - 2;
+	cluster = this_cpu_ptr(si->percpu_cluster);
+	offset = cluster->large_next[order_idx];
+
+	if (offset == UINT_MAX) {
+		if (cluster_list_empty(&si->free_clusters))
+			return 0;
+
+		idx = cluster_list_first(&si->free_clusters);
+		offset = idx * SWAPFILE_CLUSTER;
 
-	memset(si->swap_map + offset, SWAP_HAS_CACHE, SWAPFILE_CLUSTER);
+		ci = lock_cluster(si, offset);
+		alloc_cluster(si, idx);
+		cluster_set_count(ci, SWAPFILE_CLUSTER);
+
+		/*
+		 * If scan_swap_map_slots() can't find a free cluster, it will
+		 * check si->swap_map directly. To make sure this standby
+		 * cluster isn't taken by scan_swap_map_slots(), mark the swap
+		 * entries bad (occupied). (same approach as discard).
+		 */
+		memset(si->swap_map + offset + nr_pages, SWAP_MAP_BAD,
+			SWAPFILE_CLUSTER - nr_pages);
+	} else {
+		idx = offset / SWAPFILE_CLUSTER;
+		ci = lock_cluster(si, offset);
+	}
+
+	memset(si->swap_map + offset, SWAP_HAS_CACHE, nr_pages);
 	unlock_cluster(ci);
-	swap_range_alloc(si, offset, SWAPFILE_CLUSTER);
+	swap_range_alloc(si, offset, nr_pages);
 	*slot = swp_entry(si->type, offset);
+	offset += nr_pages;
+	if (idx != offset / SWAPFILE_CLUSTER)
+		offset = UINT_MAX;
+	cluster->large_next[order_idx] = offset;
+
 	return 1;
 }
 
@@ -1041,7 +1076,7 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_size)
 	int node;
 
 	/* Only single cluster request supported */
-	WARN_ON_ONCE(n_goal > 1 && size == SWAPFILE_CLUSTER);
+	WARN_ON_ONCE(n_goal > 1 && size > 1);
 
 	spin_lock(&swap_avail_lock);
 
@@ -1078,14 +1113,14 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_size)
 			spin_unlock(&si->lock);
 			goto nextsi;
 		}
-		if (size == SWAPFILE_CLUSTER) {
+		if (size > 1) {
 			if (si->flags & SWP_BLKDEV)
-				n_ret = swap_alloc_cluster(si, swp_entries);
+				n_ret = swap_alloc_large(si, swp_entries, size);
 		} else
 			n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
 						    n_goal, swp_entries);
 		spin_unlock(&si->lock);
-		if (n_ret || size == SWAPFILE_CLUSTER)
+		if (n_ret || size > 1)
 			goto check_out;
 		cond_resched();
 
@@ -3046,6 +3081,8 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 	if (p->bdev && bdev_nonrot(p->bdev)) {
 		int cpu;
 		unsigned long ci, nr_cluster;
+		int nr_order;
+		int i;
 
 		p->flags |= SWP_SOLIDSTATE;
 		p->cluster_next_cpu = alloc_percpu(unsigned int);
@@ -3073,7 +3110,12 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 		for (ci = 0; ci < nr_cluster; ci++)
 			spin_lock_init(&((cluster_info + ci)->lock));
 
-		p->percpu_cluster = alloc_percpu(struct percpu_cluster);
+		nr_order = IS_ENABLED(CONFIG_THP_SWAP) ? PMD_ORDER - 1 : 0;
+		p->percpu_cluster = __alloc_percpu(
+				struct_size(p->percpu_cluster,
+					    large_next,
+					    nr_order),
+				__alignof__(struct percpu_cluster));
 		if (!p->percpu_cluster) {
 			error = -ENOMEM;
 			goto bad_swap_unlock_inode;
@@ -3082,6 +3124,8 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 			struct percpu_cluster *cluster;
 			cluster = per_cpu_ptr(p->percpu_cluster, cpu);
 			cluster_set_null(&cluster->index);
+			for (i = 0; i < nr_order; i++)
+				cluster->large_next[i] = UINT_MAX;
 		}
 	} else {
 		atomic_inc(&nr_rotate_swap);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c16e2b1ea8ae..5984d2ae4547 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1212,11 +1212,13 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
					if (!can_split_folio(folio, NULL))
						goto activate_locked;
					/*
-					 * Split folios without a PMD map right
-					 * away. Chances are some or all of the
-					 * tail pages can be freed without IO.
+					 * Split PMD-mappable folios without a
+					 * PMD map right away. Chances are some
+					 * or all of the tail pages can be freed
+					 * without IO.
					 */
-					if (!folio_entire_mapcount(folio) &&
+					if (folio_test_pmd_mappable(folio) &&
+					    !folio_entire_mapcount(folio) &&
					    split_folio_to_list(folio, folio_list))
						goto activate_locked;
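
As a quick sanity check on the worst-case reservation figure quoted in the
commit message above, the following standalone program reproduces the
arithmetic. It is illustrative only and rests on my assumptions: that the
~12MiB figure is the per-cpu worst case, that pages are 4K with
SWAPFILE_CLUSTER=512 and PMD_ORDER=9, and that an order-PMD_ORDER
allocation consumes its whole cluster and so reserves nothing.

#include <stdio.h>

int main(void)
{
	const unsigned int cluster_pages = 512;	/* SWAPFILE_CLUSTER, 4K pages (assumed) */
	const unsigned int pmd_order = 9;	/* assumed, as in the commit message */
	unsigned long unused = 0;

	/*
	 * Pathological case from the commit message: one cluster reserved
	 * for every order 2..PMD_ORDER-1, with a single entry of that order
	 * allocated from each. The result is treated as a per-cpu figure,
	 * which is this example's assumption.
	 */
	for (unsigned int order = 2; order < pmd_order; order++)
		unused += cluster_pages - (1u << order);

	printf("%lu entries, ~%lu MiB per cpu\n",
	       unused, unused * 4096 / (1024 * 1024));
	return 0;
}

Built with any C compiler, it prints "3076 entries, ~12 MiB per cpu",
which lines up with the ~12MiB figure above.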