From patchwork Wed Oct 25 14:45:42 2023
From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton, David Hildenbrand, Matthew Wilcox, Huang Ying,
    Gao Xiang, Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang
Cc: Ryan Roberts, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 0/4] Swap-out small-sized THP without splitting
Date: Wed, 25 Oct 2023 15:45:42 +0100
Message-Id: <20231025144546.577640-1-ryan.roberts@arm.com>
Hi All,

This is v3 of a series to add support for swapping out small-sized THP
without needing to first split the large folio via __split_huge_page(). It
closely follows the approach already used by PMD-sized THP.

"Small-sized THP" is an upcoming feature that enables performance
improvements by allocating large folios for anonymous memory, where the
large folio size is smaller than the traditional PMD-size. See [3].

In some circumstances I've observed a performance regression (see patch 2
for details), and this series is an attempt to fix the regression in
advance of merging small-sized THP support. I've done what I thought was
the smallest change possible, and as a result, this approach is only
employed when the swap is backed by a non-rotating block device (just as
PMD-sized THP is supported today). Discussion against the RFC concluded
that this is probably sufficient.

The series applies against mm-unstable (1a3c85fa684a).

Changes since v2 [2]
====================

 - Reuse scan_swap_map_try_ssd_cluster() between order-0 and order > 0
   allocation. This required some refactoring to make everything work
   nicely (new patches 2 and 3).
 - Fix bug where nr_swap_pages would say there are pages available but the
   scanner would not be able to allocate them because they were reserved
   for the per-cpu allocator. We now allow stealing of order-0 entries
   from the high-order per-cpu clusters (in addition to existing stealing
   from order-0 per-cpu clusters).

Thanks to Huang, Ying for the review feedback and suggestions!

Changes since v1 [1]
====================

 - patch 1:
    - Use cluster_set_count() instead of cluster_set_count_flag() in
      swap_alloc_cluster() since we no longer have any flag to set. I was
      unable to kill cluster_set_count_flag() as proposed against v1
      because other call sites depend on explicitly setting flags to 0.
 - patch 2:
    - Moved large_next[] array into percpu_cluster to make it per-cpu
      (recommended by Huang, Ying).
    - large_next[] array is dynamically allocated because PMD_ORDER is not
      a compile-time constant for powerpc (fixes build error); a rough
      sketch of the idea follows this list.
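To illustrate the shape of that last point, here is a minimal sketch of
dynamically allocating a per-CPU, per-order "likely next offset" array at
swapon time. It is not the code from this series; the names pcp_large_next,
si->pcp_large and init_percpu_large_next() are invented for the example.

/*
 * Illustrative sketch only; pcp_large_next, pcp_large and
 * init_percpu_large_next() are hypothetical names, not from the patches.
 *
 * PMD_ORDER is not a compile-time constant on powerpc, so a fixed-size
 * member such as "unsigned int large_next[PMD_ORDER + 1]" would not
 * build there; the per-order next-offset state is allocated dynamically
 * per-CPU instead.
 */
struct pcp_large_next {
	unsigned int *next;	/* one next-offset slot per folio order */
};

static int init_percpu_large_next(struct swap_info_struct *si)
{
	int cpu;

	/* si->pcp_large is a hypothetical per-cpu pointer for this sketch. */
	si->pcp_large = alloc_percpu(struct pcp_large_next);
	if (!si->pcp_large)
		return -ENOMEM;

	for_each_possible_cpu(cpu) {
		struct pcp_large_next *p = per_cpu_ptr(si->pcp_large, cpu);

		p->next = kcalloc(PMD_ORDER + 1, sizeof(*p->next), GFP_KERNEL);
		if (!p->next)
			return -ENOMEM;	/* caller unwinds partial state */
	}
	return 0;
}

See the series itself for the actual structure layout and naming.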
Thanks,
Ryan

P.S. I know we agreed this is not a prerequisite for merging small-sized
THP, but given Huang Ying had provided some review feedback, I wanted to
progress it. All the actual prerequisites are either complete or being
worked on by others.

[1] https://lore.kernel.org/linux-mm/20231010142111.3997780-1-ryan.roberts@arm.com/
[2] https://lore.kernel.org/linux-mm/20231017161302.2518826-1-ryan.roberts@arm.com/
[3] https://lore.kernel.org/linux-mm/15a52c3d-9584-449b-8228-1335e0753b04@arm.com/

Ryan Roberts (4):
  mm: swap: Remove CLUSTER_FLAG_HUGE from swap_cluster_info:flags
  mm: swap: Remove struct percpu_cluster
  mm: swap: Simplify ssd behavior when scanner steals entry
  mm: swap: Swap-out small-sized THP without splitting

 include/linux/swap.h |  31 +++---
 mm/huge_memory.c     |   3 -
 mm/swapfile.c        | 232 ++++++++++++++++++++++++-------------------
 mm/vmscan.c          |  10 +-
 4 files changed, 149 insertions(+), 127 deletions(-)

--
2.25.1