From patchwork Mon Apr 8 18:39:45 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13621507
From: Ryan Roberts
To: Andrew Morton, David Hildenbrand, Matthew Wilcox, Huang Ying,
    Gao Xiang, Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang,
    Barry Song <21cnbao@gmail.com>, Chris Li, Lance Yang
Cc: Ryan Roberts, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Barry Song
Subject: [PATCH v7 6/7] mm: vmscan: Avoid split during shrink_folio_list()
Date: Mon, 8 Apr 2024 19:39:45 +0100
Message-Id: <20240408183946.2991168-7-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240408183946.2991168-1-ryan.roberts@arm.com>
References: <20240408183946.2991168-1-ryan.roberts@arm.com>
MIME-Version: 1.0
Now that swap supports storing all mTHP sizes, avoid splitting large
folios before swap-out. This benefits the performance of the swap-out
path by eliding split_folio_to_list(), which is expensive, and also
sets us up for swapping in large folios in a future series.

If the folio is partially mapped, we continue to split it, since we
want to avoid the extra IO overhead and storage of writing out pages
unnecessarily.
THP_SWPOUT and THP_SWPOUT_FALLBACK counters should continue to count
events only for PMD-mappable folios to avoid user confusion. THP_SWPOUT
already has the appropriate guard. Add a guard for THP_SWPOUT_FALLBACK.
It may be appropriate to add per-size counters in future.

Reviewed-by: David Hildenbrand
Reviewed-by: Barry Song
Signed-off-by: Ryan Roberts
---
 mm/vmscan.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 00adaf1cb2c3..bca2d9981c95 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1223,25 +1223,25 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 					if (!can_split_folio(folio, NULL))
 						goto activate_locked;
 					/*
-					 * Split folios without a PMD map right
-					 * away. Chances are some or all of the
-					 * tail pages can be freed without IO.
+					 * Split partially mapped folios right away.
+					 * We can free the unmapped pages without IO.
 					 */
-					if (!folio_entire_mapcount(folio) &&
-					    split_folio_to_list(folio,
-								folio_list))
+					if (data_race(!list_empty(&folio->_deferred_list)) &&
+					    split_folio_to_list(folio, folio_list))
 						goto activate_locked;
 				}
 				if (!add_to_swap(folio)) {
 					if (!folio_test_large(folio))
 						goto activate_locked_split;
 					/* Fallback to swap normal pages */
-					if (split_folio_to_list(folio,
-								folio_list))
+					if (split_folio_to_list(folio, folio_list))
 						goto activate_locked;
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-					count_memcg_folio_events(folio, THP_SWPOUT_FALLBACK, 1);
-					count_vm_event(THP_SWPOUT_FALLBACK);
+					if (nr_pages >= HPAGE_PMD_NR) {
+						count_memcg_folio_events(folio,
+								THP_SWPOUT_FALLBACK, 1);
+						count_vm_event(THP_SWPOUT_FALLBACK);
+					}
 #endif
 					if (!add_to_swap(folio))
 						goto activate_locked_split;