From patchwork Wed Aug 28 08:28:38 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Baolin Wang
X-Patchwork-Id: 13780884
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: hughd@google.com
Cc: 21cnbao@gmail.com, akpm@linux-foundation.org,
    baolin.wang@linux.alibaba.com, chrisl@kernel.org, da.gomez@samsung.com,
    david@redhat.com, ioworker0@gmail.com, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, p.raghav@samsung.com, ryan.roberts@arm.com,
    shy828301@gmail.com, wangkefeng.wang@huawei.com, willy@infradead.org,
    ying.huang@intel.com, ziy@nvidia.com
Subject: [PATCH] mm: shmem: support large folio swap out fix 2
Date: Wed, 28 Aug 2024 16:28:38 +0800
Message-Id: <1236a002daa301b3b9ba73d6c0fab348427cf295.1724833399.git.baolin.wang@linux.alibaba.com>
X-Mailer: git-send-email 2.39.3
In-Reply-To:
References:
MIME-Version: 1.0

As Hugh said:

"
The i915 THP splitting in shmem_writepage() was to avoid mm VM_BUG_ONs
and crashes when shmem.c did not support huge page swapout: but now you
are enabling that support, and such VM_BUG_ONs and crashes are gone (so
far as I can see: and this is written on a laptop using the i915
driver).

I cannot think of why i915 itself would care how mm implements swapout
(beyond enjoying faster): I think all the wbc->split_large_folio you
introduce here should be reverted.
"

So this fixup patch removes wbc->split_large_folio, as suggested by Hugh.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
This fixup patch is based on the 2024-08-28 mm-unstable branch.
Andrew, please help to squash this fixup patch. Thanks.
---
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 1 -
 include/linux/writeback.h                 | 1 -
 mm/shmem.c                                | 9 ++++-----
 mm/vmscan.c                               | 4 +---
 4 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index c66cb9c585e1..c5e1c718a6d2 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -308,7 +308,6 @@ void __shmem_writeback(size_t size, struct address_space *mapping)
 		.range_start = 0,
 		.range_end = LLONG_MAX,
 		.for_reclaim = 1,
-		.split_large_folio = 1,
 	};
 	unsigned long i;
 
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 10100e22d5c6..51278327b3c6 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -63,7 +63,6 @@ struct writeback_control {
 	unsigned range_cyclic:1;	/* range_start is cyclic */
 	unsigned for_sync:1;		/* sync(2) WB_SYNC_ALL writeback */
 	unsigned unpinned_netfs_wb:1;	/* Cleared I_PINNING_NETFS_WB */
-	unsigned split_large_folio:1;	/* Split large folio for shmem writeback */
 
 	/*
 	 * When writeback IOs are bounced through async layers, only the
diff --git a/mm/shmem.c b/mm/shmem.c
index 16099340ca1d..2b0209d6ac9c 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1473,19 +1473,18 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 		goto redirty;
 
 	/*
-	 * If /sys/kernel/mm/transparent_hugepage/shmem_enabled is "always" or
-	 * "force", drivers/gpu/drm/i915/gem/i915_gem_shmem.c gets huge pages,
-	 * and its shmem_writeback() needs them to be split when swapping.
+	 * If CONFIG_THP_SWAP is not enabled, the large folio should be
+	 * split when swapping.
 	 *
 	 * And shrinkage of pages beyond i_size does not split swap, so
 	 * swapout of a large folio crossing i_size needs to split too
 	 * (unless fallocate has been used to preallocate beyond EOF).
 	 */
 	if (folio_test_large(folio)) {
-		split = wbc->split_large_folio;
 		index = shmem_fallocend(inode, DIV_ROUND_UP(i_size_read(inode),
 							    PAGE_SIZE));
-		if (index > folio->index && index < folio_next_index(folio))
+		if ((index > folio->index && index < folio_next_index(folio)) ||
+		    !IS_ENABLED(CONFIG_THP_SWAP))
 			split = true;
 	}
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 283e3f9d652b..f27792e77a0f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -681,10 +681,8 @@ static pageout_t pageout(struct folio *folio, struct address_space *mapping,
 		 * not enabled or contiguous swap entries are failed to
 		 * allocate.
 		 */
-		if (shmem_mapping(mapping) && folio_test_large(folio)) {
+		if (shmem_mapping(mapping) && folio_test_large(folio))
 			wbc.list = folio_list;
-			wbc.split_large_folio = !IS_ENABLED(CONFIG_THP_SWAP);
-		}
 
 		folio_set_reclaim(folio);
 		res = mapping->a_ops->writepage(&folio->page, &wbc);
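
For reviewers who want the net behaviour at a glance: after this fixup the
split decision lives entirely in shmem_writepage(), with no
writeback_control knob left for callers. Below is a minimal userspace
sketch of that condition; struct folio_model, should_split(), and the
hard-coded index values are hypothetical stand-ins for the folio/inode
state and Kconfig, not kernel code.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for IS_ENABLED(CONFIG_THP_SWAP); illustrative value only. */
#define THP_SWAP_ENABLED true

/* Hypothetical model of the few folio fields the check needs. */
struct folio_model {
	unsigned long index;	/* first page index of the folio */
	unsigned long nr;	/* number of pages in the folio */
};

/*
 * Models the post-fixup check in shmem_writepage(): split a large folio
 * when it straddles the fallocation-adjusted EOF index, or
 * unconditionally when THP swap support is not enabled.
 */
static bool should_split(const struct folio_model *folio,
			 unsigned long fallocend_index, bool large)
{
	if (!large)
		return false;
	if (fallocend_index > folio->index &&
	    fallocend_index < folio->index + folio->nr)
		return true;	/* folio crosses i_size / fallocend */
	return !THP_SWAP_ENABLED;	/* no THP swap: always split */
}

int main(void)
{
	struct folio_model folio = { .index = 16, .nr = 16 };

	/* EOF falls inside the folio -> must split before swapout. */
	printf("crossing EOF: %d\n", should_split(&folio, 20, true));
	/* EOF beyond the folio -> swap it out whole (THP swap on). */
	printf("whole folio:  %d\n", should_split(&folio, 64, true));
	return 0;
}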