From patchwork Wed Mar 29 01:17:06 2023
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13191811
From: Zi Yan
To: "Matthew Wilcox (Oracle)", Yang Shi, Yu Zhao, linux-mm@kvack.org
Cc: Zi Yan, "Kirill A. Shutemov", Ryan Roberts, Michal Koutný,
 Roman Gushchin, "Zach O'Keefe", Andrew Morton,
 linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: [PATCH v2 1/7] mm/memcg: use order instead of nr in split_page_memcg()
Date: Tue, 28 Mar 2023 21:17:06 -0400
Message-Id: <20230329011712.3242298-2-zi.yan@sent.com>
In-Reply-To: <20230329011712.3242298-1-zi.yan@sent.com>
References: <20230329011712.3242298-1-zi.yan@sent.com>

From: Zi Yan

We do not have non-power-of-two pages, so passing nr is error prone
when nr is not a power of two. Pass the page order instead.

Signed-off-by: Zi Yan
---
 include/linux/memcontrol.h | 4 ++--
 mm/huge_memory.c           | 3 ++-
 mm/memcontrol.c            | 3 ++-
 mm/page_alloc.c            | 4 ++--
 4 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index aa69ea98e2d8..e06a61ea4fc1 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1151,7 +1151,7 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
 	rcu_read_unlock();
 }
 
-void split_page_memcg(struct page *head, unsigned int nr);
+void split_page_memcg(struct page *head, int order);
 
 unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
 						gfp_t gfp_mask,
@@ -1588,7 +1588,7 @@ void count_memcg_event_mm(struct mm_struct *mm, enum vm_event_item idx)
 {
 }
 
-static inline void split_page_memcg(struct page *head, unsigned int nr)
+static inline void split_page_memcg(struct page *head, int order)
 {
 }
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 81a5689806af..3bb003eb80a3 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2512,10 +2512,11 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	struct address_space *swap_cache = NULL;
 	unsigned long offset = 0;
 	unsigned int nr = thp_nr_pages(head);
+	int order = folio_order(folio);
 	int i;
 
 	/* complete memcg works before add pages to LRU */
-	split_page_memcg(head, nr);
+	split_page_memcg(head, order);
 
 	if (PageAnon(head) && PageSwapCache(head)) {
 		swp_entry_t entry = { .val = page_private(head) };
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 681e7528a714..cab2828e188d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3414,11 +3414,12 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
 /*
  * Because page_memcg(head) is not set on tails, set it now.
  */
-void split_page_memcg(struct page *head, unsigned int nr)
+void split_page_memcg(struct page *head, int order)
 {
 	struct folio *folio = page_folio(head);
 	struct mem_cgroup *memcg = folio_memcg(folio);
 	int i;
+	unsigned int nr = 1 << order;
 
 	if (mem_cgroup_disabled() || !memcg)
 		return;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0767dd6bc5ba..d84b121d1e03 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2781,7 +2781,7 @@ void split_page(struct page *page, unsigned int order)
 	for (i = 1; i < (1 << order); i++)
 		set_page_refcounted(page + i);
 	split_page_owner(page, 1 << order);
-	split_page_memcg(page, 1 << order);
+	split_page_memcg(page, order);
 }
 EXPORT_SYMBOL_GPL(split_page);
 
@@ -4997,7 +4997,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order,
 	struct page *last = page + nr;
 
 	split_page_owner(page, 1 << order);
-	split_page_memcg(page, 1 << order);
+	split_page_memcg(page, order);
 
 	while (page < --last)
 		set_page_refcounted(last);
From patchwork Wed Mar 29 01:17:07 2023
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13191813
From: Zi Yan
To: "Matthew Wilcox (Oracle)", Yang Shi, Yu Zhao, linux-mm@kvack.org
Cc: Zi Yan, "Kirill A. Shutemov", Ryan Roberts, Michal Koutný,
 Roman Gushchin, "Zach O'Keefe", Andrew Morton,
 linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: [PATCH v2 2/7] mm/page_owner: use order instead of nr in split_page_owner()
Date: Tue, 28 Mar 2023 21:17:07 -0400
Message-Id: <20230329011712.3242298-3-zi.yan@sent.com>
In-Reply-To: <20230329011712.3242298-1-zi.yan@sent.com>
References: <20230329011712.3242298-1-zi.yan@sent.com>

From: Zi Yan

We do not have non-power-of-two pages, so passing nr is error prone
when nr is not a power of two. Pass the page order instead.

Signed-off-by: Zi Yan
---
 include/linux/page_owner.h | 8 ++++----
 mm/huge_memory.c           | 2 +-
 mm/page_alloc.c            | 4 ++--
 mm/page_owner.c            | 3 ++-
 4 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/include/linux/page_owner.h b/include/linux/page_owner.h
index 119a0c9d2a8b..d7878523adfc 100644
--- a/include/linux/page_owner.h
+++ b/include/linux/page_owner.h
@@ -11,7 +11,7 @@ extern struct page_ext_operations page_owner_ops;
 extern void __reset_page_owner(struct page *page, unsigned short order);
 extern void __set_page_owner(struct page *page,
 			unsigned short order, gfp_t gfp_mask);
-extern void __split_page_owner(struct page *page, unsigned int nr);
+extern void __split_page_owner(struct page *page, int order);
 extern void __folio_copy_owner(struct folio *newfolio, struct folio *old);
 extern void __set_page_owner_migrate_reason(struct page *page, int reason);
 extern void __dump_page_owner(const struct page *page);
@@ -31,10 +31,10 @@ static inline void set_page_owner(struct page *page,
 		__set_page_owner(page, order, gfp_mask);
 }
 
-static inline void split_page_owner(struct page *page, unsigned int nr)
+static inline void split_page_owner(struct page *page, int order)
 {
 	if (static_branch_unlikely(&page_owner_inited))
-		__split_page_owner(page, nr);
+		__split_page_owner(page, order);
 }
 
 static inline void folio_copy_owner(struct folio *newfolio, struct folio *old)
 {
@@ -60,7 +60,7 @@ static inline void set_page_owner(struct page *page,
 {
 }
 static inline void split_page_owner(struct page *page,
-			unsigned short order)
+			int order)
 {
 }
 static inline void folio_copy_owner(struct folio *newfolio, struct folio *folio)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3bb003eb80a3..a21921c90b21 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2557,7 +2557,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	unlock_page_lruvec(lruvec);
 
 	/* Caller disabled irqs, so they are still disabled here */
-	split_page_owner(head, nr);
+	split_page_owner(head, order);
 
 	/* See comment in __split_huge_page_tail() */
 	if (PageAnon(head)) {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d84b121d1e03..d537828bc4be 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2780,7 +2780,7 @@ void split_page(struct page *page, unsigned int order)
 	for (i = 1; i < (1 << order); i++)
 		set_page_refcounted(page + i);
-	split_page_owner(page, 1 << order);
+	split_page_owner(page, order);
 	split_page_memcg(page, order);
 }
 EXPORT_SYMBOL_GPL(split_page);
 
@@ -4996,7 +4996,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order,
 	struct page *page = virt_to_page((void *)addr);
 	struct page *last = page + nr;
 
-	split_page_owner(page, 1 << order);
+	split_page_owner(page, order);
 	split_page_memcg(page, order);
 
 	while (page < --last)
 		set_page_refcounted(last);
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 31169b3e7f06..64233b5b09d5 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -211,11 +211,12 @@ void __set_page_owner_migrate_reason(struct page *page, int reason)
 	page_ext_put(page_ext);
 }
 
-void __split_page_owner(struct page *page, unsigned int nr)
+void __split_page_owner(struct page *page, int order)
 {
 	int i;
 	struct page_ext *page_ext = page_ext_get(page);
 	struct page_owner *page_owner;
+	unsigned int nr = 1 << order;
 
 	if (unlikely(!page_ext))
 		return;
From patchwork Wed Mar 29 01:17:08 2023
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13191814
From: Zi Yan
To: "Matthew Wilcox (Oracle)", Yang Shi, Yu Zhao, linux-mm@kvack.org
Cc: Zi Yan, "Kirill A. Shutemov", Ryan Roberts, Michal Koutný,
 Roman Gushchin, "Zach O'Keefe", Andrew Morton,
 linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: [PATCH v2 3/7] mm: memcg: make memcg huge page split support any order split.
Date: Tue, 28 Mar 2023 21:17:08 -0400
Message-Id: <20230329011712.3242298-4-zi.yan@sent.com>
In-Reply-To: <20230329011712.3242298-1-zi.yan@sent.com>
References: <20230329011712.3242298-1-zi.yan@sent.com>

From: Zi Yan

split_page_memcg() sets the memcg information on the pages after a
split. Add a new parameter, new_order, to specify the order of the
subpages in the new pages; it is always 0 for now. This prepares for
upcoming changes that split a huge page to any lower order.

Signed-off-by: Zi Yan
---
 include/linux/memcontrol.h |  4 ++--
 mm/huge_memory.c           |  2 +-
 mm/memcontrol.c            | 11 ++++++-----
 mm/page_alloc.c            |  4 ++--
 4 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index e06a61ea4fc1..1633c00fe393 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1151,7 +1151,7 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
 	rcu_read_unlock();
 }
 
-void split_page_memcg(struct page *head, int order);
+void split_page_memcg(struct page *head, int old_order, int new_order);
 
 unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
 						gfp_t gfp_mask,
@@ -1588,7 +1588,7 @@ void count_memcg_event_mm(struct mm_struct *mm, enum vm_event_item idx)
 {
 }
 
-static inline void split_page_memcg(struct page *head, int order)
+static inline void split_page_memcg(struct page *head, int old_order, int new_order)
 {
 }
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a21921c90b21..106cde74d933 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2516,7 +2516,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	int i;
 
 	/* complete memcg works before add pages to LRU */
-	split_page_memcg(head, order);
+	split_page_memcg(head, order, 0);
 
 	if (PageAnon(head) && PageSwapCache(head)) {
 		swp_entry_t entry = { .val = page_private(head) };
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index cab2828e188d..93ae37f90c84 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3414,23 +3414,24 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
 /*
  * Because page_memcg(head) is not set on tails, set it now.
  */
-void split_page_memcg(struct page *head, int order)
+void split_page_memcg(struct page *head, int old_order, int new_order)
 {
 	struct folio *folio = page_folio(head);
 	struct mem_cgroup *memcg = folio_memcg(folio);
 	int i;
-	unsigned int nr = 1 << order;
+	unsigned int old_nr = 1 << old_order;
+	unsigned int new_nr = 1 << new_order;
 
 	if (mem_cgroup_disabled() || !memcg)
 		return;
 
-	for (i = 1; i < nr; i++)
+	for (i = new_nr; i < old_nr; i += new_nr)
 		folio_page(folio, i)->memcg_data = folio->memcg_data;
 
 	if (folio_memcg_kmem(folio))
-		obj_cgroup_get_many(__folio_objcg(folio), nr - 1);
+		obj_cgroup_get_many(__folio_objcg(folio), old_nr / new_nr - 1);
 	else
-		css_get_many(&memcg->css, nr - 1);
+		css_get_many(&memcg->css, old_nr / new_nr - 1);
 }
 
 #ifdef CONFIG_SWAP
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d537828bc4be..ef559795525b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2781,7 +2781,7 @@ void split_page(struct page *page, unsigned int order)
 	for (i = 1; i < (1 << order); i++)
 		set_page_refcounted(page + i);
 	split_page_owner(page, order);
-	split_page_memcg(page, order);
+	split_page_memcg(page, order, 0);
 }
 EXPORT_SYMBOL_GPL(split_page);
 
@@ -4997,7 +4997,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order,
 	struct page *last = page + nr;
 
 	split_page_owner(page, order);
-	split_page_memcg(page, order);
+	split_page_memcg(page, order, 0);
 
 	while (page < --last)
 		set_page_refcounted(last);
From patchwork Wed Mar 29 01:17:09 2023
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13191815
From: Zi Yan
To: "Matthew Wilcox (Oracle)", Yang Shi, Yu Zhao, linux-mm@kvack.org
Cc: Zi Yan, "Kirill A. Shutemov", Ryan Roberts, Michal Koutný, Roman Gushchin, "Zach O'Keefe", Andrew Morton, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: [PATCH v2 4/7] mm: page_owner: add support for splitting to any order in split page_owner.
Date: Tue, 28 Mar 2023 21:17:09 -0400
Message-Id: <20230329011712.3242298-5-zi.yan@sent.com>
In-Reply-To: <20230329011712.3242298-1-zi.yan@sent.com>
References: <20230329011712.3242298-1-zi.yan@sent.com>

From: Zi Yan

Add a new_order parameter to split_page_owner() so that the new page order
can be recorded in page owner. This prepares for upcoming changes that will
support splitting a huge page to any lower order.

Signed-off-by: Zi Yan
---
 include/linux/page_owner.h | 10 +++++-----
 mm/huge_memory.c           |  2 +-
 mm/page_alloc.c            |  4 ++--
 mm/page_owner.c            | 11 ++++++-----
 4 files changed, 14 insertions(+), 13 deletions(-)

diff --git a/include/linux/page_owner.h b/include/linux/page_owner.h
index d7878523adfc..a784ba69f67f 100644
--- a/include/linux/page_owner.h
+++ b/include/linux/page_owner.h
@@ -11,7 +11,7 @@ extern struct page_ext_operations page_owner_ops;
 extern void __reset_page_owner(struct page *page, unsigned short order);
 extern void __set_page_owner(struct page *page,
 			unsigned short order, gfp_t gfp_mask);
-extern void __split_page_owner(struct page *page, int order);
+extern void __split_page_owner(struct page *page, int old_order, int new_order);
 extern void __folio_copy_owner(struct folio *newfolio, struct folio *old);
 extern void __set_page_owner_migrate_reason(struct page *page, int reason);
 extern void __dump_page_owner(const struct page *page);
@@ -31,10 +31,10 @@ static inline void set_page_owner(struct page *page,
 		__set_page_owner(page, order, gfp_mask);
 }
 
-static inline void split_page_owner(struct page *page, int order)
+static inline void split_page_owner(struct page *page, int old_order, int new_order)
 {
 	if (static_branch_unlikely(&page_owner_inited))
-		__split_page_owner(page, order);
+		__split_page_owner(page, old_order, new_order);
 }
 static inline void folio_copy_owner(struct folio *newfolio, struct folio *old)
 {
@@ -56,11 +56,11 @@ static inline void reset_page_owner(struct page *page, unsigned short order)
 {
 }
 static inline void set_page_owner(struct page *page,
-			unsigned int order, gfp_t gfp_mask)
+			unsigned short order, gfp_t gfp_mask)
 {
 }
 static inline void split_page_owner(struct page *page,
-			int order)
+			int old_order, int new_order)
 {
 }
 static inline void folio_copy_owner(struct folio *newfolio, struct folio *folio)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 106cde74d933..f8a8a72b207d 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2557,7 +2557,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	unlock_page_lruvec(lruvec);
 	/* Caller disabled irqs, so they are still disabled here */
 
-	split_page_owner(head, order);
+	split_page_owner(head, order, 0);
 
 	/* See comment in __split_huge_page_tail() */
 	if (PageAnon(head)) {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ef559795525b..4845ff6c4223 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2780,7 +2780,7 @@ void split_page(struct page *page, unsigned int order)
 	for (i = 1; i < (1 << order); i++)
 		set_page_refcounted(page + i);
-	split_page_owner(page, order);
+	split_page_owner(page, order, 0);
 	split_page_memcg(page, order, 0);
 }
 EXPORT_SYMBOL_GPL(split_page);
@@ -4996,7 +4996,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order,
 	struct page *page = virt_to_page((void *)addr);
 	struct page *last = page + nr;
 
-	split_page_owner(page, order);
+	split_page_owner(page, order, 0);
 	split_page_memcg(page, order, 0);
 	while (page < --last)
 		set_page_refcounted(last);
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 64233b5b09d5..347861fe9c50 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -211,20 +211,21 @@ void __set_page_owner_migrate_reason(struct page *page, int reason)
 	page_ext_put(page_ext);
 }
 
-void __split_page_owner(struct page *page, int order)
+void __split_page_owner(struct page *page, int old_order, int new_order)
 {
 	int i;
 	struct page_ext *page_ext = page_ext_get(page);
 	struct page_owner *page_owner;
-	unsigned int nr = 1 << order;
+	unsigned int old_nr = 1 << old_order;
+	unsigned int new_nr = 1 << new_order;
 
 	if (unlikely(!page_ext))
 		return;
 
-	for (i = 0; i < nr; i++) {
+	for (i = 0; i < old_nr; i += new_nr) {
+		page_ext = lookup_page_ext(page + i);
 		page_owner = get_page_owner(page_ext);
-		page_owner->order = 0;
-		page_ext = page_ext_next(page_ext);
+		page_owner->order = new_order;
 	}
 	page_ext_put(page_ext);
 }
From patchwork Wed Mar 29 01:17:10 2023
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13191816
From: Zi Yan
To: "Matthew Wilcox (Oracle)", Yang Shi, Yu Zhao, linux-mm@kvack.org
Cc: Zi Yan, "Kirill A. Shutemov", Ryan Roberts, Michal Koutný, Roman Gushchin, "Zach O'Keefe", Andrew Morton, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: [PATCH v2 5/7] mm: thp: split huge page to any lower order pages.
Date: Tue, 28 Mar 2023 21:17:10 -0400
Message-Id: <20230329011712.3242298-6-zi.yan@sent.com>
In-Reply-To: <20230329011712.3242298-1-zi.yan@sent.com>
References: <20230329011712.3242298-1-zi.yan@sent.com>
From: Zi Yan

To split a THP into pages of any lower order, we need to re-form THPs on the
subpages at the given order and add page refcounts based on the new page
order. We also need to reinitialize page_deferred_list after removing the page
from the split_queue; otherwise a subsequent split will see list corruption
when it checks page_deferred_list again.

This has many uses, such as minimizing the number of pages after truncating a
huge pagecache page. For anonymous THPs, we can only split them to order-0 as
before, until we add support for any-size anonymous THPs.
Signed-off-by: Zi Yan
---
 include/linux/huge_mm.h |  10 ++--
 mm/huge_memory.c        | 102 +++++++++++++++++++++++++++++-----------
 2 files changed, 81 insertions(+), 31 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 20284387b841..32c91e1b59cd 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -147,10 +147,11 @@ void prep_transhuge_page(struct page *page);
 void free_transhuge_page(struct page *page);
 
 bool can_split_folio(struct folio *folio, int *pextra_pins);
-int split_huge_page_to_list(struct page *page, struct list_head *list);
+int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
+		unsigned int new_order);
 static inline int split_huge_page(struct page *page)
 {
-	return split_huge_page_to_list(page, NULL);
+	return split_huge_page_to_list_to_order(page, NULL, 0);
 }
 void deferred_split_folio(struct folio *folio);
 
@@ -297,7 +298,8 @@ can_split_folio(struct folio *folio, int *pextra_pins)
 	return false;
 }
 static inline int
-split_huge_page_to_list(struct page *page, struct list_head *list)
+split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
+		unsigned int new_order)
 {
 	return 0;
 }
@@ -397,7 +399,7 @@ static inline bool thp_migration_supported(void)
 static inline int split_folio_to_list(struct folio *folio,
 		struct list_head *list)
 {
-	return split_huge_page_to_list(&folio->page, list);
+	return split_huge_page_to_list_to_order(&folio->page, list, 0);
 }
 
 static inline int split_folio(struct folio *folio)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f8a8a72b207d..619d25278340 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2359,11 +2359,13 @@ void vma_adjust_trans_huge(struct vm_area_struct *vma,
 
 static void unmap_folio(struct folio *folio)
 {
-	enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
-		TTU_SYNC;
+	enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SYNC;
 
 	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
 
+	if (folio_test_pmd_mappable(folio))
+		ttu_flags |= TTU_SPLIT_HUGE_PMD;
+
 	/*
 	 * Anon pages need migration entries to preserve them, but file
 	 * pages can simply be left unmapped, then faulted back on demand.
@@ -2395,7 +2397,6 @@ static void lru_add_page_tail(struct page *head, struct page *tail,
 		struct lruvec *lruvec, struct list_head *list)
 {
 	VM_BUG_ON_PAGE(!PageHead(head), head);
-	VM_BUG_ON_PAGE(PageCompound(tail), head);
 	VM_BUG_ON_PAGE(PageLRU(tail), head);
 	lockdep_assert_held(&lruvec->lru_lock);
@@ -2416,7 +2417,7 @@ static void lru_add_page_tail(struct page *head, struct page *tail,
 }
 
 static void __split_huge_page_tail(struct page *head, int tail,
-		struct lruvec *lruvec, struct list_head *list)
+		struct lruvec *lruvec, struct list_head *list, unsigned int new_order)
 {
 	struct page *page_tail = head + tail;
@@ -2483,10 +2484,15 @@ static void __split_huge_page_tail(struct page *head, int tail,
 	 * which needs correct compound_head().
 	 */
 	clear_compound_head(page_tail);
+	if (new_order) {
+		prep_compound_page(page_tail, new_order);
+		prep_transhuge_page(page_tail);
+	}
 
 	/* Finally unfreeze refcount. Additional reference from page cache. */
-	page_ref_unfreeze(page_tail, 1 + (!PageAnon(head) ||
-					  PageSwapCache(head)));
+	page_ref_unfreeze(page_tail, 1 + ((!PageAnon(head) ||
+					   PageSwapCache(head)) ?
+						thp_nr_pages(page_tail) : 0));
 
 	if (page_is_young(head))
 		set_page_young(page_tail);
@@ -2504,7 +2510,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
 }
 
 static void __split_huge_page(struct page *page, struct list_head *list,
-		pgoff_t end)
+		pgoff_t end, unsigned int new_order)
 {
 	struct folio *folio = page_folio(page);
 	struct page *head = &folio->page;
@@ -2512,11 +2518,12 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	struct address_space *swap_cache = NULL;
 	unsigned long offset = 0;
 	unsigned int nr = thp_nr_pages(head);
+	unsigned int new_nr = 1 << new_order;
 	int order = folio_order(folio);
 	int i;
 
 	/* complete memcg works before add pages to LRU */
-	split_page_memcg(head, order, 0);
+	split_page_memcg(head, order, new_order);
 
 	if (PageAnon(head) && PageSwapCache(head)) {
 		swp_entry_t entry = { .val = page_private(head) };
@@ -2531,14 +2538,14 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 
 	ClearPageHasHWPoisoned(head);
 
-	for (i = nr - 1; i >= 1; i--) {
-		__split_huge_page_tail(head, i, lruvec, list);
+	for (i = nr - new_nr; i >= new_nr; i -= new_nr) {
+		__split_huge_page_tail(head, i, lruvec, list, new_order);
 		/* Some pages can be beyond EOF: drop them from page cache */
 		if (head[i].index >= end) {
 			struct folio *tail = page_folio(head + i);
 
 			if (shmem_mapping(head->mapping))
-				shmem_uncharge(head->mapping->host, 1);
+				shmem_uncharge(head->mapping->host, new_nr);
 			else if (folio_test_clear_dirty(tail))
 				folio_account_cleaned(tail,
 					inode_to_wb(folio->mapping->host));
@@ -2548,29 +2555,38 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 			__xa_store(&head->mapping->i_pages, head[i].index,
 					head + i, 0);
 		} else if (swap_cache) {
+			/*
+			 * split anonymous THPs (including swapped out ones) to
+			 * non-zero order not supported
+			 */
+			VM_WARN_ONCE(new_order,
+				"Split swap-cached anon folio to non-0 order not supported");
 			__xa_store(&swap_cache->i_pages, offset + i,
 					head + i, 0);
 		}
 	}
 
-	ClearPageCompound(head);
+	if (!new_order)
+		ClearPageCompound(head);
+	else
+		set_compound_order(head, new_order);
 	unlock_page_lruvec(lruvec);
 	/* Caller disabled irqs, so they are still disabled here */
 
-	split_page_owner(head, order, 0);
+	split_page_owner(head, order, new_order);
 
 	/* See comment in __split_huge_page_tail() */
 	if (PageAnon(head)) {
 		/* Additional pin to swap cache */
 		if (PageSwapCache(head)) {
-			page_ref_add(head, 2);
+			page_ref_add(head, 1 + new_nr);
 			xa_unlock(&swap_cache->i_pages);
 		} else {
 			page_ref_inc(head);
 		}
 	} else {
 		/* Additional pin to page cache */
-		page_ref_add(head, 2);
+		page_ref_add(head, 1 + new_nr);
 		xa_unlock(&head->mapping->i_pages);
 	}
 	local_irq_enable();
@@ -2583,7 +2599,15 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 			split_swap_cluster(entry);
 	}
 
-	for (i = 0; i < nr; i++) {
+	/*
+	 * set page to its compound_head when split to non order-0 pages, so
+	 * we can skip unlocking it below, since PG_locked is transferred to
+	 * the compound_head of the page and the caller will unlock it.
+	 */
+	if (new_order)
+		page = compound_head(page);
+
+	for (i = 0; i < nr; i += new_nr) {
 		struct page *subpage = head + i;
 		if (subpage == page)
 			continue;
@@ -2617,29 +2641,31 @@ bool can_split_folio(struct folio *folio, int *pextra_pins)
 }
 
 /*
- * This function splits huge page into normal pages. @page can point to any
- * subpage of huge page to split. Split doesn't change the position of @page.
+ * This function splits huge page into pages in @new_order. @page can point to
+ * any subpage of huge page to split. Split doesn't change the position of
+ * @page.
  *
  * Only caller must hold pin on the @page, otherwise split fails with -EBUSY.
  * The huge page must be locked.
  *
  * If @list is null, tail pages will be added to LRU list, otherwise, to @list.
  *
- * Both head page and tail pages will inherit mapping, flags, and so on from
- * the hugepage.
+ * Pages in new_order will inherit mapping, flags, and so on from the hugepage.
  *
- * GUP pin and PG_locked transferred to @page. Rest subpages can be freed if
- * they are not mapped.
+ * GUP pin and PG_locked transferred to @page or the compound page @page belongs
+ * to. Rest subpages can be freed if they are not mapped.
  *
  * Returns 0 if the hugepage is split successfully.
  * Returns -EBUSY if the page is pinned or if anon_vma disappeared from under
  * us.
  */
-int split_huge_page_to_list(struct page *page, struct list_head *list)
+int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
+		unsigned int new_order)
 {
 	struct folio *folio = page_folio(page);
 	struct deferred_split *ds_queue = get_deferred_split_queue(folio);
-	XA_STATE(xas, &folio->mapping->i_pages, folio->index);
+	/* reset xarray order to new order after split */
+	XA_STATE_ORDER(xas, &folio->mapping->i_pages, folio->index, new_order);
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
 	int extra_pins, ret;
@@ -2649,6 +2675,18 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
 
+	/* Cannot split THP to order-1 (no order-1 THPs) */
+	if (new_order == 1) {
+		VM_WARN_ONCE(1, "Cannot split to order-1 folio");
+		return -EINVAL;
+	}
+
+	/* Splitting an anonymous folio to a non-zero order is not supported */
+	if (folio_test_anon(folio) && new_order) {
+		VM_WARN_ONCE(1, "Split anon folio to non-0 order not supported");
+		return -EINVAL;
+	}
+
 	is_hzp = is_huge_zero_page(&folio->page);
 	VM_WARN_ON_ONCE_FOLIO(is_hzp, folio);
 	if (is_hzp)
@@ -2744,7 +2782,13 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	if (folio_ref_freeze(folio, 1 + extra_pins)) {
 		if (!list_empty(&folio->_deferred_list)) {
 			ds_queue->split_queue_len--;
-			list_del(&folio->_deferred_list);
+			/*
+			 * Reinitialize page_deferred_list after removing the
+			 * page from the split_queue, otherwise a subsequent
+			 * split will see list corruption when checking the
+			 * page_deferred_list.
+			 */
+			list_del_init(&folio->_deferred_list);
 		}
 		spin_unlock(&ds_queue->split_queue_lock);
 		if (mapping) {
@@ -2754,14 +2798,18 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 			if (folio_test_swapbacked(folio)) {
 				__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS,
 							-nr);
-			} else {
+			} else if (!new_order) {
+				/*
+				 * Decrease THP stats only if split to normal
+				 * pages
+				 */
 				__lruvec_stat_mod_folio(folio, NR_FILE_THPS,
 							-nr);
 				filemap_nr_thps_dec(mapping);
 			}
 		}
 
-		__split_huge_page(page, list, end);
+		__split_huge_page(page, list, end, new_order);
 		ret = 0;
 	} else {
 		spin_unlock(&ds_queue->split_queue_lock);
From: Zi Yan
To: "Matthew Wilcox (Oracle)", Yang Shi, Yu Zhao, linux-mm@kvack.org
Cc: Zi Yan, "Kirill A . Shutemov", Ryan Roberts, Michal Koutný, Roman Gushchin, "Zach O'Keefe", Andrew Morton, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: [PATCH v2 6/7] mm: truncate: split huge page cache page to a non-zero order if possible.
Date: Tue, 28 Mar 2023 21:17:11 -0400
Message-Id: <20230329011712.3242298-7-zi.yan@sent.com>
In-Reply-To: <20230329011712.3242298-1-zi.yan@sent.com>
References: <20230329011712.3242298-1-zi.yan@sent.com>
From: Zi Yan

To minimize the number of pages after a huge page truncation, we do not
need to split it all the way down to order-0. The huge page has at most
three parts: the part before the offset, the part to be truncated, and
the part remaining at the end.
Find the greatest common divisor of these three sizes and derive the new
page order from it, so we can split the huge page to this order and keep
the remaining pages as large and as few as possible.

Signed-off-by: Zi Yan
---
 mm/truncate.c | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/mm/truncate.c b/mm/truncate.c
index 86de31ed4d32..817efd5e94b4 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -22,6 +22,7 @@
 #include	/* grr. try_to_release_page */
 #include
 #include
+#include <linux/gcd.h>
 #include "internal.h"

 /*
@@ -211,7 +212,8 @@ int truncate_inode_folio(struct address_space *mapping, struct folio *folio)
 bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 {
 	loff_t pos = folio_pos(folio);
-	unsigned int offset, length;
+	unsigned int offset, length, remaining;
+	unsigned int new_order = folio_order(folio);

 	if (pos < start)
 		offset = start - pos;
@@ -222,6 +224,7 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 		length = length - offset;
 	else
 		length = end + 1 - pos - offset;
+	remaining = folio_size(folio) - offset - length;

 	folio_wait_writeback(folio);
 	if (length == folio_size(folio)) {
@@ -236,11 +239,25 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 	 */
 	folio_zero_range(folio, offset, length);

+	/*
+	 * Use the greatest common divisor of offset, length, and remaining
+	 * as the smallest page size and compute the new order from it. So we
+	 * can truncate a subpage as large as possible. Round up gcd to
+	 * PAGE_SIZE, otherwise ilog2 can give -1 when gcd/PAGE_SIZE is 0.
+	 */
+	new_order = ilog2(round_up(gcd(gcd(offset, length), remaining),
+				   PAGE_SIZE) / PAGE_SIZE);
+
+	/* order-1 THP not supported, downgrade to order-0 */
+	if (new_order == 1)
+		new_order = 0;
+
 	if (folio_has_private(folio))
 		folio_invalidate(folio, offset, length);
 	if (!folio_test_large(folio))
 		return true;
-	if (split_folio(folio) == 0)
+	if (split_huge_page_to_list_to_order(&folio->page, NULL, new_order) == 0)
 		return true;
 	if (folio_test_dirty(folio))
 		return false;

From patchwork Wed Mar 29 01:17:12 2023
From: Zi Yan
To: "Matthew Wilcox (Oracle)", Yang Shi, Yu Zhao, linux-mm@kvack.org
Cc: Zi Yan, "Kirill A . Shutemov", Ryan Roberts, Michal Koutný, Roman Gushchin, "Zach O'Keefe", Andrew Morton, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: [PATCH v2 7/7] mm: huge_memory: enable debugfs to split huge pages to any order.
Date: Tue, 28 Mar 2023 21:17:12 -0400
Message-Id: <20230329011712.3242298-8-zi.yan@sent.com>
In-Reply-To: <20230329011712.3242298-1-zi.yan@sent.com>
References: <20230329011712.3242298-1-zi.yan@sent.com>
From: Zi Yan

It is used to test split_huge_page_to_list_to_order for pagecache THPs.
Also add test cases for split_huge_page_to_list_to_order via debugfs,
via truncating a file, and via punching holes in a file.

Signed-off-by: Zi Yan
---
 mm/huge_memory.c                                |  34 ++-
 .../selftests/mm/split_huge_page_test.c         | 225 +++++++++++++++++-
 2 files changed, 242 insertions(+), 17 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 619d25278340..ad5b29558a51 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3023,7 +3023,7 @@ static inline bool vma_not_suitable_for_thp_split(struct vm_area_struct *vma)
 }

 static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
-				unsigned long vaddr_end)
+				unsigned long vaddr_end, unsigned int new_order)
 {
 	int ret = 0;
 	struct task_struct *task;
@@ -3085,13 +3085,19 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 			goto next;

 		total++;
-		if (!can_split_folio(page_folio(page), NULL))
+		/*
+		 * For folios with private, split_huge_page_to_list_to_order()
+		 * will try to drop it before split and then check if the folio
+		 * can be split or not. So skip the check here.
+		 */
+		if (!folio_test_private(page_folio(page)) &&
+		    !can_split_folio(page_folio(page), NULL))
 			goto next;

 		if (!trylock_page(page))
 			goto next;

-		if (!split_huge_page(page))
+		if (!split_huge_page_to_list_to_order(page, NULL, new_order))
 			split++;

 		unlock_page(page);
@@ -3109,7 +3115,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 }

 static int split_huge_pages_in_file(const char *file_path, pgoff_t off_start,
-				pgoff_t off_end)
+				pgoff_t off_end, unsigned int new_order)
 {
 	struct filename *file;
 	struct file *candidate;
@@ -3148,7 +3154,7 @@ static int split_huge_pages_in_file(const char *file_path, pgoff_t off_start,
 		if (!folio_trylock(folio))
 			goto next;

-		if (!split_folio(folio))
+		if (!split_huge_page_to_list_to_order(&folio->page, NULL, new_order))
 			split++;

 		folio_unlock(folio);
@@ -3173,10 +3179,14 @@ static ssize_t split_huge_pages_write(struct file *file, const char __user *buf,
 {
 	static DEFINE_MUTEX(split_debug_mutex);
 	ssize_t ret;
-	/* hold pid, start_vaddr, end_vaddr or file_path, off_start, off_end */
+	/*
+	 * hold pid, start_vaddr, end_vaddr, new_order or
+	 * file_path, off_start, off_end, new_order
+	 */
 	char input_buf[MAX_INPUT_BUF_SZ];
 	int pid;
 	unsigned long vaddr_start, vaddr_end;
+	unsigned int new_order = 0;

 	ret = mutex_lock_interruptible(&split_debug_mutex);
 	if (ret)
@@ -3205,29 +3215,29 @@ static ssize_t split_huge_pages_write(struct file *file, const char __user *buf,
 			goto out;
 		}

-		ret = sscanf(buf, "0x%lx,0x%lx", &off_start, &off_end);
-		if (ret != 2) {
+		ret = sscanf(buf, "0x%lx,0x%lx,%d", &off_start, &off_end, &new_order);
+		if (ret != 2 && ret != 3) {
 			ret = -EINVAL;
 			goto out;
 		}
-		ret = split_huge_pages_in_file(file_path, off_start, off_end);
+		ret = split_huge_pages_in_file(file_path, off_start, off_end, new_order);
 		if (!ret)
 			ret = input_len;

 		goto out;
 	}

-	ret = sscanf(input_buf, "%d,0x%lx,0x%lx", &pid, &vaddr_start, &vaddr_end);
+	ret = sscanf(input_buf, "%d,0x%lx,0x%lx,%d", &pid, &vaddr_start, &vaddr_end, &new_order);
 	if (ret == 1 && pid == 1) {
 		split_huge_pages_all();
 		ret = strlen(input_buf);
 		goto out;
-	} else if (ret != 3) {
+	} else if (ret != 3 && ret != 4) {
 		ret = -EINVAL;
 		goto out;
 	}

-	ret = split_huge_pages_pid(pid, vaddr_start, vaddr_end);
+	ret = split_huge_pages_pid(pid, vaddr_start, vaddr_end, new_order);
 	if (!ret)
 		ret = strlen(input_buf);
 out:
diff --git a/tools/testing/selftests/mm/split_huge_page_test.c b/tools/testing/selftests/mm/split_huge_page_test.c
index b8558c7f1a39..cbb5e6893cbf 100644
--- a/tools/testing/selftests/mm/split_huge_page_test.c
+++ b/tools/testing/selftests/mm/split_huge_page_test.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 #include "vm_util.h"

 uint64_t pagesize;
@@ -23,10 +24,12 @@ unsigned int pageshift;
 uint64_t pmd_pagesize;

 #define SPLIT_DEBUGFS "/sys/kernel/debug/split_huge_pages"
+#define SMAP_PATH "/proc/self/smaps"
+#define THP_FS_PATH "/mnt/thp_fs"
 #define INPUT_MAX 80

-#define PID_FMT "%d,0x%lx,0x%lx"
-#define PATH_FMT "%s,0x%lx,0x%lx"
+#define PID_FMT "%d,0x%lx,0x%lx,%d"
+#define PATH_FMT "%s,0x%lx,0x%lx,%d"

 #define PFN_MASK ((1UL<<55)-1)
 #define KPF_THP (1UL<<22)
@@ -113,7 +116,7 @@ void split_pmd_thp(void)

 	/* split all THPs */
 	write_debugfs(PID_FMT, getpid(), (uint64_t)one_page,
-		(uint64_t)one_page + len);
+		(uint64_t)one_page + len, 0);

 	for (i = 0; i < len; i++)
 		if (one_page[i] != (char)i) {
@@ -203,7 +206,7 @@ void split_pte_mapped_thp(void)

 	/* split all remapped THPs */
 	write_debugfs(PID_FMT, getpid(), (uint64_t)pte_mapped,
-		      (uint64_t)pte_mapped + pagesize * 4);
+		      (uint64_t)pte_mapped + pagesize * 4, 0);

 	/* smap does not show THPs after mremap, use kpageflags instead */
 	thp_size = 0;
@@ -269,7 +272,7 @@ void split_file_backed_thp(void)
 	}

 	/* split the file-backed THP */
-	write_debugfs(PATH_FMT, testfile, pgoff_start, pgoff_end);
+	write_debugfs(PATH_FMT, testfile, pgoff_start, pgoff_end, 0);

 	status = unlink(testfile);
 	if (status)
@@ -290,20 +293,232 @@ void split_file_backed_thp(void)
 	printf("file-backed THP split test done, please check dmesg for more information\n");
 }

+void create_pagecache_thp_and_fd(const char *testfile, size_t fd_size, int *fd, char **addr)
+{
+	size_t i;
+	int dummy;
+
+	srand(time(NULL));
+
+	*fd = open(testfile, O_CREAT | O_RDWR, 0664);
+	if (*fd == -1) {
+		perror("Failed to create a file at "THP_FS_PATH);
+		exit(EXIT_FAILURE);
+	}
+
+	for (i = 0; i < fd_size; i++) {
+		unsigned char byte = (unsigned char)i;
+
+		write(*fd, &byte, sizeof(byte));
+	}
+	close(*fd);
+	sync();
+	*fd = open("/proc/sys/vm/drop_caches", O_WRONLY);
+	if (*fd == -1) {
+		perror("open drop_caches");
+		goto err_out_unlink;
+	}
+	if (write(*fd, "3", 1) != 1) {
+		perror("write to drop_caches");
+		goto err_out_unlink;
+	}
+	close(*fd);
+
+	*fd = open(testfile, O_RDWR);
+	if (*fd == -1) {
+		perror("Failed to open a file at "THP_FS_PATH);
+		goto err_out_unlink;
+	}
+
+	*addr = mmap(NULL, fd_size, PROT_READ|PROT_WRITE, MAP_SHARED, *fd, 0);
+	if (*addr == (char *)-1) {
+		perror("cannot mmap");
+		goto err_out_close;
+	}
+	madvise(*addr, fd_size, MADV_HUGEPAGE);
+
+	for (size_t i = 0; i < fd_size; i++)
+		dummy += *(*addr + i);
+
+	if (!check_huge_file(*addr, fd_size / pmd_pagesize, pmd_pagesize)) {
+		printf("No pagecache THP generated, please mount a filesystem supporting pagecache THP at "THP_FS_PATH"\n");
+		goto err_out_close;
+	}
+	return;
+err_out_close:
+	close(*fd);
+err_out_unlink:
+	unlink(testfile);
+	exit(EXIT_FAILURE);
+}
+
+void split_thp_in_pagecache_to_order(size_t fd_size, int order)
+{
+	int fd;
+	char *addr;
+	size_t i;
+	const char testfile[] = THP_FS_PATH "/test";
+	int err = 0;
+
+	create_pagecache_thp_and_fd(testfile, fd_size, &fd, &addr);
+
+	printf("split %ld kB PMD-mapped pagecache page to order %d ... ", fd_size >> 10, order);
+	write_debugfs(PID_FMT, getpid(), (uint64_t)addr, (uint64_t)addr + fd_size, order);
+
+	for (i = 0; i < fd_size; i++)
+		if (*(addr + i) != (char)i) {
+			printf("%lu byte corrupted in the file\n", i);
+			err = EXIT_FAILURE;
+			goto out;
+		}
+
+	if (!check_huge_file(addr, 0, pmd_pagesize)) {
+		printf("Still FilePmdMapped not split\n");
+		err = EXIT_FAILURE;
+		goto out;
+	}
+
+	printf("done\n");
+out:
+	close(fd);
+	unlink(testfile);
+	if (err)
+		exit(err);
+}
+
+void truncate_thp_in_pagecache_to_order(size_t fd_size, int order)
+{
+	int fd;
+	char *addr;
+	size_t i;
+	const char testfile[] = THP_FS_PATH "/test";
+	int err = 0;
+
+	create_pagecache_thp_and_fd(testfile, fd_size, &fd, &addr);
+
+	printf("truncate %ld kB PMD-mapped pagecache page to size %lu kB ... ",
+	       fd_size >> 10, 4UL << order);
+	ftruncate(fd, pagesize << order);
+
+	for (i = 0; i < (pagesize << order); i++)
+		if (*(addr + i) != (char)i) {
+			printf("%lu byte corrupted in the file\n", i);
+			err = EXIT_FAILURE;
+			goto out;
+		}
+
+	if (!check_huge_file(addr, 0, pmd_pagesize)) {
+		printf("Still FilePmdMapped not split after truncate\n");
+		err = EXIT_FAILURE;
+		goto out;
+	}
+
+	printf("done\n");
+out:
+	close(fd);
+	unlink(testfile);
+	if (err)
+		exit(err);
+}
+
+void punch_hole_in_pagecache_thp(size_t fd_size, off_t offset[], off_t len[],
+		int n, int num_left_thps)
+{
+	int fd, j;
+	char *addr;
+	size_t i;
+	const char testfile[] = THP_FS_PATH "/test";
+	int err = 0;
+
+	create_pagecache_thp_and_fd(testfile, fd_size, &fd, &addr);
+
+	for (j = 0; j < n; j++) {
+		printf("punch a hole to %ld kB PMD-mapped pagecache page at addr: %lx, offset %ld, and len %ld ...\n",
+		       fd_size >> 10, (unsigned long)addr, offset[j], len[j]);
+		fallocate(fd, FALLOC_FL_PUNCH_HOLE|FALLOC_FL_KEEP_SIZE, offset[j], len[j]);
+	}
+
+	for (i = 0; i < fd_size; i++) {
+		int in_hole = 0;
+
+		for (j = 0; j < n; j++)
+			if (i >= offset[j] && i < (offset[j] + len[j])) {
+				in_hole = 1;
+				break;
+			}
+
+		if (in_hole) {
+			if (*(addr + i)) {
+				printf("%lu byte non-zero after punch\n", i);
+				err = EXIT_FAILURE;
+				goto out;
+			}
+			continue;
+		}
+		if (*(addr + i) != (char)i) {
+			printf("%lu byte corrupted in the file\n", i);
+			err = EXIT_FAILURE;
+			goto out;
+		}
+	}
+
+	if (!check_huge_file(addr, num_left_thps, pmd_pagesize)) {
+		printf("Still FilePmdMapped not split after punch\n");
+		goto out;
+	}
+	printf("done\n");
+out:
+	close(fd);
+	unlink(testfile);
+	if (err)
+		exit(err);
+}
+
 int main(int argc, char **argv)
 {
+	int i;
+	size_t fd_size;
+	off_t offset[2], len[2];
+
 	if (geteuid() != 0) {
 		printf("Please run the benchmark as root\n");
 		exit(EXIT_FAILURE);
 	}

+	setbuf(stdout, NULL);
+
 	pagesize = getpagesize();
 	pageshift = ffs(pagesize) - 1;
 	pmd_pagesize = read_pmd_pagesize();
+	fd_size = 2 * pmd_pagesize;

 	split_pmd_thp();
 	split_pte_mapped_thp();
 	split_file_backed_thp();

+	for (i = 8; i >= 0; i--)
+		if (i != 1)
+			split_thp_in_pagecache_to_order(fd_size, i);
+
+	/*
+	 * for i is 1, truncate code in the kernel should create order-0 pages
+	 * instead of order-1 THPs, since order-1 THP is not supported. No error
+	 * is expected.
+	 */
+	for (i = 8; i >= 0; i--)
+		truncate_thp_in_pagecache_to_order(fd_size, i);
+
+	offset[0] = 123;
+	offset[1] = 4 * pagesize;
+	len[0] = 200 * pagesize;
+	len[1] = 16 * pagesize;
+	punch_hole_in_pagecache_thp(fd_size, offset, len, 2, 1);
+
+	offset[0] = 259 * pagesize + pagesize / 2;
+	offset[1] = 33 * pagesize;
+	len[0] = 129 * pagesize;
+	len[1] = 16 * pagesize;
+	punch_hole_in_pagecache_thp(fd_size, offset, len, 2, 1);
+
 	return 0;
 }
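The extended debugfs input formats accepted by split_huge_pages_write() can also be driven by hand from a shell. A sketch of building the two command strings (the pid, addresses, offsets, and order below are made-up example values; the actual write, shown commented out, needs root, CONFIG_DEBUG_FS, and a mounted debugfs):

```shell
#!/bin/sh
pid=1234
vaddr_start=0x700000000000
vaddr_end=0x700000200000
new_order=4

# <pid>,<vaddr_start>,<vaddr_end>[,<new_order>] form (PID_FMT in the selftest):
pid_cmd="${pid},${vaddr_start},${vaddr_end},${new_order}"
# <path>,<off_start>,<off_end>[,<new_order>] form (PATH_FMT in the selftest):
path_cmd="/mnt/thp_fs/test,0x0,0x200000,${new_order}"

echo "$pid_cmd"
echo "$path_cmd"
# As root, either string would be written to the debugfs knob, e.g.:
#   echo "$pid_cmd" > /sys/kernel/debug/split_huge_pages
```

Omitting the trailing `,<new_order>` keeps the old behavior of splitting all the way to order-0, since the kernel accepts both three- and four-field inputs and initializes new_order to 0.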