From patchwork Mon Mar 21 14:21:24 2022
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 12787303
From: Zi Yan
To: Matthew Wilcox, linux-mm@kvack.org
Cc: Roman Gushchin, Shuah Khan, Yang Shi, Miaohe Lin, Hugh Dickins,
    "Kirill A. Shutemov", linux-kernel@vger.kernel.org,
    cgroups@vger.kernel.org, linux-kselftest@vger.kernel.org, Zi Yan
Subject: [RFC PATCH 1/5] mm: memcg: make memcg huge page split support any order split.
Date: Mon, 21 Mar 2022 10:21:24 -0400
Message-Id: <20220321142128.2471199-2-zi.yan@sent.com>
In-Reply-To: <20220321142128.2471199-1-zi.yan@sent.com>
References: <20220321142128.2471199-1-zi.yan@sent.com>

From: Zi Yan

Set memcg information for the pages after the split. The new parameter
new_order gives the order of the after-split pages; it is always 0 for
now. This prepares for upcoming changes to support splitting huge pages
to any lower order.

Signed-off-by: Zi Yan
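For intuition, the accounting in split_page_memcg() below can be checked
with a small standalone C program (userspace, illustrative only; the
orders chosen are arbitrary): after an order-9 page is split into
order-2 pages, every (1 << new_order)-th subpage gets its memcg_data
set, and nr / new_nr - 1 extra references are taken, one for each
after-split page beyond the head.

#include <stdio.h>

int main(void)
{
	unsigned int nr = 1 << 9;	/* order-9 THP: 512 base pages (assumption) */
	unsigned int new_order = 2;	/* split into order-2 pages (assumption) */
	unsigned int new_nr = 1 << new_order;
	unsigned int heads = 0, i;

	/* Mirrors: for (i = new_nr; i < nr; i += new_nr) ... set memcg_data */
	for (i = new_nr; i < nr; i += new_nr)
		heads++;

	/* Matches css_get_many()/obj_cgroup_get_many(): one ref per extra page. */
	printf("after-split pages: %u, extra refs: %u, tail heads touched: %u\n",
	       nr / new_nr, nr / new_nr - 1, heads);
	return 0;
}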
---
 include/linux/memcontrol.h |  2 +-
 mm/huge_memory.c           |  2 +-
 mm/memcontrol.c            | 10 +++++-----
 mm/page_alloc.c            |  2 +-
 4 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 89b14729d59f..e71189454bf0 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1116,7 +1116,7 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
 	rcu_read_unlock();
 }
 
-void split_page_memcg(struct page *head, unsigned int nr);
+void split_page_memcg(struct page *head, unsigned int nr, unsigned int new_order);
 
 unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
 						gfp_t gfp_mask,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2fe38212e07c..640040c386f0 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2371,7 +2371,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	int i;
 
 	/* complete memcg works before add pages to LRU */
-	split_page_memcg(head, nr);
+	split_page_memcg(head, nr, 0);
 
 	if (PageAnon(head) && PageSwapCache(head)) {
 		swp_entry_t entry = { .val = page_private(head) };
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 43b2a22ce812..e7da413ac174 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3262,22 +3262,22 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
 /*
  * Because page_memcg(head) is not set on tails, set it now.
  */
-void split_page_memcg(struct page *head, unsigned int nr)
+void split_page_memcg(struct page *head, unsigned int nr, unsigned int new_order)
 {
 	struct folio *folio = page_folio(head);
 	struct mem_cgroup *memcg = folio_memcg(folio);
-	int i;
+	int i, new_nr = 1 << new_order;
 
 	if (mem_cgroup_disabled() || !memcg)
 		return;
 
-	for (i = 1; i < nr; i++)
+	for (i = new_nr; i < nr; i += new_nr)
 		folio_page(folio, i)->memcg_data = folio->memcg_data;
 
 	if (folio_memcg_kmem(folio))
-		obj_cgroup_get_many(__folio_objcg(folio), nr - 1);
+		obj_cgroup_get_many(__folio_objcg(folio), nr / new_nr - 1);
 	else
-		css_get_many(&memcg->css, nr - 1);
+		css_get_many(&memcg->css, nr / new_nr - 1);
 }
 
 #ifdef CONFIG_MEMCG_SWAP
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f648decfe39d..d982919b9e51 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3515,7 +3515,7 @@ void split_page(struct page *page, unsigned int order)
 	for (i = 1; i < (1 << order); i++)
 		set_page_refcounted(page + i);
 	split_page_owner(page, 1 << order);
-	split_page_memcg(page, 1 << order);
+	split_page_memcg(page, 1 << order, 0);
 }
 EXPORT_SYMBOL_GPL(split_page);

From patchwork Mon Mar 21 14:21:25 2022
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 12787304
From: Zi Yan
To: Matthew Wilcox, linux-mm@kvack.org
Cc: Roman Gushchin, Shuah Khan, Yang Shi, Miaohe Lin, Hugh Dickins,
    "Kirill A. Shutemov", linux-kernel@vger.kernel.org,
    cgroups@vger.kernel.org, linux-kselftest@vger.kernel.org, Zi Yan
Subject: [RFC PATCH 2/5] mm: page_owner: add support for splitting to any order in split page_owner.
Date: Mon, 21 Mar 2022 10:21:25 -0400
Message-Id: <20220321142128.2471199-3-zi.yan@sent.com>
In-Reply-To: <20220321142128.2471199-1-zi.yan@sent.com>
References: <20220321142128.2471199-1-zi.yan@sent.com>

From: Zi Yan

Add a new_order parameter to set the after-split page order in page
owner, and pass old_order instead of nr so the parameters read
consistently. This prepares for upcoming changes to support splitting
huge pages to any lower order.
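A minimal userspace sketch of the new loop shape in __split_page_owner()
(struct page_owner and the page_ext lookup are stubbed out; the orders
are arbitrary): only every (1 << new_order)-th subpage, i.e. the head of
each after-split page, is visited, and its recorded order becomes
new_order.

#include <stdio.h>

/* Stand-in for struct page_owner; only the order field matters here. */
struct fake_owner { unsigned short order; };

static void fake_split_page_owner(struct fake_owner *owners,
				  unsigned short old_order,
				  unsigned short new_order)
{
	int i, old_nr = 1 << old_order, new_nr = 1 << new_order;

	/* Visit only the head of each after-split page, as the patch does. */
	for (i = 0; i < old_nr; i += new_nr)
		owners[i].order = new_order;
}

int main(void)
{
	struct fake_owner owners[16] = { { 4 } };	/* one order-4 page */

	fake_split_page_owner(owners, 4, 2);		/* split to order-2 */
	printf("owners[0].order=%d owners[4].order=%d owners[8].order=%d\n",
	       owners[0].order, owners[4].order, owners[8].order);
	return 0;
}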
Signed-off-by: Zi Yan
Reviewed-by: Roman Gushchin
---
 include/linux/page_owner.h | 12 +++++++-----
 mm/huge_memory.c           |  3 ++-
 mm/page_alloc.c            |  2 +-
 mm/page_owner.c            | 13 +++++++------
 4 files changed, 17 insertions(+), 13 deletions(-)

diff --git a/include/linux/page_owner.h b/include/linux/page_owner.h
index 119a0c9d2a8b..16050cc89274 100644
--- a/include/linux/page_owner.h
+++ b/include/linux/page_owner.h
@@ -11,7 +11,8 @@ extern struct page_ext_operations page_owner_ops;
 extern void __reset_page_owner(struct page *page, unsigned short order);
 extern void __set_page_owner(struct page *page,
 			unsigned short order, gfp_t gfp_mask);
-extern void __split_page_owner(struct page *page, unsigned int nr);
+extern void __split_page_owner(struct page *page, unsigned short old_order,
+			unsigned short new_order);
 extern void __folio_copy_owner(struct folio *newfolio, struct folio *old);
 extern void __set_page_owner_migrate_reason(struct page *page, int reason);
 extern void __dump_page_owner(const struct page *page);
@@ -31,10 +32,11 @@ static inline void set_page_owner(struct page *page,
 	__set_page_owner(page, order, gfp_mask);
 }
 
-static inline void split_page_owner(struct page *page, unsigned int nr)
+static inline void split_page_owner(struct page *page, unsigned int old_order,
+			unsigned int new_order)
 {
 	if (static_branch_unlikely(&page_owner_inited))
-		__split_page_owner(page, nr);
+		__split_page_owner(page, old_order, new_order);
 }
 static inline void folio_copy_owner(struct folio *newfolio, struct folio *old)
 {
@@ -56,11 +58,11 @@ static inline void reset_page_owner(struct page *page, unsigned short order)
 {
 }
 static inline void set_page_owner(struct page *page,
-			unsigned int order, gfp_t gfp_mask)
+			unsigned short order, gfp_t gfp_mask)
 {
 }
 static inline void split_page_owner(struct page *page,
-			unsigned short order)
+			unsigned short old_order, unsigned short new_order)
 {
 }
 static inline void folio_copy_owner(struct folio *newfolio, struct folio *folio)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 640040c386f0..fcfa46af6c4c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2367,6 +2367,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	struct lruvec *lruvec;
 	struct address_space *swap_cache = NULL;
 	unsigned long offset = 0;
+	unsigned int order = thp_order(head);
 	unsigned int nr = thp_nr_pages(head);
 	int i;
 
@@ -2408,7 +2409,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	unlock_page_lruvec(lruvec);
 	/* Caller disabled irqs, so they are still disabled here */
 
-	split_page_owner(head, nr);
+	split_page_owner(head, order, 0);
 
 	/* See comment in __split_huge_page_tail() */
 	if (PageAnon(head)) {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d982919b9e51..9cac40c26c58 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3514,7 +3514,7 @@ void split_page(struct page *page, unsigned int order)
 
 	for (i = 1; i < (1 << order); i++)
 		set_page_refcounted(page + i);
-	split_page_owner(page, 1 << order);
+	split_page_owner(page, order, 0);
 	split_page_memcg(page, 1 << order, 0);
 }
 EXPORT_SYMBOL_GPL(split_page);
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 0a9588506571..52013c846d19 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -202,19 +202,20 @@ void __set_page_owner_migrate_reason(struct page *page, int reason)
 	page_owner->last_migrate_reason = reason;
 }
 
-void __split_page_owner(struct page *page, unsigned int nr)
+void __split_page_owner(struct page *page, unsigned short old_order,
+			unsigned short new_order)
 {
-	int i;
-	struct page_ext *page_ext = lookup_page_ext(page);
+	int i, old_nr = 1 << old_order, new_nr = 1 << new_order;
+	struct page_ext *page_ext;
 	struct page_owner *page_owner;
 
 	if (unlikely(!page_ext))
 		return;
 
-	for (i = 0; i < nr; i++) {
+	for (i = 0; i < old_nr; i += new_nr) {
+		page_ext = lookup_page_ext(page + i);
 		page_owner = get_page_owner(page_ext);
-		page_owner->order = 0;
-		page_ext = page_ext_next(page_ext);
+		page_owner->order = new_order;
 	}
 }

From patchwork Mon Mar 21 14:21:26 2022
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 12787305
From: Zi Yan
To: Matthew Wilcox, linux-mm@kvack.org
Cc: Roman Gushchin, Shuah Khan, Yang Shi, Miaohe Lin, Hugh Dickins,
    "Kirill A. Shutemov", linux-kernel@vger.kernel.org,
    cgroups@vger.kernel.org, linux-kselftest@vger.kernel.org, Zi Yan
Subject: [RFC PATCH 3/5] mm: thp: split huge page to any lower order pages.
Date: Mon, 21 Mar 2022 10:21:26 -0400
Message-Id: <20220321142128.2471199-4-zi.yan@sent.com>
In-Reply-To: <20220321142128.2471199-1-zi.yan@sent.com>
References: <20220321142128.2471199-1-zi.yan@sent.com>

From: Zi Yan

To split a THP to pages of any lower order, we need to re-form THPs on
the subpages at the given order and add page refcounts based on the new
page order. We also need to reinitialize page_deferred_list after
removing a page from the split_queue; otherwise a subsequent split will
see list corruption when it checks page_deferred_list again.

This has many uses, such as minimizing the number of pages left after
truncating a pagecache THP. Anonymous THPs can still only be split to
order-0, as before, until any-size anonymous THPs are supported.
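To see how the tail loop in __split_huge_page() below walks the
subpages, here is a standalone C demo (userspace, illustrative; the
orders are arbitrary): tail pages are peeled off back to front in
strides of new_nr, and the old head keeps the first new_nr subpages.

#include <stdio.h>

int main(void)
{
	unsigned int nr = 1 << 9;	/* order-9 THP (assumption) */
	unsigned int new_order = 4;	/* split to order-4 pages (assumption) */
	unsigned int new_nr = 1 << new_order;
	unsigned int tails = 0;
	int i;

	/* Mirrors the tail loop: for (i = nr - new_nr; i >= new_nr; i -= new_nr) */
	for (i = nr - new_nr; i >= (int)new_nr; i -= new_nr)
		tails++;	/* each i is the head subpage of a new order-4 page */

	/* 31 tails plus the old head = 512/16 = 32 after-split pages */
	printf("tail pages created: %u, total after-split pages: %u\n",
	       tails, tails + 1);
	return 0;
}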
Signed-off-by: Zi Yan
Reviewed-by: Roman Gushchin
---
 include/linux/huge_mm.h |   8 +++
 mm/huge_memory.c        | 111 ++++++++++++++++++++++++++++++----------
 2 files changed, 91 insertions(+), 28 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 2999190adc22..c7153cd7e9e4 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -186,6 +186,8 @@ void free_transhuge_page(struct page *page);
 
 bool can_split_folio(struct folio *folio, int *pextra_pins);
 int split_huge_page_to_list(struct page *page, struct list_head *list);
+int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
+		unsigned int new_order);
 static inline int split_huge_page(struct page *page)
 {
 	return split_huge_page_to_list(page, NULL);
@@ -355,6 +357,12 @@ split_huge_page_to_list(struct page *page, struct list_head *list)
 {
 	return 0;
 }
+static inline int
+split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
+		unsigned int new_order)
+{
+	return 0;
+}
 static inline int split_huge_page(struct page *page)
 {
 	return 0;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index fcfa46af6c4c..3617aa3ad0b1 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2236,11 +2236,13 @@ void vma_adjust_trans_huge(struct vm_area_struct *vma,
 static void unmap_page(struct page *page)
 {
 	struct folio *folio = page_folio(page);
-	enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
-		TTU_SYNC;
+	enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SYNC;
 
 	VM_BUG_ON_PAGE(!PageHead(page), page);
 
+	if (folio_order(folio) >= HPAGE_PMD_ORDER)
+		ttu_flags |= TTU_SPLIT_HUGE_PMD;
+
 	/*
 	 * Anon pages need migration entries to preserve them, but file
 	 * pages can simply be left unmapped, then faulted back on demand.
@@ -2254,9 +2256,9 @@ static void unmap_page(struct page *page)
 	VM_WARN_ON_ONCE_PAGE(page_mapped(page), page);
 }
 
-static void remap_page(struct folio *folio, unsigned long nr)
+static void remap_page(struct folio *folio, unsigned short nr)
 {
-	int i = 0;
+	unsigned int i;
 
 	/* If unmap_page() uses try_to_migrate() on file, remove this check */
 	if (!folio_test_anon(folio))
@@ -2274,7 +2276,6 @@ static void lru_add_page_tail(struct page *head, struct page *tail,
 		struct lruvec *lruvec, struct list_head *list)
 {
 	VM_BUG_ON_PAGE(!PageHead(head), head);
-	VM_BUG_ON_PAGE(PageCompound(tail), head);
 	VM_BUG_ON_PAGE(PageLRU(tail), head);
 	lockdep_assert_held(&lruvec->lru_lock);
 
@@ -2295,9 +2296,10 @@ static void lru_add_page_tail(struct page *head, struct page *tail,
 }
 
 static void __split_huge_page_tail(struct page *head, int tail,
-		struct lruvec *lruvec, struct list_head *list)
+		struct lruvec *lruvec, struct list_head *list, unsigned int new_order)
 {
 	struct page *page_tail = head + tail;
+	unsigned long compound_head_flag = new_order ? (1L << PG_head) : 0;
 
 	VM_BUG_ON_PAGE(atomic_read(&page_tail->_mapcount) != -1, page_tail);
 
@@ -2321,6 +2323,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
 #ifdef CONFIG_64BIT
 			 (1L << PG_arch_2) |
 #endif
+			 compound_head_flag |
 			 (1L << PG_dirty)));
 
 	/* ->mapping in first tail page is compound_mapcount */
@@ -2329,7 +2332,10 @@ static void __split_huge_page_tail(struct page *head, int tail,
 	page_tail->mapping = head->mapping;
 	page_tail->index = head->index + tail;
 
-	/* Page flags must be visible before we make the page non-compound. */
+	/*
+	 * Page flags must be visible before we make the page non-compound or
+	 * a compound page in new_order.
+	 */
 	smp_wmb();
 
 	/*
@@ -2339,10 +2345,15 @@ static void __split_huge_page_tail(struct page *head, int tail,
 	 * which needs correct compound_head().
 	 */
 	clear_compound_head(page_tail);
+	if (new_order) {
+		prep_compound_page(page_tail, new_order);
+		prep_transhuge_page(page_tail);
+	}
 
 	/* Finally unfreeze refcount. Additional reference from page cache. */
-	page_ref_unfreeze(page_tail, 1 + (!PageAnon(head) ||
-					  PageSwapCache(head)));
+	page_ref_unfreeze(page_tail, 1 + ((!PageAnon(head) ||
+					   PageSwapCache(head)) ?
+					  thp_nr_pages(page_tail) : 0));
 
 	if (page_is_young(head))
 		set_page_young(page_tail);
@@ -2360,7 +2371,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
 }
 
 static void __split_huge_page(struct page *page, struct list_head *list,
-		pgoff_t end)
+		pgoff_t end, unsigned int new_order)
 {
 	struct folio *folio = page_folio(page);
 	struct page *head = &folio->page;
@@ -2369,10 +2380,11 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	unsigned long offset = 0;
 	unsigned int order = thp_order(head);
 	unsigned int nr = thp_nr_pages(head);
+	unsigned int new_nr = 1 << new_order;
 	int i;
 
 	/* complete memcg works before add pages to LRU */
-	split_page_memcg(head, nr, 0);
+	split_page_memcg(head, nr, new_order);
 
 	if (PageAnon(head) && PageSwapCache(head)) {
 		swp_entry_t entry = { .val = page_private(head) };
@@ -2387,42 +2399,50 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 
 	ClearPageHasHWPoisoned(head);
 
-	for (i = nr - 1; i >= 1; i--) {
-		__split_huge_page_tail(head, i, lruvec, list);
+	for (i = nr - new_nr; i >= new_nr; i -= new_nr) {
+		__split_huge_page_tail(head, i, lruvec, list, new_order);
 		/* Some pages can be beyond EOF: drop them from page cache */
 		if (head[i].index >= end) {
 			ClearPageDirty(head + i);
 			__delete_from_page_cache(head + i, NULL);
 			if (shmem_mapping(head->mapping))
-				shmem_uncharge(head->mapping->host, 1);
+				shmem_uncharge(head->mapping->host, new_nr);
 			put_page(head + i);
 		} else if (!PageAnon(page)) {
 			__xa_store(&head->mapping->i_pages, head[i].index,
 					head + i, 0);
 		} else if (swap_cache) {
+			/*
+			 * split anonymous THPs (including swapped out ones) to
+			 * non-zero order not supported
+			 */
+			VM_BUG_ON(new_order);
 			__xa_store(&swap_cache->i_pages, offset + i,
 					head + i, 0);
 		}
 	}
 
-	ClearPageCompound(head);
+	if (!new_order)
+		ClearPageCompound(head);
+	else
+		set_compound_order(head, new_order);
 	unlock_page_lruvec(lruvec);
 	/* Caller disabled irqs, so they are still disabled here */
 
-	split_page_owner(head, order, 0);
+	split_page_owner(head, order, new_order);
 
 	/* See comment in __split_huge_page_tail() */
 	if (PageAnon(head)) {
 		/* Additional pin to swap cache */
 		if (PageSwapCache(head)) {
-			page_ref_add(head, 2);
+			page_ref_add(head, 1 + new_nr);
 			xa_unlock(&swap_cache->i_pages);
 		} else {
 			page_ref_inc(head);
 		}
 	} else {
 		/* Additional pin to page cache */
-		page_ref_add(head, 2);
+		page_ref_add(head, 1 + new_nr);
 		xa_unlock(&head->mapping->i_pages);
 	}
 	local_irq_enable();
@@ -2435,7 +2455,14 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 			split_swap_cluster(entry);
 	}
 
-	for (i = 0; i < nr; i++) {
+	/*
+	 * set page to its compound_head when split to THPs, so that GUP pin and
+	 * PG_locked are transferred to the right after-split page
+	 */
+	if (new_order)
+		page = compound_head(page);
+
+	for (i = 0; i < nr; i += new_nr) {
 		struct page *subpage = head + i;
 		if (subpage == page)
 			continue;
@@ -2472,36 +2499,60 @@ bool can_split_folio(struct folio *folio, int *pextra_pins)
 * This function splits huge page into normal pages. @page can point to any
 * subpage of huge page to split. Split doesn't change the position of @page.
 *
+ * See split_huge_page_to_list_to_order() for more details.
+ *
+ * Returns 0 if the hugepage is split successfully.
+ * Returns -EBUSY if the page is pinned or if anon_vma disappeared from under
+ * us.
+ */
+int split_huge_page_to_list(struct page *page, struct list_head *list)
+{
+	return split_huge_page_to_list_to_order(page, list, 0);
+}
+
+/*
+ * This function splits huge page into pages in @new_order. @page can point to
+ * any subpage of huge page to split. Split doesn't change the position of
+ * @page.
+ *
 * Only caller must hold pin on the @page, otherwise split fails with -EBUSY.
 * The huge page must be locked.
 *
 * If @list is null, tail pages will be added to LRU list, otherwise, to @list.
 *
- * Both head page and tail pages will inherit mapping, flags, and so on from
- * the hugepage.
+ * Pages in new_order will inherit mapping, flags, and so on from the hugepage.
 *
- * GUP pin and PG_locked transferred to @page. Rest subpages can be freed if
- * they are not mapped.
+ * GUP pin and PG_locked transferred to @page or the compound page @page belongs
+ * to. Rest subpages can be freed if they are not mapped.
 *
 * Returns 0 if the hugepage is split successfully.
 * Returns -EBUSY if the page is pinned or if anon_vma disappeared from under
 * us.
 */
-int split_huge_page_to_list(struct page *page, struct list_head *list)
+int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
+		unsigned int new_order)
 {
 	struct folio *folio = page_folio(page);
 	struct page *head = &folio->page;
 	struct deferred_split *ds_queue = get_deferred_split_queue(head);
-	XA_STATE(xas, &head->mapping->i_pages, head->index);
+	/* reset xarray order to new order after split */
+	XA_STATE_ORDER(xas, &head->mapping->i_pages, head->index, new_order);
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
 	int extra_pins, ret;
 	pgoff_t end;
 
+	VM_BUG_ON(thp_order(head) <= new_order);
 	VM_BUG_ON_PAGE(is_huge_zero_page(head), head);
 	VM_BUG_ON_PAGE(!PageLocked(head), head);
 	VM_BUG_ON_PAGE(!PageCompound(head), head);
 
+	/* Cannot split THP to order-1 (no order-1 THPs) */
+	VM_BUG_ON(new_order == 1);
+
+	/* Split anonymous THP to non-zero order not support */
+	VM_BUG_ON(PageAnon(head) && new_order);
+
 	if (PageWriteback(head))
 		return -EBUSY;
@@ -2582,7 +2633,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	if (page_ref_freeze(head, 1 + extra_pins)) {
 		if (!list_empty(page_deferred_list(head))) {
 			ds_queue->split_queue_len--;
-			list_del(page_deferred_list(head));
+			list_del_init(page_deferred_list(head));
 		}
 		spin_unlock(&ds_queue->split_queue_lock);
 		if (mapping) {
@@ -2592,14 +2643,18 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 			if (PageSwapBacked(head)) {
 				__mod_lruvec_page_state(head, NR_SHMEM_THPS,
 							-nr);
-			} else {
+			} else if (!new_order) {
+				/*
+				 * Decrease THP stats only if split to normal
+				 * pages
+				 */
 				__mod_lruvec_page_state(head, NR_FILE_THPS,
 							-nr);
 				filemap_nr_thps_dec(mapping);
 			}
 		}
 
-		__split_huge_page(page, list, end);
+		__split_huge_page(page, list, end, new_order);
 		ret = 0;
 	} else {
 		spin_unlock(&ds_queue->split_queue_lock);

From patchwork Mon Mar 21 14:21:27 2022
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 12787306
From: Zi Yan
To: Matthew Wilcox, linux-mm@kvack.org
Cc: Roman Gushchin, Shuah Khan, Yang Shi, Miaohe Lin, Hugh Dickins,
    "Kirill A. Shutemov", linux-kernel@vger.kernel.org,
    cgroups@vger.kernel.org, linux-kselftest@vger.kernel.org, Zi Yan
Subject: [RFC PATCH 4/5] mm: truncate: split huge page cache page to a non-zero order if possible.
Date: Mon, 21 Mar 2022 10:21:27 -0400
Message-Id: <20220321142128.2471199-5-zi.yan@sent.com>
In-Reply-To: <20220321142128.2471199-1-zi.yan@sent.com>
References: <20220321142128.2471199-1-zi.yan@sent.com>

From: Zi Yan

To minimize the number of pages after a huge page truncation, we do not
need to split the page all the way down to order-0. A huge page being
truncated has at most three parts: the part before the truncation
offset, the part to be truncated, and the part remaining at the end.
Use the greatest power-of-two multiplier common to the non-zero sizes
of these parts as the new order, so we can split the huge page to this
order and keep the remaining pages as large and as few as possible.

Signed-off-by: Zi Yan
Reported-by: kernel test robot
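A worked example of the order computation may help. The sketch below is
standalone C: it replaces the kernel's ilog2()/rounddown_pow_of_two()
construction with __builtin_ctz() (a GCC/Clang builtin), which matches
the intended "greatest power-of-two multiplier" for the even, non-zero
page counts involved; the folio size and truncation range are made-up
numbers.

#include <stdio.h>

/* Intended semantics: the largest p such that (1 << p) divides num. */
static unsigned int pow2_multiplier(unsigned int num)
{
	return __builtin_ctz(num);	/* assumes num != 0 */
}

int main(void)
{
	unsigned int folio_pages = 512;	/* a 2MB folio of 4kB pages (assumption) */
	unsigned int offset = 128;	/* pages kept before the truncated range */
	unsigned int length = 320;	/* pages truncated */
	unsigned int remaining = folio_pages - offset - length;	/* 64 kept after */
	unsigned int new_order = 9;	/* starts at folio_order(folio) */

	if (offset && pow2_multiplier(offset) < new_order)
		new_order = pow2_multiplier(offset);		/* 7 */
	if (length && pow2_multiplier(length) < new_order)
		new_order = pow2_multiplier(length);		/* 6 */
	if (remaining && pow2_multiplier(remaining) < new_order)
		new_order = pow2_multiplier(remaining);		/* still 6 */

	if (new_order == 1)	/* no order-1 THPs */
		new_order = 0;

	printf("split the folio to order-%u pages\n", new_order);
	return 0;
}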
---
 mm/huge_memory.c |  1 +
 mm/truncate.c    | 33 +++++++++++++++++++++++++++++++--
 2 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3617aa3ad0b1..76db0092a1e2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2349,6 +2349,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
 		prep_compound_page(page_tail, new_order);
 		prep_transhuge_page(page_tail);
 	}
+	VM_BUG_ON_PAGE(PageTail(page_tail), page_tail);
 
 	/* Finally unfreeze refcount. Additional reference from page cache. */
 	page_ref_unfreeze(page_tail, 1 + ((!PageAnon(head) ||
diff --git a/mm/truncate.c b/mm/truncate.c
index ab50d0d59a2a..4f71e67dec09 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -197,6 +197,14 @@ int truncate_inode_folio(struct address_space *mapping, struct folio *folio)
 	return 0;
 }
 
+static unsigned int greatest_pow_of_two_multiplier(unsigned int num)
+{
+	if (num & 1)
+		return 0;
+	return min_t(unsigned int, ilog2(num),
+		     ilog2(num - rounddown_pow_of_two(num)));
+}
+
 /*
 * Handle partial folios. The folio may be entirely within the
 * range if a split has raced with us. If not, we zero the part of the
@@ -211,7 +219,8 @@ int truncate_inode_folio(struct address_space *mapping, struct folio *folio)
 bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 {
 	loff_t pos = folio_pos(folio);
-	unsigned int offset, length;
+	unsigned int offset, length, remaining;
+	unsigned int new_order = folio_order(folio);
 
 	if (pos < start)
 		offset = start - pos;
@@ -222,6 +231,7 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 		length = length - offset;
 	else
 		length = end + 1 - pos - offset;
+	remaining = folio_size(folio) - offset - length;
 
 	folio_wait_writeback(folio);
 	if (length == folio_size(folio)) {
@@ -236,11 +246,30 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 	 */
 	folio_zero_range(folio, offset, length);
 
+	/*
+	 * Find the greatest common power of two multiplier of the non-zero
+	 * offset, length, and remaining as the new order. So we can truncate
+	 * a subpage as large as possible.
+	 */
+	if (offset)
+		new_order = greatest_pow_of_two_multiplier(offset / PAGE_SIZE);
+	if (length)
+		new_order = min_t(unsigned int, new_order,
+				  greatest_pow_of_two_multiplier(length / PAGE_SIZE));
+	if (remaining)
+		new_order = min_t(unsigned int, new_order,
+				  greatest_pow_of_two_multiplier(remaining / PAGE_SIZE));
+
+	/* order-1 THP not supported, downgrade to order-0 */
+	if (new_order == 1)
+		new_order = 0;
+
+
 	if (folio_has_private(folio))
 		folio_invalidate(folio, offset, length);
 	if (!folio_test_large(folio))
 		return true;
-	if (split_huge_page(&folio->page) == 0)
+	if (split_huge_page_to_list_to_order(&folio->page, NULL, new_order) == 0)
 		return true;
 	if (folio_test_dirty(folio))
 		return false;

From patchwork Mon Mar 21 14:21:28 2022
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 12787307
From: Zi Yan
To: Matthew Wilcox, linux-mm@kvack.org
Cc: Roman Gushchin, Shuah Khan, Yang Shi, Miaohe Lin, Hugh Dickins,
    "Kirill A. Shutemov", linux-kernel@vger.kernel.org,
    cgroups@vger.kernel.org, linux-kselftest@vger.kernel.org, Zi Yan
Subject: [RFC PATCH 5/5] mm: huge_memory: enable debugfs to split huge pages to any order.
Date: Mon, 21 Mar 2022 10:21:28 -0400
Message-Id: <20220321142128.2471199-6-zi.yan@sent.com>
In-Reply-To: <20220321142128.2471199-1-zi.yan@sent.com>
References: <20220321142128.2471199-1-zi.yan@sent.com>

From: Zi Yan

This is used to test split_huge_page_to_list_to_order on pagecache
THPs. Also add test cases that exercise split_huge_page_to_list_to_order
via debugfs, via truncating a file, and via punching holes in a file.
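For reference, a minimal userspace sketch of driving the extended
debugfs interface (the path and the comma-separated format are those
used by the selftest below; the pid and address values are
placeholders): the optional new_order field is appended to the existing
pid,start,end input.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define SPLIT_DEBUGFS "/sys/kernel/debug/split_huge_pages"

int main(void)
{
	/* Placeholders: split [0x700000000000, 0x700000200000) of pid 1234 to order 4. */
	char input[80];
	int len = snprintf(input, sizeof(input), "%d,0x%lx,0x%lx,%d",
			   1234, 0x700000000000UL, 0x700000200000UL, 4);
	int fd = open(SPLIT_DEBUGFS, O_WRONLY);	/* needs root and debugfs mounted */

	if (fd == -1) {
		perror(SPLIT_DEBUGFS);
		return EXIT_FAILURE;
	}
	if (write(fd, input, len) != len)
		perror("write");
	close(fd);
	return 0;
}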
Signed-off-by: Zi Yan
Reviewed-by: Roman Gushchin
---
 mm/huge_memory.c                              |  26 ++-
 .../selftests/vm/split_huge_page_test.c       | 219 +++++++++++++++---
 2 files changed, 201 insertions(+), 44 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 76db0092a1e2..7645bb12fcbc 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2856,7 +2856,7 @@ static inline bool vma_not_suitable_for_thp_split(struct vm_area_struct *vma)
 }
 
 static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
-				unsigned long vaddr_end)
+				unsigned long vaddr_end, unsigned int new_order)
 {
 	int ret = 0;
 	struct task_struct *task;
@@ -2926,7 +2926,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 		if (!trylock_page(page))
 			goto next;
 
-		if (!split_huge_page(page))
+		if (!split_huge_page_to_list_to_order(page, NULL, new_order))
 			split++;
 
 		unlock_page(page);
@@ -2944,7 +2944,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 }
 
 static int split_huge_pages_in_file(const char *file_path, pgoff_t off_start,
-				pgoff_t off_end)
+				pgoff_t off_end, unsigned int new_order)
 {
 	struct filename *file;
 	struct file *candidate;
@@ -2984,7 +2984,7 @@ static int split_huge_pages_in_file(const char *file_path, pgoff_t off_start,
 		if (!trylock_page(fpage))
 			goto next;
 
-		if (!split_huge_page(fpage))
+		if (!split_huge_page_to_list_to_order(fpage, NULL, new_order))
 			split++;
 
 		unlock_page(fpage);
@@ -3009,10 +3009,14 @@ static ssize_t split_huge_pages_write(struct file *file, const char __user *buf,
 {
 	static DEFINE_MUTEX(split_debug_mutex);
 	ssize_t ret;
-	/* hold pid, start_vaddr, end_vaddr or file_path, off_start, off_end */
+	/*
+	 * hold pid, start_vaddr, end_vaddr, new_order or
+	 * file_path, off_start, off_end, new_order
+	 */
 	char input_buf[MAX_INPUT_BUF_SZ];
 	int pid;
 	unsigned long vaddr_start, vaddr_end;
+	unsigned int new_order = 0;
 
 	ret = mutex_lock_interruptible(&split_debug_mutex);
 	if (ret)
@@ -3041,29 +3045,29 @@ static ssize_t split_huge_pages_write(struct file *file, const char __user *buf,
 			goto out;
 		}
 
-		ret = sscanf(buf, "0x%lx,0x%lx", &off_start, &off_end);
-		if (ret != 2) {
+		ret = sscanf(buf, "0x%lx,0x%lx,%d", &off_start, &off_end, &new_order);
+		if (ret != 2 && ret != 3) {
 			ret = -EINVAL;
 			goto out;
 		}
-		ret = split_huge_pages_in_file(file_path, off_start, off_end);
+		ret = split_huge_pages_in_file(file_path, off_start, off_end, new_order);
 		if (!ret)
 			ret = input_len;
 
 		goto out;
 	}
 
-	ret = sscanf(input_buf, "%d,0x%lx,0x%lx", &pid, &vaddr_start, &vaddr_end);
+	ret = sscanf(input_buf, "%d,0x%lx,0x%lx,%d", &pid, &vaddr_start, &vaddr_end, &new_order);
 	if (ret == 1 && pid == 1) {
 		split_huge_pages_all();
 		ret = strlen(input_buf);
 		goto out;
-	} else if (ret != 3) {
+	} else if (ret != 3 && ret != 4) {
 		ret = -EINVAL;
 		goto out;
 	}
 
-	ret = split_huge_pages_pid(pid, vaddr_start, vaddr_end);
+	ret = split_huge_pages_pid(pid, vaddr_start, vaddr_end, new_order);
 	if (!ret)
 		ret = strlen(input_buf);
 out:
diff --git a/tools/testing/selftests/vm/split_huge_page_test.c b/tools/testing/selftests/vm/split_huge_page_test.c
index 52497b7b9f1d..af01e7dca9c8 100644
--- a/tools/testing/selftests/vm/split_huge_page_test.c
+++ b/tools/testing/selftests/vm/split_huge_page_test.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 
 uint64_t pagesize;
 unsigned int pageshift;
@@ -24,10 +25,11 @@ uint64_t pmd_pagesize;
 #define PMD_SIZE_PATH "/sys/kernel/mm/transparent_hugepage/hpage_pmd_size"
 #define SPLIT_DEBUGFS "/sys/kernel/debug/split_huge_pages"
 #define SMAP_PATH "/proc/self/smaps"
+#define THP_FS_PATH "/mnt/thp_fs"
 #define INPUT_MAX 80
 
-#define PID_FMT "%d,0x%lx,0x%lx"
-#define PATH_FMT "%s,0x%lx,0x%lx"
+#define PID_FMT "%d,0x%lx,0x%lx,%d"
+#define PATH_FMT "%s,0x%lx,0x%lx,%d"
 
 #define PFN_MASK     ((1UL<<55)-1)
 #define KPF_THP      (1UL<<22)
@@ -75,23 +77,6 @@ static uint64_t read_pmd_pagesize(void)
 	return strtoul(buf, NULL, 10);
 }
 
-static int write_file(const char *path, const char *buf, size_t buflen)
-{
-	int fd;
-	ssize_t numwritten;
-
-	fd = open(path, O_WRONLY);
-	if (fd == -1)
-		return 0;
-
-	numwritten = write(fd, buf, buflen - 1);
-	close(fd);
-	if (numwritten < 1)
-		return 0;
-
-	return (unsigned int) numwritten;
-}
-
 static void write_debugfs(const char *fmt, ...)
 {
 	char input[INPUT_MAX];
@@ -106,11 +91,6 @@ static void write_debugfs(const char *fmt, ...)
 		printf("%s: Debugfs input is too long\n", __func__);
 		exit(EXIT_FAILURE);
 	}
-
-	if (!write_file(SPLIT_DEBUGFS, input, ret + 1)) {
-		perror(SPLIT_DEBUGFS);
-		exit(EXIT_FAILURE);
-	}
 }
 
 #define MAX_LINE_LENGTH 500
@@ -124,7 +104,7 @@ static bool check_for_pattern(FILE *fp, const char *pattern, char *buf)
 	return false;
 }
 
-static uint64_t check_huge(void *addr)
+static uint64_t check_huge(void *addr, const char *prefix)
 {
 	uint64_t thp = 0;
 	int ret;
@@ -149,13 +129,13 @@ static uint64_t check_huge(void *addr)
 		goto err_out;
 
 	/*
-	 * Fetch the AnonHugePages: in the same block and check the number of
+	 * Fetch the @prefix in the same block and check the number of
 	 * hugepages.
 	 */
-	if (!check_for_pattern(fp, "AnonHugePages:", buffer))
+	if (!check_for_pattern(fp, prefix, buffer))
 		goto err_out;
 
-	if (sscanf(buffer, "AnonHugePages:%10ld kB", &thp) != 1) {
+	if (sscanf(&buffer[strlen(prefix)], "%10ld kB", &thp) != 1) {
 		printf("Reading smap error\n");
 		exit(EXIT_FAILURE);
 	}
@@ -184,7 +164,7 @@ void split_pmd_thp(void)
 	for (i = 0; i < len; i++)
 		one_page[i] = (char)i;
 
-	thp_size = check_huge(one_page);
+	thp_size = check_huge(one_page, "AnonHugePages:");
 	if (!thp_size) {
 		printf("No THP is allocated\n");
 		exit(EXIT_FAILURE);
@@ -192,7 +172,7 @@ void split_pmd_thp(void)
 
 	/* split all THPs */
 	write_debugfs(PID_FMT, getpid(), (uint64_t)one_page,
-		(uint64_t)one_page + len);
+		(uint64_t)one_page + len, 0);
 
 	for (i = 0; i < len; i++)
 		if (one_page[i] != (char)i) {
@@ -201,7 +181,7 @@ void split_pmd_thp(void)
 		}
 
 
-	thp_size = check_huge(one_page);
+	thp_size = check_huge(one_page, "AnonHugePages:");
 	if (thp_size) {
 		printf("Still %ld kB AnonHugePages not split\n", thp_size);
 		exit(EXIT_FAILURE);
@@ -249,7 +229,7 @@ void split_pte_mapped_thp(void)
 	for (i = 0; i < len; i++)
 		one_page[i] = (char)i;
 
-	thp_size = check_huge(one_page);
+	thp_size = check_huge(one_page, "AnonHugePages:");
 	if (!thp_size) {
 		printf("No THP is allocated\n");
 		exit(EXIT_FAILURE);
@@ -284,7 +264,7 @@ void split_pte_mapped_thp(void)
 
 	/* split all remapped THPs */
 	write_debugfs(PID_FMT, getpid(), (uint64_t)pte_mapped,
-		      (uint64_t)pte_mapped + pagesize * 4);
+		      (uint64_t)pte_mapped + pagesize * 4, 0);
 
 	/* smap does not show THPs after mremap, use kpageflags instead */
 	thp_size = 0;
@@ -371,20 +351,193 @@ void split_file_backed_thp(void)
 	printf("file-backed THP split test done, please check dmesg for more information\n");
 }
 
+void create_pagecache_thp_and_fd(const char *testfile, size_t fd_size, int *fd, char **addr)
+{
+	size_t i;
+	int dummy;
+
+	srand(time(NULL));
+
+	*fd = open(testfile, O_CREAT | O_RDWR, 0664);
+	if (*fd == -1) {
+		perror("Failed to create a file at "THP_FS_PATH);
+		exit(EXIT_FAILURE);
+	}
+
+	for (i = 0; i < fd_size; i++) {
+		unsigned char byte = (unsigned char)i;
+
+		write(*fd, &byte, sizeof(byte));
+	}
+	close(*fd);
+	sync();
+	*fd = open("/proc/sys/vm/drop_caches", O_WRONLY);
+	if (*fd == -1) {
+		perror("open drop_caches");
+		exit(EXIT_FAILURE);
+	}
+	if (write(*fd, "3", 1) != 1) {
+		perror("write to drop_caches");
+		exit(EXIT_FAILURE);
+	}
+	close(*fd);
+
+	*fd = open(testfile, O_RDWR);
+	if (*fd == -1) {
+		perror("Failed to open a file at "THP_FS_PATH);
+		exit(EXIT_FAILURE);
+	}
+
+	*addr = mmap(NULL, fd_size, PROT_READ|PROT_WRITE, MAP_SHARED, *fd, 0);
+	if (*addr == (char *)-1) {
+		perror("cannot mmap");
+		exit(1);
+	}
+	madvise(*addr, fd_size, MADV_HUGEPAGE);
+
+	for (size_t i = 0; i < fd_size; i++)
+		dummy += *(*addr + i);
+
+	if (!check_huge(*addr, "FilePmdMapped:")) {
+		printf("No pagecache THP generated, please mount a filesystem "
+			"supporting pagecache THP at "THP_FS_PATH"\n");
+		exit(EXIT_FAILURE);
+	}
+}
+
+void split_thp_in_pagecache_to_order(size_t fd_size, int order)
+{
+	int fd;
+	char *addr;
+	size_t i;
+	const char testfile[] = THP_FS_PATH "/test";
+
+	create_pagecache_thp_and_fd(testfile, fd_size, &fd, &addr);
+
+	printf("split %ld kB pagecache page to order %d ... ", fd_size >> 10, order);
+	write_debugfs(PID_FMT, getpid(), (uint64_t)addr, (uint64_t)addr + fd_size, order);
+
+	for (i = 0; i < fd_size; i++)
+		if (*(addr + i) != (char)i) {
+			printf("%lu byte corrupted in the file\n", i);
+			exit(EXIT_FAILURE);
+		}
+
+	close(fd);
+	unlink(testfile);
+	printf("done\n");
+}
+
+void truncate_thp_in_pagecache_to_order(size_t fd_size, int order)
+{
+	int fd;
+	char *addr;
+	size_t i;
+	const char testfile[] = THP_FS_PATH "/test";
+
+	create_pagecache_thp_and_fd(testfile, fd_size, &fd, &addr);
+
+	printf("truncate %ld kB pagecache page to size %lu kB ... ", fd_size >> 10, 4UL << order);
+	ftruncate(fd, pagesize << order);
+
+	for (i = 0; i < (pagesize << order); i++)
+		if (*(addr + i) != (char)i) {
+			printf("%lu byte corrupted in the file\n", i);
+			exit(EXIT_FAILURE);
+		}
+
+	close(fd);
+	unlink(testfile);
+	printf("done\n");
+}
+
+void punch_hole_in_pagecache_thp(size_t fd_size, off_t offset[], off_t len[], int n)
+{
+	int fd, j;
+	char *addr;
+	size_t i;
+	const char testfile[] = THP_FS_PATH "/test";
+
+	create_pagecache_thp_and_fd(testfile, fd_size, &fd, &addr);
+
+	for (j = 0; j < n; j++) {
+		printf("addr: %lx, punch a hole at offset %ld kB with len %ld kB ... ",
+			(unsigned long)addr, offset[j] >> 10, len[j] >> 10);
+		fallocate(fd, FALLOC_FL_PUNCH_HOLE|FALLOC_FL_KEEP_SIZE, offset[j], len[j]);
+		printf("done\n");
+	}
+
+	for (i = 0; i < fd_size; i++) {
+		int in_hole = 0;
+
+		for (j = 0; j < n; j++)
+			if (i >= offset[j] && i <= (offset[j] + len[j])) {
+				in_hole = 1;
+				break;
+			}
+
+		if (in_hole) {
+			if (*(addr + i)) {
+				printf("%lu byte non-zero after punch\n", i);
+				exit(EXIT_FAILURE);
+			}
+			continue;
+		}
+		if (*(addr + i) != (char)i) {
+			printf("%lu byte corrupted in the file\n", i);
+			exit(EXIT_FAILURE);
+		}
+	}
+
+	close(fd);
+	unlink(testfile);
+}
+
 int main(int argc, char **argv)
 {
+	int i;
+	size_t fd_size;
+	off_t offset[2], len[2];
+
 	if (geteuid() != 0) {
 		printf("Please run the benchmark as root\n");
 		exit(EXIT_FAILURE);
 	}
 
+	setbuf(stdout, NULL);
+
 	pagesize = getpagesize();
 	pageshift = ffs(pagesize) - 1;
 	pmd_pagesize = read_pmd_pagesize();
+	fd_size = 2 * pmd_pagesize;
 
 	split_pmd_thp();
 	split_pte_mapped_thp();
 	split_file_backed_thp();
 
+	for (i = 8; i >= 0; i--)
+		if (i != 1)
+			split_thp_in_pagecache_to_order(fd_size, i);
+
+	/*
+	 * for i is 1, truncate code in the kernel should create order-0 pages
+	 * instead of order-1 THPs, since order-1 THP is not supported. No error
+	 * is expected.
+	 */
+	for (i = 8; i >= 0; i--)
+		truncate_thp_in_pagecache_to_order(fd_size, i);
+
+	offset[0] = 123 * pagesize;
+	offset[1] = 4 * pagesize;
+	len[0] = 200 * pagesize;
+	len[1] = 16 * pagesize;
+	punch_hole_in_pagecache_thp(fd_size, offset, len, 2);
+
+	offset[0] = 259 * pagesize + pagesize / 2;
+	offset[1] = 33 * pagesize;
+	len[0] = 129 * pagesize;
+	len[1] = 16 * pagesize;
+	punch_hole_in_pagecache_thp(fd_size, offset, len, 2);
+
 	return 0;
 }