From patchwork Fri Sep 22 07:09:20 2023
X-Patchwork-Submitter: Yajun Deng <yajun.deng@linux.dev>
X-Patchwork-Id: 13395075
From: Yajun Deng <yajun.deng@linux.dev>
To: akpm@linux-foundation.org, mike.kravetz@oracle.com, muchun.song@linux.dev,
    glider@google.com, elver@google.com, dvyukov@google.com, rppt@kernel.org,
    david@redhat.com, osalvador@suse.de
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kasan-dev@googlegroups.com, Yajun Deng <yajun.deng@linux.dev>
Subject: [PATCH 1/4] mm: pass set_count and set_reserved to __init_single_page
Date: Fri, 22 Sep 2023 15:09:20 +0800
Message-Id: <20230922070923.355656-2-yajun.deng@linux.dev>
In-Reply-To: <20230922070923.355656-1-yajun.deng@linux.dev>
References: <20230922070923.355656-1-yajun.deng@linux.dev>

When we initialize a single page, we may need to mark it reserved, and
some pages, such as compound pages, do not need their page count set.

Pass set_count and set_reserved to __init_single_page() and let the
caller decide whether the page count should be set and whether the page
should be marked reserved.
Signed-off-by: Yajun Deng <yajun.deng@linux.dev>
---
 mm/hugetlb.c  |  2 +-
 mm/internal.h |  3 ++-
 mm/mm_init.c  | 30 ++++++++++++++++--------------
 3 files changed, 19 insertions(+), 16 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e2123d1bb4a2..4f91e47430ce 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3196,7 +3196,7 @@ static void __init hugetlb_folio_init_tail_vmemmap(struct folio *folio,
 	for (pfn = head_pfn + start_page_number; pfn < end_pfn; pfn++) {
 		struct page *page = pfn_to_page(pfn);
 
-		__init_single_page(page, pfn, zone, nid);
+		__init_single_page(page, pfn, zone, nid, true, false);
 		prep_compound_tail((struct page *)folio, pfn - head_pfn);
 		ret = page_ref_freeze(page, 1);
 		VM_BUG_ON(!ret);
diff --git a/mm/internal.h b/mm/internal.h
index 7a961d12b088..8bded7f98493 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1210,7 +1210,8 @@ struct vma_prepare {
 };
 
 void __meminit __init_single_page(struct page *page, unsigned long pfn,
-				unsigned long zone, int nid);
+				unsigned long zone, int nid, bool set_count,
+				bool set_reserved);
 
 /* shrinker related functions */
 unsigned long shrink_slab(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg,
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 06a72c223bce..c40042098a82 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -557,11 +557,13 @@ static void __init find_zone_movable_pfns_for_nodes(void)
 }
 
 void __meminit __init_single_page(struct page *page, unsigned long pfn,
-				unsigned long zone, int nid)
+				unsigned long zone, int nid, bool set_count,
+				bool set_reserved)
 {
 	mm_zero_struct_page(page);
 	set_page_links(page, zone, nid, pfn);
-	init_page_count(page);
+	if (set_count)
+		init_page_count(page);
 	page_mapcount_reset(page);
 	page_cpupid_reset_last(page);
 	page_kasan_tag_reset(page);
@@ -572,6 +574,8 @@ void __meminit __init_single_page(struct page *page, unsigned long pfn,
 	if (!is_highmem_idx(zone))
 		set_page_address(page, __va(pfn << PAGE_SHIFT));
 #endif
+	if (set_reserved)
+		__SetPageReserved(page);
 }
 
 #ifdef CONFIG_NUMA
@@ -714,7 +718,7 @@ static void __meminit init_reserved_page(unsigned long pfn, int nid)
 		if (zone_spans_pfn(zone, pfn))
 			break;
 	}
-	__init_single_page(pfn_to_page(pfn), pfn, zid, nid);
+	__init_single_page(pfn_to_page(pfn), pfn, zid, nid, true, false);
 }
 #else
 static inline void pgdat_set_deferred_range(pg_data_t *pgdat) {}
@@ -821,8 +825,8 @@ static void __init init_unavailable_range(unsigned long spfn,
 			pfn = pageblock_end_pfn(pfn) - 1;
 			continue;
 		}
-		__init_single_page(pfn_to_page(pfn), pfn, zone, node);
-		__SetPageReserved(pfn_to_page(pfn));
+		__init_single_page(pfn_to_page(pfn), pfn, zone, node,
+				   true, true);
 		pgcnt++;
 	}
 
@@ -884,7 +888,7 @@ void __meminit memmap_init_range(unsigned long size, int nid, unsigned long zone
 		}
 
 		page = pfn_to_page(pfn);
-		__init_single_page(page, pfn, zone, nid);
+		__init_single_page(page, pfn, zone, nid, true, false);
 		if (context == MEMINIT_HOTPLUG)
 			__SetPageReserved(page);
 
@@ -965,11 +969,9 @@ static void __init memmap_init(void)
 #ifdef CONFIG_ZONE_DEVICE
 static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
 					  unsigned long zone_idx, int nid,
-					  struct dev_pagemap *pgmap)
+					  struct dev_pagemap *pgmap,
+					  bool set_count)
 {
-
-	__init_single_page(page, pfn, zone_idx, nid);
-
 	/*
 	 * Mark page reserved as it will need to wait for onlining
 	 * phase for it to be fully associated with a zone.
@@ -977,7 +979,7 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
 	 * We can use the non-atomic __set_bit operation for setting
 	 * the flag as we are still initializing the pages.
 	 */
-	__SetPageReserved(page);
+	__init_single_page(page, pfn, zone_idx, nid, set_count, true);
 
 	/*
 	 * ZONE_DEVICE pages union ->lru with a ->pgmap back pointer
@@ -1041,7 +1043,7 @@ static void __ref memmap_init_compound(struct page *head,
 	for (pfn = head_pfn + 1; pfn < end_pfn; pfn++) {
 		struct page *page = pfn_to_page(pfn);
 
-		__init_zone_device_page(page, pfn, zone_idx, nid, pgmap);
+		__init_zone_device_page(page, pfn, zone_idx, nid, pgmap, false);
 		prep_compound_tail(head, pfn - head_pfn);
 		set_page_count(page, 0);
 
@@ -1084,7 +1086,7 @@ void __ref memmap_init_zone_device(struct zone *zone,
 	for (pfn = start_pfn; pfn < end_pfn; pfn += pfns_per_compound) {
 		struct page *page = pfn_to_page(pfn);
 
-		__init_zone_device_page(page, pfn, zone_idx, nid, pgmap);
+		__init_zone_device_page(page, pfn, zone_idx, nid, pgmap, true);
 
 		if (pfns_per_compound == 1)
 			continue;
@@ -2058,7 +2060,7 @@ static unsigned long __init deferred_init_pages(struct zone *zone,
 		} else {
 			page++;
 		}
-		__init_single_page(page, pfn, zid, nid);
+		__init_single_page(page, pfn, zid, nid, true, false);
 		nr_pages++;
 	}
 	return (nr_pages);
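
For readers skimming the series, the calling convention above can be
pictured with a small standalone C model. This is not the kernel code:
struct toy_page and the toy_* helpers below are invented stand-ins for
struct page, mm_zero_struct_page(), init_page_count() and
__SetPageReserved(); they only model which call sites want the refcount
and the reserved flag.

	/* Standalone model of the new __init_single_page() calling convention. */
	#include <stdbool.h>
	#include <stdio.h>

	struct toy_page {
		int refcount;
		bool reserved;
	};

	/* Mirrors the shape of __init_single_page(..., set_count, set_reserved). */
	static void toy_init_single_page(struct toy_page *page, bool set_count,
					 bool set_reserved)
	{
		*page = (struct toy_page){ 0 };	/* mm_zero_struct_page() stand-in */
		if (set_count)
			page->refcount = 1;	/* init_page_count() stand-in */
		if (set_reserved)
			page->reserved = true;	/* __SetPageReserved() stand-in */
	}

	int main(void)
	{
		struct toy_page tail, unavail;

		/* ZONE_DEVICE compound tail pages skip the count (set_count=false). */
		toy_init_single_page(&tail, false, false);
		/* init_unavailable_range() wants both the count and the reserved flag. */
		toy_init_single_page(&unavail, true, true);

		printf("tail: count=%d reserved=%d\n", tail.refcount, tail.reserved);
		printf("unavail: count=%d reserved=%d\n", unavail.refcount, unavail.reserved);
		return 0;
	}

The design point is that each caller now states explicitly whether the
refcount and the reserved flag are wanted, instead of always getting the
refcount and setting the reserved bit in a separate step.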
From patchwork Fri Sep 22 07:09:21 2023
X-Patchwork-Submitter: Yajun Deng <yajun.deng@linux.dev>
X-Patchwork-Id: 13395076
From: Yajun Deng <yajun.deng@linux.dev>
To: akpm@linux-foundation.org, mike.kravetz@oracle.com, muchun.song@linux.dev,
    glider@google.com, elver@google.com, dvyukov@google.com, rppt@kernel.org,
    david@redhat.com, osalvador@suse.de
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kasan-dev@googlegroups.com, Yajun Deng <yajun.deng@linux.dev>
Subject: [PATCH 2/4] mm: Introduce MEMINIT_LATE context
Date: Fri, 22 Sep 2023 15:09:21 +0800
Message-Id: <20230922070923.355656-3-yajun.deng@linux.dev>
In-Reply-To: <20230922070923.355656-1-yajun.deng@linux.dev>
References: <20230922070923.355656-1-yajun.deng@linux.dev>

__free_pages_core() always resets the page count and clears the
reserved flag, which takes a lot of time when there are many pages.

Introduce a MEMINIT_LATE context. If the context is MEMINIT_EARLY,
there is no need to reset the page count and clear the reserved flag.
Signed-off-by: Yajun Deng <yajun.deng@linux.dev>
---
 include/linux/mmzone.h |  1 +
 mm/internal.h          |  7 ++++---
 mm/kmsan/init.c        |  2 +-
 mm/memblock.c          |  4 ++--
 mm/memory_hotplug.c    |  2 +-
 mm/mm_init.c           | 11 ++++++-----
 mm/page_alloc.c        | 14 ++++++++------
 7 files changed, 23 insertions(+), 18 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 1e9cf3aa1097..253e792d409f 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1442,6 +1442,7 @@ bool zone_watermark_ok_safe(struct zone *z, unsigned int order,
  */
 enum meminit_context {
 	MEMINIT_EARLY,
+	MEMINIT_LATE,
 	MEMINIT_HOTPLUG,
 };
 
diff --git a/mm/internal.h b/mm/internal.h
index 8bded7f98493..31737196257c 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -394,9 +394,10 @@ static inline void clear_zone_contiguous(struct zone *zone)
 extern int __isolate_free_page(struct page *page, unsigned int order);
 extern void __putback_isolated_page(struct page *page, unsigned int order,
 				    int mt);
-extern void memblock_free_pages(struct page *page, unsigned long pfn,
-				unsigned int order);
-extern void __free_pages_core(struct page *page, unsigned int order);
+extern void memblock_free_pages(unsigned long pfn, unsigned int order,
+				enum meminit_context context);
+extern void __free_pages_core(struct page *page, unsigned int order,
+			      enum meminit_context context);
 
 /*
  * This will have no effect, other than possibly generating a warning, if the
diff --git a/mm/kmsan/init.c b/mm/kmsan/init.c
index ffedf4dbc49d..b7ed98b854a6 100644
--- a/mm/kmsan/init.c
+++ b/mm/kmsan/init.c
@@ -172,7 +172,7 @@ static void do_collection(void)
 		shadow = smallstack_pop(&collect);
 		origin = smallstack_pop(&collect);
 		kmsan_setup_meta(page, shadow, origin, collect.order);
-		__free_pages_core(page, collect.order);
+		__free_pages_core(page, collect.order, MEMINIT_LATE);
 	}
 }
 
diff --git a/mm/memblock.c b/mm/memblock.c
index 5a88d6d24d79..a32364366bb2 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1685,7 +1685,7 @@ void __init memblock_free_late(phys_addr_t base, phys_addr_t size)
 	end = PFN_DOWN(base + size);
 
 	for (; cursor < end; cursor++) {
-		memblock_free_pages(pfn_to_page(cursor), cursor, 0);
+		memblock_free_pages(cursor, 0, MEMINIT_LATE);
 		totalram_pages_inc();
 	}
 }
@@ -2089,7 +2089,7 @@ static void __init __free_pages_memory(unsigned long start, unsigned long end)
 		while (start + (1UL << order) > end)
 			order--;
 
-		memblock_free_pages(pfn_to_page(start), start, order);
+		memblock_free_pages(start, order, MEMINIT_LATE);
 
 		start += (1UL << order);
 	}
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 3b301c4023ff..d38548265f26 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -634,7 +634,7 @@ void generic_online_page(struct page *page, unsigned int order)
 	 * case in page freeing fast path.
 	 */
 	debug_pagealloc_map_pages(page, 1 << order);
-	__free_pages_core(page, order);
+	__free_pages_core(page, order, MEMINIT_HOTPLUG);
 	totalram_pages_add(1UL << order);
 }
 EXPORT_SYMBOL_GPL(generic_online_page);
diff --git a/mm/mm_init.c b/mm/mm_init.c
index c40042098a82..0a4437aae30d 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1976,7 +1976,7 @@ static void __init deferred_free_range(unsigned long pfn,
 	if (nr_pages == MAX_ORDER_NR_PAGES && IS_MAX_ORDER_ALIGNED(pfn)) {
 		for (i = 0; i < nr_pages; i += pageblock_nr_pages)
 			set_pageblock_migratetype(page + i, MIGRATE_MOVABLE);
-		__free_pages_core(page, MAX_ORDER);
+		__free_pages_core(page, MAX_ORDER, MEMINIT_LATE);
 		return;
 	}
 
@@ -1986,7 +1986,7 @@ static void __init deferred_free_range(unsigned long pfn,
 	for (i = 0; i < nr_pages; i++, page++, pfn++) {
 		if (pageblock_aligned(pfn))
 			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
-		__free_pages_core(page, 0);
+		__free_pages_core(page, 0, MEMINIT_LATE);
 	}
 }
 
@@ -2568,9 +2568,10 @@ void __init set_dma_reserve(unsigned long new_dma_reserve)
 	dma_reserve = new_dma_reserve;
 }
 
-void __init memblock_free_pages(struct page *page, unsigned long pfn,
-				unsigned int order)
+void __init memblock_free_pages(unsigned long pfn, unsigned int order,
+				enum meminit_context context)
 {
+	struct page *page = pfn_to_page(pfn);
 	if (IS_ENABLED(CONFIG_DEFERRED_STRUCT_PAGE_INIT)) {
 		int nid = early_pfn_to_nid(pfn);
 
@@ -2583,7 +2584,7 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn,
 		/* KMSAN will take care of these pages. */
 		return;
 	}
-	__free_pages_core(page, order);
+	__free_pages_core(page, order, context);
 }
 
 DEFINE_STATIC_KEY_MAYBE(CONFIG_INIT_ON_ALLOC_DEFAULT_ON, init_on_alloc);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 06be8821d833..6c4f4531bee0 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1278,7 +1278,7 @@ static void __free_pages_ok(struct page *page, unsigned int order,
 	__count_vm_events(PGFREE, 1 << order);
 }
 
-void __free_pages_core(struct page *page, unsigned int order)
+void __free_pages_core(struct page *page, unsigned int order, enum meminit_context context)
 {
 	unsigned int nr_pages = 1 << order;
 	struct page *p = page;
@@ -1289,14 +1289,16 @@ void __free_pages_core(struct page *page, unsigned int order)
 	 * of all pages to 1 ("allocated"/"not free"). We have to set the
 	 * refcount of all involved pages to 0.
 	 */
-	prefetchw(p);
-	for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
-		prefetchw(p + 1);
+	if (context != MEMINIT_EARLY) {
+		prefetchw(p);
+		for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
+			prefetchw(p + 1);
+			__ClearPageReserved(p);
+			set_page_count(p, 0);
+		}
 		__ClearPageReserved(p);
 		set_page_count(p, 0);
 	}
-	__ClearPageReserved(p);
-	set_page_count(p, 0);
 
 	atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
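
The effect of the new context argument on __free_pages_core() can be
sketched in the same standalone style. Again, this is an invented model
rather than kernel code, and it shows where the series is heading: in
this patch every existing caller still passes MEMINIT_LATE or
MEMINIT_HOTPLUG, so the skipped path only becomes reachable once the
later patches stop giving early pages a refcount and reserved flag.

	/* Standalone sketch of the context-dependent reset loop. */
	#include <stdbool.h>
	#include <stdio.h>

	enum meminit_context {
		MEMINIT_EARLY,
		MEMINIT_LATE,
		MEMINIT_HOTPLUG,
	};

	struct toy_page {
		int refcount;
		bool reserved;
	};

	static void toy_free_pages_core(struct toy_page *pages, unsigned int nr_pages,
					enum meminit_context context)
	{
		/*
		 * In the full series, MEMINIT_EARLY pages are never given a
		 * refcount or the reserved flag, so the per-page reset loop
		 * can be skipped entirely.
		 */
		if (context != MEMINIT_EARLY) {
			for (unsigned int i = 0; i < nr_pages; i++) {
				pages[i].reserved = false;	/* __ClearPageReserved() */
				pages[i].refcount = 0;		/* set_page_count(p, 0) */
			}
		}
		/* ...then the block would be handed to the buddy allocator. */
	}

	int main(void)
	{
		struct toy_page block[4] = { { 1, true }, { 1, true }, { 1, true }, { 1, true } };

		toy_free_pages_core(block, 4, MEMINIT_HOTPLUG);	/* loop runs */
		toy_free_pages_core(block, 4, MEMINIT_EARLY);	/* loop skipped */
		printf("first page: count=%d reserved=%d\n",
		       block[0].refcount, block[0].reserved);
		return 0;
	}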
From patchwork Fri Sep 22 07:09:22 2023
X-Patchwork-Submitter: Yajun Deng <yajun.deng@linux.dev>
X-Patchwork-Id: 13395077
From: Yajun Deng <yajun.deng@linux.dev>
To: akpm@linux-foundation.org, mike.kravetz@oracle.com, muchun.song@linux.dev,
    glider@google.com, elver@google.com, dvyukov@google.com, rppt@kernel.org,
    david@redhat.com, osalvador@suse.de
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kasan-dev@googlegroups.com, Yajun Deng <yajun.deng@linux.dev>
Subject: [PATCH 3/4] mm: Set page count and mark page reserved in reserve_bootmem_region
Date: Fri, 22 Sep 2023 15:09:22 +0800
Message-Id: <20230922070923.355656-4-yajun.deng@linux.dev>
In-Reply-To: <20230922070923.355656-1-yajun.deng@linux.dev>
References: <20230922070923.355656-1-yajun.deng@linux.dev>

memmap_init_range() sets the page count of all pages, but the count of
free pages is then reset in __free_pages_core(). These are opposite
operations, which is unnecessary and time-consuming in the
MEMINIT_EARLY context.

Set the page count and mark the page reserved in
reserve_bootmem_region() when in the MEMINIT_EARLY context, and change
the context from MEMINIT_LATE to MEMINIT_EARLY in __free_pages_memory().

At the same time, the list head initialization in
reserve_bootmem_region() is no longer needed, as it is already done in
__init_single_page().

The following data was measured on an x86 machine with 190GB of RAM.
before: free_low_memory_core_early() 342ms
after:  free_low_memory_core_early() 286ms

Signed-off-by: Yajun Deng <yajun.deng@linux.dev>
---
 mm/memblock.c   |  2 +-
 mm/mm_init.c    | 20 ++++++++++++++------
 mm/page_alloc.c |  8 +++++---
 3 files changed, 20 insertions(+), 10 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index a32364366bb2..9276f1819982 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -2089,7 +2089,7 @@ static void __init __free_pages_memory(unsigned long start, unsigned long end)
 		while (start + (1UL << order) > end)
 			order--;
 
-		memblock_free_pages(start, order, MEMINIT_LATE);
+		memblock_free_pages(start, order, MEMINIT_EARLY);
 
 		start += (1UL << order);
 	}
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 0a4437aae30d..1cc310f706a9 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -718,7 +718,7 @@ static void __meminit init_reserved_page(unsigned long pfn, int nid)
 		if (zone_spans_pfn(zone, pfn))
 			break;
 	}
-	__init_single_page(pfn_to_page(pfn), pfn, zid, nid, true, false);
+	__init_single_page(pfn_to_page(pfn), pfn, zid, nid, false, false);
 }
 #else
 static inline void pgdat_set_deferred_range(pg_data_t *pgdat) {}
@@ -756,8 +756,8 @@ void __meminit reserve_bootmem_region(phys_addr_t start,
 
 			init_reserved_page(start_pfn, nid);
 
-			/* Avoid false-positive PageTail() */
-			INIT_LIST_HEAD(&page->lru);
+			/* Set page count for the reserve region */
+			init_page_count(page);
 
 			/*
 			 * no need for atomic set_bit because the struct
@@ -888,9 +888,17 @@ void __meminit memmap_init_range(unsigned long size, int nid, unsigned long zone
 		}
 
 		page = pfn_to_page(pfn);
-		__init_single_page(page, pfn, zone, nid, true, false);
-		if (context == MEMINIT_HOTPLUG)
-			__SetPageReserved(page);
+
+		/* If the context is MEMINIT_EARLY, we will set page count and
+		 * mark page reserved in reserve_bootmem_region, the free region
+		 * wouldn't have page count and reserved flag and we don't
+		 * need to reset pages count and clear reserved flag in
+		 * __free_pages_core.
+		 */
+		if (context == MEMINIT_EARLY)
+			__init_single_page(page, pfn, zone, nid, false, false);
+		else
+			__init_single_page(page, pfn, zone, nid, true, true);
 
 		/*
 		 * Usually, we want to mark the pageblock MIGRATE_MOVABLE,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6c4f4531bee0..6ac58c5f3b00 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1285,9 +1285,11 @@ void __free_pages_core(struct page *page, unsigned int order, enum meminit_conte
 	unsigned int loop;
 
 	/*
-	 * When initializing the memmap, __init_single_page() sets the refcount
-	 * of all pages to 1 ("allocated"/"not free"). We have to set the
-	 * refcount of all involved pages to 0.
+	 * When initializing the memmap, memmap_init_range sets the refcount
+	 * of all pages to 1 ("allocated"/"not free") in hotplug context. We
+	 * have to set the refcount of all involved pages to 0. Otherwise,
+	 * we don't do it, as reserve_bootmem_region only set the refcount on
+	 * reserve region ("allocated") in early context.
 	 */
 	if (context != MEMINIT_EARLY) {
 		prefetchw(p);
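
The resulting division of labour in the MEMINIT_EARLY path can be
sketched as follows (standalone model with invented toy_* names, not
kernel code): memmap initialization leaves free pages with no refcount
and no reserved flag, only reserve_bootmem_region() marks the
memblock-reserved ranges, and the early free path has nothing to undo.

	/* Standalone model of who sets what in the early-boot path. */
	#include <stdbool.h>
	#include <stdio.h>

	struct toy_page {
		int refcount;
		bool reserved;
	};

	/* memmap_init_range() in MEMINIT_EARLY: set_count=false, set_reserved=false */
	static void toy_memmap_init_early(struct toy_page *pages, unsigned int nr)
	{
		for (unsigned int i = 0; i < nr; i++)
			pages[i] = (struct toy_page){ 0 };
	}

	/* reserve_bootmem_region(): only the reserved range gets count + flag */
	static void toy_reserve_region(struct toy_page *pages, unsigned int start,
				       unsigned int end)
	{
		for (unsigned int i = start; i < end; i++) {
			pages[i].refcount = 1;	/* init_page_count() stand-in */
			pages[i].reserved = true;
		}
	}

	int main(void)
	{
		struct toy_page memmap[8];

		toy_memmap_init_early(memmap, 8);
		toy_reserve_region(memmap, 0, 2);	/* e.g. a memblock-reserved range */

		/*
		 * The free pages [2..7] go straight to the buddy allocator:
		 * __free_pages_core(..., MEMINIT_EARLY) has no per-page reset loop.
		 */
		printf("page 0: count=%d reserved=%d\n", memmap[0].refcount, memmap[0].reserved);
		printf("page 5: count=%d reserved=%d\n", memmap[5].refcount, memmap[5].reserved);
		return 0;
	}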
From patchwork Fri Sep 22 07:09:23 2023
X-Patchwork-Submitter: Yajun Deng <yajun.deng@linux.dev>
X-Patchwork-Id: 13395078
From: Yajun Deng <yajun.deng@linux.dev>
To: akpm@linux-foundation.org, mike.kravetz@oracle.com, muchun.song@linux.dev,
    glider@google.com, elver@google.com, dvyukov@google.com, rppt@kernel.org,
    david@redhat.com, osalvador@suse.de
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kasan-dev@googlegroups.com, Yajun Deng <yajun.deng@linux.dev>
Subject: [PATCH 4/4] mm: don't set page count in deferred_init_pages
Date: Fri, 22 Sep 2023 15:09:23 +0800
Message-Id: <20230922070923.355656-5-yajun.deng@linux.dev>
In-Reply-To: <20230922070923.355656-1-yajun.deng@linux.dev>
References: <20230922070923.355656-1-yajun.deng@linux.dev>

The page count operations in deferred_init_pages() and
deferred_free_range() are opposite operations, which is unnecessary and
time-consuming. Don't set the page count in deferred_init_pages(), as
it will be reset later.

The following data was measured on an x86 machine with 190GB of RAM.
before: node 0 deferred pages initialised in 78ms
after:  node 0 deferred pages initialised in 72ms

Signed-off-by: Yajun Deng <yajun.deng@linux.dev>
---
 mm/mm_init.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/mm_init.c b/mm/mm_init.c
index 1cc310f706a9..fe78f6916c66 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1984,7 +1984,7 @@ static void __init deferred_free_range(unsigned long pfn,
 	if (nr_pages == MAX_ORDER_NR_PAGES && IS_MAX_ORDER_ALIGNED(pfn)) {
 		for (i = 0; i < nr_pages; i += pageblock_nr_pages)
 			set_pageblock_migratetype(page + i, MIGRATE_MOVABLE);
-		__free_pages_core(page, MAX_ORDER, MEMINIT_LATE);
+		__free_pages_core(page, MAX_ORDER, MEMINIT_EARLY);
 		return;
 	}
 
@@ -1994,7 +1994,7 @@ static void __init deferred_free_range(unsigned long pfn,
 	for (i = 0; i < nr_pages; i++, page++, pfn++) {
 		if (pageblock_aligned(pfn))
 			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
-		__free_pages_core(page, 0, MEMINIT_LATE);
+		__free_pages_core(page, 0, MEMINIT_EARLY);
 	}
 }
 
@@ -2068,7 +2068,7 @@ static unsigned long __init deferred_init_pages(struct zone *zone,
 		} else {
 			page++;
 		}
-		__init_single_page(page, pfn, zid, nid, true, false);
+		__init_single_page(page, pfn, zid, nid, false, false);
 		nr_pages++;
 	}
 	return (nr_pages);
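
The redundant pair of operations removed here can be reduced to a
minimal standalone model (invented names, not kernel code): the old
deferred path set a refcount that the free path immediately cleared,
while the new path never sets it in the first place.

	/* Minimal model of the set-then-reset pair eliminated by this patch. */
	#include <stdio.h>

	struct toy_page { int refcount; };

	static void old_deferred_path(struct toy_page *p)
	{
		p->refcount = 1;	/* deferred_init_pages(): init_page_count() */
		p->refcount = 0;	/* __free_pages_core(): set_page_count(p, 0) */
	}

	static void new_deferred_path(struct toy_page *p)
	{
		p->refcount = 0;	/* never set, nothing to undo (MEMINIT_EARLY) */
	}

	int main(void)
	{
		struct toy_page a = { -1 }, b = { -1 };

		old_deferred_path(&a);
		new_deferred_path(&b);
		printf("old=%d new=%d\n", a.refcount, b.refcount);	/* both end at 0 */
		return 0;
	}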