From patchwork Thu Aug 4 06:55:15 2022
From: lizhe.67@bytedance.com
To: akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, lizefan.x@bytedance.com, yuanzhu@bytedance.com, lizhe.67@bytedance.com
Subject: [RFC] page_ext: move up page_ext_init() to catch early page allocation if DEFERRED_STRUCT_PAGE_INIT is n
Date: Thu, 4 Aug 2022 14:55:15 +0800
Message-Id: <20220804065515.85794-1-lizhe.67@bytedance.com>
From: Li Zhe

In commit 2f1ee0913ce5 ("Revert "mm: use early_pfn_to_nid in page_ext_init""), page_ext_init() was moved after page_alloc_init_late() to avoid a boot panic.
As a result, the current kernel cannot track early page allocations even when the struct pages have been initialized early. This patch moves page_ext_init() up so that early page allocations are caught when DEFERRED_STRUCT_PAGE_INIT is n. After this patch, setting DEFERRED_STRUCT_PAGE_INIT to n is enough to analyze early page allocations. This is especially useful when the amount of free memory right after boot differs from one boot to another.

Signed-off-by: Li Zhe
---
 include/linux/page_ext.h | 28 ++++++++++++++++++++++++++--
 init/main.c              |  7 +++++--
 mm/page_ext.c            |  2 +-
 3 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
index fabb2e1e087f..82ebca63779c 100644
--- a/include/linux/page_ext.h
+++ b/include/linux/page_ext.h
@@ -43,14 +43,34 @@ extern void pgdat_page_ext_init(struct pglist_data *pgdat);
 static inline void page_ext_init_flatmem(void)
 {
 }
-extern void page_ext_init(void);
 static inline void page_ext_init_flatmem_late(void)
 {
 }
+extern void _page_ext_init(void);
+#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
+static inline void page_ext_init_early(void)
+{
+}
+static inline void page_ext_init_late(void)
+{
+	_page_ext_init();
+}
+#else
+static inline void page_ext_init_early(void)
+{
+	_page_ext_init();
+}
+static inline void page_ext_init_late(void)
+{
+}
+#endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */
 #else
 extern void page_ext_init_flatmem(void);
 extern void page_ext_init_flatmem_late(void);
-static inline void page_ext_init(void)
+static inline void page_ext_init_early(void)
+{
+}
+static inline void page_ext_init_late(void)
 {
 }
 #endif
@@ -80,6 +100,10 @@ static inline void page_ext_init(void)
 {
 }
 
+static inline void page_ext_init_late(void)
+{
+}
+
 static inline void page_ext_init_flatmem_late(void)
 {
 }
diff --git a/init/main.c b/init/main.c
index 91642a4e69be..7f9533ba527d 100644
--- a/init/main.c
+++ b/init/main.c
@@ -845,6 +845,7 @@ static void __init mm_init(void)
 	 * slab is ready so that stack_depot_init() works properly
 	 */
 	page_ext_init_flatmem_late();
+	page_ext_init_early();
 	kmemleak_init();
 	pgtable_init();
 	debug_objects_mem_init();
@@ -1605,8 +1606,10 @@ static noinline void __init kernel_init_freeable(void)
 	padata_init();
 
 	page_alloc_init_late();
-	/* Initialize page ext after all struct pages are initialized. */
-	page_ext_init();
+	/* Initialize page ext after all struct pages are initialized if
+	 * CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled
+	 */
+	page_ext_init_late();
 
 	do_basic_setup();
diff --git a/mm/page_ext.c b/mm/page_ext.c
index 3dc715d7ac29..50419e7349cb 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -378,7 +378,7 @@ static int __meminit page_ext_callback(struct notifier_block *self,
 	return notifier_from_errno(ret);
 }
 
-void __init page_ext_init(void)
+void __init _page_ext_init(void)
 {
 	unsigned long pfn;
 	int nid;