From patchwork Tue Mar 21 17:05:00 2023
X-Patchwork-Id: 13182950
From: Mike Rapoport
Subject: [PATCH v2 01/14] mips: fix comment about pgtable_init()
Date: Tue, 21 Mar 2023 19:05:00 +0200
Message-Id: <20230321170513.2401534-2-rppt@kernel.org>

From: "Mike Rapoport (IBM)"

The comment about fixrange_init() says that it is called from
pgtable_init(), while the actual caller is pagetable_init(). Update the
comment to match the code.
Signed-off-by: Mike Rapoport (IBM)
Reviewed-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
---
 arch/mips/include/asm/fixmap.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/mips/include/asm/fixmap.h b/arch/mips/include/asm/fixmap.h
index beea14761cef..b037718d7e8b 100644
--- a/arch/mips/include/asm/fixmap.h
+++ b/arch/mips/include/asm/fixmap.h
@@ -70,7 +70,7 @@ enum fixed_addresses {
 #include
 
 /*
- * Called from pgtable_init()
+ * Called from pagetable_init()
  */
 extern void fixrange_init(unsigned long start, unsigned long end,
 	pgd_t *pgd_base);

From patchwork Tue Mar 21 17:05:01 2023
X-Patchwork-Id: 13182951
From: Mike Rapoport
Subject: [PATCH v2 02/14] mm/page_alloc: add helper for checking if check_pages_enabled
Date: Tue, 21 Mar 2023 19:05:01 +0200
Message-Id: <20230321170513.2401534-3-rppt@kernel.org>

From: "Mike Rapoport (IBM)"

Instead of duplicating the long static_branch_unlikely(&check_pages_enabled)
expression, wrap it in a helper function, is_check_pages_enabled().

Signed-off-by: Mike Rapoport (IBM)
Reviewed-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
---
 mm/page_alloc.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 87d760236dba..e1149d54d738 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -245,6 +245,11 @@ EXPORT_SYMBOL(init_on_free);
 /* perform sanity checks on struct pages being allocated or freed */
 static DEFINE_STATIC_KEY_MAYBE(CONFIG_DEBUG_VM, check_pages_enabled);
 
+static inline bool is_check_pages_enabled(void)
+{
+	return static_branch_unlikely(&check_pages_enabled);
+}
+
 static bool _init_on_alloc_enabled_early __read_mostly
 				= IS_ENABLED(CONFIG_INIT_ON_ALLOC_DEFAULT_ON);
 static int __init early_init_on_alloc(char *buf)
@@ -1443,7 +1448,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 		for (i = 1; i < (1 << order); i++) {
 			if (compound)
 				bad += free_tail_pages_check(page, page + i);
-			if (static_branch_unlikely(&check_pages_enabled)) {
+			if (is_check_pages_enabled()) {
 				if (unlikely(free_page_is_bad(page + i))) {
 					bad++;
 					continue;
@@ -1456,7 +1461,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	page->mapping = NULL;
 	if (memcg_kmem_online() && PageMemcgKmem(page))
 		__memcg_kmem_uncharge_page(page, order);
-	if (static_branch_unlikely(&check_pages_enabled)) {
+	if (is_check_pages_enabled()) {
 		if (free_page_is_bad(page))
 			bad++;
 		if (bad)
@@ -2366,7 +2371,7 @@ static int check_new_page(struct page *page)
 
 static inline bool check_new_pages(struct page *page, unsigned int order)
 {
-	if (static_branch_unlikely(&check_pages_enabled)) {
+	if (is_check_pages_enabled()) {
 		for (int i = 0; i < (1 << order); i++) {
 			struct page *p = page + i;
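A note on the pattern above: check_pages_enabled is a static key, so the
disabled case costs only a patched-out branch, and hiding the
static_branch_unlikely() test behind a tiny inline helper keeps call sites
short without losing that property. A minimal kernel-style sketch of the
idiom follows; the names (my_checks_key and friends) are hypothetical, and
the fragment assumes a kernel build environment rather than being a
standalone program.

/* Sketch of the static-key-behind-a-helper idiom; names are made up. */
#include <linux/init.h>
#include <linux/jump_label.h>
#include <linux/printk.h>

/* branch is off by default; DEFINE_STATIC_KEY_TRUE flips the polarity */
static DEFINE_STATIC_KEY_FALSE(my_checks_key);

static inline bool my_checks_enabled(void)
{
	/* compiles to a NOP that is live-patched when the key is enabled */
	return static_branch_unlikely(&my_checks_key);
}

static void my_hot_path(void)
{
	if (my_checks_enabled())	/* call sites stay short and readable */
		pr_debug("running expensive sanity checks\n");
}

static int __init my_checks_setup(void)
{
	static_branch_enable(&my_checks_key);	/* decided once at boot */
	return 0;
}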
From patchwork Tue Mar 21 17:05:03 2023
X-Patchwork-Id: 13182952
From: Mike Rapoport
Subject: [PATCH v2 04/14] mm: handle hashdist initialization in mm/mm_init.c
Date: Tue, 21 Mar 2023 19:05:03 +0200
Message-Id: <20230321170513.2401534-5-rppt@kernel.org>

From: "Mike Rapoport (IBM)"

The hashdist variable must be initialized before the first call to
alloc_large_system_hash(), and free_area_init() looks like a better place
for that than page_alloc_init(). Move hashdist handling to mm/mm_init.c.

Signed-off-by: Mike Rapoport (IBM)
Acked-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
---
 mm/mm_init.c    | 22 ++++++++++++++++++++++
 mm/page_alloc.c | 18 ------------------
 2 files changed, 22 insertions(+), 18 deletions(-)

diff --git a/mm/mm_init.c b/mm/mm_init.c
index 68d0187c7886..2e60c7186132 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -607,6 +607,25 @@ int __meminit early_pfn_to_nid(unsigned long pfn)
 
 	return nid;
 }
+
+int hashdist = HASHDIST_DEFAULT;
+
+static int __init set_hashdist(char *str)
+{
+	if (!str)
+		return 0;
+	hashdist = simple_strtoul(str, &str, 0);
+	return 1;
+}
+__setup("hashdist=", set_hashdist);
+
+static inline void fixup_hashdist(void)
+{
+	if (num_node_state(N_MEMORY) == 1)
+		hashdist = 0;
+}
+#else
+static inline void fixup_hashdist(void) {}
 #endif /* CONFIG_NUMA */
 
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
@@ -1855,6 +1874,9 @@ void __init free_area_init(unsigned long *max_zone_pfn)
 	}
 
 	memmap_init();
+
+	/* disable hash distribution for systems with a single node */
+	fixup_hashdist();
 }
 
 /**
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c56c147bdf27..ff6a2fff2880 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6383,28 +6383,10 @@ static int page_alloc_cpu_online(unsigned int cpu)
 	return 0;
 }
 
-#ifdef CONFIG_NUMA
-int hashdist = HASHDIST_DEFAULT;
-
-static int __init set_hashdist(char *str)
-{
-	if (!str)
-		return 0;
-	hashdist = simple_strtoul(str, &str, 0);
-	return 1;
-}
-__setup("hashdist=", set_hashdist);
-#endif
-
 void __init page_alloc_init(void)
 {
 	int ret;
 
-#ifdef CONFIG_NUMA
-	if (num_node_state(N_MEMORY) == 1)
-		hashdist = 0;
-#endif
-
 	ret = cpuhp_setup_state_nocalls(CPUHP_PAGE_ALLOC,
 					"mm/page_alloc:pcp",
 					page_alloc_cpu_online,
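For context, __setup() registers a handler that runs while start_kernel()
parses the kernel command line, which is why set_hashdist() is ready before
the first alloc_large_system_hash() call; booting with hashdist=0 disables
distributing the large system hash tables across NUMA nodes. A hedged
kernel-style sketch of the same idiom for a made-up parameter:

/* Sketch of the __setup() boot-parameter idiom; "myknob=" is made up. */
#include <linux/init.h>
#include <linux/kernel.h>

static unsigned long myknob;

static int __init set_myknob(char *str)
{
	if (!str)
		return 0;		/* 0: option not consumed */
	myknob = simple_strtoul(str, &str, 0);
	return 1;			/* 1: option handled */
}
__setup("myknob=", set_myknob);

A handler returns 1 to mark the option as consumed and 0 to let the
unknown-parameter path see it; booting with, say, myknob=2 would set the
variable before most of the kernel initializes.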
From patchwork Tue Mar 21 17:05:04 2023
X-Patchwork-Id: 13182953
From: Mike Rapoport
Subject: [PATCH v2 05/14] mm/page_alloc: rename page_alloc_init() to page_alloc_init_cpuhp()
Date: Tue, 21 Mar 2023 19:05:04 +0200
Message-Id: <20230321170513.2401534-6-rppt@kernel.org>

From: "Mike Rapoport (IBM)"

The page_alloc_init() name is misleading, because all this function does
is set up CPU hotplug callbacks for the page allocator. Rename it to
page_alloc_init_cpuhp() so that the name reflects what the function does.
Signed-off-by: Mike Rapoport (IBM)
Reviewed-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
---
 include/linux/gfp.h | 2 +-
 init/main.c         | 2 +-
 mm/page_alloc.c     | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 7c554e4bd49f..ed8cb537c6a7 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -319,7 +319,7 @@ extern void page_frag_free(void *addr);
 #define __free_page(page) __free_pages((page), 0)
 #define free_page(addr) free_pages((addr), 0)
 
-void page_alloc_init(void);
+void page_alloc_init_cpuhp(void);
 void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp);
 void drain_all_pages(struct zone *zone);
 void drain_local_pages(struct zone *zone);
diff --git a/init/main.c b/init/main.c
index 4425d1783d5c..b2499bee7a3c 100644
--- a/init/main.c
+++ b/init/main.c
@@ -969,7 +969,7 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
 	boot_cpu_hotplug_init();
 
 	build_all_zonelists(NULL);
-	page_alloc_init();
+	page_alloc_init_cpuhp();
 
 	pr_notice("Kernel command line: %s\n", saved_command_line);
 	/* parameters may set static keys */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ff6a2fff2880..d1276bfe7a30 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6383,7 +6383,7 @@ static int page_alloc_cpu_online(unsigned int cpu)
 	return 0;
 }
 
-void __init page_alloc_init(void)
+void __init page_alloc_init_cpuhp(void)
 {
 	int ret;
From patchwork Tue Mar 21 17:05:05 2023
X-Patchwork-Id: 13182954
From: Mike Rapoport
Subject: [PATCH v2 06/14] init: fold build_all_zonelists() and page_alloc_init_cpuhp() to mm_init()
Date: Tue, 21 Mar 2023 19:05:05 +0200
Message-Id: <20230321170513.2401534-7-rppt@kernel.org>

From: "Mike Rapoport (IBM)"

Both build_all_zonelists() and page_alloc_init_cpuhp() must be called
after SMP setup is complete, but before the page allocator is set up.
Still, they are both part of memory management initialization, so move
them to mm_init().

Signed-off-by: Mike Rapoport (IBM)
Acked-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
---
 init/main.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/init/main.c b/init/main.c
index b2499bee7a3c..4423906177c1 100644
--- a/init/main.c
+++ b/init/main.c
@@ -833,6 +833,10 @@ static void __init report_meminit(void)
  */
 static void __init mm_init(void)
 {
+	/* Initializations relying on SMP setup */
+	build_all_zonelists(NULL);
+	page_alloc_init_cpuhp();
+
 	/*
 	 * page_ext requires contiguous pages,
 	 * bigger than MAX_ORDER unless SPARSEMEM.
@@ -968,9 +972,6 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
 	smp_prepare_boot_cpu();	/* arch-specific boot-cpu hooks */
 	boot_cpu_hotplug_init();
 
-	build_all_zonelists(NULL);
-	page_alloc_init_cpuhp();
-
 	pr_notice("Kernel command line: %s\n", saved_command_line);
 	/* parameters may set static keys */
 	jump_label_init();
From patchwork Tue Mar 21 17:05:06 2023
X-Patchwork-Id: 13182955
From: Mike Rapoport
Subject: [PATCH v2 07/14] init,mm: move mm_init() to mm/mm_init.c and rename it to mm_core_init()
Date: Tue, 21 Mar 2023 19:05:06 +0200
Message-Id: <20230321170513.2401534-8-rppt@kernel.org>

From: "Mike Rapoport (IBM)"

Make mm_init() a part of the mm/ codebase. mm_core_init() better
describes what the function does and does not clash with mm_init() in
kernel/fork.c.

Signed-off-by: Mike Rapoport (IBM)
Acked-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
---
 include/linux/mm.h |  1 +
 init/main.c        | 71 ++------------------------------------------
 mm/mm_init.c       | 73 ++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 76 insertions(+), 69 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ee755bb4e1c1..2d7f095136fc 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -39,6 +39,7 @@ struct pt_regs;
 
 extern int sysctl_page_lock_unfairness;
 
+void mm_core_init(void);
 void init_mm_internals(void);
 
 #ifndef CONFIG_NUMA		/* Don't use mapnrs, do it properly */
diff --git a/init/main.c b/init/main.c
index 4423906177c1..8a20b4c25f24 100644
--- a/init/main.c
+++ b/init/main.c
@@ -803,73 +803,6 @@ static inline void initcall_debug_enable(void)
 }
 #endif
 
-/* Report memory auto-initialization states for this boot. */
-static void __init report_meminit(void)
-{
-	const char *stack;
-
-	if (IS_ENABLED(CONFIG_INIT_STACK_ALL_PATTERN))
-		stack = "all(pattern)";
-	else if (IS_ENABLED(CONFIG_INIT_STACK_ALL_ZERO))
-		stack = "all(zero)";
-	else if (IS_ENABLED(CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL))
-		stack = "byref_all(zero)";
-	else if (IS_ENABLED(CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF))
-		stack = "byref(zero)";
-	else if (IS_ENABLED(CONFIG_GCC_PLUGIN_STRUCTLEAK_USER))
-		stack = "__user(zero)";
-	else
-		stack = "off";
-
-	pr_info("mem auto-init: stack:%s, heap alloc:%s, heap free:%s\n",
-		stack, want_init_on_alloc(GFP_KERNEL) ? "on" : "off",
-		want_init_on_free() ? "on" : "off");
-	if (want_init_on_free())
-		pr_info("mem auto-init: clearing system memory may take some time...\n");
-}
-
-/*
- * Set up kernel memory allocators
- */
-static void __init mm_init(void)
-{
-	/* Initializations relying on SMP setup */
-	build_all_zonelists(NULL);
-	page_alloc_init_cpuhp();
-
-	/*
-	 * page_ext requires contiguous pages,
-	 * bigger than MAX_ORDER unless SPARSEMEM.
-	 */
-	page_ext_init_flatmem();
-	init_mem_debugging_and_hardening();
-	kfence_alloc_pool();
-	report_meminit();
-	kmsan_init_shadow();
-	stack_depot_early_init();
-	mem_init();
-	mem_init_print_info();
-	kmem_cache_init();
-	/*
-	 * page_owner must be initialized after buddy is ready, and also after
-	 * slab is ready so that stack_depot_init() works properly
-	 */
-	page_ext_init_flatmem_late();
-	kmemleak_init();
-	pgtable_init();
-	debug_objects_mem_init();
-	vmalloc_init();
-	/* If no deferred init page_ext now, as vmap is fully initialized */
-	if (!deferred_struct_pages)
-		page_ext_init();
-	/* Should be run before the first non-init thread is created */
-	init_espfix_bsp();
-	/* Should be run after espfix64 is set up. */
-	pti_init();
-	kmsan_init_runtime();
-	mm_cache_init();
-}
-
 #ifdef CONFIG_RANDOMIZE_KSTACK_OFFSET
 DEFINE_STATIC_KEY_MAYBE_RO(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT,
 			   randomize_kstack_offset);
@@ -993,13 +926,13 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
 
 	/*
 	 * These use large bootmem allocations and must precede
-	 * kmem_cache_init()
+	 * initialization of the page allocator
 	 */
 	setup_log_buf(0);
 	vfs_caches_init_early();
 	sort_main_extable();
 	trap_init();
-	mm_init();
+	mm_core_init();
 	poking_init();
 	ftrace_init();
 
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 2e60c7186132..bba73f1fb277 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -20,9 +20,15 @@
 #include
 #include
 #include
+#include
+#include
+#include
+#include
 #include "internal.h"
 #include "shuffle.h"
 
+#include
+
 #ifdef CONFIG_DEBUG_MEMORY_INIT
 int __meminitdata mminit_loglevel;
 
@@ -2524,3 +2530,70 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn,
 	}
 	__free_pages_core(page, order);
 }
+
+/* Report memory auto-initialization states for this boot. */
+static void __init report_meminit(void)
+{
+	const char *stack;
+
+	if (IS_ENABLED(CONFIG_INIT_STACK_ALL_PATTERN))
+		stack = "all(pattern)";
+	else if (IS_ENABLED(CONFIG_INIT_STACK_ALL_ZERO))
+		stack = "all(zero)";
+	else if (IS_ENABLED(CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL))
+		stack = "byref_all(zero)";
+	else if (IS_ENABLED(CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF))
+		stack = "byref(zero)";
+	else if (IS_ENABLED(CONFIG_GCC_PLUGIN_STRUCTLEAK_USER))
+		stack = "__user(zero)";
+	else
+		stack = "off";
+
+	pr_info("mem auto-init: stack:%s, heap alloc:%s, heap free:%s\n",
+		stack, want_init_on_alloc(GFP_KERNEL) ? "on" : "off",
+		want_init_on_free() ? "on" : "off");
+	if (want_init_on_free())
+		pr_info("mem auto-init: clearing system memory may take some time...\n");
+}
+
+/*
+ * Set up kernel memory allocators
+ */
+void __init mm_core_init(void)
+{
+	/* Initializations relying on SMP setup */
+	build_all_zonelists(NULL);
+	page_alloc_init_cpuhp();
+
+	/*
+	 * page_ext requires contiguous pages,
+	 * bigger than MAX_ORDER unless SPARSEMEM.
+	 */
+	page_ext_init_flatmem();
+	init_mem_debugging_and_hardening();
+	kfence_alloc_pool();
+	report_meminit();
+	kmsan_init_shadow();
+	stack_depot_early_init();
+	mem_init();
+	mem_init_print_info();
+	kmem_cache_init();
+	/*
+	 * page_owner must be initialized after buddy is ready, and also after
+	 * slab is ready so that stack_depot_init() works properly
+	 */
+	page_ext_init_flatmem_late();
+	kmemleak_init();
+	pgtable_init();
+	debug_objects_mem_init();
+	vmalloc_init();
+	/* If no deferred init page_ext now, as vmap is fully initialized */
+	if (!deferred_struct_pages)
+		page_ext_init();
+	/* Should be run before the first non-init thread is created */
+	init_espfix_bsp();
+	/* Should be run after espfix64 is set up. */
+	pti_init();
+	kmsan_init_runtime();
+	mm_cache_init();
+}

From patchwork Tue Mar 21 17:05:07 2023
X-Patchwork-Id: 13182956
From: Mike Rapoport
Subject: [PATCH v2 08/14] mm: call {ptlock,pgtable}_cache_init() directly from mm_core_init()
Date: Tue, 21 Mar 2023 19:05:07 +0200
Message-Id: <20230321170513.2401534-9-rppt@kernel.org>

From: "Mike Rapoport (IBM)"

and drop pgtable_init() as it has no real value and its name is
misleading.
Signed-off-by: Mike Rapoport (IBM)
Reviewed-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
---
 include/linux/mm.h | 6 ------
 mm/mm_init.c       | 3 ++-
 2 files changed, 2 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 2d7f095136fc..c3c67d8bc833 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2782,12 +2782,6 @@ static inline bool ptlock_init(struct page *page) { return true; }
 static inline void ptlock_free(struct page *page) {}
 #endif /* USE_SPLIT_PTE_PTLOCKS */
 
-static inline void pgtable_init(void)
-{
-	ptlock_cache_init();
-	pgtable_cache_init();
-}
-
 static inline bool pgtable_pte_page_ctor(struct page *page)
 {
 	if (!ptlock_init(page))
diff --git a/mm/mm_init.c b/mm/mm_init.c
index bba73f1fb277..f1475413394d 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2584,7 +2584,8 @@ void __init mm_core_init(void)
 	 */
 	page_ext_init_flatmem_late();
 	kmemleak_init();
-	pgtable_init();
+	ptlock_cache_init();
+	pgtable_cache_init();
 	debug_objects_mem_init();
 	vmalloc_init();
 	/* If no deferred init page_ext now, as vmap is fully initialized */

From patchwork Tue Mar 21 17:05:08 2023
X-Patchwork-Id: 13182959
From: Mike Rapoport
Subject: [PATCH v2 09/14] mm: move init_mem_debugging_and_hardening() to mm/mm_init.c
Date: Tue, 21 Mar 2023 19:05:08 +0200
Message-Id: <20230321170513.2401534-10-rppt@kernel.org>
From: "Mike Rapoport (IBM)"

init_mem_debugging_and_hardening() is only called from mm_core_init().
Move it close to the caller, make it static, and rename it to
mem_debugging_and_hardening_init() for consistency with the surrounding
convention.

Signed-off-by: Mike Rapoport (IBM)
Acked-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
---
 include/linux/mm.h |  1 -
 mm/internal.h      |  8 ++++
 mm/mm_init.c       | 91 +++++++++++++++++++++++++++++++++++++++++++-
 mm/page_alloc.c    | 95 ----------------------------------------------
 4 files changed, 98 insertions(+), 97 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c3c67d8bc833..2fecabb1a328 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3394,7 +3394,6 @@ extern int apply_to_existing_page_range(struct mm_struct *mm,
 				   unsigned long address, unsigned long size,
 				   pte_fn_t fn, void *data);
 
-extern void __init init_mem_debugging_and_hardening(void);
 #ifdef CONFIG_PAGE_POISONING
 extern void __kernel_poison_pages(struct page *page, int numpages);
 extern void __kernel_unpoison_pages(struct page *page, int numpages);
diff --git a/mm/internal.h b/mm/internal.h
index 2a925de49393..4750e3a7fd0d 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -204,6 +204,14 @@ pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address);
 
 extern char * const zone_names[MAX_NR_ZONES];
 
+/* perform sanity checks on struct pages being allocated or freed */
+DECLARE_STATIC_KEY_MAYBE(CONFIG_DEBUG_VM, check_pages_enabled);
+
+static inline bool is_check_pages_enabled(void)
+{
+	return static_branch_unlikely(&check_pages_enabled);
+}
+
 /*
  * Structure for holding the mostly immutable allocation parameters passed
  * between functions involved in allocations, including the alloc_pages*
diff --git a/mm/mm_init.c b/mm/mm_init.c
index f1475413394d..43f6d3ed24ef 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2531,6 +2531,95 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn,
 	__free_pages_core(page, order);
 }
 
+static bool _init_on_alloc_enabled_early __read_mostly
+				= IS_ENABLED(CONFIG_INIT_ON_ALLOC_DEFAULT_ON);
+static int __init early_init_on_alloc(char *buf)
+{
+
+	return kstrtobool(buf, &_init_on_alloc_enabled_early);
+}
+early_param("init_on_alloc", early_init_on_alloc);
+
+static bool _init_on_free_enabled_early __read_mostly
+				= IS_ENABLED(CONFIG_INIT_ON_FREE_DEFAULT_ON);
+static int __init early_init_on_free(char *buf)
+{
+	return kstrtobool(buf, &_init_on_free_enabled_early);
+}
+early_param("init_on_free", early_init_on_free);
+
+DEFINE_STATIC_KEY_MAYBE(CONFIG_DEBUG_VM, check_pages_enabled);
+
+/*
+ * Enable static keys related to various memory debugging and hardening options.
+ * Some override others, and depend on early params that are evaluated in the
+ * order of appearance. So we need to first gather the full picture of what was
+ * enabled, and then make decisions.
+ */
+static void __init mem_debugging_and_hardening_init(void)
+{
+	bool page_poisoning_requested = false;
+	bool want_check_pages = false;
+
+#ifdef CONFIG_PAGE_POISONING
+	/*
+	 * Page poisoning is debug page alloc for some arches. If
+	 * either of those options are enabled, enable poisoning.
+	 */
+	if (page_poisoning_enabled() ||
+	     (!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) &&
+	      debug_pagealloc_enabled())) {
+		static_branch_enable(&_page_poisoning_enabled);
+		page_poisoning_requested = true;
+		want_check_pages = true;
+	}
+#endif
+
+	if ((_init_on_alloc_enabled_early || _init_on_free_enabled_early) &&
+	    page_poisoning_requested) {
+		pr_info("mem auto-init: CONFIG_PAGE_POISONING is on, "
+			"will take precedence over init_on_alloc and init_on_free\n");
+		_init_on_alloc_enabled_early = false;
+		_init_on_free_enabled_early = false;
+	}
+
+	if (_init_on_alloc_enabled_early) {
+		want_check_pages = true;
+		static_branch_enable(&init_on_alloc);
+	} else {
+		static_branch_disable(&init_on_alloc);
+	}
+
+	if (_init_on_free_enabled_early) {
+		want_check_pages = true;
+		static_branch_enable(&init_on_free);
+	} else {
+		static_branch_disable(&init_on_free);
+	}
+
+	if (IS_ENABLED(CONFIG_KMSAN) &&
+	    (_init_on_alloc_enabled_early || _init_on_free_enabled_early))
+		pr_info("mem auto-init: please make sure init_on_alloc and init_on_free are disabled when running KMSAN\n");
+
+#ifdef CONFIG_DEBUG_PAGEALLOC
+	if (debug_pagealloc_enabled()) {
+		want_check_pages = true;
+		static_branch_enable(&_debug_pagealloc_enabled);
+
+		if (debug_guardpage_minorder())
+			static_branch_enable(&_debug_guardpage_enabled);
+	}
+#endif
+
+	/*
+	 * Any page debugging or hardening option also enables sanity checking
+	 * of struct pages being allocated or freed. With CONFIG_DEBUG_VM it's
+	 * enabled already.
+	 */
+	if (!IS_ENABLED(CONFIG_DEBUG_VM) && want_check_pages)
+		static_branch_enable(&check_pages_enabled);
+}
+
 /* Report memory auto-initialization states for this boot. */
 static void __init report_meminit(void)
 {
@@ -2570,7 +2659,7 @@ void __init mm_core_init(void)
 	 * bigger than MAX_ORDER unless SPARSEMEM.
 	 */
 	page_ext_init_flatmem();
-	init_mem_debugging_and_hardening();
+	mem_debugging_and_hardening_init();
 	kfence_alloc_pool();
 	report_meminit();
 	kmsan_init_shadow();
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d1276bfe7a30..2f333c26170c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -240,31 +240,6 @@ EXPORT_SYMBOL(init_on_alloc);
 DEFINE_STATIC_KEY_MAYBE(CONFIG_INIT_ON_FREE_DEFAULT_ON, init_on_free);
 EXPORT_SYMBOL(init_on_free);
 
-/* perform sanity checks on struct pages being allocated or freed */
-static DEFINE_STATIC_KEY_MAYBE(CONFIG_DEBUG_VM, check_pages_enabled);
-
-static inline bool is_check_pages_enabled(void)
-{
-	return static_branch_unlikely(&check_pages_enabled);
-}
-
-static bool _init_on_alloc_enabled_early __read_mostly
-				= IS_ENABLED(CONFIG_INIT_ON_ALLOC_DEFAULT_ON);
-static int __init early_init_on_alloc(char *buf)
-{
-
-	return kstrtobool(buf, &_init_on_alloc_enabled_early);
-}
-early_param("init_on_alloc", early_init_on_alloc);
-
-static bool _init_on_free_enabled_early __read_mostly
-				= IS_ENABLED(CONFIG_INIT_ON_FREE_DEFAULT_ON);
-static int __init early_init_on_free(char *buf)
-{
-	return kstrtobool(buf, &_init_on_free_enabled_early);
-}
-early_param("init_on_free", early_init_on_free);
-
 /*
  * A cached value of the page's pageblock's migratetype, used when the page is
  * put on a pcplist. Used to avoid the pageblock migratetype lookup when
@@ -798,76 +773,6 @@ static inline void clear_page_guard(struct zone *zone, struct page *page,
 				unsigned int order, int migratetype) {}
 #endif
 
-/*
- * Enable static keys related to various memory debugging and hardening options.
- * Some override others, and depend on early params that are evaluated in the
- * order of appearance. So we need to first gather the full picture of what was
- * enabled, and then make decisions.
- */
-void __init init_mem_debugging_and_hardening(void)
-{
-	bool page_poisoning_requested = false;
-	bool want_check_pages = false;
-
-#ifdef CONFIG_PAGE_POISONING
-	/*
-	 * Page poisoning is debug page alloc for some arches. If
-	 * either of those options are enabled, enable poisoning.
-	 */
-	if (page_poisoning_enabled() ||
-	     (!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) &&
-	      debug_pagealloc_enabled())) {
-		static_branch_enable(&_page_poisoning_enabled);
-		page_poisoning_requested = true;
-		want_check_pages = true;
-	}
-#endif
-
-	if ((_init_on_alloc_enabled_early || _init_on_free_enabled_early) &&
-	    page_poisoning_requested) {
-		pr_info("mem auto-init: CONFIG_PAGE_POISONING is on, "
-			"will take precedence over init_on_alloc and init_on_free\n");
-		_init_on_alloc_enabled_early = false;
-		_init_on_free_enabled_early = false;
-	}
-
-	if (_init_on_alloc_enabled_early) {
-		want_check_pages = true;
-		static_branch_enable(&init_on_alloc);
-	} else {
-		static_branch_disable(&init_on_alloc);
-	}
-
-	if (_init_on_free_enabled_early) {
-		want_check_pages = true;
-		static_branch_enable(&init_on_free);
-	} else {
-		static_branch_disable(&init_on_free);
-	}
-
-	if (IS_ENABLED(CONFIG_KMSAN) &&
-	    (_init_on_alloc_enabled_early || _init_on_free_enabled_early))
-		pr_info("mem auto-init: please make sure init_on_alloc and init_on_free are disabled when running KMSAN\n");
-
-#ifdef CONFIG_DEBUG_PAGEALLOC
-	if (debug_pagealloc_enabled()) {
-		want_check_pages = true;
-		static_branch_enable(&_debug_pagealloc_enabled);
-
-		if (debug_guardpage_minorder())
-			static_branch_enable(&_debug_guardpage_enabled);
-	}
-#endif
-
-	/*
-	 * Any page debugging or hardening option also enables sanity checking
-	 * of struct pages being allocated or freed. With CONFIG_DEBUG_VM it's
-	 * enabled already.
-	 */
-	if (!IS_ENABLED(CONFIG_DEBUG_VM) && want_check_pages)
-		static_branch_enable(&check_pages_enabled);
-}
-
 static inline void set_buddy_order(struct page *page, unsigned int order)
 {
 	set_page_private(page, order);
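The moved function preserves the two-phase shape its comment describes:
early_param() handlers only record what the command line requested, and
the static keys are flipped once, after all early parameters have been
parsed and the precedence rules applied. A hedged kernel-style sketch of
that shape, with hypothetical names standing in for the real ones:

/* Sketch of the early_param() + static key two-phase idiom; names made up. */
#include <linux/init.h>
#include <linux/jump_label.h>
#include <linux/kernel.h>

static bool hardening_on __read_mostly;

static int __init early_hardening(char *buf)
{
	/* kstrtobool() accepts "0"/"1", "y"/"n", "on"/"off"; 0 on success */
	return kstrtobool(buf, &hardening_on);
}
early_param("my_hardening", early_hardening);

static DEFINE_STATIC_KEY_FALSE(my_hardening_key);

/* called once from early init, after every early param was parsed */
static void __init my_hardening_init(void)
{
	if (hardening_on)
		static_branch_enable(&my_hardening_key);
}

Splitting "record the request" from "flip the key" is what lets options
that override each other (here, page poisoning versus init_on_alloc and
init_on_free) be reconciled before any branch is patched.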
From patchwork Tue Mar 21 17:05:09 2023
X-Patchwork-Id: 13182961
From: Mike Rapoport
Subject: [PATCH v2 10/14] init,mm: fold late call to page_ext_init() to page_alloc_init_late()
Date: Tue, 21 Mar 2023 19:05:09 +0200
Message-Id: <20230321170513.2401534-11-rppt@kernel.org>

From: "Mike Rapoport (IBM)"

When deferred initialization of struct pages is enabled, page_ext_init()
must be called after all the deferred initialization is done, but there
is no point in keeping it a separate call from kernel_init_freeable()
right after page_alloc_init_late(). Fold the call to page_ext_init() into
page_alloc_init_late() and localize the deferred_struct_pages variable.

Signed-off-by: Mike Rapoport (IBM)
Reviewed-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
---
 include/linux/page_ext.h | 2 --
 init/main.c              | 4 ----
 mm/mm_init.c             | 6 +++++-
 3 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
index bc2e39090a1f..67314f648aeb 100644
--- a/include/linux/page_ext.h
+++ b/include/linux/page_ext.h
@@ -29,8 +29,6 @@ struct page_ext_operations {
 	bool need_shared_flags;
 };
 
-extern bool deferred_struct_pages;
-
 #ifdef CONFIG_PAGE_EXTENSION
 
 /*
diff --git a/init/main.c b/init/main.c
index 8a20b4c25f24..04113514e56a 100644
--- a/init/main.c
+++ b/init/main.c
@@ -62,7 +62,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -1561,9 +1560,6 @@ static noinline void __init kernel_init_freeable(void)
 
 	padata_init();
 	page_alloc_init_late();
-	/* Initialize page ext after all struct pages are initialized. */
-	if (deferred_struct_pages)
-		page_ext_init();
 
 	do_basic_setup();
 
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 43f6d3ed24ef..ff70da11e797 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -225,7 +225,7 @@ static unsigned long nr_kernel_pages __initdata;
 static unsigned long nr_all_pages __initdata;
 static unsigned long dma_reserve __initdata;
 
-bool deferred_struct_pages __meminitdata;
+static bool deferred_struct_pages __meminitdata;
 
 static DEFINE_PER_CPU(struct per_cpu_nodestat, boot_nodestats);
 
@@ -2358,6 +2358,10 @@ void __init page_alloc_init_late(void)
 
 	for_each_populated_zone(zone)
 		set_zone_contiguous(zone);
+
+	/* Initialize page ext after all struct pages are initialized. */
+	if (deferred_struct_pages)
+		page_ext_init();
 }
 
 #ifndef __HAVE_ARCH_RESERVED_KERNEL_PAGES
0QEg0DQIP6zfBM1gdULTCdjKt1bdCwWWMEsu3JLuYRZ8YViyE9UKpJQY4N9pVRcFQM eXVjTXOcDceZTfRls65Qp3Hxifo5eomdIn+n2rVV79KZNxNcabX9tgwegKbMv33jp+ zKtajkzcBNKng== From: Mike Rapoport To: Andrew Morton Cc: David Hildenbrand , Doug Berger , Matthew Wilcox , Mel Gorman , Michal Hocko , Mike Rapoport , Thomas Bogendoerfer , Vlastimil Babka , linux-kernel@vger.kernel.org, linux-mips@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH v2 11/14] mm: move mem_init_print_info() to mm_init.c Date: Tue, 21 Mar 2023 19:05:10 +0200 Message-Id: <20230321170513.2401534-12-rppt@kernel.org> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20230321170513.2401534-1-rppt@kernel.org> References: <20230321170513.2401534-1-rppt@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mips@vger.kernel.org From: "Mike Rapoport (IBM)" mem_init_print_info() is only called from mm_core_init(). Move it close to the caller and make it static. Signed-off-by: Mike Rapoport (IBM) Acked-by: David Hildenbrand Reviewed-by: Vlastimil Babka --- include/linux/mm.h | 1 - mm/internal.h | 1 + mm/mm_init.c | 53 ++++++++++++++++++++++++++++++++++++++++++++++ mm/page_alloc.c | 53 ---------------------------------------------- 4 files changed, 54 insertions(+), 54 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 2fecabb1a328..e249208f8fbe 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2925,7 +2925,6 @@ extern unsigned long free_reserved_area(void *start, void *end, int poison, const char *s); extern void adjust_managed_page_count(struct page *page, long count); -extern void mem_init_print_info(void); extern void reserve_bootmem_region(phys_addr_t start, phys_addr_t end); diff --git a/mm/internal.h b/mm/internal.h index 4750e3a7fd0d..02273c5e971f 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -201,6 +201,7 @@ pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address); /* * in mm/page_alloc.c */ +#define K(x) ((x) << (PAGE_SHIFT-10)) extern char * const zone_names[MAX_NR_ZONES]; diff --git a/mm/mm_init.c b/mm/mm_init.c index ff70da11e797..8adadf51bbd2 100644 --- a/mm/mm_init.c +++ b/mm/mm_init.c @@ -24,6 +24,8 @@ #include #include #include +#include +#include #include "internal.h" #include "shuffle.h" @@ -2649,6 +2651,57 @@ static void __init report_meminit(void) pr_info("mem auto-init: clearing system memory may take some time...\n"); } +static void __init mem_init_print_info(void) +{ + unsigned long physpages, codesize, datasize, rosize, bss_size; + unsigned long init_code_size, init_data_size; + + physpages = get_num_physpages(); + codesize = _etext - _stext; + datasize = _edata - _sdata; + rosize = __end_rodata - __start_rodata; + bss_size = __bss_stop - __bss_start; + init_data_size = __init_end - __init_begin; + init_code_size = _einittext - _sinittext; + + /* + * Detect special cases and adjust section sizes accordingly: + * 1) .init.* may be embedded into .data sections + * 2) .init.text.* may be out of [__init_begin, __init_end], + * please refer to arch/tile/kernel/vmlinux.lds.S. + * 3) .rodata.* may be embedded into .text or .data sections. 
+	 */
+#define adj_init_size(start, end, size, pos, adj) \
+	do { \
+		if (&start[0] <= &pos[0] && &pos[0] < &end[0] && size > adj) \
+			size -= adj; \
+	} while (0)
+
+	adj_init_size(__init_begin, __init_end, init_data_size,
+		     _sinittext, init_code_size);
+	adj_init_size(_stext, _etext, codesize, _sinittext, init_code_size);
+	adj_init_size(_sdata, _edata, datasize, __init_begin, init_data_size);
+	adj_init_size(_stext, _etext, codesize, __start_rodata, rosize);
+	adj_init_size(_sdata, _edata, datasize, __start_rodata, rosize);
+
+#undef adj_init_size
+
+	pr_info("Memory: %luK/%luK available (%luK kernel code, %luK rwdata, %luK rodata, %luK init, %luK bss, %luK reserved, %luK cma-reserved"
+#ifdef CONFIG_HIGHMEM
+		", %luK highmem"
+#endif
+		")\n",
+		K(nr_free_pages()), K(physpages),
+		codesize / SZ_1K, datasize / SZ_1K, rosize / SZ_1K,
+		(init_data_size + init_code_size) / SZ_1K, bss_size / SZ_1K,
+		K(physpages - totalram_pages() - totalcma_pages),
+		K(totalcma_pages)
+#ifdef CONFIG_HIGHMEM
+		, K(totalhigh_pages())
+#endif
+		);
+}
+
 /*
  * Set up kernel memory allocators
  */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2f333c26170c..bb0099f7da93 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5239,8 +5239,6 @@ static bool show_mem_node_skip(unsigned int flags, int nid, nodemask_t *nodemask
 	return !node_isset(nid, *nodemask);
 }
 
-#define K(x) ((x) << (PAGE_SHIFT-10))
-
 static void show_migration_types(unsigned char type)
 {
 	static const char types[MIGRATE_TYPES] = {
@@ -6200,57 +6198,6 @@ unsigned long free_reserved_area(void *start, void *end, int poison, const char
 	return pages;
 }
 
-void __init mem_init_print_info(void)
-{
-	unsigned long physpages, codesize, datasize, rosize, bss_size;
-	unsigned long init_code_size, init_data_size;
-
-	physpages = get_num_physpages();
-	codesize = _etext - _stext;
-	datasize = _edata - _sdata;
-	rosize = __end_rodata - __start_rodata;
-	bss_size = __bss_stop - __bss_start;
-	init_data_size = __init_end - __init_begin;
-	init_code_size = _einittext - _sinittext;
-
-	/*
-	 * Detect special cases and adjust section sizes accordingly:
-	 * 1) .init.* may be embedded into .data sections
-	 * 2) .init.text.* may be out of [__init_begin, __init_end],
-	 *    please refer to arch/tile/kernel/vmlinux.lds.S.
-	 * 3) .rodata.* may be embedded into .text or .data sections.
-	 */
-#define adj_init_size(start, end, size, pos, adj) \
-	do { \
-		if (&start[0] <= &pos[0] && &pos[0] < &end[0] && size > adj) \
-			size -= adj; \
-	} while (0)
-
-	adj_init_size(__init_begin, __init_end, init_data_size,
-		     _sinittext, init_code_size);
-	adj_init_size(_stext, _etext, codesize, _sinittext, init_code_size);
-	adj_init_size(_sdata, _edata, datasize, __init_begin, init_data_size);
-	adj_init_size(_stext, _etext, codesize, __start_rodata, rosize);
-	adj_init_size(_sdata, _edata, datasize, __start_rodata, rosize);
-
-#undef adj_init_size
-
-	pr_info("Memory: %luK/%luK available (%luK kernel code, %luK rwdata, %luK rodata, %luK init, %luK bss, %luK reserved, %luK cma-reserved"
-#ifdef CONFIG_HIGHMEM
-		", %luK highmem"
-#endif
-		")\n",
-		K(nr_free_pages()), K(physpages),
-		codesize / SZ_1K, datasize / SZ_1K, rosize / SZ_1K,
-		(init_data_size + init_code_size) / SZ_1K, bss_size / SZ_1K,
-		K(physpages - totalram_pages() - totalcma_pages),
-		K(totalcma_pages)
-#ifdef CONFIG_HIGHMEM
-		, K(totalhigh_pages())
-#endif
-		);
-}
-
 static int page_alloc_cpu_dead(unsigned int cpu)
 {
 	struct zone *zone;
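The K() macro that this patch shares through mm/internal.h converts a page
count to kibibytes: K(x) = x << (PAGE_SHIFT - 10). With 4 KiB pages
(PAGE_SHIFT == 12) that is x << 2, i.e. x * 4. A standalone illustration of
the arithmetic, with PAGE_SHIFT hard-coded here as an assumption:

/* Userspace illustration of the K() page-count-to-KiB conversion. */
#include <stdio.h>

#define PAGE_SHIFT 12	/* assumes 4 KiB pages for the example */
#define K(x) ((x) << (PAGE_SHIFT - 10))

int main(void)
{
	unsigned long pages = 262144;	/* 1 GiB worth of 4 KiB pages */

	/* 262144 pages * 4 KiB/page = 1048576 KiB */
	printf("%lu pages = %luK\n", pages, K(pages));
	return 0;
}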
Signed-off-by: Mike Rapoport (IBM)
Reviewed-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h | 1 -
 mm/mm_init.c         | 1 +
 mm/slab.h            | 1 +
 3 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index aa4575ef2965..f8b1d63c63a3 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -167,7 +167,6 @@ struct mem_cgroup;
 /*
  * struct kmem_cache related prototypes
  */
-void __init kmem_cache_init(void);
 bool slab_is_available(void);
 
 struct kmem_cache *kmem_cache_create(const char *name, unsigned int size,
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 8adadf51bbd2..53fb8e9d1e3b 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -27,6 +27,7 @@
 #include
 #include
 #include
 #include "internal.h"
+#include "slab.h"
 #include "shuffle.h"
 #include
diff --git a/mm/slab.h b/mm/slab.h
index 43966aa5fadf..3f8df2244f5a 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -4,6 +4,7 @@
 /*
  * Internal slab definitions
  */
+void __init kmem_cache_init(void);
 
 /* Reuses the bits in struct page */
 struct slab {

From patchwork Tue Mar 21 17:05:12 2023
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13182962
From: Mike Rapoport
To: Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
    Michal Hocko, Mike Rapoport, Thomas Bogendoerfer, Vlastimil Babka,
    linux-kernel@vger.kernel.org, linux-mips@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH v2 13/14] mm: move vmalloc_init() declaration to mm/internal.h
Date: Tue, 21 Mar 2023 19:05:12 +0200
Message-Id: <20230321170513.2401534-14-rppt@kernel.org>
In-Reply-To: <20230321170513.2401534-1-rppt@kernel.org>
References: <20230321170513.2401534-1-rppt@kernel.org>

From: "Mike Rapoport (IBM)"

vmalloc_init() is called only from mm_core_init(), so there is no need
to declare it in include/linux/vmalloc.h.

Move the vmalloc_init() declaration to mm/internal.h.
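The moved declaration keeps the usual CONFIG_MMU split, so callers stay
free of ifdefs; in sketch form (mirroring the hunks below):

	/* mm/internal.h -- the declaration after the move */
	#ifdef CONFIG_MMU
	void __init vmalloc_init(void);	/* real initializer, mm/vmalloc.c */
	#else
	static inline void vmalloc_init(void)
	{
		/* no-op: there is no vmalloc area to set up without an MMU */
	}
	#endif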
Signed-off-by: Mike Rapoport (IBM)
Reviewed-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
---
 include/linux/vmalloc.h | 4 ----
 mm/internal.h           | 5 +++++
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 69250efa03d1..351fc7697214 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -131,12 +131,8 @@ extern void *vm_map_ram(struct page **pages, unsigned int count, int node);
 extern void vm_unmap_aliases(void);
 
 #ifdef CONFIG_MMU
-extern void __init vmalloc_init(void);
 extern unsigned long vmalloc_nr_pages(void);
 #else
-static inline void vmalloc_init(void)
-{
-}
 static inline unsigned long vmalloc_nr_pages(void) { return 0; }
 #endif
 
diff --git a/mm/internal.h b/mm/internal.h
index 02273c5e971f..c05ad651b515 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -900,9 +900,14 @@ size_t splice_folio_into_pipe(struct pipe_inode_info *pipe,
  * mm/vmalloc.c
  */
 #ifdef CONFIG_MMU
+void __init vmalloc_init(void);
 int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
 		pgprot_t prot, struct page **pages, unsigned int page_shift);
 #else
+static inline void vmalloc_init(void)
+{
+}
+
 static inline int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
 		pgprot_t prot, struct page **pages, unsigned int page_shift)

From patchwork Tue Mar 21 17:05:13 2023
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13182960
From: Mike Rapoport
To: Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
    Michal Hocko, Mike Rapoport, Thomas Bogendoerfer, Vlastimil Babka,
    linux-kernel@vger.kernel.org, linux-mips@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH v2 14/14] MAINTAINERS: extend memblock entry to include MM initialization
Date: Tue, 21 Mar 2023 19:05:13 +0200
Message-Id: <20230321170513.2401534-15-rppt@kernel.org>
In-Reply-To: <20230321170513.2401534-1-rppt@kernel.org>
References: <20230321170513.2401534-1-rppt@kernel.org>

From: "Mike Rapoport (IBM)"

Extend the MEMBLOCK entry to also cover memory management initialization
and add mm/mm_init.c to it.
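For reference, the full entry as it reads once the hunk below is applied
(so that, for example, scripts/get_maintainer.pl -f mm/mm_init.c resolves
to it, assuming the usual get_maintainer workflow):

	MEMBLOCK AND MEMORY MANAGEMENT INITIALIZATION
	M:	Mike Rapoport
	L:	linux-mm@kvack.org
	S:	Maintained
	F:	Documentation/core-api/boot-time-mm.rst
	F:	include/linux/memblock.h
	F:	mm/memblock.c
	F:	mm/mm_init.c
	F:	tools/testing/memblock/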
Signed-off-by: Mike Rapoport (IBM)
Reviewed-by: David Hildenbrand
Acked-by: Vlastimil Babka
---
 MAINTAINERS | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 7002a5d3eb62..b79463ea1049 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13368,13 +13368,14 @@ F: arch/powerpc/include/asm/membarrier.h
 F: include/uapi/linux/membarrier.h
 F: kernel/sched/membarrier.c
 
-MEMBLOCK
+MEMBLOCK AND MEMORY MANAGEMENT INITIALIZATION
 M: Mike Rapoport
 L: linux-mm@kvack.org
 S: Maintained
 F: Documentation/core-api/boot-time-mm.rst
 F: include/linux/memblock.h
 F: mm/memblock.c
+F: mm/mm_init.c
 F: tools/testing/memblock/
 
 MEMORY CONTROLLER DRIVERS