From patchwork Thu Dec 28 13:10:51 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Artem Kuzin <artem.kuzin@huawei.com>
X-Patchwork-Id: 13506036
From: Artem Kuzin <artem.kuzin@huawei.com>
Subject: [PATCH RFC 07/12] x86: enable per-NUMA node kernel text and rodata
 replication
Date: Thu, 28 Dec 2023 21:10:51 +0800
Message-ID: <20231228131056.602411-8-artem.kuzin@huawei.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231228131056.602411-1-artem.kuzin@huawei.com>
References: <20231228131056.602411-1-artem.kuzin@huawei.com>

From: Artem Kuzin <artem.kuzin@huawei.com>

Co-developed-by: Nikita Panov
Signed-off-by: Nikita Panov
Co-developed-by: Alexander Grubnikov
Signed-off-by: Alexander Grubnikov
Signed-off-by: Artem Kuzin <artem.kuzin@huawei.com>
---
 arch/x86/kernel/smpboot.c     |  2 +
 arch/x86/mm/dump_pagetables.c |  9 +++++
 arch/x86/mm/fault.c           |  4 +-
 arch/x86/mm/pgtable.c         | 76 ++++++++++++++++++++++++-----------
 arch/x86/mm/tlb.c             | 30 +++++++++++---
 init/main.c                   |  5 +++
 6 files changed, 97 insertions(+), 29 deletions(-)

diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 747b83a373a2..d2a852ba1bcf 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -60,6 +60,7 @@
 #include
 #include
 #include
+#include <linux/numa_replication.h>
 #include
 #include
@@ -244,6 +245,7 @@ static void notrace start_secondary(void *unused)
 	 * limit the things done here to the most necessary things.
	 */
 	cr4_init();
+	numa_setup_pgd();
 
 	/*
	 * 32-bit specific. 64-bit reaches this code with the correct page
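Note: numa_setup_pgd() is not defined in this patch; it comes from the
common replication code added earlier in the series. A minimal sketch of the
intended behaviour on secondary-CPU bring-up, assuming the per_numa_pgd()
accessor used throughout this series — the name and body below are
illustrative, not the series' actual implementation:

	/*
	 * Switch the booting CPU onto the PGD replica of its own NUMA node,
	 * so kernel text and rodata are reached through node-local page
	 * tables from the start of secondary bring-up.
	 */
	static void numa_setup_pgd_sketch(void)
	{
		pgd_t *pgd = per_numa_pgd(&init_mm, numa_node_id());

		load_cr3(pgd);	/* load_cr3() wraps write_cr3(__sme_pa(pgd)) */
	}
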
diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
index e1b599ecbbc2..5a2e36c9468a 100644
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include <linux/numa_replication.h>
 
 #include
@@ -433,7 +434,15 @@ void ptdump_walk_user_pgd_level_checkwx(void)
 
 void ptdump_walk_pgd_level_checkwx(void)
 {
+#ifdef CONFIG_KERNEL_REPLICATION
+	int node;
+
+	for_each_replica(node)
+		ptdump_walk_pgd_level_core(NULL, &init_mm,
+					   per_numa_pgd(&init_mm, node), true, false);
+#else
 	ptdump_walk_pgd_level_core(NULL, &init_mm, INIT_PGD, true, false);
+#endif
 }
 
 static int __init pt_dump_init(void)
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index e8711b2cafaf..d76e072dd028 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -20,6 +20,7 @@
 #include			/* efi_crash_gracefully_on_page_fault()*/
 #include
 #include			/* find_and_lock_vma() */
+#include <linux/numa_replication.h>
 
 #include			/* boot_cpu_has, ... */
 #include			/* dotraplinkage, ... */
@@ -1031,7 +1032,8 @@ spurious_kernel_fault(unsigned long error_code, unsigned long address)
 	    error_code != (X86_PF_INSTR | X86_PF_PROT))
 		return 0;
 
-	pgd = init_mm.pgd + pgd_index(address);
+	pgd = per_numa_pgd(&init_mm, numa_node_id()) + pgd_index(address);
+
 	if (!pgd_present(*pgd))
 		return 0;
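Note: per_numa_pgd() and for_each_replica() come from the series'
numa_replication infrastructure, not from this patch. A rough sketch of their
likely shape, assuming mm_struct gains a pgd_numa[] array under
CONFIG_KERNEL_REPLICATION (as the tlb.c hunk below suggests via
next->pgd_numa[numa_node_id()]); the definitions here are illustrative only:

	#ifdef CONFIG_KERNEL_REPLICATION
	/* One PGD replica per node that owns memory. */
	#define per_numa_pgd(mm, nid)	((mm)->pgd_numa[nid])
	/* Visit each node holding a distinct replica. */
	#define for_each_replica(nid)	for_each_node_state(nid, N_MEMORY)
	#else
	#define per_numa_pgd(mm, nid)	((mm)->pgd)
	#define for_each_replica(nid)	for ((nid) = 0; (nid) == 0; (nid)++)
	#endif
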
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 15a8009a4480..4c905fe0b84f 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -2,6 +2,7 @@
 #include
 #include
 #include
+#include <linux/numa_replication.h>
 #include
 #include
 #include
@@ -120,23 +121,25 @@ struct mm_struct *pgd_page_get_mm(struct page *page)
 	return page->pt_mm;
 }
 
-static void pgd_ctor(struct mm_struct *mm, pgd_t *pgd)
+static void pgd_ctor(struct mm_struct *mm, int nid)
 {
+	pgd_t *dst_pgd = per_numa_pgd(mm, nid);
+	pgd_t *src_pgd = per_numa_pgd(&init_mm, nid);
 	/* If the pgd points to a shared pagetable level (either the
	   ptes in non-PAE, or shared PMD in PAE), then just copy the
	   references from swapper_pg_dir. */
 	if (CONFIG_PGTABLE_LEVELS == 2 ||
 	    (CONFIG_PGTABLE_LEVELS == 3 && SHARED_KERNEL_PMD) ||
 	    CONFIG_PGTABLE_LEVELS >= 4) {
-		clone_pgd_range(pgd + KERNEL_PGD_BOUNDARY,
-				swapper_pg_dir + KERNEL_PGD_BOUNDARY,
+		clone_pgd_range(dst_pgd + KERNEL_PGD_BOUNDARY,
+				src_pgd + KERNEL_PGD_BOUNDARY,
 				KERNEL_PGD_PTRS);
 	}
 
 	/* list required to sync kernel mapping updates */
 	if (!SHARED_KERNEL_PMD) {
-		pgd_set_mm(pgd, mm);
-		pgd_list_add(pgd);
+		pgd_set_mm(dst_pgd, mm);
+		pgd_list_add(dst_pgd);
 	}
 }
@@ -416,20 +419,33 @@ static inline void _pgd_free(pgd_t *pgd)
 {
 	free_pages((unsigned long)pgd, PGD_ALLOCATION_ORDER);
 }
+
+#ifdef CONFIG_KERNEL_REPLICATION
+static inline pgd_t *_pgd_alloc_node(int nid)
+{
+	struct page *pages;
+
+	pages = __alloc_pages_node(nid, GFP_PGTABLE_USER,
+				   PGD_ALLOCATION_ORDER);
+	return (pgd_t *)page_address(pages);
+}
+
+#else
+#define _pgd_alloc_node(nid) _pgd_alloc()
+#endif /* CONFIG_KERNEL_REPLICATION */
 #endif /* CONFIG_X86_PAE */
 
 pgd_t *pgd_alloc(struct mm_struct *mm)
 {
-	pgd_t *pgd;
+	int nid;
 	pmd_t *u_pmds[MAX_PREALLOCATED_USER_PMDS];
 	pmd_t *pmds[MAX_PREALLOCATED_PMDS];
 
-	pgd = _pgd_alloc();
-
-	if (pgd == NULL)
-		goto out;
-
-	mm->pgd = pgd;
+	for_each_replica(nid) {
+		per_numa_pgd(mm, nid) = _pgd_alloc_node(nid);
+		if (per_numa_pgd(mm, nid) == NULL)
+			goto out_free_pgd;
+	}
 
 	if (sizeof(pmds) != 0 &&
 	    preallocate_pmds(mm, pmds, PREALLOCATED_PMDS) != 0)
@@ -449,16 +465,22 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
	 */
 	spin_lock(&pgd_lock);
 
-	pgd_ctor(mm, pgd);
-	if (sizeof(pmds) != 0)
-		pgd_prepopulate_pmd(mm, pgd, pmds);
+	for_each_replica(nid) {
+		pgd_ctor(mm, nid);
+		if (sizeof(pmds) != 0)
+			pgd_prepopulate_pmd(mm, per_numa_pgd(mm, nid), pmds);
 
-	if (sizeof(u_pmds) != 0)
-		pgd_prepopulate_user_pmd(mm, pgd, u_pmds);
+		if (sizeof(u_pmds) != 0)
+			pgd_prepopulate_user_pmd(mm, per_numa_pgd(mm, nid), u_pmds);
+	}
+
+	for_each_online_node(nid) {
+		per_numa_pgd(mm, nid) = per_numa_pgd(mm, numa_closest_memory_node(nid));
+	}
 
 	spin_unlock(&pgd_lock);
 
-	return pgd;
+	return mm->pgd;
 
 out_free_user_pmds:
 	if (sizeof(u_pmds) != 0)
@@ -467,17 +489,25 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
 	if (sizeof(pmds) != 0)
 		free_pmds(mm, pmds, PREALLOCATED_PMDS);
 out_free_pgd:
-	_pgd_free(pgd);
-out:
+	for_each_replica(nid) {
+		if (per_numa_pgd(mm, nid) != NULL)
+			_pgd_free(per_numa_pgd(mm, nid));
+	}
 	return NULL;
 }
 
 void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 {
+	int nid;
+
 	pgd_mop_up_pmds(mm, pgd);
-	pgd_dtor(pgd);
-	paravirt_pgd_free(mm, pgd);
-	_pgd_free(pgd);
+	for_each_replica(nid) {
+		pgd_t *pgd_numa = per_numa_pgd(mm, nid);
+
+		pgd_dtor(pgd_numa);
+		paravirt_pgd_free(mm, pgd_numa);
+		_pgd_free(pgd_numa);
+	}
 }
 
 /*
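Note: pgd_alloc() above gives every memory node its own replica and then
makes every other online node alias the replica of its nearest memory node.
numa_closest_memory_node() is defined elsewhere in the series; a minimal
sketch using generic nodemask helpers, illustrative only:

	/*
	 * Memory-less nodes get no replica of their own; point them at the
	 * replica of the closest node that does own memory.
	 */
	static int numa_closest_memory_node_sketch(int nid)
	{
		int n, best = first_node(node_states[N_MEMORY]);

		for_each_node_state(n, N_MEMORY)
			if (node_distance(nid, n) < node_distance(nid, best))
				best = n;

		return best;
	}
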
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 267acf27480a..de0e57827f98 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include <linux/numa_replication.h>
 #include
 #include
@@ -491,6 +492,22 @@ void cr4_update_pce(void *ignored)
 static inline void cr4_update_pce_mm(struct mm_struct *mm) { }
 #endif
 
+#ifdef CONFIG_KERNEL_REPLICATION
+extern struct mm_struct *poking_mm;
+static pgd_t *get_next_pgd(struct mm_struct *next)
+{
+	if (next == poking_mm)
+		return next->pgd;
+	else
+		return next->pgd_numa[numa_node_id()];
+}
+#else
+static pgd_t *get_next_pgd(struct mm_struct *next)
+{
+	return next->pgd;
+}
+#endif /*CONFIG_KERNEL_REPLICATION*/
+
 void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 			struct task_struct *tsk)
 {
@@ -502,6 +519,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 	u64 next_tlb_gen;
 	bool need_flush;
 	u16 new_asid;
+	pgd_t *next_pgd;
 
 	/*
	 * NB: The scheduler will call us with prev == next when switching
@@ -636,15 +654,17 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 	}
 
 	set_tlbstate_lam_mode(next);
+
+	next_pgd = get_next_pgd(next);
 
 	if (need_flush) {
 		this_cpu_write(cpu_tlbstate.ctxs[new_asid].ctx_id, next->context.ctx_id);
 		this_cpu_write(cpu_tlbstate.ctxs[new_asid].tlb_gen, next_tlb_gen);
-		load_new_mm_cr3(next->pgd, new_asid, new_lam, true);
+		load_new_mm_cr3(next_pgd, new_asid, new_lam, true);
 
 		trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
 	} else {
 		/* The new ASID is already up to date. */
-		load_new_mm_cr3(next->pgd, new_asid, new_lam, false);
+		load_new_mm_cr3(next_pgd, new_asid, new_lam, false);
 
 		trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, 0);
 	}
@@ -703,7 +723,7 @@ void initialize_tlbstate_and_flush(void)
 	unsigned long cr3 = __read_cr3();
 
 	/* Assert that CR3 already references the right mm. */
-	WARN_ON((cr3 & CR3_ADDR_MASK) != __pa(mm->pgd));
+	WARN_ON((cr3 & CR3_ADDR_MASK) != __pa(per_numa_pgd(mm, numa_node_id())));
 
 	/* LAM expected to be disabled */
 	WARN_ON(cr3 & (X86_CR3_LAM_U48 | X86_CR3_LAM_U57));
@@ -718,7 +738,7 @@ void initialize_tlbstate_and_flush(void)
 		!(cr4_read_shadow() & X86_CR4_PCIDE));
 
 	/* Disable LAM, force ASID 0 and force a TLB flush. */
-	write_cr3(build_cr3(mm->pgd, 0, 0));
+	write_cr3(build_cr3(per_numa_pgd(mm, numa_node_id()), 0, 0));
 
 	/* Reinitialize tlbstate. */
 	this_cpu_write(cpu_tlbstate.last_user_mm_spec, LAST_USER_MM_INIT);
@@ -1091,7 +1111,7 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 unsigned long __get_current_cr3_fast(void)
 {
 	unsigned long cr3 =
-		build_cr3(this_cpu_read(cpu_tlbstate.loaded_mm)->pgd,
+		build_cr3(per_numa_pgd(this_cpu_read(cpu_tlbstate.loaded_mm), numa_node_id()),
 			  this_cpu_read(cpu_tlbstate.loaded_mm_asid),
 			  tlbstate_lam_cr3_mask());
diff --git a/init/main.c b/init/main.c
index ad920fac325c..98c4a908ac13 100644
--- a/init/main.c
+++ b/init/main.c
@@ -99,6 +99,7 @@
 #include
 #include
 #include
+#include <linux/numa_replication.h>
 #include
 #include
@@ -921,11 +922,13 @@ void start_kernel(void)
	 * These use large bootmem allocations and must precede
	 * initalization of page allocator
	 */
+	numa_reserve_memory();
 	setup_log_buf(0);
 	vfs_caches_init_early();
 	sort_main_extable();
 	trap_init();
 	mm_core_init();
+	numa_replicate_kernel();
 	poking_init();
 	ftrace_init();
@@ -1446,6 +1449,8 @@ static int __ref kernel_init(void *unused)
 
 	free_initmem();
 	mark_readonly();
+	numa_replicate_kernel_rodata();
+	numa_clear_linear_addresses();
 
 	/*
	 * Kernel mappings are now finalized - update the userspace page-table
	 * to finalize PTI.
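Note: numa_replicate_kernel_rodata() and numa_clear_linear_addresses() are
defined by the series' core patches. The ordering above matters: rodata can
only be replicated after mark_readonly() has sealed it, once its contents are
final. A simplified, illustrative sketch of the replication step — the helper
node_rodata_replica() is hypothetical, and the real code must also rewrite
PTEs in each replica PGD and flush TLBs:

	static void numa_replicate_kernel_rodata_sketch(void)
	{
		int nid;

		for_each_replica(nid) {
			void *dst = node_rodata_replica(nid);	/* hypothetical */

			/* Copy the now-final rodata into the node's replica. */
			memcpy(dst, __start_rodata,
			       __end_rodata - __start_rodata);

			/* ...then point per_numa_pgd(&init_mm, nid) at it. */
		}
	}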