From patchwork Tue Dec 12 21:34:56 2023
X-Patchwork-Submitter: Alexandre Ghiti
X-Patchwork-Id: 13490017
From: Alexandre Ghiti <alexghiti@rivosinc.com>
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin,
    Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
    Vincenzo Frascino, Arnd Bergmann, Dennis Zhou, Tejun Heo,
    Christoph Lameter, Andrew Morton, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com,
    linux-arch@vger.kernel.org, linux-mm@kvack.org
Cc: Alexandre Ghiti
Subject: [PATCH v2 1/2] mm: Introduce flush_cache_vmap_early()
Date: Tue, 12 Dec 2023 22:34:56 +0100
Message-Id: <20231212213457.132605-2-alexghiti@rivosinc.com>
In-Reply-To: <20231212213457.132605-1-alexghiti@rivosinc.com>
References: <20231212213457.132605-1-alexghiti@rivosinc.com>

The pcpu setup when using the page allocator sets up a new vmalloc
mapping very early in the boot process, so early that it cannot use
flush_cache_vmap(), which may depend on structures that are not yet
initialized (for example, on riscv we currently send an IPI to flush
the other CPUs' TLBs).

But on some architectures we must call flush_cache_vmap(): on riscv,
for example, some microarchitectures can cache invalid TLB entries, so
the newly established mapping must be flushed to avoid taking an
exception on the first access.

Fix this by introducing a new function, flush_cache_vmap_early(),
which is called right after the new page table entry is set and before
the new mapping is accessed. It performs a local TLB flush on riscv
and is a no-op on all other architectures (the same behavior as
today).
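In other words, the contract is: the caller writes the page table
entry, calls flush_cache_vmap_early() on the new range, and only then
touches the mapping. A minimal sketch of that ordering (ptep, pfn,
addr, src and len are illustrative placeholders, not the actual percpu
setup code):

	/* establish an early kernel mapping, before SMP/IPIs are available */
	set_pte(ptep, pfn_pte(pfn, PAGE_KERNEL));

	/*
	 * Flush before the first access: a uarch that caches invalid
	 * TLB entries could otherwise fault on the access below. Only
	 * a local flush is possible (and sufficient) this early in boot.
	 */
	flush_cache_vmap_early(addr, addr + PAGE_SIZE);

	memcpy((void *)addr, src, len);	/* first access through the mapping */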
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Acked-by: Geert Uytterhoeven
---
 arch/arc/include/asm/cacheflush.h      | 1 +
 arch/arm/include/asm/cacheflush.h      | 2 ++
 arch/csky/abiv1/inc/abi/cacheflush.h   | 1 +
 arch/csky/abiv2/inc/abi/cacheflush.h   | 1 +
 arch/m68k/include/asm/cacheflush_mm.h  | 1 +
 arch/mips/include/asm/cacheflush.h     | 2 ++
 arch/nios2/include/asm/cacheflush.h    | 1 +
 arch/parisc/include/asm/cacheflush.h   | 1 +
 arch/riscv/include/asm/cacheflush.h    | 3 ++-
 arch/riscv/include/asm/tlbflush.h      | 1 +
 arch/riscv/mm/tlbflush.c               | 5 +++++
 arch/sh/include/asm/cacheflush.h       | 1 +
 arch/sparc/include/asm/cacheflush_32.h | 1 +
 arch/sparc/include/asm/cacheflush_64.h | 1 +
 arch/xtensa/include/asm/cacheflush.h   | 6 ++++--
 include/asm-generic/cacheflush.h       | 6 ++++++
 mm/percpu.c                            | 8 +-------
 17 files changed, 32 insertions(+), 10 deletions(-)

diff --git a/arch/arc/include/asm/cacheflush.h b/arch/arc/include/asm/cacheflush.h
index bd5b1a9a0544..6fc74500a9f5 100644
--- a/arch/arc/include/asm/cacheflush.h
+++ b/arch/arc/include/asm/cacheflush.h
@@ -40,6 +40,7 @@ void dma_cache_wback(phys_addr_t start, unsigned long sz);
 
 /* TBD: optimize this */
 #define flush_cache_vmap(start, end)		flush_cache_all()
+#define flush_cache_vmap_early(start, end)	do { } while (0)
 #define flush_cache_vunmap(start, end)		flush_cache_all()
 
 #define flush_cache_dup_mm(mm)			/* called on fork (VIVT only) */
diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index f6181f69577f..1075534b0a2e 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -340,6 +340,8 @@ static inline void flush_cache_vmap(unsigned long start, unsigned long end)
 		dsb(ishst);
 }
 
+#define flush_cache_vmap_early(start, end)	do { } while (0)
+
 static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
 {
 	if (!cache_is_vipt_nonaliasing())
diff --git a/arch/csky/abiv1/inc/abi/cacheflush.h b/arch/csky/abiv1/inc/abi/cacheflush.h
index 908d8b0bc4fd..d011a81575d2 100644
--- a/arch/csky/abiv1/inc/abi/cacheflush.h
+++ b/arch/csky/abiv1/inc/abi/cacheflush.h
@@ -43,6 +43,7 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
  */
 extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
 #define flush_cache_vmap(start, end)		cache_wbinv_all()
+#define flush_cache_vmap_early(start, end)	do { } while (0)
 #define flush_cache_vunmap(start, end)		cache_wbinv_all()
 
 #define flush_icache_range(start, end)		cache_wbinv_range(start, end)
diff --git a/arch/csky/abiv2/inc/abi/cacheflush.h b/arch/csky/abiv2/inc/abi/cacheflush.h
index 40be16907267..6513ac5d2578 100644
--- a/arch/csky/abiv2/inc/abi/cacheflush.h
+++ b/arch/csky/abiv2/inc/abi/cacheflush.h
@@ -41,6 +41,7 @@ void flush_icache_mm_range(struct mm_struct *mm,
 void flush_icache_deferred(struct mm_struct *mm);
 
 #define flush_cache_vmap(start, end)		do { } while (0)
+#define flush_cache_vmap_early(start, end)	do { } while (0)
 #define flush_cache_vunmap(start, end)		do { } while (0)
 
 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
diff --git a/arch/m68k/include/asm/cacheflush_mm.h b/arch/m68k/include/asm/cacheflush_mm.h
index ed12358c4783..9a71b0148461 100644
--- a/arch/m68k/include/asm/cacheflush_mm.h
+++ b/arch/m68k/include/asm/cacheflush_mm.h
@@ -191,6 +191,7 @@ extern void cache_push_v(unsigned long vaddr, int len);
 #define flush_cache_all() __flush_cache_all()
 
 #define flush_cache_vmap(start, end)		flush_cache_all()
+#define flush_cache_vmap_early(start, end)	do { } while (0)
 #define flush_cache_vunmap(start, end)		flush_cache_all()
 
 static inline void flush_cache_mm(struct mm_struct *mm)
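(An aside on the stubs being added above and below: each no-op is
spelled do { } while (0) rather than left empty so the macro still
expands to a single well-formed statement in every syntactic position.
A contrived illustration, where need_early_flush is a placeholder
condition, not a real kernel symbol:

	if (need_early_flush)
		flush_cache_vmap_early(start, end);	/* still one statement */
	else
		pr_debug("no early flush needed\n");

An empty macro body would leave a stray semicolon as the whole if
branch, which -Wempty-body and similar checks rightly complain about.)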
diff --git a/arch/mips/include/asm/cacheflush.h b/arch/mips/include/asm/cacheflush.h
index f36c2519ed97..1f14132b3fc9 100644
--- a/arch/mips/include/asm/cacheflush.h
+++ b/arch/mips/include/asm/cacheflush.h
@@ -97,6 +97,8 @@ static inline void flush_cache_vmap(unsigned long start, unsigned long end)
 		__flush_cache_vmap();
 }
 
+#define flush_cache_vmap_early(start, end)	do { } while (0)
+
 extern void (*__flush_cache_vunmap)(void);
 
 static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
diff --git a/arch/nios2/include/asm/cacheflush.h b/arch/nios2/include/asm/cacheflush.h
index 348cea097792..81484a776b33 100644
--- a/arch/nios2/include/asm/cacheflush.h
+++ b/arch/nios2/include/asm/cacheflush.h
@@ -38,6 +38,7 @@ void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
 #define flush_icache_pages	flush_icache_pages
 
 #define flush_cache_vmap(start, end)		flush_dcache_range(start, end)
+#define flush_cache_vmap_early(start, end)	do { } while (0)
 #define flush_cache_vunmap(start, end)		flush_dcache_range(start, end)
 
 extern void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
index b4006f2a9705..ba4c05bc24d6 100644
--- a/arch/parisc/include/asm/cacheflush.h
+++ b/arch/parisc/include/asm/cacheflush.h
@@ -41,6 +41,7 @@ void flush_kernel_vmap_range(void *vaddr, int size);
 void invalidate_kernel_vmap_range(void *vaddr, int size);
 
 #define flush_cache_vmap(start, end)		flush_cache_all()
+#define flush_cache_vmap_early(start, end)	do { } while (0)
 #define flush_cache_vunmap(start, end)		flush_cache_all()
 
 void flush_dcache_folio(struct folio *folio);
diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
index 3cb53c4df27c..a129dac4521d 100644
--- a/arch/riscv/include/asm/cacheflush.h
+++ b/arch/riscv/include/asm/cacheflush.h
@@ -37,7 +37,8 @@ static inline void flush_dcache_page(struct page *page)
 	flush_icache_mm(vma->vm_mm, 0)
 
 #ifdef CONFIG_64BIT
-#define flush_cache_vmap(start, end)	flush_tlb_kernel_range(start, end)
+#define flush_cache_vmap(start, end)		flush_tlb_kernel_range(start, end)
+#define flush_cache_vmap_early(start, end)	local_flush_tlb_kernel_range(start, end)
 #endif
 
 #ifndef CONFIG_SMP
diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
index 8f3418c5f172..a60416bbe190 100644
--- a/arch/riscv/include/asm/tlbflush.h
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -41,6 +41,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr);
 void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 		     unsigned long end);
 void flush_tlb_kernel_range(unsigned long start, unsigned long end);
+void local_flush_tlb_kernel_range(unsigned long start, unsigned long end);
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
 void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index e6659d7368b3..8aadc5f71c93 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -66,6 +66,11 @@ static inline void local_flush_tlb_range_asid(unsigned long start,
 		local_flush_tlb_range_threshold_asid(start, size, stride, asid);
 }
 
+void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
+{
+	local_flush_tlb_range_asid(start, end - start, PAGE_SIZE, FLUSH_TLB_NO_ASID);
+}
+
 static void __ipi_flush_tlb_all(void *info)
 {
 	local_flush_tlb_all();
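For reference, on riscv the new primitive boils down to per-page
sfence.vma instructions executed on the local hart only (with
local_flush_tlb_range_asid() falling back to a full local flush for
very large ranges). A simplified sketch of the effective behavior,
not the literal tlbflush.c code:

	/* sfence.vma with rs2 = x0 flushes this address for all ASIDs */
	static inline void local_flush_tlb_page_all_asids(unsigned long addr)
	{
		asm volatile("sfence.vma %0" : : "r" (addr) : "memory");
	}

	/* flush [start, end) page by page, on the local hart only */
	for (unsigned long addr = start; addr < end; addr += PAGE_SIZE)
		local_flush_tlb_page_all_asids(addr);

Crucially, unlike flush_tlb_kernel_range(), nothing here requires
IPIs or any per-CPU bookkeeping, which is what makes it usable this
early in boot.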
diff --git a/arch/sh/include/asm/cacheflush.h b/arch/sh/include/asm/cacheflush.h
index 878b6b551bd2..51112f54552b 100644
--- a/arch/sh/include/asm/cacheflush.h
+++ b/arch/sh/include/asm/cacheflush.h
@@ -90,6 +90,7 @@ extern void copy_from_user_page(struct vm_area_struct *vma,
 					 unsigned long len);
 
 #define flush_cache_vmap(start, end)		local_flush_cache_all(NULL)
+#define flush_cache_vmap_early(start, end)	do { } while (0)
 #define flush_cache_vunmap(start, end)		local_flush_cache_all(NULL)
 
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
diff --git a/arch/sparc/include/asm/cacheflush_32.h b/arch/sparc/include/asm/cacheflush_32.h
index f3b7270bf71b..9fee0ccfccb8 100644
--- a/arch/sparc/include/asm/cacheflush_32.h
+++ b/arch/sparc/include/asm/cacheflush_32.h
@@ -48,6 +48,7 @@ static inline void flush_dcache_page(struct page *page)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
 
 #define flush_cache_vmap(start, end)		flush_cache_all()
+#define flush_cache_vmap_early(start, end)	do { } while (0)
 #define flush_cache_vunmap(start, end)		flush_cache_all()
 
 /* When a context switch happens we must flush all user windows so that
diff --git a/arch/sparc/include/asm/cacheflush_64.h b/arch/sparc/include/asm/cacheflush_64.h
index 0e879004efff..2b1261b77ecd 100644
--- a/arch/sparc/include/asm/cacheflush_64.h
+++ b/arch/sparc/include/asm/cacheflush_64.h
@@ -75,6 +75,7 @@ void flush_ptrace_access(struct vm_area_struct *, struct page *,
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
 
 #define flush_cache_vmap(start, end)		do { } while (0)
+#define flush_cache_vmap_early(start, end)	do { } while (0)
 #define flush_cache_vunmap(start, end)		do { } while (0)
 
 #endif /* !__ASSEMBLY__ */
diff --git a/arch/xtensa/include/asm/cacheflush.h b/arch/xtensa/include/asm/cacheflush.h
index 785a00ce83c1..38bcecb0e457 100644
--- a/arch/xtensa/include/asm/cacheflush.h
+++ b/arch/xtensa/include/asm/cacheflush.h
@@ -116,8 +116,9 @@ void flush_cache_page(struct vm_area_struct*,
 #define flush_cache_mm(mm)		flush_cache_all()
 #define flush_cache_dup_mm(mm)		flush_cache_mm(mm)
 
-#define flush_cache_vmap(start,end)	flush_cache_all()
-#define flush_cache_vunmap(start,end)	flush_cache_all()
+#define flush_cache_vmap(start,end)		flush_cache_all()
+#define flush_cache_vmap_early(start,end)	do { } while (0)
+#define flush_cache_vunmap(start,end)		flush_cache_all()
 
 void flush_dcache_folio(struct folio *folio);
 #define flush_dcache_folio flush_dcache_folio
@@ -140,6 +141,7 @@ void local_flush_cache_page(struct vm_area_struct *vma,
 #define flush_cache_dup_mm(mm)			do { } while (0)
 
 #define flush_cache_vmap(start,end)		do { } while (0)
+#define flush_cache_vmap_early(start,end)	do { } while (0)
 #define flush_cache_vunmap(start,end)		do { } while (0)
 
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE	0
diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
index 84ec53ccc450..7ee8a179d103 100644
--- a/include/asm-generic/cacheflush.h
+++ b/include/asm-generic/cacheflush.h
@@ -91,6 +91,12 @@ static inline void flush_cache_vmap(unsigned long start, unsigned long end)
 }
 #endif
 
+#ifndef flush_cache_vmap_early
+static inline void flush_cache_vmap_early(unsigned long start, unsigned long end)
+{
+}
+#endif
+
 #ifndef flush_cache_vunmap
 static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
 {
diff --git a/mm/percpu.c b/mm/percpu.c
index 7b97d31df767..4e11fc1e6def 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -3333,13 +3333,7 @@ int __init pcpu_page_first_chunk(size_t reserved_size, pcpu_fc_cpu_to_node_fn_t
 		if (rc < 0)
 			panic("failed to map percpu area, err=%d\n", rc);
 
-		/*
-		 * FIXME: Archs with virtual cache should flush local
-		 * cache for the linear mapping here - something
-		 * equivalent to flush_cache_vmap() on the local cpu.
-		 * flush_cache_vmap() can't be used as most supporting
-		 * data structures are not set up yet.
-		 */
+		flush_cache_vmap_early(unit_addr, unit_addr + ai->unit_size);
 
 		/* copy static data */
 		memcpy((void *)unit_addr, __per_cpu_load, ai->static_size);
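To situate that last mm/percpu.c hunk: pcpu_page_first_chunk() runs
from setup_per_cpu_areas() during early boot, long before the IPI
machinery behind flush_tlb_kernel_range() is usable. A condensed,
annotated view of the surrounding per-CPU loop (paraphrased from
mm/percpu.c, not a literal quote):

	/* map this CPU's unit pages into the vmalloc area */
	rc = __pcpu_map_pages(unit_addr, &pages[unit * unit_pages], unit_pages);
	if (rc < 0)
		panic("failed to map percpu area, err=%d\n", rc);

	/* make the fresh mapping visible to *this* CPU only */
	flush_cache_vmap_early(unit_addr, unit_addr + ai->unit_size);

	/*
	 * The very next access: on a uarch that cached a stale invalid
	 * entry for unit_addr, this copy would fault without the local
	 * flush above.
	 */
	memcpy((void *)unit_addr, __per_cpu_load, ai->static_size);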
From patchwork Tue Dec 12 21:34:57 2023
X-Patchwork-Submitter: Alexandre Ghiti
X-Patchwork-Id: 13490018
From: Alexandre Ghiti <alexghiti@rivosinc.com>
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin,
    Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
    Vincenzo Frascino, Arnd Bergmann, Dennis Zhou, Tejun Heo,
    Christoph Lameter, Andrew Morton, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com,
    linux-arch@vger.kernel.org, linux-mm@kvack.org
Cc: Alexandre Ghiti
Subject: [PATCH v2 2/2] riscv: Enable pcpu page first chunk allocator
Date: Tue, 12 Dec 2023 22:34:57 +0100
Message-Id: <20231212213457.132605-3-alexghiti@rivosinc.com>
In-Reply-To: <20231212213457.132605-1-alexghiti@rivosinc.com>
References: <20231212213457.132605-1-alexghiti@rivosinc.com>

As explained in commit 6ea529a2037c ("percpu: make embedding first
chunk allocator check vmalloc space size"), the embedding first chunk
allocator needs the vmalloc space to be larger than the maximum
distance between units, which are grouped by NUMA node. On a very
sparse NUMA configuration with a small vmalloc area (it is only 64GB
in sv39, for example), the allocation of dynamic percpu data in the
vmalloc area can fail.

So provide the pcpu page allocator as a fallback in case we fall into
such a sparse configuration (as happened on arm64; see commit
09cea6195073 ("arm64: support page mapping percpu first chunk
allocator")).
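For intuition, the check the embed allocator applies looks roughly
like the following (paraphrasing the logic in mm/percpu.c's
pcpu_embed_first_chunk(); the exact expression may differ across
kernel versions):

	/*
	 * Units are placed NUMA-locally, so on a sparse machine the span
	 * from the lowest to the highest group can be huge. The embed
	 * allocator rejects layouts whose span eats most of vmalloc space:
	 */
	max_distance = highest_group_base + group_size - lowest_group_base;
	if (max_distance > VMALLOC_TOTAL * 3 / 4)
		return -EINVAL;	/* caller falls back to the page allocator */

With sv39's 64GB vmalloc region, the ~48GB of headroom is easy to
exceed once NUMA nodes sit far apart in the physical address space,
hence the fallback enabled here.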
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
---
 arch/riscv/Kconfig         | 2 ++
 arch/riscv/mm/kasan_init.c | 8 ++++++++
 2 files changed, 10 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 7603bd8ab333..8ba4a63e0ae5 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -415,7 +415,9 @@ config NUMA
 	depends on SMP && MMU
 	select ARCH_SUPPORTS_NUMA_BALANCING
 	select GENERIC_ARCH_NUMA
+	select HAVE_SETUP_PER_CPU_AREA
 	select NEED_PER_CPU_EMBED_FIRST_CHUNK
+	select NEED_PER_CPU_PAGE_FIRST_CHUNK
 	select OF_NUMA
 	select USE_PERCPU_NUMA_NODE_ID
 	help
diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
index 5e39dcf23fdb..4c9a2c527f08 100644
--- a/arch/riscv/mm/kasan_init.c
+++ b/arch/riscv/mm/kasan_init.c
@@ -438,6 +438,14 @@ static void __init kasan_shallow_populate(void *start, void *end)
 	kasan_shallow_populate_pgd(vaddr, vend);
 }
 
+#ifdef CONFIG_KASAN_VMALLOC
+void __init kasan_populate_early_vm_area_shadow(void *start, unsigned long size)
+{
+	kasan_populate(kasan_mem_to_shadow(start),
+		       kasan_mem_to_shadow(start + size));
+}
+#endif
+
 static void __init create_tmp_mapping(void)
 {
 	void *ptr;
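A closing note on the KASAN hook: kasan_populate_early_vm_area_shadow()
was introduced for the same arm64 fallback cited above and, to the best
of my reading, is invoked from vm_area_register_early(), which the pcpu
page first chunk path uses to reserve its vmalloc area; the riscv
override here therefore populates real shadow memory before the early
percpu mapping is first touched. The generic default is a weak no-op
along these lines (quoted from memory of mm/kasan/shadow.c, so treat as
indicative rather than exact):

	void __init __weak kasan_populate_early_vm_area_shadow(void *start,
							       unsigned long size)
	{
	}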