| Message ID | 20241022015913.3524425-7-samuel.holland@sifive.com (mailing list archive) |
|---|---|
| State | New |
| Series | [v2,1/9] kasan: sw_tags: Use arithmetic shift for shadow computation |
On 22/10/2024 03:57, Samuel Holland wrote:
> Commit 66673099f734 ("riscv: mm: Pre-allocate vmemmap/direct map/kasan
> PGD entries") used the start of the KASAN shadow memory region to
> represent the end of the linear map, since the two memory regions were
> immediately adjacent. This is no longer the case for Sv39; commit
> 5c8405d763dc ("riscv: Extend sv39 linear mapping max size to 128G")
> introduced a 4 GiB hole between the regions. Introducing KASAN_SW_TAGS
> will cut the size of the shadow memory region in half, creating an even
> larger hole.
>
> Avoid wasting PGD entries on this hole by using the size of the linear
> map (KERN_VIRT_SIZE) to compute PAGE_END.
>
> Since KASAN_SHADOW_START/KASAN_SHADOW_END are used inside an IS_ENABLED
> block, it's not possible to completely hide the constants when KASAN is
> disabled, so provide dummy definitions for that case.
>
> Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
> ---
>
> (no changes since v1)
>
>  arch/riscv/include/asm/kasan.h | 11 +++++++++--
>  arch/riscv/mm/init.c           |  2 +-
>  2 files changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/arch/riscv/include/asm/kasan.h b/arch/riscv/include/asm/kasan.h
> index e6a0071bdb56..a4e92ce9fa31 100644
> --- a/arch/riscv/include/asm/kasan.h
> +++ b/arch/riscv/include/asm/kasan.h
> @@ -6,6 +6,8 @@
>
>  #ifndef __ASSEMBLY__
>
> +#ifdef CONFIG_KASAN
> +
>  /*
>   * The following comment was copied from arm64:
>   * KASAN_SHADOW_START: beginning of the kernel virtual addresses.
> @@ -33,13 +35,18 @@
>  #define KASAN_SHADOW_START ((KASAN_SHADOW_END - KASAN_SHADOW_SIZE) & PGDIR_MASK)
>  #define KASAN_SHADOW_END MODULES_LOWEST_VADDR
>
> -#ifdef CONFIG_KASAN
>  #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
>
>  void kasan_init(void);
>  asmlinkage void kasan_early_init(void);
>  void kasan_swapper_init(void);
>
> -#endif
> +#else /* CONFIG_KASAN */
> +
> +#define KASAN_SHADOW_START MODULES_LOWEST_VADDR
> +#define KASAN_SHADOW_END MODULES_LOWEST_VADDR
> +
> +#endif /* CONFIG_KASAN */
> +
>  #endif
>  #endif /* __ASM_KASAN_H */
> diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
> index 0e8c20adcd98..1f9bb95c2169 100644
> --- a/arch/riscv/mm/init.c
> +++ b/arch/riscv/mm/init.c
> @@ -1494,7 +1494,7 @@ static void __init preallocate_pgd_pages_range(unsigned long start, unsigned lon
>  		panic("Failed to pre-allocate %s pages for %s area\n", lvl, area);
>  }
>
> -#define PAGE_END KASAN_SHADOW_START
> +#define PAGE_END (PAGE_OFFSET + KERN_VIRT_SIZE)
>
>  void __init pgtable_cache_init(void)
>  {

Looks good and cleaner, you can add:

Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>

Thanks,

Alex
diff --git a/arch/riscv/include/asm/kasan.h b/arch/riscv/include/asm/kasan.h
index e6a0071bdb56..a4e92ce9fa31 100644
--- a/arch/riscv/include/asm/kasan.h
+++ b/arch/riscv/include/asm/kasan.h
@@ -6,6 +6,8 @@
 
 #ifndef __ASSEMBLY__
 
+#ifdef CONFIG_KASAN
+
 /*
  * The following comment was copied from arm64:
  * KASAN_SHADOW_START: beginning of the kernel virtual addresses.
@@ -33,13 +35,18 @@
 #define KASAN_SHADOW_START ((KASAN_SHADOW_END - KASAN_SHADOW_SIZE) & PGDIR_MASK)
 #define KASAN_SHADOW_END MODULES_LOWEST_VADDR
 
-#ifdef CONFIG_KASAN
 #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
 
 void kasan_init(void);
 asmlinkage void kasan_early_init(void);
 void kasan_swapper_init(void);
 
-#endif
+#else /* CONFIG_KASAN */
+
+#define KASAN_SHADOW_START MODULES_LOWEST_VADDR
+#define KASAN_SHADOW_END MODULES_LOWEST_VADDR
+
+#endif /* CONFIG_KASAN */
+
 #endif
 #endif /* __ASM_KASAN_H */
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 0e8c20adcd98..1f9bb95c2169 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -1494,7 +1494,7 @@ static void __init preallocate_pgd_pages_range(unsigned long start, unsigned lon
 		panic("Failed to pre-allocate %s pages for %s area\n", lvl, area);
 }
 
-#define PAGE_END KASAN_SHADOW_START
+#define PAGE_END (PAGE_OFFSET + KERN_VIRT_SIZE)
 
 void __init pgtable_cache_init(void)
 {
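A note on why the `#else` branch above defines dummy values: the commit message explains that KASAN_SHADOW_START/KASAN_SHADOW_END are referenced inside an IS_ENABLED() block. Unlike an #ifdef, code guarded by IS_ENABLED() is always parsed and type-checked and is only eliminated later as dead code, so the identifiers must expand to valid constants even when CONFIG_KASAN is off. Below is a minimal sketch of that pattern; the helper name and the "kasan" argument are illustrative, not copied from the kernel source, while preallocate_pgd_pages_range() is the function visible in the diff above.

```c
/* Sketch only: shows why IS_ENABLED() needs the dummy definitions. */
static void __init preallocate_kasan_shadow_pgds(void)	/* hypothetical helper */
{
	if (IS_ENABLED(CONFIG_KASAN)) {
		/* This branch is compiled in every configuration; with
		 * CONFIG_KASAN=n it is dead code, but the two macros must
		 * still expand to valid expressions, hence the
		 * MODULES_LOWEST_VADDR dummies in the !CONFIG_KASAN case. */
		preallocate_pgd_pages_range(KASAN_SHADOW_START,
					    KASAN_SHADOW_END, "kasan");
	}
}
```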
Commit 66673099f734 ("riscv: mm: Pre-allocate vmemmap/direct map/kasan
PGD entries") used the start of the KASAN shadow memory region to
represent the end of the linear map, since the two memory regions were
immediately adjacent. This is no longer the case for Sv39; commit
5c8405d763dc ("riscv: Extend sv39 linear mapping max size to 128G")
introduced a 4 GiB hole between the regions. Introducing KASAN_SW_TAGS
will cut the size of the shadow memory region in half, creating an even
larger hole.

Avoid wasting PGD entries on this hole by using the size of the linear
map (KERN_VIRT_SIZE) to compute PAGE_END.

Since KASAN_SHADOW_START/KASAN_SHADOW_END are used inside an IS_ENABLED
block, it's not possible to completely hide the constants when KASAN is
disabled, so provide dummy definitions for that case.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
---

(no changes since v1)

 arch/riscv/include/asm/kasan.h | 11 +++++++++--
 arch/riscv/mm/init.c           |  2 +-
 2 files changed, 10 insertions(+), 3 deletions(-)
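For a rough sense of what the PAGE_END change saves, here is a back-of-the-envelope sketch in ordinary userspace C. The figures are assumptions drawn from the commit message (a 4 GiB gap on Sv39) plus the fact that one Sv39 PGD entry maps 1 GiB; it is not kernel code.

```c
#include <stdio.h>

int main(void)
{
	/* Sv39: each PGD entry maps 2^30 bytes (1 GiB). */
	unsigned long long pgdir_size = 1ULL << 30;
	/* Gap between the end of the linear map and KASAN_SHADOW_START,
	 * as described in the commit message. */
	unsigned long long hole_size = 4ULL << 30;

	/* Old: PAGE_END = KASAN_SHADOW_START, so pre-allocation also covered
	 * the hole. New: PAGE_END = PAGE_OFFSET + KERN_VIRT_SIZE stops at the
	 * end of the linear map. */
	printf("PGD entries no longer pre-allocated for the hole: %llu\n",
	       hole_size / pgdir_size);
	return 0;
}
```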