
[v2,4/4] arm64: kaslr: support randomized module area with KASAN_VMALLOC

Message ID 20210109103252.812517-5-lecopzer@gmail.com (mailing list archive)
State New, archived
Series: arm64: kasan: support CONFIG_KASAN_VMALLOC

Commit Message

Lecopzer Chen Jan. 9, 2021, 10:32 a.m. UTC
Now that KASAN_VMALLOC works on arm64, we can randomize the module
region into the vmalloc area.

Test:
	VMALLOC area ffffffc010000000 fffffffdf0000000

	before the patch:
		module_alloc_base/end ffffffc008b80000 ffffffc010000000
	after the patch:
		module_alloc_base/end ffffffdcf4bed000 ffffffc010000000

	Loading some modules with insmod also works fine.

Suggested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
---
 arch/arm64/kernel/kaslr.c  | 18 ++++++++++--------
 arch/arm64/kernel/module.c | 16 +++++++++-------
 2 files changed, 19 insertions(+), 15 deletions(-)

Comments

Will Deacon Jan. 27, 2021, 11:04 p.m. UTC | #1
On Sat, Jan 09, 2021 at 06:32:52PM +0800, Lecopzer Chen wrote:
> [...]
> 
> diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
> index 1c74c45b9494..a2858058e724 100644
> --- a/arch/arm64/kernel/kaslr.c
> +++ b/arch/arm64/kernel/kaslr.c
> @@ -161,15 +161,17 @@ u64 __init kaslr_early_init(u64 dt_phys)
>  	/* use the top 16 bits to randomize the linear region */
>  	memstart_offset_seed = seed >> 48;
>  
> -	if (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
> -	    IS_ENABLED(CONFIG_KASAN_SW_TAGS))
> +	if (!IS_ENABLED(CONFIG_KASAN_VMALLOC) &&
> +	    (IS_ENABLED(CONFIG_KASAN_GENERIC) ||

CONFIG_KASAN_VMALLOC depends on CONFIG_KASAN_GENERIC so why is this
necessary?

Will
Lecopzer Chen Jan. 28, 2021, 8:53 a.m. UTC | #2
> On Sat, Jan 09, 2021 at 06:32:52PM +0800, Lecopzer Chen wrote:
> > [...]
> > 
> > diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
> > index 1c74c45b9494..a2858058e724 100644
> > --- a/arch/arm64/kernel/kaslr.c
> > +++ b/arch/arm64/kernel/kaslr.c
> > @@ -161,15 +161,17 @@ u64 __init kaslr_early_init(u64 dt_phys)
> >  	/* use the top 16 bits to randomize the linear region */
> >  	memstart_offset_seed = seed >> 48;
> >  
> > -	if (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
> > -	    IS_ENABLED(CONFIG_KASAN_SW_TAGS))
> > +	if (!IS_ENABLED(CONFIG_KASAN_VMALLOC) &&
> > +	    (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
> 
> CONFIG_KASAN_VMALLOC depends on CONFIG_KASAN_GENERIC so why is this
> necessary?
> 
> Will

CONFIG_KASAN_VMALLOC=y implies CONFIG_KASAN_GENERIC=y,
but CONFIG_KASAN_GENERIC=y doesn't imply CONFIG_KASAN_VMALLOC=y.

So this if-condition matches only plain KASAN, not
KASAN + KASAN_VMALLOC.

Please correct me if I'm wrong.

thanks,
Lecopzer
Will Deacon Jan. 28, 2021, 8:26 p.m. UTC | #3
On Thu, Jan 28, 2021 at 04:53:26PM +0800, Lecopzer Chen wrote:
>  
> > [...]
> > > -	if (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
> > > -	    IS_ENABLED(CONFIG_KASAN_SW_TAGS))
> > > +	if (!IS_ENABLED(CONFIG_KASAN_VMALLOC) &&
> > > +	    (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
> > 
> > CONFIG_KASAN_VMALLOC depends on CONFIG_KASAN_GENERIC so why is this
> > necessary?
> > 
> > Will
> 
> CONFIG_KASAN_VMALLOC=y implies CONFIG_KASAN_GENERIC=y,
> but CONFIG_KASAN_GENERIC=y doesn't imply CONFIG_KASAN_VMALLOC=y.
> 
> So this if-condition matches only plain KASAN, not
> KASAN + KASAN_VMALLOC.
> 
> Please correct me if I'm wrong.

Sorry, you're completely right -- I missed the '!' when I read this
initially.

Will

Patch

diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 1c74c45b9494..a2858058e724 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -161,15 +161,17 @@ u64 __init kaslr_early_init(u64 dt_phys)
 	/* use the top 16 bits to randomize the linear region */
 	memstart_offset_seed = seed >> 48;
 
-	if (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
-	    IS_ENABLED(CONFIG_KASAN_SW_TAGS))
+	if (!IS_ENABLED(CONFIG_KASAN_VMALLOC) &&
+	    (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
+	     IS_ENABLED(CONFIG_KASAN_SW_TAGS)))
 		/*
-		 * KASAN does not expect the module region to intersect the
-		 * vmalloc region, since shadow memory is allocated for each
-		 * module at load time, whereas the vmalloc region is shadowed
-		 * by KASAN zero pages. So keep modules out of the vmalloc
-		 * region if KASAN is enabled, and put the kernel well within
-		 * 4 GB of the module region.
+		 * KASAN without KASAN_VMALLOC does not expect the module region
+		 * to intersect the vmalloc region, since shadow memory is
+		 * allocated for each module at load time, whereas the vmalloc
+		 * region is shadowed by KASAN zero pages. So keep modules
+		 * out of the vmalloc region if KASAN is enabled without
+		 * KASAN_VMALLOC, and put the kernel well within 4 GB of the
+		 * module region.
 		 */
 		return offset % SZ_2G;
 
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index fe21e0f06492..b5ec010c481f 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -40,14 +40,16 @@ void *module_alloc(unsigned long size)
 				NUMA_NO_NODE, __builtin_return_address(0));
 
 	if (!p && IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) &&
-	    !IS_ENABLED(CONFIG_KASAN_GENERIC) &&
-	    !IS_ENABLED(CONFIG_KASAN_SW_TAGS))
+	    (IS_ENABLED(CONFIG_KASAN_VMALLOC) ||
+	     (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+	      !IS_ENABLED(CONFIG_KASAN_SW_TAGS))))
 		/*
-		 * KASAN can only deal with module allocations being served
-		 * from the reserved module region, since the remainder of
-		 * the vmalloc region is already backed by zero shadow pages,
-		 * and punching holes into it is non-trivial. Since the module
-		 * region is not randomized when KASAN is enabled, it is even
+		 * KASAN without KASAN_VMALLOC can only deal with module
+		 * allocations being served from the reserved module region,
+		 * since the remainder of the vmalloc region is already
+		 * backed by zero shadow pages, and punching holes into it
+		 * is non-trivial. Since the module region is not randomized
+		 * when KASAN is enabled without KASAN_VMALLOC, it is even
 		 * less likely that the module region gets exhausted, so we
 		 * can simply omit this fallback in that case.
 		 */