
[1/1] kasan: fix shadow_size calculation error in kasan_module_alloc

Message ID 1529659626-12660-1-git-send-email-thunder.leizhen@huawei.com (mailing list archive)
State: New, archived

Commit Message

Zhen Lei June 22, 2018, 9:27 a.m. UTC
There is a special case where the size is "(N << KASAN_SHADOW_SCALE_SHIFT)
pages plus X" bytes, with X in [1, KASAN_SHADOW_SCALE_SIZE - 1]. The shift
"size >> KASAN_SHADOW_SCALE_SHIFT" drops X, and the subsequent roundup
cannot recover the missing page. For example: with size=0x28006,
PAGE_SIZE=0x1000 and KASAN_SHADOW_SCALE_SHIFT=3, we get shadow_size=0x5000,
but 6 pages (0x6000) of shadow are actually needed.

shadow_size = round_up(size >> KASAN_SHADOW_SCALE_SHIFT, PAGE_SIZE);

This can crash the kernel when KASAN is enabled and mod->core_layout.size
or mod->init_layout.size takes such a value, because the shadow memory
covering the trailing X bytes has never been allocated and mapped.
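
As a sanity check, the arithmetic can be reproduced in a minimal userspace
sketch (constants taken from the example above, i.e. a 4K-page arm64
configuration is assumed; round_up() is open-coded here in its power-of-two
form rather than taken from the kernel headers):

#include <stdio.h>
#include <stddef.h>

#define PAGE_SIZE                0x1000UL
#define KASAN_SHADOW_SCALE_SHIFT 3
#define KASAN_SHADOW_SCALE_SIZE  (1UL << KASAN_SHADOW_SCALE_SHIFT)
#define KASAN_SHADOW_MASK        (KASAN_SHADOW_SCALE_SIZE - 1)
#define round_up(x, y)           (((x) + (y) - 1) & ~((y) - 1))

int main(void)
{
	size_t size = 0x28006;	/* 5 * 8 pages plus 6 bytes, i.e. N=5, X=6 */

	/* old calculation: the shift drops X before round_up() can see it */
	size_t buggy = round_up(size >> KASAN_SHADOW_SCALE_SHIFT, PAGE_SIZE);
	/* fixed calculation: round size up to the scale before shifting */
	size_t fixed = round_up((size + KASAN_SHADOW_MASK) >> KASAN_SHADOW_SCALE_SHIFT,
				PAGE_SIZE);

	printf("buggy: %#zx (%zu pages)\n", buggy, buggy / PAGE_SIZE);	/* 0x5000, 5 */
	printf("fixed: %#zx (%zu pages)\n", fixed, fixed / PAGE_SIZE);	/* 0x6000, 6 */
	return 0;
}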

move_module:
ptr = module_alloc(mod->core_layout.size);
...
memset(ptr, 0, mod->core_layout.size);		//crashed

Unable to handle kernel paging request at virtual address ffff0fffff97b000
......
Call trace:
[<ffff8000004694d4>] __asan_storeN+0x174/0x1a8
[<ffff800000469844>] memset+0x24/0x48
[<ffff80000025cf28>] layout_and_allocate+0xcd8/0x1800
[<ffff80000025dbe0>] load_module+0x190/0x23e8
[<ffff8000002601e8>] SyS_finit_module+0x148/0x180

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
 mm/kasan/kasan.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

--
1.8.3

Comments

Dmitry Vyukov June 22, 2018, 9:42 a.m. UTC | #1
On Fri, Jun 22, 2018 at 11:27 AM, Zhen Lei <thunder.leizhen@huawei.com> wrote:
> There is a special case where the size is "(N << KASAN_SHADOW_SCALE_SHIFT)
> pages plus X" bytes, with X in [1, KASAN_SHADOW_SCALE_SIZE - 1]. The shift
> "size >> KASAN_SHADOW_SCALE_SHIFT" drops X, and the subsequent roundup
> cannot recover the missing page. For example: with size=0x28006,
> PAGE_SIZE=0x1000 and KASAN_SHADOW_SCALE_SHIFT=3, we get shadow_size=0x5000,
> but 6 pages (0x6000) of shadow are actually needed.
>
> shadow_size = round_up(size >> KASAN_SHADOW_SCALE_SHIFT, PAGE_SIZE);
>
> This can crash the kernel when KASAN is enabled and mod->core_layout.size
> or mod->init_layout.size takes such a value, because the shadow memory
> covering the trailing X bytes has never been allocated and mapped.
>
> move_module:
> ptr = module_alloc(mod->core_layout.size);
> ...
> memset(ptr, 0, mod->core_layout.size);          //crashed
>
> Unable to handle kernel paging request at virtual address ffff0fffff97b000
> ......
> Call trace:
> [<ffff8000004694d4>] __asan_storeN+0x174/0x1a8
> [<ffff800000469844>] memset+0x24/0x48
> [<ffff80000025cf28>] layout_and_allocate+0xcd8/0x1800
> [<ffff80000025dbe0>] load_module+0x190/0x23e8
> [<ffff8000002601e8>] SyS_finit_module+0x148/0x180
>
> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
> ---
>  mm/kasan/kasan.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> index 81a2f45..f5ac4ac 100644
> --- a/mm/kasan/kasan.c
> +++ b/mm/kasan/kasan.c
> @@ -427,12 +427,13 @@ void kasan_kfree_large(const void *ptr)
>  int kasan_module_alloc(void *addr, size_t size)
>  {
>         void *ret;
> +       size_t scaled_size;
>         size_t shadow_size;
>         unsigned long shadow_start;
>
>         shadow_start = (unsigned long)kasan_mem_to_shadow(addr);
> -       shadow_size = round_up(size >> KASAN_SHADOW_SCALE_SHIFT,
> -                       PAGE_SIZE);
> +       scaled_size = (size + KASAN_SHADOW_MASK) >> KASAN_SHADOW_SCALE_SHIFT;
> +       shadow_size = round_up(scaled_size, PAGE_SIZE);
>
>         if (WARN_ON(!PAGE_ALIGNED(shadow_start)))
>                 return -EINVAL;


Hi Zhen,

Yes, this is a bug. Thanks for fixing it!

Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Andrey Ryabinin June 25, 2018, 4:56 p.m. UTC | #2
On 06/22/2018 12:27 PM, Zhen Lei wrote:
> There is a special case where the size is "(N << KASAN_SHADOW_SCALE_SHIFT)
> pages plus X" bytes, with X in [1, KASAN_SHADOW_SCALE_SIZE - 1]. The shift
> "size >> KASAN_SHADOW_SCALE_SHIFT" drops X, and the subsequent roundup
> cannot recover the missing page. For example: with size=0x28006,
> PAGE_SIZE=0x1000 and KASAN_SHADOW_SCALE_SHIFT=3, we get shadow_size=0x5000,
> but 6 pages (0x6000) of shadow are actually needed.
> 
> shadow_size = round_up(size >> KASAN_SHADOW_SCALE_SHIFT, PAGE_SIZE);
> 
> This can crash the kernel when KASAN is enabled and mod->core_layout.size
> or mod->init_layout.size takes such a value, because the shadow memory
> covering the trailing X bytes has never been allocated and mapped.
> 
> move_module:
> ptr = module_alloc(mod->core_layout.size);
> ...
> memset(ptr, 0, mod->core_layout.size);		//crashed
> 
> Unable to handle kernel paging request at virtual address ffff0fffff97b000
> ......
> Call trace:
> [<ffff8000004694d4>] __asan_storeN+0x174/0x1a8
> [<ffff800000469844>] memset+0x24/0x48
> [<ffff80000025cf28>] layout_and_allocate+0xcd8/0x1800
> [<ffff80000025dbe0>] load_module+0x190/0x23e8
> [<ffff8000002601e8>] SyS_finit_module+0x148/0x180
> 
> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
> ---

Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>


>  mm/kasan/kasan.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> index 81a2f45..f5ac4ac 100644
> --- a/mm/kasan/kasan.c
> +++ b/mm/kasan/kasan.c
> @@ -427,12 +427,13 @@ void kasan_kfree_large(const void *ptr)
>  int kasan_module_alloc(void *addr, size_t size)
>  {
>  	void *ret;
> +	size_t scaled_size;
>  	size_t shadow_size;
>  	unsigned long shadow_start;
> 
>  	shadow_start = (unsigned long)kasan_mem_to_shadow(addr);
> -	shadow_size = round_up(size >> KASAN_SHADOW_SCALE_SHIFT,
> -			PAGE_SIZE);
> +	scaled_size = (size + KASAN_SHADOW_MASK) >> KASAN_SHADOW_SCALE_SHIFT;
> +	shadow_size = round_up(scaled_size, PAGE_SIZE);
> 
>  	if (WARN_ON(!PAGE_ALIGNED(shadow_start)))
>  		return -EINVAL;
> --
> 1.8.3
> 
>

Patch

diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 81a2f45..f5ac4ac 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -427,12 +427,13 @@ void kasan_kfree_large(const void *ptr)
 int kasan_module_alloc(void *addr, size_t size)
 {
 	void *ret;
+	size_t scaled_size;
 	size_t shadow_size;
 	unsigned long shadow_start;

 	shadow_start = (unsigned long)kasan_mem_to_shadow(addr);
-	shadow_size = round_up(size >> KASAN_SHADOW_SCALE_SHIFT,
-			PAGE_SIZE);
+	scaled_size = (size + KASAN_SHADOW_MASK) >> KASAN_SHADOW_SCALE_SHIFT;
+	shadow_size = round_up(scaled_size, PAGE_SIZE);

 	if (WARN_ON(!PAGE_ALIGNED(shadow_start)))
 		return -EINVAL;
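
For reference: since KASAN_SHADOW_MASK == KASAN_SHADOW_SCALE_SIZE - 1, the
new expression "(size + KASAN_SHADOW_MASK) >> KASAN_SHADOW_SCALE_SHIFT" is
simply a round-up division of size by KASAN_SHADOW_SCALE_SIZE, so every
trailing byte is shadow-covered before the result is padded out to whole
pages. A standalone sketch of that equivalence (constants assumed as in the
example above; DIV_ROUND_UP open-coded after the kernel macro of the same
name):

#include <assert.h>
#include <stddef.h>

#define KASAN_SHADOW_SCALE_SHIFT 3
#define KASAN_SHADOW_SCALE_SIZE  (1UL << KASAN_SHADOW_SCALE_SHIFT)
#define KASAN_SHADOW_MASK        (KASAN_SHADOW_SCALE_SIZE - 1)
#define DIV_ROUND_UP(n, d)       (((n) + (d) - 1) / (d))

int main(void)
{
	/* mask-and-shift matches round-up division for every size */
	for (size_t size = 1; size <= 0x100000; size++)
		assert(((size + KASAN_SHADOW_MASK) >> KASAN_SHADOW_SCALE_SHIFT) ==
		       DIV_ROUND_UP(size, KASAN_SHADOW_SCALE_SIZE));
	return 0;
}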