[RFC/PATCH,RESEND,-next,01/21] Add kernel address sanitizer infrastructure.

Message ID 1404905415-9046-2-git-send-email-a.ryabinin@samsung.com (mailing list archive)
State New, archived

Commit Message

Andrey Ryabinin July 9, 2014, 11:29 a.m. UTC
Address sanitizer for kernel (kasan) is a dynamic memory error detector.

The main features of KASAN are:
 - it is based on compiler instrumentation (fast),
 - it detects out-of-bounds accesses for both writes and reads,
 - it provides use-after-free detection.

This patch only adds infrastructure for the kernel address sanitizer. It's not
available for use yet. The idea and some code were borrowed from [1].

This feature requires a pretty fresh GCC (revision r211699 from 2014-06-16 or
later).

Implementation details:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and to use compiler instrumentation to check the shadow memory
on each memory access.

Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
mapping with a scale and offset to translate a memory address to its corresponding
shadow address.

Here is the function that translates a memory address to its corresponding shadow address:

     unsigned long kasan_mem_to_shadow(unsigned long addr)
     {
                return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
                             + kasan_shadow_start;
     }

where KASAN_SHADOW_SCALE_SHIFT = 3.

So for every 8 bytes of lowmem there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and the other (8 - k) bytes are not;
any negative value indicates that the entire 8-byte region is inaccessible.
Different negative values are used to distinguish between different kinds of
inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
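
For illustration, a check of a 1-byte access against this encoding could look like this
(a sketch only; the helper name is illustrative and not part of this patch):

     static bool memory_is_poisoned_1(unsigned long addr)
     {
             s8 shadow = *(s8 *)kasan_mem_to_shadow(addr);

             if (shadow == 0)
                     return false;  /* all 8 bytes are accessible */
             if (shadow < 0)
                     return true;   /* redzone/freed: nothing is accessible */
             /* 1 <= shadow <= 7: only the first 'shadow' bytes are valid */
             return (addr & (KASAN_SHADOW_SCALE_SIZE - 1)) >= shadow;
     }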

To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether the memory region is valid to access by consulting the
corresponding shadow memory. If the access is not valid, an error is printed.
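
Conceptually, for a store like 'p->x = 1' the instrumented code behaves as if the compiler
had emitted (illustrative pseudo-code, not actual GCC output):

     __asan_store8((unsigned long)&p->x);  /* check shadow, report if poisoned */
     p->x = 1;                             /* the original 8-byte store */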

[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 Documentation/kasan.txt | 224 +++++++++++++++++++++++++++++++++++++
 Makefile                |   8 +-
 commit                  |   3 +
 include/linux/kasan.h   |  33 ++++++
 include/linux/sched.h   |   4 +
 lib/Kconfig.debug       |   2 +
 lib/Kconfig.kasan       |  20 ++++
 mm/Makefile             |   1 +
 mm/kasan/Makefile       |   3 +
 mm/kasan/kasan.c        | 292 ++++++++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h        |  36 ++++++
 mm/kasan/report.c       | 157 ++++++++++++++++++++++++++
 scripts/Makefile.lib    |  10 ++
 13 files changed, 792 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/kasan.txt
 create mode 100644 commit
 create mode 100644 include/linux/kasan.h
 create mode 100644 lib/Kconfig.kasan
 create mode 100644 mm/kasan/Makefile
 create mode 100644 mm/kasan/kasan.c
 create mode 100644 mm/kasan/kasan.h
 create mode 100644 mm/kasan/report.c

Comments

Christoph Lameter (Ampere) July 9, 2014, 2:26 p.m. UTC | #1
On Wed, 9 Jul 2014, Andrey Ryabinin wrote:

> +
> +Markers of inaccessible bytes can be found in the mm/kasan/kasan.h header:
> +
> +#define KASAN_FREE_PAGE         0xFF  /* page was freed */
> +#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
> +#define KASAN_SLAB_REDZONE      0xFD  /* Slab page redzone, does not belong to any slub object */

We call these zones "PADDING". Redzones are associated with an object.
Padding is there because bytes are left over, unusable or necessary for
alignment.
Andi Kleen July 9, 2014, 7:29 p.m. UTC | #2
Andrey Ryabinin <a.ryabinin@samsung.com> writes:

Seems like a useful facility. Thanks for working on it. Overall the code
looks fairly good. Some comments below.


> +
> +Address sanitizer for kernel (KASAN) is a dynamic memory error detector. It provides
> +a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
> +
> +KASAN is better than CONFIG_DEBUG_PAGEALLOC, because it:
> + - is based on compiler instrumentation (fast),
> + - detects OOB for both writes and reads,
> + - provides UAF detection,

Please expand the acronym.

> +
> +|--------|        |--------|
> +| Memory |----    | Memory |
> +|--------|    \   |--------|
> +| Shadow |--   -->| Shadow |
> +|--------|  \     |--------|
> +|   Bad  |   ---->|  Bad   |
> +|--------|  /     |--------|
> +| Shadow |--   -->| Shadow |
> +|--------|    /   |--------|
> +| Memory |----    | Memory |
> +|--------|        |--------|

I guess this implies it's incompatible with memory hotplug, as the 
shadow couldn't be extended?

That's fine, but you should exclude that in Kconfig.

There are likely more exclude dependencies for Kconfig too.
Needs dependencies on the right sparse mem options?
Does it work with kmemcheck? If not exclude.

Perhaps try to boot it with all other debug options and see which ones break.

> diff --git a/Makefile b/Makefile
> index 64ab7b3..08a07f2 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -384,6 +384,12 @@ LDFLAGS_MODULE  =
>  CFLAGS_KERNEL	=
>  AFLAGS_KERNEL	=
>  CFLAGS_GCOV	= -fprofile-arcs -ftest-coverage
> +CFLAGS_KASAN	= -fsanitize=address --param asan-stack=0 \
> +			--param asan-use-after-return=0 \
> +			--param asan-globals=0 \
> +			--param asan-memintrin=0 \
> +			--param asan-instrumentation-with-call-threshold=0 \

Hardcoding --param is not very nice. The params can change from one compiler
version to another. Need some version checking?

Also you should probably have some check that the compiler supports it
(and print some warning if not)
Otherwise randconfig builds will be broken if the compiler doesn't.
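
For example, something like this with the existing cc-option helper (untested sketch):

# probe for ASan support; warn instead of breaking randconfig builds
CFLAGS_KASAN := $(call cc-option, -fsanitize=address --param asan-stack=0 \
			--param asan-instrumentation-with-call-threshold=0)
ifeq ($(CFLAGS_KASAN),)
        $(warning Cannot use CONFIG_KASAN: -fsanitize=address is not supported by compiler)
endif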

Also does the kernel really build/work without the other patches?
If not please move this patchkit to the end of the series, to keep
the patchkit bisectable (this may need moving parts of the includes
into a separate patch)

> diff --git a/commit b/commit
> new file mode 100644
> index 0000000..134f4dd
> --- /dev/null
> +++ b/commit
> @@ -0,0 +1,3 @@
> +
> +I'm working on address sanitizer for kernel.
> +fuck this bloody.
> \ No newline at end of file

Heh. Please remove.

> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> new file mode 100644
> index 0000000..2bfff78
> --- /dev/null
> +++ b/lib/Kconfig.kasan
> @@ -0,0 +1,20 @@
> +config HAVE_ARCH_KASAN
> +	bool
> +
> +if HAVE_ARCH_KASAN
> +
> +config KASAN
> +	bool "AddressSanitizer: dynamic memory error detector"
> +	default n
> +	help
> +	  Enables AddressSanitizer - dynamic memory error detector,
> +	  that finds out-of-bounds and use-after-free bugs.

Needs much more description.

> +
> +config KASAN_SANITIZE_ALL
> +	bool "Instrument entire kernel"
> +	depends on KASAN
> +	default y
> +	help
> +	  This enables compiler instrumentation for the entire kernel
> +

Same.


> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> new file mode 100644
> index 0000000..e2cd345
> --- /dev/null
> +++ b/mm/kasan/kasan.c
> @@ -0,0 +1,292 @@
> +/*
> + *

Add one line here what the file does. Same for other files.

> + * Copyright (c) 2014 Samsung Electronics Co., Ltd.
> + * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> +#include "kasan.h"
> +#include "../slab.h"

That's ugly, but ok.

> +
> +static bool __read_mostly kasan_initialized;

It would be better to use a static_key, but I guess your initialization
is too early?

Of course the proposal to move it into start_kernel and get rid of the
flag would be best.

> +
> +unsigned long kasan_shadow_start;
> +unsigned long kasan_shadow_end;
> +
> +/* equals to (kasan_shadow_start - PAGE_OFFSET/KASAN_SHADOW_SCALE_SIZE) */
> +unsigned long __read_mostly kasan_shadow_offset; /* it's not a very good name for this variable */

Do these all need to be global?

> +
> +
> +static inline bool addr_is_in_mem(unsigned long addr)
> +{
> +	return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
> +}

Of course there are lots of cases where this doesn't work (like large
holes), but I assume this has been checked elsewhere?


> +
> +void kasan_enable_local(void)
> +{
> +	if (likely(kasan_initialized))
> +		current->kasan_depth--;
> +}
> +
> +void kasan_disable_local(void)
> +{
> +	if (likely(kasan_initialized))
> +		current->kasan_depth++;
> +}

Couldn't this be done without checking the flag?


> +		return;
> +
> +	if (unlikely(addr < TASK_SIZE)) {
> +		info.access_addr = addr;
> +		info.access_size = size;
> +		info.is_write = write;
> +		info.ip = _RET_IP_;
> +		kasan_report_user_access(&info);
> +		return;
> +	}

How about vsyscall pages here?

> +
> +	if (!addr_is_in_mem(addr))
> +		return;
> +
> +	access_addr = memory_is_poisoned(addr, size);
> +	if (likely(access_addr == 0))
> +		return;
> +
> +	info.access_addr = access_addr;
> +	info.access_size = size;
> +	info.is_write = write;
> +	info.ip = _RET_IP_;
> +	kasan_report_error(&info);
> +}
> +
> +void __init kasan_alloc_shadow(void)
> +{
> +	unsigned long lowmem_size = (unsigned long)high_memory - PAGE_OFFSET;
> +	unsigned long shadow_size;
> +	phys_addr_t shadow_phys_start;
> +
> +	shadow_size = lowmem_size >> KASAN_SHADOW_SCALE_SHIFT;
> +
> +	shadow_phys_start = memblock_alloc(shadow_size, PAGE_SIZE);
> +	if (!shadow_phys_start) {
> +		pr_err("Unable to reserve shadow memory\n");
> +		return;

Wouldn't this crash&burn later? panic?

> +void *kasan_memcpy(void *dst, const void *src, size_t len)
> +{
> +	if (unlikely(len == 0))
> +		return dst;
> +
> +	check_memory_region((unsigned long)src, len, false);
> +	check_memory_region((unsigned long)dst, len, true);

I assume this handles negative len?
Also check for overlaps?

> +
> +static inline void *virt_to_obj(struct kmem_cache *s, void *slab_start, void *x)
> +{
> +	return x - ((x - slab_start) % s->size);
> +}

This should be in the respective slab headers, not hard coded.

> +void kasan_report_error(struct access_info *info)
> +{
> +	kasan_disable_local();
> +	pr_err("================================="
> +		"=================================\n");
> +	print_error_description(info);
> +	print_address_description(info);
> +	print_shadow_for_address(info->access_addr);
> +	pr_err("================================="
> +		"=================================\n");
> +	kasan_enable_local();
> +}
> +
> +void kasan_report_user_access(struct access_info *info)
> +{
> +	kasan_disable_local();

Should print the same prefix oopses use; a lot of log grep tools
look for that.

Also you may want some lock to prevent multiple
reports mixing. 

-Andi
Dave Hansen July 9, 2014, 8:26 p.m. UTC | #3
On 07/09/2014 04:29 AM, Andrey Ryabinin wrote:
> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
> mapping with a scale and offset to translate a memory address to its corresponding
> shadow address.
> 
> Here is the function that translates a memory address to its corresponding shadow address:
> 
>      unsigned long kasan_mem_to_shadow(unsigned long addr)
>      {
>                 return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
>                              + kasan_shadow_start;
>      }

How does this interact with vmalloc() addresses or those from a kmap()?
Dave Hansen July 9, 2014, 8:37 p.m. UTC | #4
On 07/09/2014 04:29 AM, Andrey Ryabinin wrote:
> +void __init kasan_alloc_shadow(void)
> +{
> +	unsigned long lowmem_size = (unsigned long)high_memory - PAGE_OFFSET;
> +	unsigned long shadow_size;
> +	phys_addr_t shadow_phys_start;
> +
> +	shadow_size = lowmem_size >> KASAN_SHADOW_SCALE_SHIFT;

This calculation is essentially meaningless, and it's going to break
when we have sparse memory situations like having big holes.  This code
attempts to allocate non-sparse data for backing what might be very
sparse memory ranges.

It's quite OK for us to handle configurations today where we have 2GB of
RAM with 1GB at 0x0 and 1GB at 0x10000000000.  This code would attempt
to allocate a 128GB shadow area for this configuration with 2GB of RAM. :)

You're probably going to get stuck doing something similar to what the
sparsemem-vmemmap code does.  You could handle this for normal sparsemem
by adding a shadow area pointer to the memory section.
Or, just vmalloc() (get_vm_area() really) the virtual space and then
make sure to allocate the backing store before you need it (handling the
faults would probably get too tricky).
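
Roughly, the get_vm_area() variant might look like this (untested sketch; populating the
backing store is the hard part and is omitted here):

	struct vm_struct *area;

	/* reserve virtual space for the shadow, no physical backing yet */
	area = get_vm_area(shadow_size, VM_ALLOC);
	if (!area)
		panic("kasan: failed to reserve shadow virtual area");
	kasan_shadow_start = (unsigned long)area->addr;
	/* ... then back only the parts that shadow real RAM ... */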
Dave Hansen July 9, 2014, 8:38 p.m. UTC | #5
On 07/09/2014 04:29 AM, Andrey Ryabinin wrote:
> +config KASAN
> +	bool "AddressSanitizer: dynamic memory error detector"
> +	default n
> +	help
> +	  Enables AddressSanitizer - dynamic memory error detector,
> +	  that finds out-of-bounds and use-after-free bugs.

This definitely needs some more text like "This option eats boatloads of
memory and will slow your system down enough that it should never be
used in production unless you are crazy".
Yuri Gribov July 9, 2014, 8:40 p.m. UTC | #6
On Wed, Jul 9, 2014 at 11:29 PM, Andi Kleen <andi@firstfloor.org> wrote:
> Hardcoding --param is not very nice. They can change from compiler
> to compiler version. Need some version checking?

We plan to address this soon. CFLAGS will look more like
-fsanitize=kernel-address but this flag is not yet in gcc.

-Y
Andrey Ryabinin July 10, 2014, 7:31 a.m. UTC | #7
On 07/09/14 18:26, Christoph Lameter wrote:
> On Wed, 9 Jul 2014, Andrey Ryabinin wrote:
> 
>> +
>> +Markers of inaccessible bytes can be found in the mm/kasan/kasan.h header:
>> +
>> +#define KASAN_FREE_PAGE         0xFF  /* page was freed */
>> +#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
>> +#define KASAN_SLAB_REDZONE      0xFD  /* Slab page redzone, does not belong to any slub object */
> 
> We call these zones "PADDING". Redzones are associated with an object.
> Padding is there because bytes are left over, unusable or necessary for
> alignment.
> 
Good point. I will change the name to make it less confusing.
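
E.g. something like this (sketch, only the slab marker is renamed):

#define KASAN_FREE_PAGE     0xFF  /* page was freed */
#define KASAN_PAGE_REDZONE  0xFE  /* redzone for kmalloc_large allocations */
#define KASAN_SLAB_PADDING  0xFD  /* slab page padding, does not belong to any slub object */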
Sasha Levin July 10, 2014, 11:55 a.m. UTC | #8
On 07/09/2014 07:29 AM, Andrey Ryabinin wrote:
> [...]

I gave it a spin, and it seems that it fails for what you might call a "regular"
memory size these days, in my case it was 18G:

[    0.000000] Kernel panic - not syncing: ERROR: Failed to allocate 0xe0c00000 bytes below 0x0.
[    0.000000]
[    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 3.16.0-rc4-next-20140710-sasha-00044-gb7b0579-dirty #784
[    0.000000]  ffffffffb9c2d3c8 cd9ce91adea4379a 0000000000000000 ffffffffb9c2d3c8
[    0.000000]  ffffffffb9c2d330 ffffffffb7fe89b7 ffffffffb93c8c28 ffffffffb9c2d3b8
[    0.000000]  ffffffffb7fcff1d 0000000000000018 ffffffffb9c2d3c8 ffffffffb9c2d360
[    0.000000] Call Trace:
[    0.000000] <UNK> dump_stack (lib/dump_stack.c:52)
[    0.000000] panic (kernel/panic.c:119)
[    0.000000] memblock_alloc_base (mm/memblock.c:1092)
[    0.000000] memblock_alloc (mm/memblock.c:1097)
[    0.000000] kasan_alloc_shadow (mm/kasan/kasan.c:151)
[    0.000000] zone_sizes_init (arch/x86/mm/init.c:684)
[    0.000000] paging_init (arch/x86/mm/init_64.c:677)
[    0.000000] setup_arch (arch/x86/kernel/setup.c:1168)
[    0.000000] ? printk (kernel/printk/printk.c:1839)
[    0.000000] start_kernel (include/linux/mm_types.h:462 init/main.c:533)
[    0.000000] ? early_idt_handlers (arch/x86/kernel/head_64.S:344)
[    0.000000] x86_64_start_reservations (arch/x86/kernel/head64.c:194)
[    0.000000] x86_64_start_kernel (arch/x86/kernel/head64.c:183)

It got better when I reduced memory to 1GB, but then my system just failed to boot
at all because that's not enough to bring everything up.


Thanks,
Sasha
Andrey Ryabinin July 10, 2014, 12:10 p.m. UTC | #9
On 07/09/14 23:29, Andi Kleen wrote:
> Andrey Ryabinin <a.ryabinin@samsung.com> writes:
> 
> Seems like a useful facility. Thanks for working on it. Overall the code
> looks fairly good. Some comments below.
> 
> 
>> +
>> +Address sanitizer for kernel (KASAN) is a dynamic memory error detector. It provides
>> +a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
>> +
>> +KASAN is better than CONFIG_DEBUG_PAGEALLOC, because it:
>> + - is based on compiler instrumentation (fast),
>> + - detects OOB for both writes and reads,
>> + - provides UAF detection,
> 
> Please expand the acronym.
> 
Sure, will do.

>> +
>> +|--------|        |--------|
>> +| Memory |----    | Memory |
>> +|--------|    \   |--------|
>> +| Shadow |--   -->| Shadow |
>> +|--------|  \     |--------|
>> +|   Bad  |   ---->|  Bad   |
>> +|--------|  /     |--------|
>> +| Shadow |--   -->| Shadow |
>> +|--------|    /   |--------|
>> +| Memory |----    | Memory |
>> +|--------|        |--------|
> 
> I guess this implies it's incompatible with memory hotplug, as the 
> shadow couldn't be extended?
> 
> That's fine, but you should exclude that in Kconfig.
> 
> There are likely more exclude dependencies for Kconfig too.
> Needs dependencies on the right sparse mem options?
> Does it work with kmemcheck? If not exclude.
> 
> Perhaps try to boot it with all other debug options and see which ones break.
> 

Besides the Kconfig dependencies I might need to disable instrumentation in some places.
For example, kasan doesn't play well with kmemleak: kmemleak may look for pointers inside redzones,
and kasan treats this as an error.

>> diff --git a/Makefile b/Makefile
>> index 64ab7b3..08a07f2 100644
>> --- a/Makefile
>> +++ b/Makefile
>> @@ -384,6 +384,12 @@ LDFLAGS_MODULE  =
>>  CFLAGS_KERNEL	=
>>  AFLAGS_KERNEL	=
>>  CFLAGS_GCOV	= -fprofile-arcs -ftest-coverage
>> +CFLAGS_KASAN	= -fsanitize=address --param asan-stack=0 \
>> +			--param asan-use-after-return=0 \
>> +			--param asan-globals=0 \
>> +			--param asan-memintrin=0 \
>> +			--param asan-instrumentation-with-call-threshold=0 \
> 
> Hardcoding --param is not very nice. The params can change from one compiler
> version to another. Need some version checking?
> 
> Also you should probably have some check that the compiler supports it
> (and print some warning if not)
> Otherwise randconfig builds will be broken if the compiler doesn't.
> 
> Also does the kernel really build/work without the other patches?
> If not please move this patchkit to the end of the series, to keep
> the patchkit bisectable (this may need moving parts of the includes
> into a separate patch)
> 
It's buildable. At this point you can't select CONFIG_KASAN=y because there is no
arch that supports kasan (the HAVE_ARCH_KASAN config). But after the x86 patches the kernel
can be built and run with kasan. At that point kasan will be able to catch only "wild" memory
accesses (when someone outside mm/kasan/* tries to access shadow memory).

>> diff --git a/commit b/commit
>> new file mode 100644
>> index 0000000..134f4dd
>> --- /dev/null
>> +++ b/commit
>> @@ -0,0 +1,3 @@
>> +
>> +I'm working on address sanitizer for kernel.
>> +fuck this bloody.
>> \ No newline at end of file
> 
> Heh. Please remove.
> 

Oops. No idea how it got there :)

>> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
>> new file mode 100644
>> index 0000000..2bfff78
>> --- /dev/null
>> +++ b/lib/Kconfig.kasan
>> @@ -0,0 +1,20 @@
>> +config HAVE_ARCH_KASAN
>> +	bool
>> +
>> +if HAVE_ARCH_KASAN
>> +
>> +config KASAN
>> +	bool "AddressSanitizer: dynamic memory error detector"
>> +	default n
>> +	help
>> +	  Enables AddressSanitizer - dynamic memory error detector,
>> +	  that finds out-of-bounds and use-after-free bugs.
> 
> Needs much more description.
> 
>> +
>> +config KASAN_SANITIZE_ALL
>> +	bool "Instrument entire kernel"
>> +	depends on KASAN
>> +	default y
>> +	help
>> +	  This enables compiler instrumentation for the entire kernel
>> +
> 
> Same.
> 
> 
>> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
>> new file mode 100644
>> index 0000000..e2cd345
>> --- /dev/null
>> +++ b/mm/kasan/kasan.c
>> @@ -0,0 +1,292 @@
>> +/*
>> + *
> 
> Add one line here what the file does. Same for other files.
> 
>> + * Copyright (c) 2014 Samsung Electronics Co., Ltd.
>> + * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> +#include "kasan.h"
>> +#include "../slab.h"
> 
> That's ugly, but ok.
Hm... "../slab.h" is not needed in this file. linux/slab.h is enough here.

> 
>> +
>> +static bool __read_mostly kasan_initialized;
> 
> It would be better to use a static_key, but I guess your initialization
> is too early?

No, not too early. kasan_init_shadow, which switches this flag, is called just after jump_label_init,
so it's not a problem for static_key, but there is another one.
I tried a static key here. It works really well for arm, but it has some problems on x86.
While a static key is being switched by static_key_slow_inc(), the first byte of the key's jump
instruction is replaced with a breakpoint (look at text_poke_bp()). After that, on the first memory
access __asan_load/__asan_store is called, and we end up executing this breakpoint from the code
that is trying to update that very instruction.

text_poke_bp()
{
	....
	/* replace first byte with breakpoint */
		....
			__asan_load*()
				....
				if (static_key_false(&kasan_initialized)) <-- static key update still in progress
		....
	/* patching done */
}

To make static_key work on x86 I need to disable instrumentation in text_poke_bp() and in any other
functions that are called from it.
It might be a big problem if text_poke_bp() uses some very generic functions.

A better option would be to get rid of the kasan_initialized check in kasan_enabled():
static inline bool kasan_enabled(void)
{
	return likely(kasan_initialized
		&& !current->kasan_depth);
}


> 
> Of course the proposal to move it into start_kernel and get rid of the
> flag would be best.
>

That's the plan for the future.


>> +
>> +unsigned long kasan_shadow_start;
>> +unsigned long kasan_shadow_end;
>> +
>> +/* equals to (kasan_shadow_start - PAGE_OFFSET/KASAN_SHADOW_SCALE_SIZE) */
>> +unsigned long __read_mostly kasan_shadow_offset; /* it's not a very good name for this variable */
> 
> Do these all need to be global?
> 

For now, only kasan_shadow_start and kasan_shadow_offset need to be global.
It should also be possible to stop using kasan_shadow_start in kasan_shadow_to_mem() and make it static.

>> +
>> +
>> +static inline bool addr_is_in_mem(unsigned long addr)
>> +{
>> +	return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
>> +}
> 
> Of course there are lots of cases where this doesn't work (like large
> holes), but I assume this has been checked elsewhere?
> 
Seems I need to do some work for sparsemem configurations.

> 
>> +
>> +void kasan_enable_local(void)
>> +{
>> +	if (likely(kasan_initialized))
>> +		current->kasan_depth--;
>> +}
>> +
>> +void kasan_disable_local(void)
>> +{
>> +	if (likely(kasan_initialized))
>> +		current->kasan_depth++;
>> +}
> 
> Couldn't this be done without checking the flag?
> 
Not sure. Do we always have current available? I assume it should be initialized at some point of the boot process.
I will check that.


> 
>> +		return;
>> +
>> +	if (unlikely(addr < TASK_SIZE)) {
>> +		info.access_addr = addr;
>> +		info.access_size = size;
>> +		info.is_write = write;
>> +		info.ip = _RET_IP_;
>> +		kasan_report_user_access(&info);
>> +		return;
>> +	}
> 
> How about vsyscall pages here?
> 

Not sure what you mean. Could you please elaborate?

>> +
>> +	if (!addr_is_in_mem(addr))
>> +		return;
>> +
>> +	access_addr = memory_is_poisoned(addr, size);
>> +	if (likely(access_addr == 0))
>> +		return;
>> +
>> +	info.access_addr = access_addr;
>> +	info.access_size = size;
>> +	info.is_write = write;
>> +	info.ip = _RET_IP_;
>> +	kasan_report_error(&info);
>> +}
>> +
>> +void __init kasan_alloc_shadow(void)
>> +{
>> +	unsigned long lowmem_size = (unsigned long)high_memory - PAGE_OFFSET;
>> +	unsigned long shadow_size;
>> +	phys_addr_t shadow_phys_start;
>> +
>> +	shadow_size = lowmem_size >> KASAN_SHADOW_SCALE_SHIFT;
>> +
>> +	shadow_phys_start = memblock_alloc(shadow_size, PAGE_SIZE);
>> +	if (!shadow_phys_start) {
>> +		pr_err("Unable to reserve shadow memory\n");
>> +		return;
> 
> Wouldn't this crash&burn later? panic?
> 

As Sasha already reported, it will panic in memblock_alloc.

>> +void *kasan_memcpy(void *dst, const void *src, size_t len)
>> +{
>> +	if (unlikely(len == 0))
>> +		return dst;
>> +
>> +	check_memory_region((unsigned long)src, len, false);
>> +	check_memory_region((unsigned long)dst, len, true);
> 
> I assume this handles negative len?
> Also check for overlaps?
> 
Will do.
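
Something along these lines, perhaps (untested sketch; the length check and overlap test are
additions, the rest mirrors the patch):

void *kasan_memcpy(void *dst, const void *src, size_t len)
{
	if (unlikely(len == 0))
		return dst;

	/* len is size_t, so a "negative" length arrives as a huge
	 * unsigned value; refuse it before walking off the shadow */
	if (unlikely((long)len < 0)) {
		WARN_ONCE(1, "kasan_memcpy: suspicious length %zu\n", len);
		return dst;
	}

	/* memcpy with overlapping regions is undefined */
	WARN_ON(dst < src + len && src < dst + len);

	check_memory_region((unsigned long)src, len, false);
	check_memory_region((unsigned long)dst, len, true);

	return memcpy(dst, src, len);
}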

>> +
>> +static inline void *virt_to_obj(struct kmem_cache *s, void *slab_start, void *x)
>> +{
>> +	return x - ((x - slab_start) % s->size);
>> +}
> 
> This should be in the respective slab headers, not hard coded.
> 
Agreed.

>> +void kasan_report_error(struct access_info *info)
>> +{
>> +	kasan_disable_local();
>> +	pr_err("================================="
>> +		"=================================\n");
>> +	print_error_description(info);
>> +	print_address_description(info);
>> +	print_shadow_for_address(info->access_addr);
>> +	pr_err("================================="
>> +		"=================================\n");
>> +	kasan_enable_local();
>> +}
>> +
>> +void kasan_report_user_access(struct access_info *info)
>> +{
>> +	kasan_disable_local();
> 
> Should print the same prefix oopses use; a lot of log grep tools
> look for that.
> 
Ok

> Also you may want some lock to prevent multiple
> reports mixing. 

I think hiding it behind

 if (spin_trylock) { ... }

would be enough.
I also think it might be a good idea to add an option for reporting only the first error.
It would be useful in some cases (for example, strlen on a string that is not null-terminated
makes kasan go crazy).
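
Something like this, perhaps (untested sketch on top of kasan_report_error() from this patch;
the lock is the only addition):

static DEFINE_SPINLOCK(kasan_report_lock);

void kasan_report_error(struct access_info *info)
{
	unsigned long flags;

	/* serialize reports; drop this one if another CPU is already printing */
	if (!spin_trylock_irqsave(&kasan_report_lock, flags))
		return;

	kasan_disable_local();
	pr_err("================================="
		"=================================\n");
	print_error_description(info);
	print_address_description(info);
	print_shadow_for_address(info->access_addr);
	pr_err("================================="
		"=================================\n");
	kasan_enable_local();

	spin_unlock_irqrestore(&kasan_report_lock, flags);
}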

Thanks for review

> 
> -Andi
>
Andrey Ryabinin July 10, 2014, 12:12 p.m. UTC | #10
On 07/10/14 00:26, Dave Hansen wrote:
> On 07/09/2014 04:29 AM, Andrey Ryabinin wrote:
>> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
>> mapping with a scale and offset to translate a memory address to its corresponding
>> shadow address.
>>
>> Here is the function that translates a memory address to its corresponding shadow address:
>>
>>      unsigned long kasan_mem_to_shadow(unsigned long addr)
>>      {
>>                 return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
>>                              + kasan_shadow_start;
>>      }
> 
> How does this interact with vmalloc() addresses or those from a kmap()?
> 

It's used only for lowmem:

static inline bool addr_is_in_mem(unsigned long addr)
{
	return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
}



static __always_inline void check_memory_region(unsigned long addr,
						size_t size, bool write)
{

	....
	if (!addr_is_in_mem(addr))
		return;
	// check shadow here
}
Andrey Ryabinin July 10, 2014, 1:01 p.m. UTC | #11
On 07/10/14 15:55, Sasha Levin wrote:
> On 07/09/2014 07:29 AM, Andrey Ryabinin wrote:
>> [...]
> 
> I gave it a spin, and it seems that it fails for what you might call a "regular"
> memory size these days, in my case it was 18G:
> 
> [    0.000000] Kernel panic - not syncing: ERROR: Failed to allocate 0xe0c00000 bytes below 0x0.
> [    0.000000]
> [    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 3.16.0-rc4-next-20140710-sasha-00044-gb7b0579-dirty #784
> [    0.000000]  ffffffffb9c2d3c8 cd9ce91adea4379a 0000000000000000 ffffffffb9c2d3c8
> [    0.000000]  ffffffffb9c2d330 ffffffffb7fe89b7 ffffffffb93c8c28 ffffffffb9c2d3b8
> [    0.000000]  ffffffffb7fcff1d 0000000000000018 ffffffffb9c2d3c8 ffffffffb9c2d360
> [    0.000000] Call Trace:
> [    0.000000] <UNK> dump_stack (lib/dump_stack.c:52)
> [    0.000000] panic (kernel/panic.c:119)
> [    0.000000] memblock_alloc_base (mm/memblock.c:1092)
> [    0.000000] memblock_alloc (mm/memblock.c:1097)
> [    0.000000] kasan_alloc_shadow (mm/kasan/kasan.c:151)
> [    0.000000] zone_sizes_init (arch/x86/mm/init.c:684)
> [    0.000000] paging_init (arch/x86/mm/init_64.c:677)
> [    0.000000] setup_arch (arch/x86/kernel/setup.c:1168)
> [    0.000000] ? printk (kernel/printk/printk.c:1839)
> [    0.000000] start_kernel (include/linux/mm_types.h:462 init/main.c:533)
> [    0.000000] ? early_idt_handlers (arch/x86/kernel/head_64.S:344)
> [    0.000000] x86_64_start_reservations (arch/x86/kernel/head64.c:194)
> [    0.000000] x86_64_start_kernel (arch/x86/kernel/head64.c:183)
> 
> It got better when I reduced memory to 1GB, but then my system just failed to boot
> at all because that's not enough to bring everything up.
> 

Thanks.
I think the memory size is not a problem here. I tested on my desktop with 16G.
It seems to be a problem with the memory holes cited by Dave.
kasan tries to allocate ~3.5G, which means the lowmem size is 28G in your case.


> 
> Thanks,
> Sasha
>
Sasha Levin July 10, 2014, 1:31 p.m. UTC | #12
On 07/10/2014 09:01 AM, Andrey Ryabinin wrote:
> On 07/10/14 15:55, Sasha Levin wrote:
>> [...]
> Thanks.
> I think the memory size is not a problem here. I tested on my desktop with 16G.
> It seems to be a problem with the memory holes cited by Dave.
> kasan tries to allocate ~3.5G, which means the lowmem size is 28G in your case.

That's correct (I've mistyped and got 18 instead of 28 above).

However, I'm a bit confused here: I thought the highmem/lowmem split was a 32-bit
thing, so I'm not sure how it applies here.

Anyways, the machine won't boot with more than 1GB of RAM, is there a solution to
get KASAN running on my machine?


Thanks,
Sasha
Andrey Ryabinin July 10, 2014, 1:39 p.m. UTC | #13
On 07/10/14 17:31, Sasha Levin wrote:
> On 07/10/2014 09:01 AM, Andrey Ryabinin wrote:
>> On 07/10/14 15:55, Sasha Levin wrote:
>>>> On 07/09/2014 07:29 AM, Andrey Ryabinin wrote:
>>>>>> Address sanitizer for kernel (kasan) is a dynamic memory error detector.
>>>>>>
>>>>>> The main features of kasan is:
>>>>>>  - is based on compiler instrumentation (fast),
>>>>>>  - detects out of bounds for both writes and reads,
>>>>>>  - provides use after free detection,
>>>>>>
>>>>>> This patch only adds infrastructure for kernel address sanitizer. It's not
>>>>>> available for use yet. The idea and some code was borrowed from [1].
>>>>>>
>>>>>> This feature requires pretty fresh GCC (revision r211699 from 2014-06-16 or
>>>>>> latter).
>>>>>>
>>>>>> Implementation details:
>>>>>> The main idea of KASAN is to use shadow memory to record whether each byte of memory
>>>>>> is safe to access or not, and use compiler's instrumentation to check the shadow memory
>>>>>> on each memory access.
>>>>>>
>>>>>> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
>>>>>> mapping with a scale and offset to translate a memory address to its corresponding
>>>>>> shadow address.
>>>>>>
>>>>>> Here is function to translate address to corresponding shadow address:
>>>>>>
>>>>>>      unsigned long kasan_mem_to_shadow(unsigned long addr)
>>>>>>      {
>>>>>>                 return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
>>>>>>                              + kasan_shadow_start;
>>>>>>      }
>>>>>>
>>>>>> where KASAN_SHADOW_SCALE_SHIFT = 3.
>>>>>>
>>>>>> So for every 8 bytes of lowmemory there is one corresponding byte of shadow memory.
>>>>>> The following encoding used for each shadow byte: 0 means that all 8 bytes of the
>>>>>> corresponding memory region are valid for access; k (1 <= k <= 7) means that
>>>>>> the first k bytes are valid for access, and other (8 - k) bytes are not;
>>>>>> Any negative value indicates that the entire 8-bytes are unaccessible.
>>>>>> Different negative values used to distinguish between different kinds of
>>>>>> unaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
>>>>>>
>>>>>> To be able to detect accesses to bad memory we need a special compiler.
>>>>>> Such compiler inserts a specific function calls (__asan_load*(addr), __asan_store*(addr))
>>>>>> before each memory access of size 1, 2, 4, 8 or 16.
>>>>>>
>>>>>> These functions check whether memory region is valid to access or not by checking
>>>>>> corresponding shadow memory. If access is not valid an error printed.
>>>>>>
>>>>>> [1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
>>>>>>
>>>>>> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
>>>>
>>>> I gave it a spin, and it seems that it fails for what you might call a "regular"
>>>> memory size these days, in my case it was 18G:
>>>>
>>>> [    0.000000] Kernel panic - not syncing: ERROR: Failed to allocate 0xe0c00000 bytes below 0x0.
>>>> [    0.000000]
>>>> [    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 3.16.0-rc4-next-20140710-sasha-00044-gb7b0579-dirty #784
>>>> [    0.000000]  ffffffffb9c2d3c8 cd9ce91adea4379a 0000000000000000 ffffffffb9c2d3c8
>>>> [    0.000000]  ffffffffb9c2d330 ffffffffb7fe89b7 ffffffffb93c8c28 ffffffffb9c2d3b8
>>>> [    0.000000]  ffffffffb7fcff1d 0000000000000018 ffffffffb9c2d3c8 ffffffffb9c2d360
>>>> [    0.000000] Call Trace:
>>>> [    0.000000] <UNK> dump_stack (lib/dump_stack.c:52)
>>>> [    0.000000] panic (kernel/panic.c:119)
>>>> [    0.000000] memblock_alloc_base (mm/memblock.c:1092)
>>>> [    0.000000] memblock_alloc (mm/memblock.c:1097)
>>>> [    0.000000] kasan_alloc_shadow (mm/kasan/kasan.c:151)
>>>> [    0.000000] zone_sizes_init (arch/x86/mm/init.c:684)
>>>> [    0.000000] paging_init (arch/x86/mm/init_64.c:677)
>>>> [    0.000000] setup_arch (arch/x86/kernel/setup.c:1168)
>>>> [    0.000000] ? printk (kernel/printk/printk.c:1839)
>>>> [    0.000000] start_kernel (include/linux/mm_types.h:462 init/main.c:533)
>>>> [    0.000000] ? early_idt_handlers (arch/x86/kernel/head_64.S:344)
>>>> [    0.000000] x86_64_start_reservations (arch/x86/kernel/head64.c:194)
>>>> [    0.000000] x86_64_start_kernel (arch/x86/kernel/head64.c:183)
>>>>
>>>> It got better when I reduced memory to 1GB, but then my system just failed to boot
>>>> at all because that's not enough to bring everything up.
>>>>
>> Thanks.
>> I think the memory size is not a problem here. I tested on my desktop with 16G.
>> It seems to be a problem with the memory holes cited by Dave.
>> kasan tries to allocate ~3.5G, which means the lowmem size is 28G in your case.
> 
> That's correct (I've mistyped and got 18 instead of 28 above).
> 
> However, I'm a bit confused here: I thought the highmem/lowmem split was a 32-bit
> thing, so I'm not sure how it applies here.
> 
Right. By lowmem size here I mean the size of the direct
mapping of all physical memory (which is usually called lowmem on 32-bit systems).



> Anyways, the machine won't boot with more than 1GB of RAM, is there a solution to
> get KASAN running on my machine?
> 
Could you share your .config? I'll try to boot it myself. It could be that some options conflict with kasan.
The boot cmdline might also help.

> 
> Thanks,
> Sasha
> 
>
Andrey Ryabinin July 10, 2014, 1:50 p.m. UTC | #14
On 07/10/14 17:31, Sasha Levin wrote:
> On 07/10/2014 09:01 AM, Andrey Ryabinin wrote:
>> [...]
>> Thanks.
>> I think the memory size is not a problem here. I tested on my desktop with 16G.
>> It seems to be a problem with the memory holes cited by Dave.
>> kasan tries to allocate ~3.5G, which means the lowmem size is 28G in your case.
> 
> That's correct (I've mistyped and got 18 instead of 28 above).
> 
> However, I'm a bit confused here: I thought the highmem/lowmem split was a 32-bit
> thing, so I'm not sure how it applies here.
> 
> Anyways, the machine won't boot with more than 1GB of RAM, is there a solution to
> get KASAN running on my machine?
> 

Does it fail to boot with the same 'Failed to allocate' error?

> 
> Thanks,
> Sasha
> 
>
Sasha Levin July 10, 2014, 2:02 p.m. UTC | #15
On 07/10/2014 09:39 AM, Andrey Ryabinin wrote:
>> Anyways, the machine won't boot with more than 1GB of RAM, is there a solution to
>> > get KASAN running on my machine?
>> > 
> Could you share your .config? I'll try to boot it myself. It could be that some options conflict with kasan.
> The boot cmdline might also help.
> 

Sure. It's the .config I use for fuzzing so it's rather big (attached).

The cmdline is:

[    0.000000] Command line: noapic noacpi pci=conf1 reboot=k panic=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 console=ttyS0 earlyprintk=serial i8042.noaux=1 numa=fake=32 init=/virt/init zcache ftrace_dump_on_oops debugpat kvm.mmu_audit=1 slub_debug=FZPU rcutorture.rcutorture_runnable=0 loop.max_loop=64 zram.num_devices=4 rcutorture.nreaders=8 oops=panic nr_hugepages=1000 numa_balancing=enable softlockup_all_cpu_backtrace=1 root=/dev/root rw rootflags=rw,trans=virtio,version=9p2000.L rootfstype=9p init=/virt/init

And the memory map:

[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000ffffe] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000cfffffff] usable
[    0.000000] BIOS-e820: [mem 0x0000000100000000-0x0000000705ffffff] usable


On 07/10/2014 09:50 AM, Andrey Ryabinin wrote:
>> Anyways, the machine won't boot with more than 1GB of RAM, is there a solution to
>> > get KASAN running on my machine?
>> >
> Does it fail to boot with the same 'Failed to allocate' error?

I think I misunderstood your question here. With >1GB it triggers a panic() when
KASAN fails the memblock allocation. With <=1GB it fails a bit later in boot just
because 1GB isn't enough to load everything - so it fails in some other random
spot as it runs out of memory.


Thanks,
Sasha
Dave Hansen July 10, 2014, 3:55 p.m. UTC | #16
On 07/10/2014 05:12 AM, Andrey Ryabinin wrote:
> On 07/10/14 00:26, Dave Hansen wrote:
>> On 07/09/2014 04:29 AM, Andrey Ryabinin wrote:
>>> [...]
>>
>> How does this interact with vmalloc() addresses or those from a kmap()?
>> 
> It's used only for lowmem:
> 
> static inline bool addr_is_in_mem(unsigned long addr)
> {
> 	return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
> }

That's fine, and definitely covers the common cases.  Could you make
sure to call this out explicitly?  Also, there's nothing to _keep_ this
approach working for things out of the direct map, right?  It would just
be a matter of updating the shadow memory to have entries for the other
virtual address ranges.

addr_is_in_mem() is a pretty bad name for what it's doing. :)

I'd probably call it something like kasan_tracks_vaddr().
Andrey Ryabinin July 10, 2014, 7:04 p.m. UTC | #17
2014-07-10 18:02 GMT+04:00 Sasha Levin <sasha.levin@oracle.com>:
> On 07/10/2014 09:39 AM, Andrey Ryabinin wrote:
>>> Anyways, the machine won't boot with more than 1GB of RAM, is there a solution to
>>> > get KASAN running on my machine?
>>> >
>> Could you share your .config? I'll try to boot it myself. It could be that some options conflict with kasan.
>> The boot cmdline might also help.
>>
>
> Sure. It's the .config I use for fuzzing so it's rather big (attached).
>
> The cmdline is:
>
> [    0.000000] Command line: noapic noacpi pci=conf1 reboot=k panic=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 console=ttyS0 earlyprintk=serial i8042.noaux=1 numa=fake=32 init=/virt/init zcache ftrace_dump_on_oops debugpat kvm.mmu_audit=1 slub_debug=FZPU rcutorture.rcutorture_runnable=0 loop.max_loop=64 zram.num_devices=4 rcutorture.nreaders=8 oops=panic nr_hugepages=1000 numa_balancing=enable softlockup_all_cpu_backtrace=1 root=/dev/root rw rootflags=rw,trans=virtio,version=9p2000.L rootfstype=9p init=/virt/init
>
> [...]

Looks like I found where the problem is: memblock_alloc cannot allocate
across NUMA nodes, therefore kasan fails for numa=fake>=8.
You should succeed with numa=fake=7 or less.
Andrey Ryabinin July 10, 2014, 7:48 p.m. UTC | #18
2014-07-10 19:55 GMT+04:00 Dave Hansen <dave.hansen@intel.com>:
> On 07/10/2014 05:12 AM, Andrey Ryabinin wrote:
>> [...]
>
> That's fine, and definitely covers the common cases.  Could you make
> sure to call this out explicitly?  Also, there's nothing to _keep_ this
> approach working for things out of the direct map, right?  It would just
> be a matter of updating the shadow memory to have entries for the other
> virtual address ranges.

Why do you want shadow for things out of the direct map?
If you want to catch use-after-free in vmalloc then DEBUG_PAGEALLOC
will be enough.
If you want to catch out-of-bounds in vmalloc you don't need anything,
because vmalloc allocates a guard hole at the end.
Or do you want something else?

>
> addr_is_in_mem() is a pretty bad name for what it's doing. :)
>
> I'd probably call it something like kasan_tracks_vaddr().
>
Agree

Dave Hansen July 10, 2014, 8:04 p.m. UTC | #19
On 07/10/2014 12:48 PM, Andrey Ryabinin wrote:
>>>> How does this interact with vmalloc() addresses or those from a kmap()?
>>>>
>>> It's used only for lowmem:
>>>
>>> static inline bool addr_is_in_mem(unsigned long addr)
>>> {
>>>       return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
>>> }
>>
>> That's fine, and definitely covers the common cases.  Could you make
>> sure to call this out explicitly?  Also, there's nothing to _keep_ this
>> approach working for things out of the direct map, right?  It would just
>> be a matter of updating the shadow memory to have entries for the other
>> virtual address ranges.
> 
> Why do you want shadow for things out of the direct map? If you want
> to catch use-after-free in vmalloc then DEBUG_PAGEALLOC will be
> enough. If you want to catch out-of-bounds in vmalloc you don't need
> anything, because vmalloc allocates a guard hole at the end. Or do
> you want something else?

That's all true for page-size accesses.  Address sanitizer's biggest
advantage over using the page tables is that it can do checks at
sub-page granularity.  But, we don't have any APIs that I can think of
that _care_ about <PAGE_SIZE outside of the direct map (maybe zsmalloc,
but that's pretty obscure).

So I guess it doesn't matter.
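
To make the granularity point concrete (a toy sketch, not from the patch):
a kmalloc(123) object leaves its page mapped read-write, so page-table
based debugging can't flag a one-byte overrun, while the shadow encoding
marks the object's tail:

	char *p = kmalloc(123, GFP_KERNEL);

	p[123] = 0;	/* same page: DEBUG_PAGEALLOC is silent, KASAN reports it */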
diff mbox

Patch

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..141391ba
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,224 @@ 
+Kernel address sanitizer
+========================
+
+0. Overview
+===========
+
+Address sanitizer for kernel (KASAN) is a dynamic memory error detector. It
+provides a fast and comprehensive solution for finding use-after-free and
+out-of-bounds bugs.
+
+KASAN is better than CONFIG_DEBUG_PAGEALLOC, because it:
+ - is based on compiler instrumentation (fast),
+ - detects out-of-bounds accesses for both writes and reads,
+ - provides use-after-free detection,
+ - prints informative reports.
+
+KASAN uses compiler instrumentation to check every memory access, therefore you
+will need a special compiler: GCC >= 4.10.0.
+
+Currently KASAN is supported on the x86/x86_64/arm architectures and requires the
+kernel to be built with the SLUB allocator.
+
+1. Usage
+=========
+
+KASAN requires the kernel to be built with a special compiler (GCC >= 4.10.0).
+
+To enable KASAN, configure the kernel with:
+
+	  CONFIG_KASAN = y
+
+and, to instrument the entire kernel:
+
+	  CONFIG_KASAN_SANITIZE_ALL = y
+
+Currently KASAN works only with SLUB. It is highly recommended to run KASAN with
+CONFIG_SLUB_DEBUG=y and 'slub_debug=U'. This enables user tracking (free and alloc
+traces). There is no need to enable redzoning, since KASAN detects accesses to the
+user tracking structs, so they effectively act as redzones.
+
+To enable instrumentation for only specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+        For a single file (e.g. main.o):
+                KASAN_SANITIZE_main.o := y
+
+        For all files in one directory:
+                KASAN_SANITIZE := y
+
+To exclude files from being instrumented even when CONFIG_KASAN_SANITIZE_ALL
+is specified, use:
+
+                KASAN_SANITIZE_main.o := n
+        and:
+                KASAN_SANITIZE := n
+
+Only files which are linked to the main kernel image or are compiled as
+kernel modules are supported by this mechanism.
+
+
+1.1 Error reports
+=================
+
+A typical buffer overflow report looks like this:
+
+==================================================================
+AddressSanitizer: buffer overflow in kasan_kmalloc_oob_rigth+0x6a/0x7a at addr c6006f1b
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kasan_kmalloc_oob_rigth+0x2c/0x7a age=5 cpu=0 pid=1
+	__slab_alloc.constprop.72+0x64f/0x680
+	kmem_cache_alloc+0xa8/0xe0
+	kasan_kmalloc_oob_rigth+0x2c/0x7a
+	kasan_tests_init+0x8/0xc
+	do_one_initcall+0x85/0x1a0
+	kernel_init_freeable+0x1f1/0x279
+	kernel_init+0x8/0xd0
+	ret_from_kernel_thread+0x21/0x30
+INFO: Slab 0xc7f3d0c0 objects=14 used=2 fp=0xc6006120 flags=0x5000080
+INFO: Object 0xc6006ea0 @offset=3744 fp=0xc6006d80
+
+Bytes b4 c6006e90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ea0: 80 6d 00 c6 00 00 00 00 00 00 00 00 00 00 00 00  .m..............
+Object c6006eb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ec0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ed0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ee0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006ef0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+Object c6006f10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
+CPU: 0 PID: 1 Comm: swapper/0 Tainted: G    B          3.16.0-rc3-next-20140704+ #216
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
+ 00000000 00000000 c6006ea0 c6889e30 c1c4446f c6801b40 c6889e48 c11c3f32
+ c6006000 c6801b40 c7f3d0c0 c6006ea0 c6889e68 c11c4ff5 c6801b40 c1e44906
+ c1e11352 c7f3d0c0 c6889efc c6801b40 c6889ef4 c11ccb78 c1e11352 00000286
+Call Trace:
+ [<c1c4446f>] dump_stack+0x4b/0x75
+ [<c11c3f32>] print_trailer+0xf2/0x180
+ [<c11c4ff5>] object_err+0x25/0x30
+ [<c11ccb78>] kasan_report_error+0xf8/0x380
+ [<c1c57940>] ? need_resched+0x21/0x25
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c1f82763>] ? kasan_kmalloc_oob_rigth+0x7a/0x7a
+ [<c11cbacc>] __asan_store1+0x9c/0xa0
+ [<c1f82753>] ? kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f82753>] kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f8276b>] kasan_tests_init+0x8/0xc
+ [<c1000435>] do_one_initcall+0x85/0x1a0
+ [<c1f6f508>] ? repair_env_string+0x23/0x66
+ [<c1f6f4e5>] ? initcall_blacklist+0x85/0x85
+ [<c10c9883>] ? parse_args+0x33/0x450
+ [<c1f6fdb7>] kernel_init_freeable+0x1f1/0x279
+ [<c1000558>] kernel_init+0x8/0xd0
+ [<c1c578c1>] ret_from_kernel_thread+0x21/0x30
+ [<c1000550>] ? do_one_initcall+0x1a0/0x1a0
+Write of size 1 by thread T1:
+Memory state around the buggy address:
+ c6006c80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e80: fd fd fd fd 00 00 00 00 00 00 00 00 00 00 00 00
+>c6006f00: 00 00 00 03 fc fc fc fc fc fc fc fc fc fc fc fc
+                    ^
+ c6006f80: fc fc fc fc fc fc fc fc fd fd fd fd fd fd fd fd
+ c6007000: 00 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc
+ c6007080: fc fc fc fc fc fc fc fc fc fc fc fc fc 00 00 00
+ c6007100: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
+ c6007180: fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00 00
+==================================================================
+
+In the last section the report shows the memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each KASAN_SHADOW_SCALE_SIZE bytes of memory can be marked as addressable,
+partially addressable, freed, or part of a redzone.
+If bytes are marked as addressable, that means that they belong to some
+allocated memory block and it is possible to read or modify any of these
+bytes. Addressable KASAN_SHADOW_SCALE_SIZE bytes are marked by 0 in the report.
+When only the first N bytes of KASAN_SHADOW_SCALE_SIZE belong to an allocated
+memory block, these bytes are partially addressable and are marked by 'N'.
+
+Markers of inaccessible bytes can be found in the mm/kasan/kasan.h header:
+
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
+#define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_REDZONE      0xFD  /* Slab page redzone, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_SLAB_FREE         0xFA  /* free slab page */
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+In the report above the arrow points to the shadow byte 03, which means that the
+accessed address is only partially addressable.
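+
+For example, in the report above the bad address is c6006f1b: offset 3
+within the 8-byte region starting at c6006f18. Its shadow byte is 03, so
+only the first 3 bytes of that region are addressable, and the reported
+1-byte write hit the first byte past the object's data.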
+
+
+2. Implementation details
+=========================
+
+2.1. Shadow memory
+==================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use instrumentation to check the shadow memory on each memory
+access.
+
+AddressSanitizer dedicates one-eighth of the low memory to its shadow
+memory and uses direct mapping with a scale and offset to translate a memory
+address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow
+address:
+
+unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_START;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
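+
+For example, assuming the 32-bit layout from the report above with
+PAGE_OFFSET == 0xc0000000, the buggy address 0xc6006f1b maps to the shadow
+byte at KASAN_SHADOW_START + ((0xc6006f1b - 0xc0000000) >> 3), i.e.
+KASAN_SHADOW_START + 0xc00de3.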
+
+The figure below shows the address space layout. The memory is split
+into two parts (low and high) which map to the corresponding shadow regions.
+Applying the shadow mapping to addresses in the shadow region gives us
+addresses in the Bad region.
+
+|--------|        |--------|
+| Memory |----    | Memory |
+|--------|    \   |--------|
+| Shadow |--   -->| Shadow |
+|--------|  \     |--------|
+|   Bad  |   ---->|  Bad   |
+|--------|  /     |--------|
+| Shadow |--   -->| Shadow |
+|--------|    /   |--------|
+| Memory |----    | Memory |
+|--------|        |--------|
+
+Each shadow byte corresponds to 8 bytes of the main memory. We use the
+following encoding for each shadow byte: 0 means that all 8 bytes of the
+corresponding memory region are addressable; k (1 <= k <= 7) means that
+the first k bytes are addressable, and other (8 - k) bytes are not;
+any negative value indicates that the entire 8-byte word is unaddressable.
+We use different negative values to distinguish between different kinds of
+unaddressable memory (redzones, freed memory) (see mm/kasan/kasan.h).
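+
+For example, after kmalloc(13) the shadow of the object's data consists of
+the two bytes 00 05: all of the first 8 bytes are addressable, and only the
+first 5 bytes of the second 8-byte region are.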
+
+Poisoning or unpoisoning a byte in the main memory means writing some special
+value into the corresponding shadow memory. This value indicates whether the
+byte is addressable or not.
+
+
+2.2. Instrumentation
+====================
+
+Since some functions which do memory accesses (such as memset, memmove, memcpy)
+are written in assembly, the compiler can't instrument them.
+Therefore we replace these functions with our own instrumented functions
+(kasan_memset, kasan_memcpy, kasan_memmove).
+In some circumstances you may need to use the original functions;
+in that case, insert #undef KASAN_HOOKS before the includes.
+
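+A minimal sketch of how such a replacement could be wired up (the exact
+macro definitions are an assumption here - they are not part of this
+patch):
+
+	#ifdef KASAN_HOOKS
+	#undef memcpy
+	#define memcpy(dst, src, len) kasan_memcpy(dst, src, len)
+	#endif
+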
diff --git a/Makefile b/Makefile
index 64ab7b3..08a07f2 100644
--- a/Makefile
+++ b/Makefile
@@ -384,6 +384,12 @@  LDFLAGS_MODULE  =
 CFLAGS_KERNEL	=
 AFLAGS_KERNEL	=
 CFLAGS_GCOV	= -fprofile-arcs -ftest-coverage
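+# Outline instrumentation only: with a call threshold of 0 every access is
+# checked via __asan_load*/__asan_store* calls; the stack, globals and
+# memintrinsic checks are disabled for now.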
+CFLAGS_KASAN	= -fsanitize=address --param asan-stack=0 \
+			--param asan-use-after-return=0 \
+			--param asan-globals=0 \
+			--param asan-memintrin=0 \
+			--param asan-instrumentation-with-call-threshold=0 \
+			-DKASAN_HOOKS
 
 
 # Use USERINCLUDE when you must reference the UAPI directories only.
@@ -428,7 +434,7 @@  export MAKE AWK GENKSYMS INSTALLKERNEL PERL UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
diff --git a/commit b/commit
new file mode 100644
index 0000000..134f4dd
--- /dev/null
+++ b/commit
@@ -0,0 +1,3 @@ 
+
+I'm working on address sanitizer for kernel.
+fuck this bloody.
\ No newline at end of file
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..7efc3eb
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,33 @@ 
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+
+void unpoison_shadow(const void *address, size_t size);
+
+void kasan_enable_local(void);
+void kasan_disable_local(void);
+
+/* Reserves shadow memory. */
+void kasan_alloc_shadow(void);
+void kasan_init_shadow(void);
+
+#else /* CONFIG_KASAN */
+
+static inline void unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_local(void) {}
+static inline void kasan_disable_local(void) {}
+
+/* Reserves shadow memory. */
+static inline void kasan_init_shadow(void) {}
+static inline void kasan_alloc_shadow(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 322d4fc..286650a 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1471,6 +1471,10 @@  struct task_struct {
 	gfp_t lockdep_reclaim_gfp;
 #endif
 
+#ifdef CONFIG_KASAN
+	int kasan_depth;
+#endif
+
 /* journalling filesystem info */
 	void *journal_info;
 
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index cf9cf82..67a4dfc 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -611,6 +611,8 @@  config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kmemcheck"
 
+source "lib/Kconfig.kasan"
+
 endmenu # "Memory Debugging"
 
 config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..2bfff78
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,20 @@ 
+config HAVE_ARCH_KASAN
+	bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+	bool "AddressSanitizer: dynamic memory error detector"
+	default n
+	help
+	  Enables AddressSanitizer - a dynamic memory error detector
+	  that finds out-of-bounds and use-after-free bugs.
+
+config KASAN_SANITIZE_ALL
+	bool "Instrument entire kernel"
+	depends on KASAN
+	default y
+	help
+	  This enables compiler instrumentation for the entire kernel.
+
+endif
diff --git a/mm/Makefile b/mm/Makefile
index e4a97bd..dbe9a22 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -64,3 +64,4 @@  obj-$(CONFIG_ZPOOL)	+= zpool.o
 obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
 obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
 obj-$(CONFIG_CMA)	+= cma.o
+obj-$(CONFIG_KASAN)	+= kasan/
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..46d44bb
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,3 @@ 
+KASAN_SANITIZE := n
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..e2cd345
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,292 @@ 
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+#include <linux/memcontrol.h>
+
+#include "kasan.h"
+#include "../slab.h"
+
+static bool __read_mostly kasan_initialized;
+
+unsigned long kasan_shadow_start;
+unsigned long kasan_shadow_end;
+
+/* equals kasan_shadow_start - (PAGE_OFFSET >> KASAN_SHADOW_SCALE_SHIFT) */
+unsigned long __read_mostly kasan_shadow_offset;
+
+
+static inline bool addr_is_in_mem(unsigned long addr)
+{
+	return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
+}
+
+void kasan_enable_local(void)
+{
+	if (likely(kasan_initialized))
+		current->kasan_depth--;
+}
+
+void kasan_disable_local(void)
+{
+	if (likely(kasan_initialized))
+		current->kasan_depth++;
+}
+
+static inline bool kasan_enabled(void)
+{
+	return likely(kasan_initialized
+		&& !current->kasan_depth);
+}
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void poison_shadow(const void *address, size_t size, u8 value)
+{
+	unsigned long shadow_start, shadow_end;
+	unsigned long addr = (unsigned long)address;
+
+	shadow_start = kasan_mem_to_shadow(addr);
+	shadow_end = kasan_mem_to_shadow(addr + size);
+
+	memset((void *)shadow_start, value, shadow_end - shadow_start);
+}
+
+void unpoison_shadow(const void *address, size_t size)
+{
+	poison_shadow(address, size, 0);
+
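+	/*
+	 * A size that is not a multiple of KASAN_SHADOW_SCALE_SIZE leaves a
+	 * partially addressable tail: its shadow byte records how many
+	 * leading bytes of the last 8-byte region are valid.
+	 */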
+	if (size & KASAN_SHADOW_MASK) {
+		u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
+						+ size);
+		*shadow = size & KASAN_SHADOW_MASK;
+	}
+}
+
+static __always_inline bool address_is_poisoned(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
+
+	if (shadow_value != 0) {
+		s8 last_byte = addr & KASAN_SHADOW_MASK;
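+		/*
+		 * For a partially addressable region (1 <= shadow_value <= 7)
+		 * the access is bad once the offset within the 8-byte region
+		 * reaches shadow_value; a negative shadow_value makes the
+		 * comparison true for any offset.
+		 */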
+		return last_byte >= shadow_value;
+	}
+	return false;
+}
+
+static __always_inline unsigned long memory_is_poisoned(unsigned long addr,
+							size_t size)
+{
+	unsigned long end = addr + size;
+	for (; addr < end; addr++)
+		if (unlikely(address_is_poisoned(addr)))
+			return addr;
+	return 0;
+}
+
+static __always_inline void check_memory_region(unsigned long addr,
+						size_t size, bool write)
+{
+	unsigned long access_addr;
+	struct access_info info;
+
+	if (!kasan_enabled())
+		return;
+
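+	/*
+	 * An address below TASK_SIZE means instrumented kernel code
+	 * dereferenced a userspace pointer directly; report it as a
+	 * user-memory-access instead of consulting the shadow.
+	 */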
+	if (unlikely(addr < TASK_SIZE)) {
+		info.access_addr = addr;
+		info.access_size = size;
+		info.is_write = write;
+		info.ip = _RET_IP_;
+		kasan_report_user_access(&info);
+		return;
+	}
+
+	if (!addr_is_in_mem(addr))
+		return;
+
+	access_addr = memory_is_poisoned(addr, size);
+	if (likely(access_addr == 0))
+		return;
+
+	info.access_addr = access_addr;
+	info.access_size = size;
+	info.is_write = write;
+	info.ip = _RET_IP_;
+	kasan_report_error(&info);
+}
+
+void __init kasan_alloc_shadow(void)
+{
+	unsigned long lowmem_size = (unsigned long)high_memory - PAGE_OFFSET;
+	unsigned long shadow_size;
+	phys_addr_t shadow_phys_start;
+
+	shadow_size = lowmem_size >> KASAN_SHADOW_SCALE_SHIFT;
+
+	shadow_phys_start = memblock_alloc(shadow_size, PAGE_SIZE);
+	if (!shadow_phys_start) {
+		pr_err("Unable to reserve shadow memory\n");
+		return;
+	}
+
+	kasan_shadow_start = (unsigned long)phys_to_virt(shadow_phys_start);
+	kasan_shadow_end = kasan_shadow_start + shadow_size;
+
+	pr_info("reserved shadow memory: [0x%lx - 0x%lx]\n",
+		kasan_shadow_start, kasan_shadow_end);
+	kasan_shadow_offset = kasan_shadow_start -
+		(PAGE_OFFSET >> KASAN_SHADOW_SCALE_SHIFT);
+}
+
+void __init kasan_init_shadow(void)
+{
+	if (kasan_shadow_start) {
+		unpoison_shadow((void *)PAGE_OFFSET,
+				(size_t)(kasan_shadow_start - PAGE_OFFSET));
+		poison_shadow((void *)kasan_shadow_start,
+			kasan_shadow_end - kasan_shadow_start,
+			KASAN_SHADOW_GAP);
+		unpoison_shadow((void *)kasan_shadow_end,
+				(size_t)(high_memory - kasan_shadow_end));
+		kasan_initialized = true;
+		pr_info("shadow memory initialized\n");
+	}
+}
+
+void *kasan_memcpy(void *dst, const void *src, size_t len)
+{
+	if (unlikely(len == 0))
+		return dst;
+
+	check_memory_region((unsigned long)src, len, false);
+	check_memory_region((unsigned long)dst, len, true);
+
+	return memcpy(dst, src, len);
+}
+EXPORT_SYMBOL(kasan_memcpy);
+
+void *kasan_memset(void *ptr, int val, size_t len)
+{
+	if (unlikely(len == 0))
+		return ptr;
+
+	check_memory_region((unsigned long)ptr, len, true);
+
+	return memset(ptr, val, len);
+}
+EXPORT_SYMBOL(kasan_memset);
+
+void *kasan_memmove(void *dst, const void *src, size_t len)
+{
+	if (unlikely(len == 0))
+		return dst;
+
+	check_memory_region((unsigned long)src, len, false);
+	check_memory_region((unsigned long)dst, len, true);
+
+	return memmove(dst, src, len);
+}
+EXPORT_SYMBOL(kasan_memmove);
+
+void __asan_load1(unsigned long addr)
+{
+	check_memory_region(addr, 1, false);
+}
+EXPORT_SYMBOL(__asan_load1);
+
+void __asan_load2(unsigned long addr)
+{
+	check_memory_region(addr, 2, false);
+}
+EXPORT_SYMBOL(__asan_load2);
+
+void __asan_load4(unsigned long addr)
+{
+	check_memory_region(addr, 4, false);
+}
+EXPORT_SYMBOL(__asan_load4);
+
+void __asan_load8(unsigned long addr)
+{
+	check_memory_region(addr, 8, false);
+}
+EXPORT_SYMBOL(__asan_load8);
+
+void __asan_load16(unsigned long addr)
+{
+	check_memory_region(addr, 16, false);
+}
+EXPORT_SYMBOL(__asan_load16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+void __asan_store1(unsigned long addr)
+{
+	check_memory_region(addr, 1, true);
+}
+EXPORT_SYMBOL(__asan_store1);
+
+void __asan_store2(unsigned long addr)
+{
+	check_memory_region(addr, 2, true);
+}
+EXPORT_SYMBOL(__asan_store2);
+
+void __asan_store4(unsigned long addr)
+{
+	check_memory_region(addr, 4, true);
+}
+EXPORT_SYMBOL(__asan_store4);
+
+void __asan_store8(unsigned long addr)
+{
+	check_memory_region(addr, 8, true);
+}
+EXPORT_SYMBOL(__asan_store8);
+
+void __asan_store16(unsigned long addr)
+{
+	check_memory_region(addr, 16, true);
+}
+EXPORT_SYMBOL(__asan_store16);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+	check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+/* to shut up compiler complaints */
+void __asan_init_v3(void) {}
+EXPORT_SYMBOL(__asan_init_v3);
+
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..711ae4f
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,36 @@ 
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
+
+#define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
+
+struct access_info {
+	unsigned long access_addr;
+	size_t access_size;
+	bool is_write;
+	unsigned long ip;
+};
+
+extern unsigned long kasan_shadow_start;
+extern unsigned long kasan_shadow_end;
+extern unsigned long kasan_shadow_offset;
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+	return (addr >> KASAN_SHADOW_SCALE_SHIFT)
+		+ kasan_shadow_offset;
+}
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+	return ((shadow_addr - kasan_shadow_start)
+		<< KASAN_SHADOW_SCALE_SHIFT) + PAGE_OFFSET;
+}
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..2430e05
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,157 @@ 
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ *        Andrey Konovalov <andreyknvl@google.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/kasan.h>
+#include <linux/memcontrol.h> /* for ../slab.h */
+
+#include "kasan.h"
+#include "../slab.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 5
+
+static inline void *virt_to_obj(struct kmem_cache *s, void *slab_start, void *x)
+{
+	return x - ((x - slab_start) % s->size);
+}
+
+static void print_error_description(struct access_info *info)
+{
+	const char *bug_type = "unknown crash";
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->access_addr);
+
+	switch (shadow_val) {
+	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+		bug_type = "buffer overflow";
+		break;
+	case KASAN_SHADOW_GAP:
+		bug_type = "wild memory access";
+		break;
+	}
+
+	pr_err("AddressSanitizer: %s in %pS at addr %p\n",
+		bug_type, (void *)info->ip,
+		(void *)info->access_addr);
+}
+
+static void print_address_description(struct access_info *info)
+{
+	void *object;
+	struct kmem_cache *cache;
+	void *slab_start;
+	struct page *page;
+	u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->access_addr);
+
+	page = virt_to_page(info->access_addr);
+
+	switch (shadow_val) {
+	case KASAN_SHADOW_GAP:
+		pr_err("No metainfo is available for this access.\n");
+		dump_stack();
+		break;
+	default:
+		WARN_ON(1);
+	}
+
+	pr_err("%s of size %zu by thread T%d:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->pid);
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+	return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static void print_shadow_pointer(unsigned long row, unsigned long shadow,
+				 char *output)
+{
+	/* The length of ">ff00ff00ff00ff00: " is 3 + (BITS_PER_LONG/8)*2 chars. */
+	unsigned long space_count = 3 + (BITS_PER_LONG >> 2) + (shadow - row)*2 +
+		(shadow - row) / SHADOW_BYTES_PER_BLOCK;
+	unsigned long i;
+
+	for (i = 0; i < space_count; i++)
+		output[i] = ' ';
+	output[space_count] = '^';
+	output[space_count + 1] = '\0';
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+	int i;
+	unsigned long shadow = kasan_mem_to_shadow(addr);
+	unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+		- SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+	pr_err("Memory state around the buggy address:\n");
+
+	for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+		unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+		char buffer[100];
+
+		snprintf(buffer, sizeof(buffer),
+			(i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+		print_hex_dump(KERN_ERR, buffer,
+			DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+			(void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+
+		if (row_is_guilty(aligned_shadow, shadow)) {
+			print_shadow_pointer(aligned_shadow, shadow, buffer);
+			pr_err("%s\n", buffer);
+		}
+		aligned_shadow += SHADOW_BYTES_PER_ROW;
+	}
+}
+
+void kasan_report_error(struct access_info *info)
+{
+	kasan_disable_local();
+	pr_err("================================="
+		"=================================\n");
+	print_error_description(info);
+	print_address_description(info);
+	print_shadow_for_address(info->access_addr);
+	pr_err("================================="
+		"=================================\n");
+	kasan_enable_local();
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+	kasan_disable_local();
+	pr_err("================================="
+		"=================================\n");
+	pr_err("AddressSanitizer: user-memory-access on address %lx\n",
+		info->access_addr);
+	pr_err("%s of size %zu by thread T%d:\n",
+		info->is_write ? "Write" : "Read",
+		info->access_size, current->pid);
+	dump_stack();
+	pr_err("================================="
+		"=================================\n");
+	kasan_enable_local();
+}
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 260bf8a..2bec69e 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@  _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_GCOV))
 endif
 
+#
+# Enable address sanitizer flags for the kernel, except for files or
+# directories we don't want to check (controlled by the variables
+# KASAN_SANITIZE_obj.o, KASAN_SANITIZE and CONFIG_KASAN_SANITIZE_ALL)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN_SANITIZE_ALL)), \
+		$(CFLAGS_KASAN))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').