
[3/4] arm64/kasan: don't allocate extra shadow memory

Message ID 20170601162338.23540-3-aryabinin@virtuozzo.com (mailing list archive)
State New, archived

Commit Message

Andrey Ryabinin June 1, 2017, 4:23 p.m. UTC
We used to read several bytes of the shadow memory in advance.
Therefore additional shadow memory was mapped to prevent a crash if
a speculative load happened near the end of the mapped shadow memory.

Now we don't have such speculative loads, so we no longer need to map
additional shadow memory.

Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/mm/kasan_init.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)
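
For context, the address-to-shadow translation used here is roughly the
following (a sketch of the generic kasan_mem_to_shadow() helper, which this
patch does not modify; on arm64 KASAN_SHADOW_SCALE_SHIFT is 3, so one shadow
byte covers eight bytes of memory). The shadow for [start, end) is exactly
[shadow(start), shadow(end)), so mapping up to kasan_mem_to_shadow(end) is
sufficient once no check reads past the shadow of the last byte actually
accessed:

    /* sketch of the generic helper; constants are the arm64 defaults */
    static inline void *kasan_mem_to_shadow(const void *addr)
    {
            return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                    + KASAN_SHADOW_OFFSET;
    }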

Comments

Mark Rutland June 1, 2017, 4:34 p.m. UTC | #1
On Thu, Jun 01, 2017 at 07:23:37PM +0300, Andrey Ryabinin wrote:
> We used to read several bytes of the shadow memory in advance.
> Therefore additional shadow memory mapped to prevent crash if
> speculative load would happen near the end of the mapped shadow memory.
>
> Now we don't have such speculative loads, so we no longer need to map
> additional shadow memory.

I see that patch 1 fixed up the Linux helpers for outline
instrumentation.

Just to check, is it also true that the inline instrumentation never
performs unaligned accesses to the shadow memory?

If so, this looks good to me; it also avoids a potential fencepost issue
when memory exists right at the end of the linear map. Assuming that
holds:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Thanks,
Mark.

>
> Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: linux-arm-kernel@lists.infradead.org
> ---
>  arch/arm64/mm/kasan_init.c | 8 +-------
>  1 file changed, 1 insertion(+), 7 deletions(-)
>
> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index 687a358a3733..81f03959a4ab 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -191,14 +191,8 @@ void __init kasan_init(void)
>               if (start >= end)
>                       break;
>
> -             /*
> -              * end + 1 here is intentional. We check several shadow bytes in
> -              * advance to slightly speed up fastpath. In some rare cases
> -              * we could cross boundary of mapped shadow, so we just map
> -              * some more here.
> -              */
>               vmemmap_populate((unsigned long)kasan_mem_to_shadow(start),
> -                             (unsigned long)kasan_mem_to_shadow(end) + 1,
> +                             (unsigned long)kasan_mem_to_shadow(end),
>                               pfn_to_nid(virt_to_pfn(start)));
>       }
>
> --
> 2.13.0
>
>
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
Dmitry Vyukov June 1, 2017, 4:45 p.m. UTC | #2
On Thu, Jun 1, 2017 at 6:34 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> On Thu, Jun 01, 2017 at 07:23:37PM +0300, Andrey Ryabinin wrote:
>> We used to read several bytes of the shadow memory in advance.
>> Therefore additional shadow memory mapped to prevent crash if
>> speculative load would happen near the end of the mapped shadow memory.
>>
>> Now we don't have such speculative loads, so we no longer need to map
>> additional shadow memory.
>
> I see that patch 1 fixed up the Linux helpers for outline
> instrumentation.
>
> Just to check, is it also true that the inline instrumentation never
> performs unaligned accesses to the shadow memory?

Inline instrumentation generally accesses only a single byte.
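
For an access of size <= 8, the compiler-inserted check is conceptually
equivalent to the sketch below (an illustration only, not the literal
GCC-emitted code): it reads exactly one shadow byte - the one covering
'addr' - and never the byte after it.

    static __always_inline void inline_check_sketch(unsigned long addr,
                                                    size_t size)
    {
            s8 shadow_val = *(s8 *)((addr >> KASAN_SHADOW_SCALE_SHIFT)
                                    + KASAN_SHADOW_OFFSET);

            if (unlikely(shadow_val)) {
                    /* last byte of the access within its 8-byte granule */
                    int last_byte = (addr & KASAN_SHADOW_MASK) + size - 1;

                    /* negative (poison) shadow values always report */
                    if (last_byte >= shadow_val)
                            /* real code calls a size/type-specific helper */
                            __asan_report_load8_noabort(addr);
            }
    }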

> If so, this looks good to me; it also avoids a potential fencepost issue
> when memory exists right at the end of the linear map. Assuming that
> holds:
>
> Acked-by: Mark Rutland <mark.rutland@arm.com>
>
> Thanks,
> Mark.
>
>>
>> Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> Cc: linux-arm-kernel@lists.infradead.org
>> ---
>>  arch/arm64/mm/kasan_init.c | 8 +-------
>>  1 file changed, 1 insertion(+), 7 deletions(-)
>>
>> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
>> index 687a358a3733..81f03959a4ab 100644
>> --- a/arch/arm64/mm/kasan_init.c
>> +++ b/arch/arm64/mm/kasan_init.c
>> @@ -191,14 +191,8 @@ void __init kasan_init(void)
>>               if (start >= end)
>>                       break;
>>
>> -             /*
>> -              * end + 1 here is intentional. We check several shadow bytes in
>> -              * advance to slightly speed up fastpath. In some rare cases
>> -              * we could cross boundary of mapped shadow, so we just map
>> -              * some more here.
>> -              */
>>               vmemmap_populate((unsigned long)kasan_mem_to_shadow(start),
>> -                             (unsigned long)kasan_mem_to_shadow(end) + 1,
>> +                             (unsigned long)kasan_mem_to_shadow(end),
>>                               pfn_to_nid(virt_to_pfn(start)));
>>       }
>>
>> --
>> 2.13.0
>>
>>
>> _______________________________________________
>> linux-arm-kernel mailing list
>> linux-arm-kernel@lists.infradead.org
>> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
Mark Rutland June 1, 2017, 4:52 p.m. UTC | #3
On Thu, Jun 01, 2017 at 06:45:32PM +0200, Dmitry Vyukov wrote:
> On Thu, Jun 1, 2017 at 6:34 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> > On Thu, Jun 01, 2017 at 07:23:37PM +0300, Andrey Ryabinin wrote:
> >> We used to read several bytes of the shadow memory in advance.
> >> Therefore additional shadow memory mapped to prevent crash if
> >> speculative load would happen near the end of the mapped shadow memory.
> >>
> >> Now we don't have such speculative loads, so we no longer need to map
> >> additional shadow memory.
> >
> > I see that patch 1 fixed up the Linux helpers for outline
> > instrumentation.
> >
> > Just to check, is it also true that the inline instrumentation never
> > performs unaligned accesses to the shadow memory?
> 
> Inline instrumentation generally accesses only a single byte.

Sorry to be a little pedantic, but does that mean we'll never access the
additional shadow, or does that mean it's very unlikely that we will?

I'm guessing/hoping it's the former!

Thanks,
Mark.
Andrey Ryabinin June 1, 2017, 4:59 p.m. UTC | #4
On 06/01/2017 07:52 PM, Mark Rutland wrote:
> On Thu, Jun 01, 2017 at 06:45:32PM +0200, Dmitry Vyukov wrote:
>> On Thu, Jun 1, 2017 at 6:34 PM, Mark Rutland <mark.rutland@arm.com> wrote:
>>> On Thu, Jun 01, 2017 at 07:23:37PM +0300, Andrey Ryabinin wrote:
>>>> We used to read several bytes of the shadow memory in advance.
>>>> Therefore additional shadow memory mapped to prevent crash if
>>>> speculative load would happen near the end of the mapped shadow memory.
>>>>
>>>> Now we don't have such speculative loads, so we no longer need to map
>>>> additional shadow memory.
>>>
>>> I see that patch 1 fixed up the Linux helpers for outline
>>> instrumentation.
>>>
>>> Just to check, is it also true that the inline instrumentation never
>>> performs unaligned accesses to the shadow memory?
>>

Correct, inline instrumentation assumes that all accesses are properly aligned, as
required by the C standard. I knew that the kernel violates this rule in many places,
therefore I decided to add checks for unaligned accesses in the outline case.
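
To make that concrete, here is a simplified sketch of the outline range check
after this series (hypothetical helper name, not the exact mm/kasan code): the
last shadow byte it reads corresponds to addr + size - 1, so it never touches
the shadow byte after the accessed range, even for unaligned addresses and
sizes.

    static bool range_is_poisoned_sketch(const void *addr, size_t size)
    {
            const u8 *shadow = kasan_mem_to_shadow(addr);
            const u8 *last = kasan_mem_to_shadow(addr + size - 1);

            /* walk only the shadow bytes covering [addr, addr + size - 1] */
            for (; shadow <= last; shadow++)
                    if (*shadow)
                            /* real code also handles partially accessible
                             * (non-zero but valid) last granules */
                            return true;

            return false;
    }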


>> Inline instrumentation generally accesses only a single byte.
> 
> Sorry to be a little pedantic, but does that mean we'll never access the
> additional shadow, or does that mean it's very unlikely that we will?
> 
> I'm guessing/hoping it's the former!
> 

Outline will never access additional shadow byte: https://github.com/google/sanitizers/wiki/AddressSanitizerAlgorithm#unaligned-accesses

> Thanks,
> Mark.
>
Andrey Ryabinin June 1, 2017, 5 p.m. UTC | #5
On 06/01/2017 07:59 PM, Andrey Ryabinin wrote:
> 
> 
> On 06/01/2017 07:52 PM, Mark Rutland wrote:
>> On Thu, Jun 01, 2017 at 06:45:32PM +0200, Dmitry Vyukov wrote:
>>> On Thu, Jun 1, 2017 at 6:34 PM, Mark Rutland <mark.rutland@arm.com> wrote:
>>>> On Thu, Jun 01, 2017 at 07:23:37PM +0300, Andrey Ryabinin wrote:
>>>>> We used to read several bytes of the shadow memory in advance.
>>>>> Therefore additional shadow memory mapped to prevent crash if
>>>>> speculative load would happen near the end of the mapped shadow memory.
>>>>>
>>>>> Now we don't have such speculative loads, so we no longer need to map
>>>>> additional shadow memory.
>>>>
>>>> I see that patch 1 fixed up the Linux helpers for outline
>>>> instrumentation.
>>>>
>>>> Just to check, is it also true that the inline instrumentation never
>>>> performs unaligned accesses to the shadow memory?
>>>
> 
> Correct, inline instrumentation assumes that all accesses are properly aligned as it
> required by C standard. I knew that the kernel violates this rule in many places,
> therefore I decided to add checks for unaligned accesses in outline case.
> 
> 
>>> Inline instrumentation generally accesses only a single byte.
>>
>> Sorry to be a little pedantic, but does that mean we'll never access the
>> additional shadow, or does that mean it's very unlikely that we will?
>>
>> I'm guessing/hoping it's the former!
>>
> 
> Outline will never access additional shadow byte: https://github.com/google/sanitizers/wiki/AddressSanitizerAlgorithm#unaligned-accesses

s/Outline/inline  of course.

> 
>> Thanks,
>> Mark.
>>
Dmitry Vyukov June 1, 2017, 5:05 p.m. UTC | #6
On Thu, Jun 1, 2017 at 7:00 PM, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
>
>
> On 06/01/2017 07:59 PM, Andrey Ryabinin wrote:
>>
>>
>> On 06/01/2017 07:52 PM, Mark Rutland wrote:
>>> On Thu, Jun 01, 2017 at 06:45:32PM +0200, Dmitry Vyukov wrote:
>>>> On Thu, Jun 1, 2017 at 6:34 PM, Mark Rutland <mark.rutland@arm.com> wrote:
>>>>> On Thu, Jun 01, 2017 at 07:23:37PM +0300, Andrey Ryabinin wrote:
>>>>>> We used to read several bytes of the shadow memory in advance.
>>>>>> Therefore additional shadow memory mapped to prevent crash if
>>>>>> speculative load would happen near the end of the mapped shadow memory.
>>>>>>
>>>>>> Now we don't have such speculative loads, so we no longer need to map
>>>>>> additional shadow memory.
>>>>>
>>>>> I see that patch 1 fixed up the Linux helpers for outline
>>>>> instrumentation.
>>>>>
>>>>> Just to check, is it also true that the inline instrumentation never
>>>>> performs unaligned accesses to the shadow memory?
>>>>
>>
>> Correct, inline instrumentation assumes that all accesses are properly aligned as it
>> required by C standard. I knew that the kernel violates this rule in many places,
>> therefore I decided to add checks for unaligned accesses in outline case.
>>
>>
>>>> Inline instrumentation generally accesses only a single byte.
>>>
>>> Sorry to be a little pedantic, but does that mean we'll never access the
>>> additional shadow, or does that mean it's very unlikely that we will?
>>>
>>> I'm guessing/hoping it's the former!
>>>
>>
>> Outline will never access additional shadow byte: https://github.com/google/sanitizers/wiki/AddressSanitizerAlgorithm#unaligned-accesses
>
> s/Outline/inline  of course.


I suspect that actual implementations have diverged from that
description. Trying to follow asan_expand_check_ifn in:
https://gcc.gnu.org/viewcvs/gcc/trunk/gcc/asan.c?revision=246703&view=markup
but it's not trivial.

+Yuri, maybe you know off the top of your head if asan instrumentation
in gcc ever accesses off-by-one shadow byte (i.e. 1 byte after actual
object end)?
Dmitry Vyukov June 1, 2017, 5:38 p.m. UTC | #7
On Thu, Jun 1, 2017 at 7:05 PM, Dmitry Vyukov <dvyukov@google.com> wrote:
>>>>>>> We used to read several bytes of the shadow memory in advance.
>>>>>>> Therefore additional shadow memory mapped to prevent crash if
>>>>>>> speculative load would happen near the end of the mapped shadow memory.
>>>>>>>
>>>>>>> Now we don't have such speculative loads, so we no longer need to map
>>>>>>> additional shadow memory.
>>>>>>
>>>>>> I see that patch 1 fixed up the Linux helpers for outline
>>>>>> instrumentation.
>>>>>>
>>>>>> Just to check, is it also true that the inline instrumentation never
>>>>>> performs unaligned accesses to the shadow memory?
>>>>>
>>>
>>> Correct, inline instrumentation assumes that all accesses are properly aligned as it
>>> required by C standard. I knew that the kernel violates this rule in many places,
>>> therefore I decided to add checks for unaligned accesses in outline case.
>>>
>>>
>>>>> Inline instrumentation generally accesses only a single byte.
>>>>
>>>> Sorry to be a little pedantic, but does that mean we'll never access the
>>>> additional shadow, or does that mean it's very unlikely that we will?
>>>>
>>>> I'm guessing/hoping it's the former!
>>>>
>>>
>>> Outline will never access additional shadow byte: https://github.com/google/sanitizers/wiki/AddressSanitizerAlgorithm#unaligned-accesses
>>
>> s/Outline/inline  of course.
>
>
> I suspect that actual implementations have diverged from that
> description. Trying to follow asan_expand_check_ifn in:
> https://gcc.gnu.org/viewcvs/gcc/trunk/gcc/asan.c?revision=246703&view=markup
> but it's not trivial.
>
> +Yuri, maybe you know off the top of your head if asan instrumentation
> in gcc ever accesses off-by-one shadow byte (i.e. 1 byte after actual
> object end)?

Thinking about this more: there is at least one case in user-space asan
where an off-by-one shadow access would lead to similar crashes: for
mmap-ed regions we don't have redzones and map shadow only for the
region itself, so any off-by-one access would crash there as well. So I
guess we are safe here. Or at least any such crash would be a gcc bug.

Patch

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 687a358a3733..81f03959a4ab 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -191,14 +191,8 @@  void __init kasan_init(void)
 		if (start >= end)
 			break;
 
-		/*
-		 * end + 1 here is intentional. We check several shadow bytes in
-		 * advance to slightly speed up fastpath. In some rare cases
-		 * we could cross boundary of mapped shadow, so we just map
-		 * some more here.
-		 */
 		vmemmap_populate((unsigned long)kasan_mem_to_shadow(start),
-				(unsigned long)kasan_mem_to_shadow(end) + 1,
+				(unsigned long)kasan_mem_to_shadow(end),
 				pfn_to_nid(virt_to_pfn(start)));
 	}