Message ID | 20210621154442.18463-1-yee.lee@mediatek.com (mailing list archive) |
---|---|
State | New |
Series | kasan: unpoison use memset to init unaligned object size |
On Mon, 21 Jun 2021 at 17:45, <yee.lee@mediatek.com> wrote:
>
> From: Yee Lee <yee.lee@mediatek.com>
>
> This patch adds a memset to initialize object of unaligned size.

s/This patch adds/Add/

> Duing to the MTE granulrity, the integrated initialization using

s/Duing/Doing/
s/granulrity/granularity/

> hwtag instruction will force clearing out bytes in granular size,
> which may cause undesired effect, such as overwriting to the redzone
> of SLUB debug. In this patch, for the unaligned object size, function

Did you encounter a crash due to this? Was it only SLUB debug that
caused the problem?

Do you have data on what the percentage of allocations are that would
now be treated differently? E.g. what's the percentage of such
odd-sized allocations during a normal boot with SLUB debug off?

We need to know if this change would pessimize a non-debug kernel, and
if so, we'd have to make the below behave differently.

> uses memset to initailize context instead of the hwtag instruction.

s/initailize/initialize/

> Signed-off-by: Yee Lee <yee.lee@mediatek.com>
> ---
>  mm/kasan/kasan.h | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 8f450bc28045..d8faa64614b7 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -387,8 +387,11 @@ static inline void kasan_unpoison(const void *addr, size_t size, bool init)
>
>  	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
>  		return;
> +	if (init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
> +		init = false;
> +		memset((void *)addr, 0, size);

Should use memzero_explicit().

> +	}
>  	size = round_up(size, KASAN_GRANULE_SIZE);
> -

Remove whitespace change.

>  	hw_set_mem_tag_range((void *)addr, size, tag, init);
>  }

Thanks,
-- Marco
On Mon, Jun 21, 2021 at 6:45 PM <yee.lee@mediatek.com> wrote:
>
> From: Yee Lee <yee.lee@mediatek.com>
>
> This patch adds a memset to initialize object of unaligned size.
> Duing to the MTE granulrity, the integrated initialization using
> hwtag instruction will force clearing out bytes in granular size,
> which may cause undesired effect, such as overwriting to the redzone
> of SLUB debug. In this patch, for the unaligned object size, function
> uses memset to initailize context instead of the hwtag instruction.
>
> Signed-off-by: Yee Lee <yee.lee@mediatek.com>
> ---
>  mm/kasan/kasan.h | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 8f450bc28045..d8faa64614b7 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -387,8 +387,11 @@ static inline void kasan_unpoison(const void *addr, size_t size, bool init)
>
>  	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
>  		return;
> +	if (init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
> +		init = false;
> +		memset((void *)addr, 0, size);
> +	}

With this implementation, we lose the benefit of setting tags and
initializing memory with the same instructions.

Perhaps a better implementation would be to call hw_set_mem_tag_range()
with the size rounded down, and then separately deal with the leftover
memory.

>  	size = round_up(size, KASAN_GRANULE_SIZE);
> -
>  	hw_set_mem_tag_range((void *)addr, size, tag, init);
>  }
>
> --
> 2.18.0
>
On Tue, 2021-06-22 at 17:03 +0300, Andrey Konovalov wrote:
> On Mon, Jun 21, 2021 at 6:45 PM <yee.lee@mediatek.com> wrote:
> >
> > From: Yee Lee <yee.lee@mediatek.com>
> >
> > This patch adds a memset to initialize object of unaligned size.
> > Duing to the MTE granulrity, the integrated initialization using
> > hwtag instruction will force clearing out bytes in granular size,
> > which may cause undesired effect, such as overwriting to the redzone
> > of SLUB debug. In this patch, for the unaligned object size, function
> > uses memset to initailize context instead of the hwtag instruction.
> >
> > Signed-off-by: Yee Lee <yee.lee@mediatek.com>
> > ---
> >  mm/kasan/kasan.h | 5 ++++-
> >  1 file changed, 4 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> > index 8f450bc28045..d8faa64614b7 100644
> > --- a/mm/kasan/kasan.h
> > +++ b/mm/kasan/kasan.h
> > @@ -387,8 +387,11 @@ static inline void kasan_unpoison(const void *addr, size_t size, bool init)
> >
> >  	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
> >  		return;
> > +	if (init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
> > +		init = false;
> > +		memset((void *)addr, 0, size);
> > +	}
>
> With this implementation, we lose the benefit of setting tags and
> initializing memory with the same instructions.
>
> Perhaps a better implementation would be to call hw_set_mem_tag_range()
> with the size rounded down, and then separately deal with the leftover
> memory.

Yes, that would take full advantage of the hw instruction. However, the
leftover memory would need one more hw_set_mem_tag_range() for
protection as well. If the extra path is only executed under
CONFIG_SLUB_DEBUG, the performance loss would be less of a concern.

> >  	size = round_up(size, KASAN_GRANULE_SIZE);
> > -
> >  	hw_set_mem_tag_range((void *)addr, size, tag, init);
> >  }
> >
> > --
> > 2.18.0
> >
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 8f450bc28045..d8faa64614b7 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -387,8 +387,11 @@ static inline void kasan_unpoison(const void *addr, size_t size, bool init)
 
 	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
 		return;
+	if (init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
+		init = false;
+		memset((void *)addr, 0, size);
+	}
 	size = round_up(size, KASAN_GRANULE_SIZE);
-
 	hw_set_mem_tag_range((void *)addr, size, tag, init);
 }