
[v2,2/3] arm64: lib: use C string functions with KASAN enabled.

Message ID 20180920135631.23833-2-aryabinin@virtuozzo.com (mailing list archive)
State New, archived
Series [v2,1/3] linkage.h: Align weak symbols.

Commit Message

Andrey Ryabinin Sept. 20, 2018, 1:56 p.m. UTC
ARM64 has asm implementations of memchr(), memcmp(), str[r]chr(),
str[n]cmp() and str[n]len(). KASAN doesn't see memory accesses in asm
code, so it can potentially miss many bugs.

Ifdef out the __HAVE_ARCH_* defines of these functions when KASAN is
enabled, so that the generic implementations from lib/string.c are used
instead.

We can't just remove the asm functions, because the EFI stub uses them.
And we can't have two non-weak definitions either, so declare the asm
functions as weak. (A sketch of the resulting link-time behaviour
follows the diffstat below.)

Reported-by: Kyeongdon Kim <kyeongdon.kim@lge.com>
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
---
Changes since v1:
 - Use WEAK() instead of .weak

 arch/arm64/include/asm/string.h | 14 ++++++++------
 arch/arm64/kernel/arm64ksyms.c  |  7 +++++--
 arch/arm64/lib/memchr.S         |  2 +-
 arch/arm64/lib/memcmp.S         |  2 +-
 arch/arm64/lib/strchr.S         |  2 +-
 arch/arm64/lib/strcmp.S         |  2 +-
 arch/arm64/lib/strlen.S         |  2 +-
 arch/arm64/lib/strncmp.S        |  2 +-
 arch/arm64/lib/strnlen.S        |  2 +-
 arch/arm64/lib/strrchr.S        |  2 +-
 10 files changed, 21 insertions(+), 16 deletions(-)
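[Editorial note, not part of the patch: for readers less familiar with the
mechanism, lib/string.c only builds its generic implementation of each
routine when the architecture does not claim its own via __HAVE_ARCH_*.
Upstream, slightly abridged and annotated:

#ifndef __HAVE_ARCH_STRLEN
size_t strlen(const char *s)
{
	const char *sc;

	/* walk to the terminating NUL; under KASAN each load is instrumented */
	for (sc = s; *sc != '\0'; ++sc)
		/* nothing */;
	return sc - s;
}
EXPORT_SYMBOL(strlen);
#endif

With CONFIG_KASAN=y, arch/arm64/include/asm/string.h (first hunk of the
patch below) no longer defines __HAVE_ARCH_STRLEN and friends, so these
instrumented C versions are compiled in. The asm versions are still
assembled, but since WEAK() emits them as weak symbols, the strong C
definitions win at link time in the kernel proper, while the EFI stub
continues to reference the asm routines directly.]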

Comments

Will Deacon Oct. 29, 2018, 10:29 a.m. UTC | #1
On Thu, Sep 20, 2018 at 04:56:30PM +0300, Andrey Ryabinin wrote:
> [...]

Acked-by: Will Deacon <will.deacon@arm.com>

Please post these again after the merge window and we'll figure out how to
get them queued.

Will
Andrey Ryabinin Oct. 29, 2018, 11:16 a.m. UTC | #2
On 10/29/2018 01:29 PM, Will Deacon wrote:
> On Thu, Sep 20, 2018 at 04:56:30PM +0300, Andrey Ryabinin wrote:
>> [...]
> 
> Acked-by: Will Deacon <will.deacon@arm.com>
> 
> Please post these again after the merge window and we'll figure out how to
> get them queued.
> 


Andrew sent these patches to Linus a couple of days ago, so they are already in the tree.

Something went wrong with the mail notifications, though. I didn't even realize that they
were in the -mm tree, because I didn't receive the usual 'the patch has been added to -mm tree' email.
But I did receive the email that was sent to Linus.

Also, neither you nor Catalin were in the Cc tags of patches 2 and 3, and in the first patch the Cc tags
were corrupted:

From: Andrey Ryabinin <aryabinin@virtuozzo.com>
Subject: include/linux/linkage.h: align weak symbols

WEAK() is supposed to be used instead of ENTRY() to define weak symbols,
but unlike ENTRY() it doesn't have an ALIGN directive.  There seems to be
no actual reason not to have one, so let's add ALIGN to WEAK() too.

Link: http://lkml.kernel.org/r/20180920135631.23833-1-aryabinin@virtuozzo.com
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Will Deacon <will.deacon@arm.com>, Catalin Marinas <catalin.marinas@arm.com>
Cc: Kyeongdon Kim <kyeongdon.kim@lge.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
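
[Editorial note: the patch quoted above is 1/3 of this series. It adds
ALIGN to the WEAK() macro in include/linux/linkage.h so that weak asm
entry points get the same alignment as ENTRY() ones; the resulting macro
looks roughly like this:

#ifndef WEAK
#define WEAK(name)	   \
	.weak name ASM_NL   \
	ALIGN ASM_NL \
	name:
#endif

Without the ALIGN, a WEAK()-defined function such as the memchr in the
diff below could have started at an arbitrary byte boundary.]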
Will Deacon Oct. 29, 2018, 11:20 a.m. UTC | #3
Hi Andrey, Andrew,

On Mon, Oct 29, 2018 at 11:16:15AM +0000, Andrey Ryabinin wrote:
> On 10/29/2018 01:29 PM, Will Deacon wrote:
> > On Thu, Sep 20, 2018 at 04:56:30PM +0300, Andrey Ryabinin wrote:
> >> [...]
> > 
> > Acked-by: Will Deacon <will.deacon@arm.com>
> > 
> > Please post these again after the merge window and we'll figure out how to
> > get them queued.
> 
> Andrew sent these patches to Linus a couple of days ago, so they are
> already in the tree.

Oh, good thing I was happy with them in the end, then!

> Something went wrong with the mail notifications, though. I didn't even
> realize that they were in the -mm tree, because I didn't receive the usual
> 'the patch has been added to -mm tree' email.  But I did receive the email
> that was sent to Linus.

Yeah, strange. I usually see the notifications from Andrew.

> Also, neither you nor Catalin were in the Cc tags of patches 2 and 3, and
> in the first patch the Cc tags were corrupted:

:/

Andrew -- have we broken your scripts somehow, or is this just a one-off
for these patches?

Thanks,

Will


Patch

diff --git a/arch/arm64/include/asm/string.h b/arch/arm64/include/asm/string.h
index dd95d33a5bd5..03a6c256b7ec 100644
--- a/arch/arm64/include/asm/string.h
+++ b/arch/arm64/include/asm/string.h
@@ -16,6 +16,7 @@ 
 #ifndef __ASM_STRING_H
 #define __ASM_STRING_H
 
+#ifndef CONFIG_KASAN
 #define __HAVE_ARCH_STRRCHR
 extern char *strrchr(const char *, int c);
 
@@ -34,6 +35,13 @@  extern __kernel_size_t strlen(const char *);
 #define __HAVE_ARCH_STRNLEN
 extern __kernel_size_t strnlen(const char *, __kernel_size_t);
 
+#define __HAVE_ARCH_MEMCMP
+extern int memcmp(const void *, const void *, size_t);
+
+#define __HAVE_ARCH_MEMCHR
+extern void *memchr(const void *, int, __kernel_size_t);
+#endif
+
 #define __HAVE_ARCH_MEMCPY
 extern void *memcpy(void *, const void *, __kernel_size_t);
 extern void *__memcpy(void *, const void *, __kernel_size_t);
@@ -42,16 +50,10 @@  extern void *__memcpy(void *, const void *, __kernel_size_t);
 extern void *memmove(void *, const void *, __kernel_size_t);
 extern void *__memmove(void *, const void *, __kernel_size_t);
 
-#define __HAVE_ARCH_MEMCHR
-extern void *memchr(const void *, int, __kernel_size_t);
-
 #define __HAVE_ARCH_MEMSET
 extern void *memset(void *, int, __kernel_size_t);
 extern void *__memset(void *, int, __kernel_size_t);
 
-#define __HAVE_ARCH_MEMCMP
-extern int memcmp(const void *, const void *, size_t);
-
 #ifdef CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE
 #define __HAVE_ARCH_MEMCPY_FLUSHCACHE
 void memcpy_flushcache(void *dst, const void *src, size_t cnt);
diff --git a/arch/arm64/kernel/arm64ksyms.c b/arch/arm64/kernel/arm64ksyms.c
index d894a20b70b2..72f63a59b008 100644
--- a/arch/arm64/kernel/arm64ksyms.c
+++ b/arch/arm64/kernel/arm64ksyms.c
@@ -44,20 +44,23 @@  EXPORT_SYMBOL(__arch_copy_in_user);
 EXPORT_SYMBOL(memstart_addr);
 
 	/* string / mem functions */
+#ifndef CONFIG_KASAN
 EXPORT_SYMBOL(strchr);
 EXPORT_SYMBOL(strrchr);
 EXPORT_SYMBOL(strcmp);
 EXPORT_SYMBOL(strncmp);
 EXPORT_SYMBOL(strlen);
 EXPORT_SYMBOL(strnlen);
+EXPORT_SYMBOL(memcmp);
+EXPORT_SYMBOL(memchr);
+#endif
+
 EXPORT_SYMBOL(memset);
 EXPORT_SYMBOL(memcpy);
 EXPORT_SYMBOL(memmove);
 EXPORT_SYMBOL(__memset);
 EXPORT_SYMBOL(__memcpy);
 EXPORT_SYMBOL(__memmove);
-EXPORT_SYMBOL(memchr);
-EXPORT_SYMBOL(memcmp);
 
 	/* atomic bitops */
 EXPORT_SYMBOL(set_bit);
diff --git a/arch/arm64/lib/memchr.S b/arch/arm64/lib/memchr.S
index 4444c1d25f4b..0f164a4baf52 100644
--- a/arch/arm64/lib/memchr.S
+++ b/arch/arm64/lib/memchr.S
@@ -30,7 +30,7 @@ 
  * Returns:
  *	x0 - address of first occurrence of 'c' or 0
  */
-ENTRY(memchr)
+WEAK(memchr)
 	and	w1, w1, #0xff
 1:	subs	x2, x2, #1
 	b.mi	2f
diff --git a/arch/arm64/lib/memcmp.S b/arch/arm64/lib/memcmp.S
index 2a4e239bd17a..fb295f52e9f8 100644
--- a/arch/arm64/lib/memcmp.S
+++ b/arch/arm64/lib/memcmp.S
@@ -58,7 +58,7 @@  pos		.req	x11
 limit_wd	.req	x12
 mask		.req	x13
 
-ENTRY(memcmp)
+WEAK(memcmp)
 	cbz	limit, .Lret0
 	eor	tmp1, src1, src2
 	tst	tmp1, #7
diff --git a/arch/arm64/lib/strchr.S b/arch/arm64/lib/strchr.S
index dae0cf5591f9..7c83091d1bcd 100644
--- a/arch/arm64/lib/strchr.S
+++ b/arch/arm64/lib/strchr.S
@@ -29,7 +29,7 @@ 
  * Returns:
  *	x0 - address of first occurrence of 'c' or 0
  */
-ENTRY(strchr)
+WEAK(strchr)
 	and	w1, w1, #0xff
 1:	ldrb	w2, [x0], #1
 	cmp	w2, w1
diff --git a/arch/arm64/lib/strcmp.S b/arch/arm64/lib/strcmp.S
index 471fe61760ef..7d5d15398bfb 100644
--- a/arch/arm64/lib/strcmp.S
+++ b/arch/arm64/lib/strcmp.S
@@ -60,7 +60,7 @@  tmp3		.req	x9
 zeroones	.req	x10
 pos		.req	x11
 
-ENTRY(strcmp)
+WEAK(strcmp)
 	eor	tmp1, src1, src2
 	mov	zeroones, #REP8_01
 	tst	tmp1, #7
diff --git a/arch/arm64/lib/strlen.S b/arch/arm64/lib/strlen.S
index 55ccc8e24c08..8e0b14205dcb 100644
--- a/arch/arm64/lib/strlen.S
+++ b/arch/arm64/lib/strlen.S
@@ -56,7 +56,7 @@  pos		.req	x12
 #define REP8_7f 0x7f7f7f7f7f7f7f7f
 #define REP8_80 0x8080808080808080
 
-ENTRY(strlen)
+WEAK(strlen)
 	mov	zeroones, #REP8_01
 	bic	src, srcin, #15
 	ands	tmp1, srcin, #15
diff --git a/arch/arm64/lib/strncmp.S b/arch/arm64/lib/strncmp.S
index e267044761c6..66bd145935d9 100644
--- a/arch/arm64/lib/strncmp.S
+++ b/arch/arm64/lib/strncmp.S
@@ -64,7 +64,7 @@  limit_wd	.req	x13
 mask		.req	x14
 endloop		.req	x15
 
-ENTRY(strncmp)
+WEAK(strncmp)
 	cbz	limit, .Lret0
 	eor	tmp1, src1, src2
 	mov	zeroones, #REP8_01
diff --git a/arch/arm64/lib/strnlen.S b/arch/arm64/lib/strnlen.S
index eae38da6e0bb..355be04441fe 100644
--- a/arch/arm64/lib/strnlen.S
+++ b/arch/arm64/lib/strnlen.S
@@ -59,7 +59,7 @@  limit_wd	.req	x14
 #define REP8_7f 0x7f7f7f7f7f7f7f7f
 #define REP8_80 0x8080808080808080
 
-ENTRY(strnlen)
+WEAK(strnlen)
 	cbz	limit, .Lhit_limit
 	mov	zeroones, #REP8_01
 	bic	src, srcin, #15
diff --git a/arch/arm64/lib/strrchr.S b/arch/arm64/lib/strrchr.S
index f8e2784d5752..ea84924d5990 100644
--- a/arch/arm64/lib/strrchr.S
+++ b/arch/arm64/lib/strrchr.S
@@ -29,7 +29,7 @@ 
  * Returns:
  *	x0 - address of last occurrence of 'c' or 0
  */
-ENTRY(strrchr)
+WEAK(strrchr)
 	mov	x3, #0
 	and	w1, w1, #0xff
 1:	ldrb	w2, [x0], #1
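
[Editorial note: to see what the series buys in practice, here is a
hypothetical demo, not part of the patch; it assumes the usual
<linux/slab.h>, <linux/string.h> and <linux/init.h> includes in some test
module. With CONFIG_KASAN=y, memcmp() now resolves to the instrumented
generic implementation in lib/string.c, so the out-of-bounds read below is
reported instead of being silently executed by uninstrumented asm:

static int __init kasan_memcmp_demo(void)
{
	char *p = kmalloc(8, GFP_KERNEL);

	if (!p)
		return -ENOMEM;
	/* 9-byte read from an 8-byte allocation: KASAN now catches this */
	memcmp(p, "123456789", 9);
	kfree(p);
	return 0;
}
late_initcall(kasan_memcmp_demo);]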