[v3] powerpc: Fix virt_addr_valid() check

Message ID 20220127123754.77825-1-wangkefeng.wang@huawei.com (mailing list archive)
State New
Series [v3] powerpc: Fix virt_addr_valid() check

Commit Message

Kefeng Wang Jan. 27, 2022, 12:37 p.m. UTC
When running ethtool eth0 on PowerPC64, the following BUG occurred:

  usercopy: Kernel memory exposure attempt detected from SLUB object not in SLUB page?! (offset 0, size 1048)!
  kernel BUG at mm/usercopy.c:99
  ...
  usercopy_abort+0x64/0xa0 (unreliable)
  __check_heap_object+0x168/0x190
  __check_object_size+0x1a0/0x200
  dev_ethtool+0x2494/0x2b20
  dev_ioctl+0x5d0/0x770
  sock_do_ioctl+0xf0/0x1d0
  sock_ioctl+0x3ec/0x5a0
  __se_sys_ioctl+0xf0/0x160
  system_call_exception+0xfc/0x1f0
  system_call_common+0xf8/0x200

The offending code is shown below:

  data = vzalloc(array_size(gstrings.len, ETH_GSTRING_LEN));
  copy_to_user(useraddr, data, gstrings.len * ETH_GSTRING_LEN);

The data is allocated with vmalloc(), but virt_addr_valid(ptr) wrongly
returns true on PowerPC64, which leads to the panic.
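
For context, the path to the BUG looks roughly like this (a simplified
paraphrase of the hardened usercopy check of this era, not the exact
mm/usercopy.c code):

  static void check_heap_object(const void *ptr, unsigned long n,
  				bool to_user)
  {
  	struct page *page;

  	/* Meant to filter out anything outside the linear map. */
  	if (!virt_addr_valid(ptr))
  		return;

  	/*
  	 * A false positive for a vmalloc() pointer means we look up an
  	 * unrelated struct page here; when it happens to look like a
  	 * slab page, the SLUB checker then rejects the "impossible"
  	 * pointer via usercopy_abort().
  	 */
  	page = virt_to_head_page(ptr);
  	if (PageSlab(page))
  		__check_heap_object(ptr, n, page, to_user);
  }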

As commit 4dd7554a6456 ("powerpc/64: Add VIRTUAL_BUG_ON checks for __va
and __pa addresses") does, let's check in virt_addr_valid() that the
virtual address is above PAGE_OFFSET on PowerPC64, which makes sure that
the passed address is a valid linear map address.
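
For reference, the checks added by that commit look approximately like
this (PPC64, abridged; the real macros carry a few more casts):

  #define __va(x) ({ VIRTUAL_BUG_ON((unsigned long)(x) >= PAGE_OFFSET); \
  		     (void *)((unsigned long)(x) | PAGE_OFFSET); })

  #define __pa(x) ({ VIRTUAL_BUG_ON((unsigned long)(x) < PAGE_OFFSET); \
  		     (unsigned long)(x) & 0x0fffffffffffffffUL; })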

Meanwhile, since PAGE_OFFSET is the virtual address of the start of
lowmem, the check is suitable for PowerPC32 too.
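
As a quick sanity illustration (hypothetical test snippet, not part of
this patch), the new bound keeps linear-map addresses valid while
rejecting anything below PAGE_OFFSET outright:

  #include <linux/mm.h>
  #include <linux/slab.h>

  static void virt_addr_valid_demo(void)
  {
  	void *k = kmalloc(128, GFP_KERNEL);	/* linear-map address */

  	if (k)
  		WARN_ON(!virt_addr_valid(k));	/* must remain true */
  	WARN_ON(virt_addr_valid(NULL));		/* 0 < PAGE_OFFSET, so false */
  	kfree(k);
  }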

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
v3:
- update changelog and remove a redundant cast 
 arch/powerpc/include/asm/page.h | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

Comments

Christophe Leroy Feb. 1, 2022, 11:57 a.m. UTC | #1
On 27/01/2022 at 13:37, Kefeng Wang wrote:
> [...]

Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>

Patch

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 254687258f42..a8a29a23ce2d 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -132,7 +132,10 @@  static inline bool pfn_valid(unsigned long pfn)
 #define virt_to_page(kaddr)	pfn_to_page(virt_to_pfn(kaddr))
 #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
 
-#define virt_addr_valid(kaddr)	pfn_valid(virt_to_pfn(kaddr))
+#define virt_addr_valid(vaddr)	({						\
+	unsigned long _addr = (unsigned long)vaddr;				\
+	_addr >= PAGE_OFFSET && pfn_valid(virt_to_pfn(_addr));	\
+})
 
 /*
  * On Book-E parts we need __va to parse the device tree and we can't