Revert "mm/usercopy: Drop extra is_vmalloc_or_module() check"

Message ID 20211223102126.161848-1-wangkefeng.wang@huawei.com (mailing list archive)
State New
Series Revert "mm/usercopy: Drop extra is_vmalloc_or_module() check"

Commit Message

Kefeng Wang Dec. 23, 2021, 10:21 a.m. UTC
This reverts commit 517e1fbeb65f5eade8d14f46ac365db6c75aea9b.

  usercopy: Kernel memory exposure attempt detected from SLUB object not in SLUB page?! (offset 0, size 1048)!
  kernel BUG at mm/usercopy.c:99
  ...
  usercopy_abort+0x64/0xa0 (unreliable)
  __check_heap_object+0x168/0x190
  __check_object_size+0x1a0/0x200
  dev_ethtool+0x2494/0x2b20
  dev_ioctl+0x5d0/0x770
  sock_do_ioctl+0xf0/0x1d0
  sock_ioctl+0x3ec/0x5a0
  __se_sys_ioctl+0xf0/0x160
  system_call_exception+0xfc/0x1f0
  system_call_common+0xf8/0x200

When running ethtool eth0, the BUG occurs; the code involved is shown below:

  data = vzalloc(array_size(gstrings.len, ETH_GSTRING_LEN));
  copy_to_user(useraddr, data, gstrings.len * ETH_GSTRING_LEN)

The data is allocated by vmalloc(), but virt_addr_valid(ptr) returns true
on PowerPC64, which leads to the panic. Add back the is_vmalloc_or_module()
check to fix it.

Fixes: 517e1fbeb65f ("mm/usercopy: Drop extra is_vmalloc_or_module() check")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/usercopy.c | 11 +++++++++++
 1 file changed, 11 insertions(+)
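
For context, here is a condensed paraphrase of the check that fires (a sketch, not
the verbatim source; the exact body and page-lookup helper vary by kernel version,
and the real insertion point appears in the patch at the bottom of this page):

  /* Sketch of mm/usercopy.c::check_heap_object() around v5.16. On PPC64
   * book3e, virt_addr_valid() is wrongly true for a vmalloc'ed pointer,
   * so instead of bailing out early we translate the address into a
   * bogus struct page and hand it to the SLUB checker, which then calls
   * usercopy_abort() ("SLUB object not in SLUB page?!").
   */
  static inline void check_heap_object(const void *ptr, unsigned long n,
                                       bool to_user)
  {
          struct page *page;

          if (!virt_addr_valid(ptr))
                  return;                  /* should have returned here */

          page = virt_to_page(ptr);        /* bogus page for a vmalloc address */

          if (PageSlab(page))
                  __check_heap_object(ptr, n, page, to_user);
  }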

Comments

Christophe Leroy Dec. 24, 2021, 6:01 a.m. UTC | #1
On 23/12/2021 at 11:21, Kefeng Wang wrote:
> This reverts commit 517e1fbeb65f5eade8d14f46ac365db6c75aea9b.
> 
>    usercopy: Kernel memory exposure attempt detected from SLUB object not in SLUB page?! (offset 0, size 1048)!
>    kernel BUG at mm/usercopy.c:99
>    ...
>    usercopy_abort+0x64/0xa0 (unreliable)
>    __check_heap_object+0x168/0x190
>    __check_object_size+0x1a0/0x200
>    dev_ethtool+0x2494/0x2b20
>    dev_ioctl+0x5d0/0x770
>    sock_do_ioctl+0xf0/0x1d0
>    sock_ioctl+0x3ec/0x5a0
>    __se_sys_ioctl+0xf0/0x160
>    system_call_exception+0xfc/0x1f0
>    system_call_common+0xf8/0x200
> 
> When running ethtool eth0, the BUG occurs; the code involved is shown below:
> 
>    data = vzalloc(array_size(gstrings.len, ETH_GSTRING_LEN));
>    copy_to_user(useraddr, data, gstrings.len * ETH_GSTRING_LEN)
> 
> The data is allocated by vmalloc(), but virt_addr_valid(ptr) returns true
> on PowerPC64, which leads to the panic. Add back the is_vmalloc_or_module()
> check to fix it.

Is it expected that virt_addr_valid() returns true on PPC64 for 
vmalloc'ed memory? If that's the case, it also means that
CONFIG_DEBUG_VIRTUAL won't work as expected either.

If it is unexpected, I think you should fix PPC64 instead of adding this 
hack back. Maybe the ARM64 fix can be used as a starting point, see 
commit 68dd8ef32162 ("arm64: memory: Fix virt_addr_valid() using 
__is_lm_address()")

In the meantime, can you provide more information on your config, 
especially which memory model is used?

Christophe
Kefeng Wang Dec. 24, 2021, 7:06 a.m. UTC | #2
On 2021/12/24 14:01, Christophe Leroy wrote:
>
> On 23/12/2021 at 11:21, Kefeng Wang wrote:
>> This reverts commit 517e1fbeb65f5eade8d14f46ac365db6c75aea9b.
>>
>>     usercopy: Kernel memory exposure attempt detected from SLUB object not in SLUB page?! (offset 0, size 1048)!
>>     kernel BUG at mm/usercopy.c:99
>>     ...
>>     usercopy_abort+0x64/0xa0 (unreliable)
>>     __check_heap_object+0x168/0x190
>>     __check_object_size+0x1a0/0x200
>>     dev_ethtool+0x2494/0x2b20
>>     dev_ioctl+0x5d0/0x770
>>     sock_do_ioctl+0xf0/0x1d0
>>     sock_ioctl+0x3ec/0x5a0
>>     __se_sys_ioctl+0xf0/0x160
>>     system_call_exception+0xfc/0x1f0
>>     system_call_common+0xf8/0x200
>>
>> When running ethtool eth0, the BUG occurs; the code involved is shown below:
>>
>>     data = vzalloc(array_size(gstrings.len, ETH_GSTRING_LEN));
>>     copy_to_user(useraddr, data, gstrings.len * ETH_GSTRING_LEN)
>>
>> The data is allocated by vmalloc(), but virt_addr_valid(ptr) returns true
>> on PowerPC64, which leads to the panic. Add back the is_vmalloc_or_module()
>> check to fix it.
> Is it expected that virt_addr_valid() returns true on PPC64 for
> vmalloc'ed memory? If that's the case, it also means that
> CONFIG_DEBUG_VIRTUAL won't work as expected either.

Our product team reported this bug to me; after having them run some tests,
I found that virt_addr_valid() returns true for vmalloc'ed memory on their board.

I think DEBUG_VIRTUAL may not work correctly either, but I can't test it.

>
> If it is unexpected, I think you should fix PPC64 instead of adding this
> hack back. Maybe the ARM64 fix can be used as a starting point, see
> commit 68dd8ef32162 ("arm64: memory: Fix virt_addr_valid() using
> __is_lm_address()")

Yes, I checked the history; fixing virt_addr_valid() on PowerPC is what I
wanted to do first, but I am not familiar with PPC, and HARDENED_USERCOPY
on other architectures could have this issue too, so I added the workaround back.


1) PPC maintainers/experts, any suggestions?

2) Maybe we could add a check to WARN about this scenario.

--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -229,6 +229,8 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
         if (!virt_addr_valid(ptr))
                 return;

+       WARN_ON_ONCE(is_vmalloc_or_module_addr(ptr));

> In the meantime, can you provide more information on your config,
> especially which memory model is used?

Some useful configs,

CONFIG_PPC64=y
CONFIG_PPC_BOOK3E_64=y
CONFIG_E5500_CPU=y
CONFIG_TARGET_CPU_BOOL=y
CONFIG_PPC_BOOK3E=y
CONFIG_E500=y
CONFIG_PPC_E500MC=y
CONFIG_PPC_FPU=y
CONFIG_FSL_EMB_PERFMON=y
CONFIG_FSL_EMB_PERF_EVENT=y
CONFIG_FSL_EMB_PERF_EVENT_E500=y
CONFIG_BOOKE=y
CONFIG_PPC_FSL_BOOK3E=y
CONFIG_PTE_64BIT=y
CONFIG_PHYS_64BIT=y
CONFIG_PPC_MMU_NOHASH=y
CONFIG_PPC_BOOK3E_MMU=y
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_FLATMEM_MANUAL=y
CONFIG_FLATMEM=y
CONFIG_FLAT_NODE_MEM_MAP=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y

>
> Christophe
Christophe Leroy Dec. 24, 2021, 1:18 p.m. UTC | #3
On 24/12/2021 at 08:06, Kefeng Wang wrote:
> 
> On 2021/12/24 14:01, Christophe Leroy wrote:
>>
>> On 23/12/2021 at 11:21, Kefeng Wang wrote:
>>> This reverts commit 517e1fbeb65f5eade8d14f46ac365db6c75aea9b.
>>>
>>>     usercopy: Kernel memory exposure attempt detected from SLUB 
>>> object not in SLUB page?! (offset 0, size 1048)!
>>>     kernel BUG at mm/usercopy.c:99
>>>     ...
>>>     usercopy_abort+0x64/0xa0 (unreliable)
>>>     __check_heap_object+0x168/0x190
>>>     __check_object_size+0x1a0/0x200
>>>     dev_ethtool+0x2494/0x2b20
>>>     dev_ioctl+0x5d0/0x770
>>>     sock_do_ioctl+0xf0/0x1d0
>>>     sock_ioctl+0x3ec/0x5a0
>>>     __se_sys_ioctl+0xf0/0x160
>>>     system_call_exception+0xfc/0x1f0
>>>     system_call_common+0xf8/0x200
>>>
>>> When running ethtool eth0, the BUG occurs; the code involved is shown below:
>>>
>>>     data = vzalloc(array_size(gstrings.len, ETH_GSTRING_LEN));
>>>     copy_to_user(useraddr, data, gstrings.len * ETH_GSTRING_LEN)
>>>
>>> The data is allocated by vmalloc(), but virt_addr_valid(ptr) returns true
>>> on PowerPC64, which leads to the panic. Add back the is_vmalloc_or_module()
>>> check to fix it.
>> Is it expected that virt_addr_valid() returns true on PPC64 for
>> vmalloc'ed memory? If that's the case, it also means that
>> CONFIG_DEBUG_VIRTUAL won't work as expected either.
> 
> Our product team reported this bug to me; after having them run some tests,
> I found that virt_addr_valid() returns true for vmalloc'ed memory on their board.
> 
> I think DEBUG_VIRTUAL may not work correctly either, but I can't test it.
> 
>>
>> If it is unexpected, I think you should fix PPC64 instead of adding this
>> hack back. Maybe the ARM64 fix can be used as a starting point, see
>> commit 68dd8ef32162 ("arm64: memory: Fix virt_addr_valid() using
>> __is_lm_address()")
> 
> Yes, I checked the history; fixing virt_addr_valid() on PowerPC is what I
> wanted to do first, but I am not familiar with PPC, and HARDENED_USERCOPY
> on other architectures could have this issue too, so I added the workaround back.
> 
> 
> 1) PPC maintainers/experts, any suggestions?
> 
> 2) Maybe we could add a check to WARN about this scenario.
> 
> --- a/mm/usercopy.c
> +++ b/mm/usercopy.c
> @@ -229,6 +229,8 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
>          if (!virt_addr_valid(ptr))
>                  return;
> 
> +       WARN_ON_ONCE(is_vmalloc_or_module_addr(ptr));
> 
>> In the meantime, can you provide more information on your config,
>> especially which memory model is used?
> 
> Some useful configs,
> 
> CONFIG_PPC64=y
> CONFIG_PPC_BOOK3E_64=y
> CONFIG_E5500_CPU=y
> CONFIG_TARGET_CPU_BOOL=y
> CONFIG_PPC_BOOK3E=y
> CONFIG_E500=y
> CONFIG_PPC_E500MC=y
> CONFIG_PPC_FPU=y
> CONFIG_FSL_EMB_PERFMON=y
> CONFIG_FSL_EMB_PERF_EVENT=y
> CONFIG_FSL_EMB_PERF_EVENT_E500=y
> CONFIG_BOOKE=y
> CONFIG_PPC_FSL_BOOK3E=y
> CONFIG_PTE_64BIT=y
> CONFIG_PHYS_64BIT=y
> CONFIG_PPC_MMU_NOHASH=y
> CONFIG_PPC_BOOK3E_MMU=y
> CONFIG_SELECT_MEMORY_MODEL=y
> CONFIG_FLATMEM_MANUAL=y
> CONFIG_FLATMEM=y
> CONFIG_FLAT_NODE_MEM_MAP=y
> CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
> 

OK, so it is PPC64 book3e with flatmem.

The problem is virt_to_pfn(), which uses __pa().

__pa(x) on PPC64 is (x) & 0x0fffffffffffffffUL

And on book3e/64 we have

VMALLOC_START = KERN_VIRT_START = ASM_CONST(0x8000000000000000)


It means that __pa() will return a valid PFN for VMALLOCed addresses.
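
To make the arithmetic concrete, a tiny user-space sketch (the constants are
the ones above; the page shift is an assumption, 64K pages here):

  #include <stdio.h>

  int main(void)
  {
          unsigned long kern_virt_start = 0x8000000000000000UL; /* VMALLOC_START */
          unsigned long pa_mask         = 0x0fffffffffffffffUL; /* the __pa() mask */
          unsigned long page_shift      = 16;                   /* assumed 64K pages */

          unsigned long pa  = kern_virt_start & pa_mask; /* == 0: top bits dropped */
          unsigned long pfn = pa >> page_shift;          /* == 0, and pfn 0 is valid */

          printf("__pa(0x%lx) = 0x%lx -> pfn %lu\n", kern_virt_start, pa, pfn);
          return 0;
  }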


So an additional check is required in virt_addr_valid(), maybe check 
that (kaddr & PAGE_OFFSET) == PAGE_OFFSET

Can you try that?

#define virt_addr_valid(kaddr)	((kaddr & PAGE_OFFSET) == PAGE_OFFSET && pfn_valid(virt_to_pfn(kaddr)))


Thanks
Christophe
Kefeng Wang Dec. 25, 2021, 2:05 a.m. UTC | #4
On 2021/12/24 21:18, Christophe Leroy wrote:
>
> On 24/12/2021 at 08:06, Kefeng Wang wrote:
>> On 2021/12/24 14:01, Christophe Leroy wrote:
>>> On 23/12/2021 at 11:21, Kefeng Wang wrote:
>>>> This reverts commit 517e1fbeb65f5eade8d14f46ac365db6c75aea9b.
>>>>
>>>>      usercopy: Kernel memory exposure attempt detected from SLUB
>>>> object not in SLUB page?! (offset 0, size 1048)!
>>>>      kernel BUG at mm/usercopy.c:99
>>>>      ...
>>>>      usercopy_abort+0x64/0xa0 (unreliable)
>>>>      __check_heap_object+0x168/0x190
>>>>      __check_object_size+0x1a0/0x200
>>>>      dev_ethtool+0x2494/0x2b20
>>>>      dev_ioctl+0x5d0/0x770
>>>>      sock_do_ioctl+0xf0/0x1d0
>>>>      sock_ioctl+0x3ec/0x5a0
>>>>      __se_sys_ioctl+0xf0/0x160
>>>>      system_call_exception+0xfc/0x1f0
>>>>      system_call_common+0xf8/0x200
>>>>
>>>> When running ethtool eth0, the BUG occurs; the code involved is shown below:
>>>>
>>>>     data = vzalloc(array_size(gstrings.len, ETH_GSTRING_LEN));
>>>>     copy_to_user(useraddr, data, gstrings.len * ETH_GSTRING_LEN)
>>>>
>>>> The data is allocated by vmalloc(), but virt_addr_valid(ptr) returns true
>>>> on PowerPC64, which leads to the panic. Add back the is_vmalloc_or_module()
>>>> check to fix it.
>>> Is it expected that virt_addr_valid() returns true on PPC64 for
>>> vmalloc'ed memory? If that's the case, it also means that
>>> CONFIG_DEBUG_VIRTUAL won't work as expected either.
>> Our product team reported this bug to me; after having them run some tests,
>> I found that virt_addr_valid() returns true for vmalloc'ed memory on their board.
>>
>> I think DEBUG_VIRTUAL may not work correctly either, but I can't test it.
>>
>>> If it is unexpected, I think you should fix PPC64 instead of adding this
>>> hack back. Maybe the ARM64 fix can be used as a starting point, see
>>> commit 68dd8ef32162 ("arm64: memory: Fix virt_addr_valid() using
>>> __is_lm_address()")
>> Yes, I checked the history; fixing virt_addr_valid() on PowerPC is what I
>> wanted to do first, but I am not familiar with PPC, and HARDENED_USERCOPY
>> on other architectures could have this issue too, so I added the workaround back.
>>
>>
>> 1) PPC maintainers/experts, any suggestions?
>>
>> 2) Maybe we could add a check to WARN about this scenario.
>>
>> --- a/mm/usercopy.c
>> +++ b/mm/usercopy.c
>> @@ -229,6 +229,8 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
>>           if (!virt_addr_valid(ptr))
>>                   return;
>>
>> +       WARN_ON_ONCE(is_vmalloc_or_module_addr(ptr));

>>
>>> In the meantime, can you provide more information on your config,
>>> especially which memory model is used?
>> Some useful configs,
>>
>> CONFIG_PPC64=y
>> CONFIG_PPC_BOOK3E_64=y
>> CONFIG_E5500_CPU=y
>> CONFIG_TARGET_CPU_BOOL=y
>> CONFIG_PPC_BOOK3E=y
>> CONFIG_E500=y
>> CONFIG_PPC_E500MC=y
>> CONFIG_PPC_FPU=y
>> CONFIG_FSL_EMB_PERFMON=y
>> CONFIG_FSL_EMB_PERF_EVENT=y
>> CONFIG_FSL_EMB_PERF_EVENT_E500=y
>> CONFIG_BOOKE=y
>> CONFIG_PPC_FSL_BOOK3E=y
>> CONFIG_PTE_64BIT=y
>> CONFIG_PHYS_64BIT=y
>> CONFIG_PPC_MMU_NOHASH=y
>> CONFIG_PPC_BOOK3E_MMU=y
>> CONFIG_SELECT_MEMORY_MODEL=y
>> CONFIG_FLATMEM_MANUAL=y
>> CONFIG_FLATMEM=y
>> CONFIG_FLAT_NODE_MEM_MAP=y
>> CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
>>
> OK, so it is PPC64 book3e with flatmem.
>
> The problem is virt_to_pfn(), which uses __pa().
>
> __pa(x) on PPC64 is (x) & 0x0fffffffffffffffUL
>
> And on book3e/64 we have
>
> VMALLOC_START = KERN_VIRT_START = ASM_CONST(0x8000000000000000)
>
>
> It means that __pa() will return a valid PFN for VMALLOCed addresses.
>
>
> So an additional check is required in virt_addr_valid(), maybe check
> that (kaddr & PAGE_OFFSET) == PAGE_OFFSET
>
> Can you try that?
>
> #define virt_addr_valid(kaddr)	((kaddr & PAGE_OFFSET) == PAGE_OFFSET && pfn_valid(virt_to_pfn(kaddr)))

I found this commit:

commit 4dd7554a6456d124c85e0a4ad156625b71390b5c

Author: Nicholas Piggin <npiggin@gmail.com>
Date:   Wed Jul 24 18:46:37 2019 +1000

     powerpc/64: Add VIRTUAL_BUG_ON checks for __va and __pa addresses

     Ensure __va is given a physical address below PAGE_OFFSET, and __pa is
     given a virtual address above PAGE_OFFSET.
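
With that commit, __pa() on powerpc/64 looks roughly like this (paraphrased,
not the verbatim source; the VIRTUAL_BUG_ON only fires with CONFIG_DEBUG_VIRTUAL):

  /* Paraphrased from commit 4dd7554a6456 for powerpc/64. */
  #define __pa(x)                                               \
  ({                                                            \
          VIRTUAL_BUG_ON((unsigned long)(x) < PAGE_OFFSET);     \
          (unsigned long)(x) & 0x0fffffffffffffffUL;            \
  })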

It already checks PAGE_OFFSET in __pa(); I will test it and resend the
patch (with the above warning changes).

Thanks.

>
>
> Thanks
> Christophe
Nicholas Piggin Dec. 25, 2021, 11:04 a.m. UTC | #5
Excerpts from Kefeng Wang's message of December 25, 2021 12:05 pm:
> 
> On 2021/12/24 21:18, Christophe Leroy wrote:
>>
>> On 24/12/2021 at 08:06, Kefeng Wang wrote:
>>> On 2021/12/24 14:01, Christophe Leroy wrote:
>>>> On 23/12/2021 at 11:21, Kefeng Wang wrote:
>>>>> This reverts commit 517e1fbeb65f5eade8d14f46ac365db6c75aea9b.
>>>>>
>>>>>      usercopy: Kernel memory exposure attempt detected from SLUB
>>>>> object not in SLUB page?! (offset 0, size 1048)!
>>>>>      kernel BUG at mm/usercopy.c:99
>>>>>      ...
>>>>>      usercopy_abort+0x64/0xa0 (unreliable)
>>>>>      __check_heap_object+0x168/0x190
>>>>>      __check_object_size+0x1a0/0x200
>>>>>      dev_ethtool+0x2494/0x2b20
>>>>>      dev_ioctl+0x5d0/0x770
>>>>>      sock_do_ioctl+0xf0/0x1d0
>>>>>      sock_ioctl+0x3ec/0x5a0
>>>>>      __se_sys_ioctl+0xf0/0x160
>>>>>      system_call_exception+0xfc/0x1f0
>>>>>      system_call_common+0xf8/0x200
>>>>>
>>>>> When running ethtool eth0, the BUG occurs; the code involved is shown below:
>>>>>
>>>>>     data = vzalloc(array_size(gstrings.len, ETH_GSTRING_LEN));
>>>>>     copy_to_user(useraddr, data, gstrings.len * ETH_GSTRING_LEN)
>>>>>
>>>>> The data is allocated by vmalloc(), but virt_addr_valid(ptr) returns true
>>>>> on PowerPC64, which leads to the panic. Add back the is_vmalloc_or_module()
>>>>> check to fix it.
>>>> Is it expected that virt_addr_valid() returns true on PPC64 for
>>>> vmalloc'ed memory? If that's the case, it also means that
>>>> CONFIG_DEBUG_VIRTUAL won't work as expected either.
>>> Our product team reported this bug to me; after having them run some tests,
>>> I found that virt_addr_valid() returns true for vmalloc'ed memory on their board.
>>>
>>> I think DEBUG_VIRTUAL may not work correctly either, but I can't test it.
>>>
>>>> If it is unexpected, I think you should fix PPC64 instead of adding this
>>>> hack back. Maybe the ARM64 fix can be used as a starting point, see
>>>> commit 68dd8ef32162 ("arm64: memory: Fix virt_addr_valid() using
>>>> __is_lm_address()")
>>> Yes, I checked the history; fixing virt_addr_valid() on PowerPC is what I
>>> wanted to do first, but I am not familiar with PPC, and HARDENED_USERCOPY
>>> on other architectures could have this issue too, so I added the workaround back.
>>>
>>>
>>> 1) PPC maintainers/experts, any suggestions?
>>>
>>> 2) Maybe we could add a check to WARN about this scenario.
>>>
>>> --- a/mm/usercopy.c
>>> +++ b/mm/usercopy.c
>>> @@ -229,6 +229,8 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
>>>           if (!virt_addr_valid(ptr))
>>>                   return;
>>>
>>> +       WARN_ON_ONCE(is_vmalloc_or_module_addr(ptr));
> 
>>>
>>>> In the meantime, can you provide more information on your config,
>>>> especially which memory model is used?
>>> Some useful configs,
>>>
>>> CONFIG_PPC64=y
>>> CONFIG_PPC_BOOK3E_64=y
>>> CONFIG_E5500_CPU=y
>>> CONFIG_TARGET_CPU_BOOL=y
>>> CONFIG_PPC_BOOK3E=y
>>> CONFIG_E500=y
>>> CONFIG_PPC_E500MC=y
>>> CONFIG_PPC_FPU=y
>>> CONFIG_FSL_EMB_PERFMON=y
>>> CONFIG_FSL_EMB_PERF_EVENT=y
>>> CONFIG_FSL_EMB_PERF_EVENT_E500=y
>>> CONFIG_BOOKE=y
>>> CONFIG_PPC_FSL_BOOK3E=y
>>> CONFIG_PTE_64BIT=y
>>> CONFIG_PHYS_64BIT=y
>>> CONFIG_PPC_MMU_NOHASH=y
>>> CONFIG_PPC_BOOK3E_MMU=y
>>> CONFIG_SELECT_MEMORY_MODEL=y
>>> CONFIG_FLATMEM_MANUAL=y
>>> CONFIG_FLATMEM=y
>>> CONFIG_FLAT_NODE_MEM_MAP=y
>>> CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
>>>
>> OK, so it is PPC64 book3e with flatmem.
>>
>> The problem is virt_to_pfn(), which uses __pa().
>>
>> __pa(x) on PPC64 is (x) & 0x0fffffffffffffffUL
>>
>> And on book3e/64 we have
>>
>> VMALLOC_START = KERN_VIRT_START = ASM_CONST(0x8000000000000000)
>>
>>
>> It means that __pa() will return a valid PFN for VMALLOCed addresses.
>>
>>
>> So an additional check is required in virt_addr_valid(), maybe check
>> that (kaddr & PAGE_OFFSET) == PAGE_OFFSET
>>
>> Can you try that?
>>
>> #define virt_addr_valid(kaddr)	((kaddr & PAGE_OFFSET) == PAGE_OFFSET && pfn_valid(virt_to_pfn(kaddr)))
> 
> I found this commit:
> 
> commit 4dd7554a6456d124c85e0a4ad156625b71390b5c
> 
> Author: Nicholas Piggin <npiggin@gmail.com>
> Date:   Wed Jul 24 18:46:37 2019 +1000
> 
>      powerpc/64: Add VIRTUAL_BUG_ON checks for __va and __pa addresses
> 
>      Ensure __va is given a physical address below PAGE_OFFSET, and __pa is
>      given a virtual address above PAGE_OFFSET.
> 
> It already checks PAGE_OFFSET in __pa(); I will test it and resend the
> patch (with the above warning changes).

What did you get with this commit? Is this what causes the crash?

riscv, for example, with flatmem also relies on pfn_valid() to do the right
thing, so as far as I can see the check should exclude vmalloc addresses;
it's just a matter of virt_addr_valid() not giving virt_to_pfn() an
address < PAGE_OFFSET.

If we take riscv's implementation

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 254687258f42..7713188516a6 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -132,7 +132,10 @@ static inline bool pfn_valid(unsigned long pfn)
 #define virt_to_page(kaddr)    pfn_to_page(virt_to_pfn(kaddr))
 #define pfn_to_kaddr(pfn)      __va((pfn) << PAGE_SHIFT)
 
-#define virt_addr_valid(kaddr) pfn_valid(virt_to_pfn(kaddr))
+#define virt_addr_valid(vaddr) ({                                      \
+       unsigned long _addr = (unsigned long)vaddr;                     \
+       (unsigned long)(_addr) >= PAGE_OFFSET && pfn_valid(virt_to_pfn(_addr)); \
+})
 
 /*
  * On Book-E parts we need __va to parse the device tree and we can't
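
As a quick sanity check of the proposed predicate, a user-space sketch
(PAGE_OFFSET below is an assumed PPC64 value; KERN_VIRT_START is taken from
Christophe's mail above):

  #include <assert.h>

  #define PAGE_OFFSET     0xc000000000000000UL  /* assumed PPC64 linear-map base */
  #define KERN_VIRT_START 0x8000000000000000UL  /* VMALLOC_START on book3e/64 */

  /* First half of the proposed virt_addr_valid(); pfn_valid() would follow. */
  static int in_linear_map(unsigned long addr)
  {
          return addr >= PAGE_OFFSET;
  }

  int main(void)
  {
          assert(!in_linear_map(KERN_VIRT_START));     /* vmalloc: rejected early */
          assert(in_linear_map(PAGE_OFFSET + 0x1000)); /* goes on to pfn_valid() */
          return 0;
  }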
Kefeng Wang Dec. 25, 2021, noon UTC | #6
On 2021/12/25 19:04, Nicholas Piggin wrote:
> Excerpts from Kefeng Wang's message of December 25, 2021 12:05 pm:
>
...
>>> Can you try that?
>>>
>>> #define virt_addr_valid(kaddr)	((kaddr & PAGE_OFFSET) == PAGE_OFFSET && pfn_valid(virt_to_pfn(kaddr)))
>> I found this commit:
>>
>> commit 4dd7554a6456d124c85e0a4ad156625b71390b5c
>>
>> Author: Nicholas Piggin <npiggin@gmail.com>
>> Date:   Wed Jul 24 18:46:37 2019 +1000
>>
>>       powerpc/64: Add VIRTUAL_BUG_ON checks for __va and __pa addresses
>>
>>       Ensure __va is given a physical address below PAGE_OFFSET, and __pa is
>>       given a virtual address above PAGE_OFFSET.
>>
>> It already checks PAGE_OFFSET in __pa(); I will test it and resend the
>> patch (with the above warning changes).
> What did you get with this commit? Is this what causes the crash?

I mean that your patch checks that the virtual address is above PAGE_OFFSET,
and we can add the same check in virt_addr_valid() too.

>
> riscv, for example, with flatmem also relies on pfn_valid() to do the right
> thing, so as far as I can see the check should exclude vmalloc addresses;
> it's just a matter of virt_addr_valid() not giving virt_to_pfn() an
> address < PAGE_OFFSET.
>
> If we take riscv's implementation
>
> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
> index 254687258f42..7713188516a6 100644
> --- a/arch/powerpc/include/asm/page.h
> +++ b/arch/powerpc/include/asm/page.h
> @@ -132,7 +132,10 @@ static inline bool pfn_valid(unsigned long pfn)
>   #define virt_to_page(kaddr)    pfn_to_page(virt_to_pfn(kaddr))
>   #define pfn_to_kaddr(pfn)      __va((pfn) << PAGE_SHIFT)
>   
> -#define virt_addr_valid(kaddr) pfn_valid(virt_to_pfn(kaddr))
> +#define virt_addr_valid(vaddr) ({                                      \
> +       unsigned long _addr = (unsigned long)vaddr;                     \
> +       (unsigned long)(_addr) >= PAGE_OFFSET && pfn_valid(virt_to_pfn(_addr)); \
> +})
Yes, I've sent a new v2 with this change, thanks.
>   
>   /*
>    * On Book-E parts we need __va to parse the device tree and we can't
>
> .

Patch

diff --git a/mm/usercopy.c b/mm/usercopy.c
index b3de3c4eefba..cfc845403017 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -225,6 +225,17 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 {
 	struct page *page;
 
+	/*
+	 * Some architectures (PowerPC64) return true for virt_addr_valid() on
+	 * vmalloced addresses. Work around this by checking for vmalloc
+	 * first.
+	 *
+	 * We also need to check for module addresses explicitly since we
+	 * may copy static data from modules to userspace
+	 */
+	if (is_vmalloc_or_module_addr(ptr))
+		return;
+
 	if (!virt_addr_valid(ptr))
 		return;
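
For reference, is_vmalloc_or_module_addr() in mm/vmalloc.c is roughly the
following (paraphrased; modules may live in a dedicated range outside the
vmalloc area on some architectures, hence the explicit module check the
comment above refers to):

  int is_vmalloc_or_module_addr(const void *x)
  {
  #if defined(CONFIG_MODULES) && defined(MODULES_VADDR)
          unsigned long addr = (unsigned long)x;

          if (addr >= MODULES_VADDR && addr < MODULES_END)
                  return 1;
  #endif
          return is_vmalloc_addr(x);
  }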