[2/8] arm64: memory: Ensure address tag is masked in conversion macros

Message ID 20190813170149.26037-3-will@kernel.org
State New
Series
  • Fix issues with 52-bit kernel virtual addressing

Commit Message

Will Deacon Aug. 13, 2019, 5:01 p.m. UTC
When converting a linear virtual address to a physical address, pfn or
struct page *, we must make sure that the tag bits are masked before the
calculation; otherwise we end up with corrupt pointers when running with
CONFIG_KASAN_SW_TAGS=y:

  | Unable to handle kernel paging request at virtual address 0037fe0007580d08
  | [0037fe0007580d08] address between user and kernel address ranges

Mask out the tag in __virt_to_phys_nodebug() and virt_to_page().

Reported-by: Qian Cai <cai@lca.pw>
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Fixes: 9cb1c5ddd2c4 ("arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and VMEMMAP_START")
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/memory.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

Comments

Steve Capper Aug. 13, 2019, 6:54 p.m. UTC | #1
On Tue, Aug 13, 2019 at 06:01:43PM +0100, Will Deacon wrote:
> When converting a linear virtual address to a physical address, pfn or
> struct page *, we must make sure that the tag bits are masked before the
> calculation otherwise we end up with corrupt pointers when running with
> CONFIG_KASAN_SW_TAGS=y:
> 
>   | Unable to handle kernel paging request at virtual address 0037fe0007580d08
>   | [0037fe0007580d08] address between user and kernel address ranges
> 
> Mask out the tag in __virt_to_phys_nodebug() and virt_to_page().
> 
> Reported-by: Qian Cai <cai@lca.pw>
> Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
> Fixes: 9cb1c5ddd2c4 ("arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and VMEMMAP_START")
> Signed-off-by: Will Deacon <will@kernel.org>

Reviewed-by: Steve Capper <steve.capper@arm.com>

> [...]
Catalin Marinas Aug. 14, 2019, 9:23 a.m. UTC | #2
On Tue, Aug 13, 2019 at 06:01:43PM +0100, Will Deacon wrote:
> When converting a linear virtual address to a physical address, pfn or
> struct page *, we must make sure that the tag bits are masked before the
> calculation otherwise we end up with corrupt pointers when running with
> CONFIG_KASAN_SW_TAGS=y:
> 
>   | Unable to handle kernel paging request at virtual address 0037fe0007580d08
>   | [0037fe0007580d08] address between user and kernel address ranges
> 
> Mask out the tag in __virt_to_phys_nodebug() and virt_to_page().
> 
> Reported-by: Qian Cai <cai@lca.pw>
> Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
> Fixes: 9cb1c5ddd2c4 ("arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and VMEMMAP_START")
> Signed-off-by: Will Deacon <will@kernel.org>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

Patch

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 442ab861cab8..47b4dc73b8bf 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -252,7 +252,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
 #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
 
 #define __virt_to_phys_nodebug(x) ({					\
-	phys_addr_t __x = (phys_addr_t)(x);				\
+	phys_addr_t __x = (phys_addr_t)(__tag_reset(x));		\
 	__is_lm_address(__x) ? __lm_to_phys(__x) :			\
 			       __kimg_to_phys(__x);			\
 })
@@ -324,7 +324,8 @@ static inline void *phys_to_virt(phys_addr_t x)
 	((void *)__addr_tag);						\
 })
 
-#define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) + VMEMMAP_START))
+#define virt_to_page(vaddr)	\
+	((struct page *)((__virt_to_pgoff(__tag_reset(vaddr))) + VMEMMAP_START))
 #endif
 
 #define virt_addr_valid(addr)	({					\