powerpc/mm: Fix UBSAN warning reported on hugetlb

Message ID 20220908072440.258301-1-aneesh.kumar@linux.ibm.com (mailing list archive)

Commit Message

Aneesh Kumar K.V Sept. 8, 2022, 7:24 a.m. UTC
The powerpc architecture supports 16GB hugetlb pages with hash translation. For
the 4K page size, this is implemented as a hugepage directory entry at the PGD
level, and for 64K it is implemented as a huge page PTE at the PUD level.

With a 16GB hugetlb size, the offset within a page needs more than 32 bits.
Hence switch to the unsigned long type when using hugepd_shift().

In order to keep things simpler, we make sure we always use the unsigned long
type when using hugepd_shift(), even though not every hugetlb page size
requires it.
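
To illustrate, here is a minimal standalone userspace sketch (not kernel code;
the shift value 34 is the one from the UBSAN report below):

 #include <stdio.h>

 int main(void)
 {
 	unsigned int shift = 34;	/* hugepd_shift() result for a 16GB page */

 	/*
 	 * Undefined behaviour: the literal 1 is a 32-bit int, so a shift
 	 * count of 34 is out of range. UBSAN flags exactly this pattern:
 	 *
 	 *	unsigned long bad = 1 << shift;
 	 */

 	/* Well defined: 1UL is unsigned long, which is 64-bit on ppc64. */
 	unsigned long good = 1UL << shift;

 	printf("page size: %lu bytes\n", good);	/* 17179869184, i.e. 16GB */
 	return 0;
 }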

The walk_hugepd_range change won't have any impact because we don't use that
function for the hugetlb walk; it is needed to support hugepd on init_mm with
PPC_8XX and its 8M pages. Even though the 8M page size won't result in any real
issue, we update it to keep things simple.

The hugetlb_free_p*d_range changes are all related to nohash usage, where we
can have multiple pgd entries pointing to the same hugepd entry. Hence on
book3s64, where we can have > 4GB hugetlb page sizes, we will always find
more < next even if we compute the value of more correctly.
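
A worked example with illustrative numbers (the exact pgd span is
configuration dependent, but it is necessarily larger than 16GB when a 16GB
hugepd sits at the PGD level):

 	addr = 0x400000000                       /* pgd-aligned */
 	more = addr + (1UL << 34) = 0x800000000  /* addr + 16GB */
 	next = the following pgd boundary        /* at least addr + pgd span */

so more < next and the "next = more" update never fires on book3s64.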

Hence there is no functional change in this patch except that it fixes the below
warning.

 UBSAN: shift-out-of-bounds in arch/powerpc/mm/hugetlbpage.c:499:21
 shift exponent 34 is too large for 32-bit type 'int'
 CPU: 39 PID: 1673 Comm: a.out Not tainted 6.0.0-rc2-00327-gee88a56e8517-dirty #1
 Call Trace:
 [c00000002ccb3720] [c000000000cb21e4] dump_stack_lvl+0x98/0xe0 (unreliable)
 [c00000002ccb3760] [c000000000cacf60] ubsan_epilogue+0x18/0x70
 [c00000002ccb37c0] [c000000000cac44c] __ubsan_handle_shift_out_of_bounds+0x1bc/0x390
 [c00000002ccb38c0] [c0000000000d6f78] hugetlb_free_pgd_range+0x5d8/0x600
 [c00000002ccb39f0] [c000000000550e94] free_pgtables+0x114/0x290
 [c00000002ccb3ac0] [c00000000056cbe0] exit_mmap+0x150/0x550
 [c00000002ccb3be0] [c00000000017bf0c] mmput+0xcc/0x210
 [c00000002ccb3c20] [c00000000018f180] do_exit+0x420/0xdd0
 [c00000002ccb3cf0] [c00000000018fcdc] do_group_exit+0x4c/0xd0
 [c00000002ccb3d30] [c00000000018fd84] sys_exit_group+0x24/0x30
 [c00000002ccb3d50] [c00000000003cde0] system_call_exception+0x250/0x600
 [c00000002ccb3e10] [c00000000000c3bc] system_call_common+0xec/0x250

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/powerpc/mm/hugetlbpage.c | 6 +++---
 mm/pagewalk.c                 | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

Comments

Christophe Leroy Sept. 8, 2022, 6:55 p.m. UTC | #1
On 08/09/2022 at 09:24, Aneesh Kumar K.V wrote:
> [...]
> 
> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
> index fa7a3d21a751..e210b737658c 100644
> --- a/mm/pagewalk.c
> +++ b/mm/pagewalk.c
> @@ -65,7 +65,7 @@ static int walk_hugepd_range(hugepd_t *phpd, unsigned long addr,
>   	int err = 0;
>   	const struct mm_walk_ops *ops = walk->ops;
>   	int shift = hugepd_shift(*phpd);
> -	int page_size = 1 << shift;
> +	long page_size = 1UL << shift;

1UL means _unsigned_ long. Should page_size be unsigned?

>   
>   	if (!ops->pte_entry)
>   		return 0;
Michael Ellerman Oct. 4, 2022, 1:26 p.m. UTC | #2
On Thu, 8 Sep 2022 12:54:40 +0530, Aneesh Kumar K.V wrote:
> [...]

Applied to powerpc/next.

[1/1] powerpc/mm: Fix UBSAN warning reported on hugetlb
      https://git.kernel.org/powerpc/c/7dd3a7b90bca2c12e2146a47d63cf69a2f5d7e89

cheers

Patch

diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index bc84a594ca62..d1af03db6181 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -392,7 +392,7 @@  static void hugetlb_free_pmd_range(struct mmu_gather *tlb, pud_t *pud,
 		 * single hugepage, but all of them point to
 		 * the same kmem cache that holds the hugepte.
 		 */
-		more = addr + (1 << hugepd_shift(*(hugepd_t *)pmd));
+		more = addr + (1UL << hugepd_shift(*(hugepd_t *)pmd));
 		if (more > next)
 			next = more;
 
@@ -434,7 +434,7 @@  static void hugetlb_free_pud_range(struct mmu_gather *tlb, p4d_t *p4d,
 			 * single hugepage, but all of them point to
 			 * the same kmem cache that holds the hugepte.
 			 */
-			more = addr + (1 << hugepd_shift(*(hugepd_t *)pud));
+			more = addr + (1UL << hugepd_shift(*(hugepd_t *)pud));
 			if (more > next)
 				next = more;
 
@@ -496,7 +496,7 @@  void hugetlb_free_pgd_range(struct mmu_gather *tlb,
 			 * for a single hugepage, but all of them point to the
 			 * same kmem cache that holds the hugepte.
 			 */
-			more = addr + (1 << hugepd_shift(*(hugepd_t *)pgd));
+			more = addr + (1UL << hugepd_shift(*(hugepd_t *)pgd));
 			if (more > next)
 				next = more;
 
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index fa7a3d21a751..e210b737658c 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -65,7 +65,7 @@  static int walk_hugepd_range(hugepd_t *phpd, unsigned long addr,
 	int err = 0;
 	const struct mm_walk_ops *ops = walk->ops;
 	int shift = hugepd_shift(*phpd);
-	int page_size = 1 << shift;
+	long page_size = 1UL << shift;
 
 	if (!ops->pte_entry)
 		return 0;