
[2/3] mm/mprotect: Use long for page accounting and retval

Message ID: 20230104225207.1066932-3-peterx@redhat.com
State: New
Series: mm/uffd: Fix missing markers on hugetlb

Commit Message

Peter Xu Jan. 4, 2023, 10:52 p.m. UTC
Switch to type "long" for page accounting and the retval across the whole
procedure of change_protection().

The change shrinks the possible maximum page number to half of what it was
before (ULONG_MAX / 2), but it shouldn't overflow on any system either,
because the maximum number of pages that change protection can touch is
ULONG_MAX / PAGE_SIZE (e.g. 2^52 with 4KiB pages, well below LONG_MAX).

Two reasons to switch from "unsigned long" to "long":

  1. It suits count_vm_numa_events() better, whose 2nd parameter takes a
     long type.

  2. It paves the way for returning negative (error) values in the
     future.

Currently the only caller that consumes this retval is change_prot_numa(),
where the unsigned long was converted to an int.  While at it, touch up
the NUMA code to also take a long, which avoids any possible overflow
during the int-size conversion.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/hugetlb.h |  4 ++--
 include/linux/mm.h      |  2 +-
 mm/hugetlb.c            |  4 ++--
 mm/mempolicy.c          |  2 +-
 mm/mprotect.c           | 26 +++++++++++++-------------
 5 files changed, 19 insertions(+), 19 deletions(-)
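
For context, the only consumer of this retval today is change_prot_numa()
in mm/mempolicy.c.  Below is a minimal sketch of how the post-patch types
line up there (illustrative only; the mempolicy hunk in this patch changes
just the declaration):

	struct mmu_gather tlb;
	long nr_updated;	/* previously an int */

	tlb_gather_mmu(&tlb, vma->vm_mm);
	nr_updated = change_protection(&tlb, vma, addr, end, MM_CP_PROT_NUMA);
	if (nr_updated > 0)
		/* count_vm_numa_events() takes a long as 2nd parameter */
		count_vm_numa_events(NUMA_PTE_UPDATES, nr_updated);
	tlb_finish_mmu(&tlb);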

Comments

James Houghton Jan. 5, 2023, 1:51 a.m. UTC | #1
On Wed, Jan 4, 2023 at 10:52 PM Peter Xu <peterx@redhat.com> wrote:
>
> Switch to type "long" for page accounting and the retval across the whole
> procedure of change_protection().
>
> The change shrinks the possible maximum page number to half of what it was
> before (ULONG_MAX / 2), but it shouldn't overflow on any system either,
> because the maximum number of pages that change protection can touch is
> ULONG_MAX / PAGE_SIZE (e.g. 2^52 with 4KiB pages, well below LONG_MAX).
>
> Two reasons to switch from "unsigned long" to "long":
>
>   1. It suits count_vm_numa_events() better, whose 2nd parameter takes a
>      long type.
>
>   2. It paves the way for returning negative (error) values in the
>      future.
>
> Currently the only caller that consumes this retval is change_prot_numa(),
> where the unsigned long was converted to an int.  While at it, touch up
> the NUMA code to also take a long, which avoids any possible overflow
> during the int-size conversion.
>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  include/linux/hugetlb.h |  4 ++--
>  include/linux/mm.h      |  2 +-
>  mm/hugetlb.c            |  4 ++--
>  mm/mempolicy.c          |  2 +-
>  mm/mprotect.c           | 26 +++++++++++++-------------
>  5 files changed, 19 insertions(+), 19 deletions(-)

Acked-by: James Houghton <jthoughton@google.com>
David Hildenbrand Jan. 5, 2023, 8:44 a.m. UTC | #2
On 04.01.23 23:52, Peter Xu wrote:
> Switch to type "long" for page accounting and the retval across the whole
> procedure of change_protection().
> 
> The change shrinks the possible maximum page number to half of what it was
> before (ULONG_MAX / 2), but it shouldn't overflow on any system either,
> because the maximum number of pages that change protection can touch is
> ULONG_MAX / PAGE_SIZE (e.g. 2^52 with 4KiB pages, well below LONG_MAX).

Yeah, highly unlikely.

> 
> Two reasons to switch from "unsigned long" to "long":
> 
>    1. It suits count_vm_numa_events() better, whose 2nd parameter takes
>       a long type.
> 
>    2. It paves the way for returning negative (error) values in the
>       future.
> 
> Currently the only caller that consumes this retval is change_prot_numa(),
> where the unsigned long was converted to an int.  While at it, touch up
> the NUMA code to also take a long, which avoids any possible overflow
> during the int-size conversion.

I'm wondering if we should just return the number of changed pages via a 
separate pointer and later use an int for returning errors -- when 
touching this interface already.

Only callers actually interested in the number of pages (NUMA) would pass 
a pointer to an unsigned long.

And code that expects that there are never ever failures (mprotect, 
NUMA) could simply check for WARN_ON_ONCE(ret).

I assume you evaluated that option as well, what was your conclusion?
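
A minimal sketch of the alternative being floated here, with a
hypothetical signature (this is not what the series implements): return 0
or a negative errno, and report the page count through an optional
out-pointer that only interested callers (NUMA) pass:

	int change_protection(struct mmu_gather *tlb,
			      struct vm_area_struct *vma,
			      unsigned long start, unsigned long end,
			      unsigned long cp_flags,
			      unsigned long *nr_updated);	/* may be NULL */

	/* a caller that expects no failures: */
	WARN_ON_ONCE(change_protection(&tlb, vma, start, end, cp_flags, NULL));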
Mike Kravetz Jan. 5, 2023, 6:48 p.m. UTC | #3
On 01/04/23 17:52, Peter Xu wrote:
> Switch to type "long" for page accounting and the retval across the whole
> procedure of change_protection().
> 
> The change shrinks the possible maximum page number to half of what it was
> before (ULONG_MAX / 2), but it shouldn't overflow on any system either,
> because the maximum number of pages that change protection can touch is
> ULONG_MAX / PAGE_SIZE (e.g. 2^52 with 4KiB pages, well below LONG_MAX).
> 
> Two reasons to switch from "unsigned long" to "long":
> 
>   1. It suits count_vm_numa_events() better, whose 2nd parameter takes a
>      long type.
> 
>   2. It paves the way for returning negative (error) values in the
>      future.
> 
> Currently the only caller that consumes this retval is change_prot_numa(),
> where the unsigned long was converted to an int.  While at it, touch up
> the NUMA code to also take a long, which avoids any possible overflow
> during the int-size conversion.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  include/linux/hugetlb.h |  4 ++--
>  include/linux/mm.h      |  2 +-
>  mm/hugetlb.c            |  4 ++--
>  mm/mempolicy.c          |  2 +-
>  mm/mprotect.c           | 26 +++++++++++++-------------
>  5 files changed, 19 insertions(+), 19 deletions(-)

Acked-by: Mike Kravetz <mike.kravetz@oracle.com>

> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index b6b10101bea7..e3aa336df900 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -248,7 +248,7 @@ void hugetlb_vma_lock_release(struct kref *kref);
>  
>  int pmd_huge(pmd_t pmd);
>  int pud_huge(pud_t pud);
> -unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
> +long hugetlb_change_protection(struct vm_area_struct *vma,
>  		unsigned long address, unsigned long end, pgprot_t newprot,
>  		unsigned long cp_flags);
>  
> @@ -437,7 +437,7 @@ static inline void move_hugetlb_state(struct folio *old_folio,
>  {
>  }
>  
> -static inline unsigned long hugetlb_change_protection(
> +static inline long hugetlb_change_protection(
>  			struct vm_area_struct *vma, unsigned long address,
>  			unsigned long end, pgprot_t newprot,
>  			unsigned long cp_flags)
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index c37f9330f14e..86fe17e6ded7 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2132,7 +2132,7 @@ static inline bool vma_wants_manual_pte_write_upgrade(struct vm_area_struct *vma
>  }
>  bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
>  			     pte_t pte);
> -extern unsigned long change_protection(struct mmu_gather *tlb,
> +extern long change_protection(struct mmu_gather *tlb,
>  			      struct vm_area_struct *vma, unsigned long start,
>  			      unsigned long end, unsigned long cp_flags);
>  extern int mprotect_fixup(struct mmu_gather *tlb, struct vm_area_struct *vma,
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 017d9159cddf..84bc665c7c86 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -6613,7 +6613,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  	return i ? i : err;
>  }
>  
> -unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
> +long hugetlb_change_protection(struct vm_area_struct *vma,
>  		unsigned long address, unsigned long end,
>  		pgprot_t newprot, unsigned long cp_flags)
>  {
> @@ -6622,7 +6622,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
>  	pte_t *ptep;
>  	pte_t pte;
>  	struct hstate *h = hstate_vma(vma);
> -	unsigned long pages = 0, psize = huge_page_size(h);
> +	long pages = 0, psize = huge_page_size(h);

Small nit:
psize is passed to routines as an unsigned long argument.  The arithmetic
should always be correct, but I am not sure whether some of the static
checkers may complain.
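
The nit is about mixed signed/unsigned arithmetic of roughly this shape
(sketch based on the hugetlb_change_protection() loop):

	long pages = 0, psize = huge_page_size(h);	/* psize now signed */
	unsigned long address;
	...
	for (; address < end; address += psize) {
		/*
		 * psize is implicitly converted back to unsigned long for
		 * the addition; always correct since psize > 0, but static
		 * checkers may flag the implicit sign conversion.
		 */
		...
	}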
Peter Xu Jan. 5, 2023, 7:22 p.m. UTC | #4
On Thu, Jan 05, 2023 at 09:44:16AM +0100, David Hildenbrand wrote:
> I'm wondering if we should just return the number of changed pages via a
> separate pointer and later use an int for returning errors -- when
> touching this interface already.
> 
> Only callers actually interested in the number of pages (NUMA) would pass
> a pointer to an unsigned long.
> 
> And code that expects that there are never ever failures (mprotect,
> NUMA) could simply check for WARN_ON_ONCE(ret).
> 
> I assume you evaluated that option as well, what was your conclusion?

Since a single long can cover both things as the retval, it seems better
to keep it simple.  Thanks,
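
The "single long" pattern follows the usual kernel convention of a signed
return carrying either a count or a negative errno, e.g. (the error path
is hypothetical until a later change actually returns errors):

	long ret = change_protection(&tlb, vma, start, end, cp_flags);

	if (ret < 0)
		return ret;	/* -errno */
	/* otherwise ret is the number of pages changed */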
David Hildenbrand Jan. 9, 2023, 8:04 a.m. UTC | #5
On 05.01.23 20:22, Peter Xu wrote:
> On Thu, Jan 05, 2023 at 09:44:16AM +0100, David Hildenbrand wrote:
>> I'm wondering if we should just return the number of changed pages via a
>> separate pointer and later use an int for returning errors -- when
>> touching this interface already.
>>
>> Only callers actually interested in the number of pages (NUMA) would pass
>> a pointer to an unsigned long.
>>
>> And code that expects that there are never ever failures (mprotect,
>> NUMA) could simply check for WARN_ON_ONCE(ret).
>>
>> I assume you evaluated that option as well, what was your conclusion?
> 
> Since a single long can cover both things as the retval, it seems better
> to keep it simple.  Thanks,
> 

Fine with me.

Patch

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index b6b10101bea7..e3aa336df900 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -248,7 +248,7 @@  void hugetlb_vma_lock_release(struct kref *kref);
 
 int pmd_huge(pmd_t pmd);
 int pud_huge(pud_t pud);
-unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
+long hugetlb_change_protection(struct vm_area_struct *vma,
 		unsigned long address, unsigned long end, pgprot_t newprot,
 		unsigned long cp_flags);
 
@@ -437,7 +437,7 @@  static inline void move_hugetlb_state(struct folio *old_folio,
 {
 }
 
-static inline unsigned long hugetlb_change_protection(
+static inline long hugetlb_change_protection(
 			struct vm_area_struct *vma, unsigned long address,
 			unsigned long end, pgprot_t newprot,
 			unsigned long cp_flags)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index c37f9330f14e..86fe17e6ded7 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2132,7 +2132,7 @@  static inline bool vma_wants_manual_pte_write_upgrade(struct vm_area_struct *vma
 }
 bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
 			     pte_t pte);
-extern unsigned long change_protection(struct mmu_gather *tlb,
+extern long change_protection(struct mmu_gather *tlb,
 			      struct vm_area_struct *vma, unsigned long start,
 			      unsigned long end, unsigned long cp_flags);
 extern int mprotect_fixup(struct mmu_gather *tlb, struct vm_area_struct *vma,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 017d9159cddf..84bc665c7c86 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6613,7 +6613,7 @@  long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	return i ? i : err;
 }
 
-unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
+long hugetlb_change_protection(struct vm_area_struct *vma,
 		unsigned long address, unsigned long end,
 		pgprot_t newprot, unsigned long cp_flags)
 {
@@ -6622,7 +6622,7 @@  unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 	pte_t *ptep;
 	pte_t pte;
 	struct hstate *h = hstate_vma(vma);
-	unsigned long pages = 0, psize = huge_page_size(h);
+	long pages = 0, psize = huge_page_size(h);
 	bool shared_pmd = false;
 	struct mmu_notifier_range range;
 	unsigned long last_addr_mask;
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index d3558248a0f0..a86b8f15e2f0 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -631,7 +631,7 @@  unsigned long change_prot_numa(struct vm_area_struct *vma,
 			unsigned long addr, unsigned long end)
 {
 	struct mmu_gather tlb;
-	int nr_updated;
+	long nr_updated;
 
 	tlb_gather_mmu(&tlb, vma->vm_mm);
 
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 71358e45a742..0af22ab59ea8 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -80,13 +80,13 @@  bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
 	return pte_dirty(pte);
 }
 
-static unsigned long change_pte_range(struct mmu_gather *tlb,
+static long change_pte_range(struct mmu_gather *tlb,
 		struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
 		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	pte_t *pte, oldpte;
 	spinlock_t *ptl;
-	unsigned long pages = 0;
+	long pages = 0;
 	int target_node = NUMA_NO_NODE;
 	bool prot_numa = cp_flags & MM_CP_PROT_NUMA;
 	bool uffd_wp = cp_flags & MM_CP_UFFD_WP;
@@ -353,13 +353,13 @@  uffd_wp_protect_file(struct vm_area_struct *vma, unsigned long cp_flags)
 		}							\
 	} while (0)
 
-static inline unsigned long change_pmd_range(struct mmu_gather *tlb,
+static inline long change_pmd_range(struct mmu_gather *tlb,
 		struct vm_area_struct *vma, pud_t *pud, unsigned long addr,
 		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	pmd_t *pmd;
 	unsigned long next;
-	unsigned long pages = 0;
+	long pages = 0;
 	unsigned long nr_huge_updates = 0;
 	struct mmu_notifier_range range;
 
@@ -367,7 +367,7 @@  static inline unsigned long change_pmd_range(struct mmu_gather *tlb,
 
 	pmd = pmd_offset(pud, addr);
 	do {
-		unsigned long this_pages;
+		long this_pages;
 
 		next = pmd_addr_end(addr, end);
 
@@ -437,13 +437,13 @@  static inline unsigned long change_pmd_range(struct mmu_gather *tlb,
 	return pages;
 }
 
-static inline unsigned long change_pud_range(struct mmu_gather *tlb,
+static inline long change_pud_range(struct mmu_gather *tlb,
 		struct vm_area_struct *vma, p4d_t *p4d, unsigned long addr,
 		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	pud_t *pud;
 	unsigned long next;
-	unsigned long pages = 0;
+	long pages = 0;
 
 	pud = pud_offset(p4d, addr);
 	do {
@@ -458,13 +458,13 @@  static inline unsigned long change_pud_range(struct mmu_gather *tlb,
 	return pages;
 }
 
-static inline unsigned long change_p4d_range(struct mmu_gather *tlb,
+static inline long change_p4d_range(struct mmu_gather *tlb,
 		struct vm_area_struct *vma, pgd_t *pgd, unsigned long addr,
 		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	p4d_t *p4d;
 	unsigned long next;
-	unsigned long pages = 0;
+	long pages = 0;
 
 	p4d = p4d_offset(pgd, addr);
 	do {
@@ -479,14 +479,14 @@  static inline unsigned long change_p4d_range(struct mmu_gather *tlb,
 	return pages;
 }
 
-static unsigned long change_protection_range(struct mmu_gather *tlb,
+static long change_protection_range(struct mmu_gather *tlb,
 		struct vm_area_struct *vma, unsigned long addr,
 		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	pgd_t *pgd;
 	unsigned long next;
-	unsigned long pages = 0;
+	long pages = 0;
 
 	BUG_ON(addr >= end);
 	pgd = pgd_offset(mm, addr);
@@ -505,12 +505,12 @@  static unsigned long change_protection_range(struct mmu_gather *tlb,
 	return pages;
 }
 
-unsigned long change_protection(struct mmu_gather *tlb,
+long change_protection(struct mmu_gather *tlb,
 		       struct vm_area_struct *vma, unsigned long start,
 		       unsigned long end, unsigned long cp_flags)
 {
 	pgprot_t newprot = vma->vm_page_prot;
-	unsigned long pages;
+	long pages;
 
 	BUG_ON((cp_flags & MM_CP_UFFD_WP_ALL) == MM_CP_UFFD_WP_ALL);