
[RFC,10/26] hugetlb: add for_each_hgm_shift

Message ID 20220624173656.2033256-11-jthoughton@google.com (mailing list archive)
State New
Series hugetlb: Introduce HugeTLB high-granularity mapping

Commit Message

James Houghton June 24, 2022, 5:36 p.m. UTC
This is a helper macro to loop through all the usable page sizes for a
high-granularity-enabled HugeTLB VMA. Given the VMA's hstate, it will
loop, in descending order, through the page sizes that HugeTLB supports
for this architecture; it always includes PAGE_SIZE.

Signed-off-by: James Houghton <jthoughton@google.com>
---
 mm/hugetlb.c | 10 ++++++++++
 1 file changed, 10 insertions(+)
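
For context, a call site would use the macro roughly as follows. This
is a hypothetical sketch: hstate_vma() and IS_ALIGNED() are existing
kernel helpers, but vma, addr, and len are assumed to be in scope, and
the surrounding logic is illustrative rather than taken from a later
patch in this series.

	struct hstate *h = hstate_vma(vma);
	struct hstate *tmp_h;
	unsigned int shift;

	/* Walk sizes largest-to-smallest; first one that fits wins. */
	for_each_hgm_shift(h, tmp_h, shift) {
		unsigned long sz = 1UL << shift;

		if (IS_ALIGNED(addr, sz) && len >= sz)
			break;
	}
	/* shift now holds the chosen granularity, PAGE_SHIFT at worst. */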

Comments

manish.mishra June 27, 2022, 1:01 p.m. UTC | #1
On 24/06/22 11:06 pm, James Houghton wrote:
> This is a helper macro to loop through all the usable page sizes for a
> high-granularity-enabled HugeTLB VMA. Given the VMA's hstate, it will
> loop, in descending order, through the page sizes that HugeTLB supports
> for this architecture; it always includes PAGE_SIZE.
Reviewed-by: Manish Mishra <manish.mishra@nutanix.com>
> Signed-off-by: James Houghton <jthoughton@google.com>
> ---
>   mm/hugetlb.c | 10 ++++++++++
>   1 file changed, 10 insertions(+)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 8b10b941458d..557b0afdb503 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -6989,6 +6989,16 @@ bool hugetlb_hgm_enabled(struct vm_area_struct *vma)
>   	/* All shared VMAs have HGM enabled. */
>   	return vma->vm_flags & VM_SHARED;
>   }
> +static unsigned int __shift_for_hstate(struct hstate *h)
> +{
> +	if (h >= &hstates[hugetlb_max_hstate])
> +		return PAGE_SHIFT;
> +	return huge_page_shift(h);
> +}
> +#define for_each_hgm_shift(hstate, tmp_h, shift) \
> +	for ((tmp_h) = hstate; (shift) = __shift_for_hstate(tmp_h), \
> +			       (tmp_h) <= &hstates[hugetlb_max_hstate]; \
> +			       (tmp_h)++)
>   #endif /* CONFIG_HUGETLB_HIGH_GRANULARITY_MAPPING */
>   
>   /*
Mina Almasry June 28, 2022, 9:58 p.m. UTC | #2
On Fri, Jun 24, 2022 at 10:37 AM James Houghton <jthoughton@google.com> wrote:
>
> This is a helper macro to loop through all the usable page sizes for a
> high-granularity-enabled HugeTLB VMA. Given the VMA's hstate, it will
> loop, in descending order, through the page sizes that HugeTLB supports
> for this architecture; it always includes PAGE_SIZE.
>
> Signed-off-by: James Houghton <jthoughton@google.com>
> ---
>  mm/hugetlb.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 8b10b941458d..557b0afdb503 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -6989,6 +6989,16 @@ bool hugetlb_hgm_enabled(struct vm_area_struct *vma)
>         /* All shared VMAs have HGM enabled. */
>         return vma->vm_flags & VM_SHARED;
>  }
> +static unsigned int __shift_for_hstate(struct hstate *h)
> +{
> +       if (h >= &hstates[hugetlb_max_hstate])
> +               return PAGE_SHIFT;

h >= &hstates[hugetlb_max_hstate] means that h is out of bounds, no? Am
I missing something here?

So is this intending to do:

if (h == &hstates[hugetlb_max_hstate])
    return PAGE_SHIFT;

? If so, could we write it as so?

I'm also wondering why __shift_for_hstate(&hstates[hugetlb_max_hstate])
== PAGE_SHIFT? Isn't the last hstate the smallest hstate which should
be 2MB on x86? Shouldn't this return PMD_SHIFT in that case?

> +       return huge_page_shift(h);
> +}
> +#define for_each_hgm_shift(hstate, tmp_h, shift) \
> +       for ((tmp_h) = hstate; (shift) = __shift_for_hstate(tmp_h), \
> +                              (tmp_h) <= &hstates[hugetlb_max_hstate]; \
> +                              (tmp_h)++)
>  #endif /* CONFIG_HUGETLB_HIGH_GRANULARITY_MAPPING */
>
>  /*
> --
> 2.37.0.rc0.161.g10f37bed90-goog
>
Mike Kravetz July 7, 2022, 9:39 p.m. UTC | #3
On 06/28/22 14:58, Mina Almasry wrote:
> On Fri, Jun 24, 2022 at 10:37 AM James Houghton <jthoughton@google.com> wrote:
> >
> > This is a helper macro to loop through all the usable page sizes for a
> > high-granularity-enabled HugeTLB VMA. Given the VMA's hstate, it will
> > loop, in descending order, through the page sizes that HugeTLB supports
> > for this architecture; it always includes PAGE_SIZE.
> >
> > Signed-off-by: James Houghton <jthoughton@google.com>
> > ---
> >  mm/hugetlb.c | 10 ++++++++++
> >  1 file changed, 10 insertions(+)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 8b10b941458d..557b0afdb503 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -6989,6 +6989,16 @@ bool hugetlb_hgm_enabled(struct vm_area_struct *vma)
> >         /* All shared VMAs have HGM enabled. */
> >         return vma->vm_flags & VM_SHARED;
> >  }
> > +static unsigned int __shift_for_hstate(struct hstate *h)
> > +{
> > +       if (h >= &hstates[hugetlb_max_hstate])
> > +               return PAGE_SHIFT;
> 
> h >= &hstates[hugetlb_max_hstate] means that h is out of bounds, no? Am
> I missing something here?
> 
> So is this intending to do:
> 
> if (h == &hstates[hugetlb_max_hstate])
>     return PAGE_SHIFT;
> 
> ? If so, could we write it as so?
> 
> I'm also wondering why __shift_for_hstate(&hstates[hugetlb_max_hstate])
> == PAGE_SHIFT? Isn't the last hstate the smallest hstate which should
> be 2MB on x86? Shouldn't this return PMD_SHIFT in that case?
> 

I too am missing how this is working for similar reasons.
James Houghton July 8, 2022, 3:52 p.m. UTC | #4
On Tue, Jun 28, 2022 at 2:58 PM Mina Almasry <almasrymina@google.com> wrote:
>
> On Fri, Jun 24, 2022 at 10:37 AM James Houghton <jthoughton@google.com> wrote:
> >
> > This is a helper macro to loop through all the usable page sizes for a
> > high-granularity-enabled HugeTLB VMA. Given the VMA's hstate, it will
> > loop, in descending order, through the page sizes that HugeTLB supports
> > for this architecture; it always includes PAGE_SIZE.
> >
> > Signed-off-by: James Houghton <jthoughton@google.com>
> > ---
> >  mm/hugetlb.c | 10 ++++++++++
> >  1 file changed, 10 insertions(+)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 8b10b941458d..557b0afdb503 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -6989,6 +6989,16 @@ bool hugetlb_hgm_enabled(struct vm_area_struct *vma)
> >         /* All shared VMAs have HGM enabled. */
> >         return vma->vm_flags & VM_SHARED;
> >  }
> > +static unsigned int __shift_for_hstate(struct hstate *h)
> > +{
> > +       if (h >= &hstates[hugetlb_max_hstate])
> > +               return PAGE_SHIFT;
>
> h >= &hstates[hugetlb_max_hstate] means that h is out of bounds, no? Am
> I missing something here?

Yeah, it goes out of bounds intentionally. Maybe I should have called
this out. We need for_each_hgm_shift to include PAGE_SHIFT, and there
is no hstate for it. So to handle it, we iterate past the end of the
hstate array, and when we are past the end, we return PAGE_SHIFT and
stop iterating further. This is admittedly kind of gross; if you have
other suggestions for a way to get a clean `for_each_hgm_shift` macro
like this, I'm all ears. :)
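
To make the iteration concrete, here is a walkthrough of what the
macro visits on x86 with 1G and 2M hstates configured, assuming
hstates[] is ordered largest-to-smallest (as the descending-order
wording in the commit message implies) and hugetlb_max_hstate == 2:

	/*
	 * for_each_hgm_shift(&hstates[0], tmp_h, shift) visits:
	 *   tmp_h = &hstates[0]: shift = PUD_SHIFT (30), in bounds
	 *   tmp_h = &hstates[1]: shift = PMD_SHIFT (21), in bounds
	 *   tmp_h = &hstates[2]: one past the end, so __shift_for_hstate()
	 *     returns PAGE_SHIFT (12) and the `<=' in the loop condition
	 *     keeps this final iteration alive.
	 * The next increment makes tmp_h compare greater than
	 * &hstates[hugetlb_max_hstate], the condition fails, and the loop
	 * exits without ever dereferencing the out-of-bounds pointer.
	 */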

>
> So is this intending to do:
>
> if (h == &hstates[hugetlb_max_hstate])
>     return PAGE_SHIFT;
>
> ? If so, could we write it as so?

Yeah, this works. I'll write it this way instead. If that condition is
true, `h` is out of bounds (`hugetlb_max_hstate` is past the end, not
the index for the final element). I guess `hugetlb_max_hstate` is a
bit of a misnomer.
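
Concretely, the indexing being assumed is:

	/*
	 * hstates[0] .. hstates[hugetlb_max_hstate - 1] are the
	 * registered hstates; &hstates[hugetlb_max_hstate] is the
	 * one-past-the-end pointer, which C permits forming and
	 * comparing against but never dereferencing, hence the
	 * bounds check in __shift_for_hstate().
	 */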

>
> I'm also wondering why __shift_for_hstate(&hstates[hugetlb_max_hstate])
> == PAGE_SHIFT? Isn't the last hstate the smallest hstate which should
> be 2MB on x86? Shouldn't this return PMD_SHIFT in that case?

`huge_page_shift(&hstates[hugetlb_max_hstate - 1])` is PMD_SHIFT on x86.
Actually reading `hstates[hugetlb_max_hstate]` would be bad, which is
why `__shift_for_hstate` exists: to return PAGE_SHIFT when we would
otherwise attempt to compute
`huge_page_shift(&hstates[hugetlb_max_hstate])`.

>
> > +       return huge_page_shift(h);
> > +}
> > +#define for_each_hgm_shift(hstate, tmp_h, shift) \
> > +       for ((tmp_h) = hstate; (shift) = __shift_for_hstate(tmp_h), \
> > +                              (tmp_h) <= &hstates[hugetlb_max_hstate]; \

Note the <= here. If we wanted to always remain inbounds here, we'd
want < instead. But we don't have an hstate for PAGE_SIZE.

> > +                              (tmp_h)++)
> >  #endif /* CONFIG_HUGETLB_HIGH_GRANULARITY_MAPPING */
> >
> >  /*
> > --
> > 2.37.0.rc0.161.g10f37bed90-goog
> >
Mina Almasry July 9, 2022, 9:55 p.m. UTC | #5
On Fri, Jul 8, 2022 at 8:52 AM James Houghton <jthoughton@google.com> wrote:
>
> On Tue, Jun 28, 2022 at 2:58 PM Mina Almasry <almasrymina@google.com> wrote:
> >
> > On Fri, Jun 24, 2022 at 10:37 AM James Houghton <jthoughton@google.com> wrote:
> > >
> > > This is a helper macro to loop through all the usable page sizes for a
> > > high-granularity-enabled HugeTLB VMA. Given the VMA's hstate, it will
> > > loop, in descending order, through the page sizes that HugeTLB supports
> > > for this architecture; it always includes PAGE_SIZE.
> > >
> > > Signed-off-by: James Houghton <jthoughton@google.com>
> > > ---
> > >  mm/hugetlb.c | 10 ++++++++++
> > >  1 file changed, 10 insertions(+)
> > >
> > > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > > index 8b10b941458d..557b0afdb503 100644
> > > --- a/mm/hugetlb.c
> > > +++ b/mm/hugetlb.c
> > > @@ -6989,6 +6989,16 @@ bool hugetlb_hgm_enabled(struct vm_area_struct *vma)
> > >         /* All shared VMAs have HGM enabled. */
> > >         return vma->vm_flags & VM_SHARED;
> > >  }
> > > +static unsigned int __shift_for_hstate(struct hstate *h)
> > > +{
> > > +       if (h >= &hstates[hugetlb_max_hstate])
> > > +               return PAGE_SHIFT;
> >
> > h >= &hstates[hugetlb_max_hstate] means that h is out of bounds, no? Am
> > I missing something here?
>
> Yeah, it goes out of bounds intentionally. Maybe I should have called
> this out. We need for_each_hgm_shift to include PAGE_SHIFT, and there
> is no hstate for it. So to handle it, we iterate past the end of the
> hstate array, and when we are past the end, we return PAGE_SHIFT and
> stop iterating further. This is admittedly kind of gross; if you have
> other suggestions for a way to get a clean `for_each_hgm_shift` macro
> like this, I'm all ears. :)
>
> >
> > So is this intending to do:
> >
> > if (h == &hstates[hugetlb_max_hstate])
> >     return PAGE_SHIFT;
> >
> > ? If so, could we write it as so?
>
> Yeah, this works. I'll write it this way instead. If that condition is
> true, `h` is out of bounds (`hugetlb_max_hstate` is past the end, not
> the index for the final element). I guess `hugetlb_max_hstate` is a
> bit of a misnomer.
>
> >
> > I'm also wondering why __shift_for_hstate(&hstates[hugetlb_max_hstate])
> > == PAGE_SHIFT? Isn't the last hstate the smallest hstate which should
> > be 2MB on x86? Shouldn't this return PMD_SHIFT in that case?
>
> `huge_page_shift(&hstates[hugetlb_max_hstate - 1])` is PMD_SHIFT on x86.
> Actually reading `hstates[hugetlb_max_hstate]` would be bad, which is
> why `__shift_for_hstate` exists: to return PAGE_SHIFT when we would
> otherwise attempt to compute
> `huge_page_shift(&hstates[hugetlb_max_hstate])`.
>
> >
> > > +       return huge_page_shift(h);
> > > +}
> > > +#define for_each_hgm_shift(hstate, tmp_h, shift) \
> > > +       for ((tmp_h) = hstate; (shift) = __shift_for_hstate(tmp_h), \
> > > +                              (tmp_h) <= &hstates[hugetlb_max_hstate]; \
>
> Note the <= here. If we wanted to always remain inbounds here, we'd
> want < instead. But we don't have an hstate for PAGE_SIZE.
>

I see, thanks for the explanation. I can see 2 options here to make
the code more understandable:

option (a), don't go past the array. I.e. for_each_hgm_shift() will
loop over all the hugetlb-supported shifts on this arch, and the
calling code falls back to PAGE_SHIFT if the hugetlb page shifts don't
work for it. I admit that could lead to code dup in the calling code,
but I have not gotten to the patch that calls this yet.
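
For instance, option (a) could look something like this (a sketch
only; the macro name and the try_map_at() helper are illustrative,
not proposals for real identifiers):

	/* Stays strictly inside hstates[]; never yields PAGE_SHIFT. */
	#define for_each_hgm_hstate_shift(hstate, tmp_h, shift)		\
		for ((tmp_h) = (hstate);				\
		     (tmp_h) < &hstates[hugetlb_max_hstate] &&		\
		     ((shift) = huge_page_shift(tmp_h), 1);		\
		     (tmp_h)++)

	/* Hypothetical call site carrying the duplicated fallback: */
	bool mapped = false;

	for_each_hgm_hstate_shift(h, tmp_h, shift) {
		mapped = try_map_at(vma, addr, shift);
		if (mapped)
			break;
	}
	if (!mapped)
		try_map_at(vma, addr, PAGE_SHIFT);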

option (b), simply add a comment and/or make it more obvious that
you're intentionally going out of bounds, and you want to loop over
PAGE_SHIFT at the end. Something like:

+ /* Returns huge_page_shift(h) if h is a pointer to an hstate in the
+  * hstates[] array, PAGE_SHIFT otherwise. */
+static unsigned int __shift_for_hstate(struct hstate *h)
+{
+       if (h < &hstates[0] || h > &hstates[hugetlb_max_hstate - 1])
+               return PAGE_SHIFT;
+       return huge_page_shift(h);
+}
+
+ /* Loops over all the HGM shifts supported on this arch, from the
+  * largest shift possible down to PAGE_SHIFT inclusive. */
+#define for_each_hgm_shift(hstate, tmp_h, shift) \
+       for ((tmp_h) = hstate; (shift) = __shift_for_hstate(tmp_h), \
+                              (tmp_h) <= &hstates[hugetlb_max_hstate]; \
+                              (tmp_h)++)
 #endif /* CONFIG_HUGETLB_HIGH_GRANULARITY_MAPPING */

> > > +                              (tmp_h)++)
> > >  #endif /* CONFIG_HUGETLB_HIGH_GRANULARITY_MAPPING */
> > >
> > >  /*
> > > --
> > > 2.37.0.rc0.161.g10f37bed90-goog
> > >

Patch

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 8b10b941458d..557b0afdb503 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6989,6 +6989,16 @@ bool hugetlb_hgm_enabled(struct vm_area_struct *vma)
 	/* All shared VMAs have HGM enabled. */
 	return vma->vm_flags & VM_SHARED;
 }
+static unsigned int __shift_for_hstate(struct hstate *h)
+{
+	if (h >= &hstates[hugetlb_max_hstate])
+		return PAGE_SHIFT;
+	return huge_page_shift(h);
+}
+#define for_each_hgm_shift(hstate, tmp_h, shift) \
+	for ((tmp_h) = hstate; (shift) = __shift_for_hstate(tmp_h), \
+			       (tmp_h) <= &hstates[hugetlb_max_hstate]; \
+			       (tmp_h)++)
 #endif /* CONFIG_HUGETLB_HIGH_GRANULARITY_MAPPING */
 
 /*