
[RFC,V2] mm: add the zero case to page[1].compound_nr in set_compound_order

Message ID 20221213234505.173468-1-npache@redhat.com (mailing list archive)
State New

Commit Message

Nico Pache Dec. 13, 2022, 11:45 p.m. UTC
Since commit 1378a5ee451a ("mm: store compound_nr as well as
compound_order"), page[1].compound_nr must be explicitly set to 0 when
calling set_compound_order(page, 0).

This can lead to bugs if the caller of set_compound_order(page, 0) forgets
to explicitly set compound_nr = 0. An example of this is commit ba9c1201beaa
("mm/hugetlb: clear compound_nr before freeing gigantic pages").

Collapse these calls into set_compound_order() by utilizing branchless
bitmath [1].

[1] https://graphics.stanford.edu/~seander/bithacks.html#ConditionalSetOrClearBitsWithoutBranching
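
For illustration, a quick userspace check of that bithack (illustrative
only, not part of the patch):

#include <assert.h>
#include <stdio.h>

int main(void)
{
	for (unsigned int order = 0; order <= 10; order++) {
		unsigned int shift = 1U << order;
		/* order == 0 -> 0, order > 0 -> 1U << order */
		unsigned int nr = shift ^ ((-order ^ shift) & shift);

		assert(nr == (order ? shift : 0U));
		printf("order %u -> compound_nr %u\n", order, nr);
	}
	return 0;
}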

V2: slight changes to commit log and remove extra '//' in the comments

Signed-off-by: Nico Pache <npache@redhat.com>
---
 include/linux/mm.h | 6 +++++-
 mm/hugetlb.c       | 6 ------
 2 files changed, 5 insertions(+), 7 deletions(-)

Comments

Mike Kravetz Dec. 13, 2022, 11:47 p.m. UTC | #1
On 12/13/22 16:45, Nico Pache wrote:
> Since commit 1378a5ee451a ("mm: store compound_nr as well as
> compound_order") the page[1].compound_nr must be explicitly set to 0 if
> calling set_compound_order(page, 0).
> 
> This can lead to bugs if the caller of set_compound_order(page, 0) forgets
> to explicitly set compound_nr=0. An example of this is commit ba9c1201beaa
> ("mm/hugetlb: clear compound_nr before freeing gigantic pages")

There has been some recent work in this area.  The latest patch being,          
https://lore.kernel.org/linux-mm/20221213212053.106058-1-sidhartha.kumar@oracle.com/
Nico Pache Dec. 13, 2022, 11:53 p.m. UTC | #2
Hi Mike,

Thanks for the pointer! Would the branchless conditional be an
improvement over the current approach? I'm not sure how hot this path
is, but it may be worth the optimization.

-- Nico

On Tue, Dec 13, 2022 at 4:48 PM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 12/13/22 16:45, Nico Pache wrote:
> > Since commit 1378a5ee451a ("mm: store compound_nr as well as
> > compound_order") the page[1].compound_nr must be explicitly set to 0 if
> > calling set_compound_order(page, 0).
> >
> > This can lead to bugs if the caller of set_compound_order(page, 0) forgets
> > to explicitly set compound_nr=0. An example of this is commit ba9c1201beaa
> > ("mm/hugetlb: clear compound_nr before freeing gigantic pages")
>
> There has been some recent work in this area.  The latest patch being,
> https://lore.kernel.org/linux-mm/20221213212053.106058-1-sidhartha.kumar@oracle.com/
>
> --
> Mike Kravetz
>
Nico Pache Dec. 14, 2022, 12:27 a.m. UTC | #3
According to the linked document, the following approach is even faster
than the one I used, due to CPU parallelization:

page[1].compound_nr = (shift & ~shift) | (-order & shift);

#include <stdio.h>

int main(void)
{
	for (int x = 0; x < 11; x++) {
		unsigned int order = x;
		unsigned long shift = 1U << order;
		printf("order %u output : %lu\n", order,
		       (shift & ~shift) | (-order & shift));
	}
	return 0;
}
order 0 output : 0
order 1 output : 2
order 2 output : 4
order 3 output : 8
order 4 output : 16
order 5 output : 32
order 6 output : 64
order 7 output : 128
order 8 output : 256

-- Nico

Mike Kravetz Dec. 14, 2022, 1:02 a.m. UTC | #4
On 12/13/22 17:27, Nico Pache wrote:
> According to the document linked the following approach is even faster
> than the one I used due to CPU parallelization:

I do not think we are very concerned with speed here.  This routine is being
called in the creation of compound pages, and in the case of hugetlb the
tear down of gigantic pages.  In general, creation and tear down of gigantic
pages happens infrequently.  Usually only at system/application startup and
system/application shutdown.

I think the only case where we 'might' be concerned with speed is in the
creation of compound pages for THP.  Do note that this code path is
still using set_compound_order as it has not been converted to folios.
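
For reference, the compound-page prep path looks roughly like this
(simplified sketch, not the exact mm/page_alloc.c code):

static void prep_compound_page_sketch(struct page *page, unsigned int order)
{
	unsigned int i;

	__SetPageHead(page);
	for (i = 1; i < (1U << order); i++) {
		/* mark page[i] as a tail page pointing back at the head */
	}

	/* the call under discussion; a THP-sized order would go through here too */
	set_compound_order(page, order);
}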
Sidhartha Kumar Dec. 14, 2022, 6:38 a.m. UTC | #5
On 12/13/22 5:02 PM, Mike Kravetz wrote:
> On 12/13/22 17:27, Nico Pache wrote:
>> According to the document linked the following approach is even faster
>> than the one I used due to CPU parallelization:
> 
> I do not think we are very concerned with speed here.  This routine is being
> called in the creation of compound pages, and in the case of hugetlb the
> tear down of gigantic pages.  In general, creation and tear down of gigantic
> pages happens infrequently.  Usually only at system/application startup and
> system/application shutdown.
> 
Hi Nico,

I wrote a bpftrace script to track the time spent in 
__prep_compound_gigantic_folio both with and without the branch in 
folio_set_order(), and the resulting histogram was the same for both
versions. This is probably because the for loop through every base page 
has a much higher overhead than the singular call to folio_set_order(). 
I am not sure what the performance difference for THP would be.
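
For reference, the branch form being compared is, roughly, a ternary in
folio_set_order() (a sketch; see the linked series for the exact code and
field names):

	folio->_folio_order = order;
#ifdef CONFIG_64BIT
	/* branch: order == 0 stores 0, otherwise 1U << order */
	folio->_folio_nr_pages = order ? 1U << order : 0;
#endif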

@prep_nsecs:
[1M, 2M)              50 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|


Below is the script.

Thanks,
Sidhartha Kumar

k:__prep_compound_gigantic_folio
{
         @prep_start[pid] = nsecs;
}

kr:__prep_compound_gigantic_folio
{
         @prep_nsecs = hist((nsecs - @prep_start[pid]));
         delete(@prep_start[pid]);
}
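
One way to exercise that path, assuming an x86 box with 1GiB gigantic
pages available, is to write to
/sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages while the
script is attached, so the gigantic-page prep/free paths actually run.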

> I think the only case where we 'might' be concerned with speed is in the
> creation of compound pages for THP.  Do note that this code path is
> still using set_compound_order as it has not been converted to folios.
Matthew Wilcox Dec. 14, 2022, 5:04 p.m. UTC | #6
On Tue, Dec 13, 2022 at 04:45:05PM -0700, Nico Pache wrote:
> Since commit 1378a5ee451a ("mm: store compound_nr as well as
> compound_order") the page[1].compound_nr must be explicitly set to 0 if
> calling set_compound_order(page, 0).
> 
> This can lead to bugs if the caller of set_compound_order(page, 0) forgets
> to explicitly set compound_nr=0. An example of this is commit ba9c1201beaa
> ("mm/hugetlb: clear compound_nr before freeing gigantic pages")
> 
> Collapse these calls into the set_compound_order by utilizing branchless
> bitmaths [1].
> 
> [1] https://graphics.stanford.edu/~seander/bithacks.html#ConditionalSetOrClearBitsWithoutBranching
> 
> V2: slight changes to commit log and remove extra '//' in the comments

We don't usually use // comments anywhere in the kernel other than
the SPDX header.

>  static inline void set_compound_order(struct page *page, unsigned int order)
>  {
> +	unsigned long shift = (1U << order);

Shift is a funny name for this variable.  order is the shift.  this is 'nr'.

>  	page[1].compound_order = order;
>  #ifdef CONFIG_64BIT
> -	page[1].compound_nr = 1U << order;
> +	// Branchless conditional:
> +	// order  > 0 --> compound_nr = shift
> +	// order == 0 --> compound_nr = 0
> +	page[1].compound_nr = shift ^ (-order  ^ shift) & shift;

Can the compiler see through this?  Before, the compiler sees:

	page[1].compound_order = 0;
	page[1].compound_nr = 1U << 0;
...
	page[1].compound_nr = 0;

and it can eliminate the first store.  Now the compiler sees:

	unsigned long shift = (1U << 0);
	page[1].compound_order = order;
	page[1].compound_nr = shift ^ (0  ^ shift) & shift;

Does it do the maths at compile-time, knowing that order is 0 at this
callsite and deducing that it can just store a 0?

I think it might, since shift is constant-1,

	page[1].compound_nr = 1 ^ (0 ^ 1) & 1;
->	page[1].compound_nr = 1 ^ 1 & 1;
->	page[1].compound_nr = 0 & 1;
->	page[1].compound_nr = 0;

But you should run it through the compiler and check the assembly
output for __destroy_compound_gigantic_page().
Nico Pache Dec. 15, 2022, 1:05 a.m. UTC | #7
On Tue, Dec 13, 2022 at 11:38 PM Sidhartha Kumar
<sidhartha.kumar@oracle.com> wrote:
>
> On 12/13/22 5:02 PM, Mike Kravetz wrote:
> > On 12/13/22 17:27, Nico Pache wrote:
> >> According to the document linked the following approach is even faster
> >> than the one I used due to CPU parallelization:
> >
> > I do not think we are very concerned with speed here.  This routine is being
> > called in the creation of compound pages, and in the case of hugetlb the
> > tear down of gigantic pages.  In general, creation and tear down of gigantic
> > pages happens infrequently.  Usually only at system/application startup and
> > system/application shutdown.
> >
> Hi Nico,
>
> I wrote a bpftrace script to track the time spent in
> __prep_compound_gigantic_folio both with and without the branch in
> folio_set_order() and resulting histogram was the same for both
> versions. This is probably because the for loop through every base page
> has a much higher overhead than the singular call to folio_set_order().
> I am not sure what the performance difference for THP would be.

Hi Sidhartha,

Ok great! We may want to proactively implement a branchless version so
that if/when THP comes around to utilizing this we won't see a regression.

Furthermore, Waiman brought up a good point off-list: this bitmath is
needlessly complex, and the same result can be achieved with
           page[1].compound_nr = (1U << order) & ~1U;

Tested:
order 0 output : 0
order 1 output : 2
order 2 output : 4
order 3 output : 8
order 4 output : 16
order 5 output : 32
order 6 output : 64
order 7 output : 128
order 8 output : 256
order 9 output : 512
order 10 output : 1024
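
A sketch of what set_compound_order() would look like with that expression
(untested; same structure as the hunk at the bottom of this page):

static inline void set_compound_order(struct page *page, unsigned int order)
{
	page[1].compound_order = order;
#ifdef CONFIG_64BIT
	/* (1U << 0) & ~1U == 0, so order == 0 needs no special casing */
	page[1].compound_nr = (1U << order) & ~1U;
#endif
}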


> Below is the script.
> Thanks,
> Sidhartha Kumar

Thanks for the script!!
Cheers,
-- Nico

> k:__prep_compound_gigantic_folio
> {
>          @prep_start[pid] = nsecs;
> }
>
> kr:__prep_compound_gigantic_folio
> {
>          @prep_nsecs = hist((nsecs - @prep_start[pid]));
>          delete(@prep_start[pid]);
> }
Nico Pache Dec. 15, 2022, 2:48 a.m. UTC | #8
On Wed, Dec 14, 2022 at 10:04 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Tue, Dec 13, 2022 at 04:45:05PM -0700, Nico Pache wrote:
> > Since commit 1378a5ee451a ("mm: store compound_nr as well as
> > compound_order") the page[1].compound_nr must be explicitly set to 0 if
> > calling set_compound_order(page, 0).
> >
> > This can lead to bugs if the caller of set_compound_order(page, 0) forgets
> > to explicitly set compound_nr=0. An example of this is commit ba9c1201beaa
> > ("mm/hugetlb: clear compound_nr before freeing gigantic pages")
> >
> > Collapse these calls into the set_compound_order by utilizing branchless
> > bitmaths [1].
> >
> > [1] https://graphics.stanford.edu/~seander/bithacks.html#ConditionalSetOrClearBitsWithoutBranching
> >
> > V2: slight changes to commit log and remove extra '//' in the comments
>
> We don't usually use // comments anywhere in the kernel other than
> the SPDX header.

Whoops!

> >  static inline void set_compound_order(struct page *page, unsigned int order)
> >  {
> > +     unsigned long shift = (1U << order);
>
> Shift is a funny name for this variable.  order is the shift.  this is 'nr'.

Good point! Waiman found an even better/cleaner solution that would
avoid needing an extra variable:
    page[1].compound_nr = (1U << order) & ~1U;

> >       page[1].compound_order = order;
> >  #ifdef CONFIG_64BIT
> > -     page[1].compound_nr = 1U << order;
> > +     // Branchless conditional:
> > +     // order  > 0 --> compound_nr = shift
> > +     // order == 0 --> compound_nr = 0
> > +     page[1].compound_nr = shift ^ (-order  ^ shift) & shift;
>
> Can the compiler see through this?  Before, the compiler sees:
>
>         page[1].compound_order = 0;
>         page[1].compound_nr = 1U << 0;
> ...
>         page[1].compound_nr = 0;
>
> and it can eliminate the first store.

This may be the case at the moment, but with:
https://lore.kernel.org/linux-mm/20221213212053.106058-1-sidhartha.kumar@oracle.com/
we will have a branch instead. Sidhartha tested it and found no
regression; the concern is that if THPs get implemented using this
callpath then we may end up seeing a slowdown.

After doing my analysis below, I don't think this is the case for the
destroy path (at least on x86).
In the destroy case, for both the branch and branchless approaches, we see
the compiler optimizing away the bitmath and the branch and setting
the variable to zero.
In the prep case we see the introduction of a test and cmovne
instruction, implying a branch.

> Now the compiler sees:
>         unsigned long shift = (1U << 0);
>         page[1].compound_order = order;
>         page[1].compound_nr = shift ^ (0  ^ shift) & shift;
>
> Does it do the maths at compile-time, knowing that order is 0 at this
> callsite and deducing that it can just store a 0?
>
> I think it might, since shift is constant-1,
>
>         page[1].compound_nr = 1 ^ (0 ^ 1) & 1;
> ->      page[1].compound_nr = 1 ^ 1 & 1;
> ->      page[1].compound_nr = 0 & 1;
> ->      page[1].compound_nr = 0;
>
> But you should run it through the compiler and check the assembly
> output for __destroy_compound_gigantic_page().

Yep it does look like it gets optimized away for the destroy case:

Bitmath Case (destroy)
---------------------------------
Dump of assembler code for function __update_and_free_page:
...
mov    %rsi,%rbp //move 2nd arg (page) to rbp
...
movb   $0x0,0x51(%rbp) //page[1].compound_order = 0
movl   $0x0,0x5c(%rbp)  //page[1].compound_nr = 0
...

Math for movl : 0x5c (92) - 64 (sizeof page[0]) = 28
pahole page: unsigned int compound_nr;        /*    28     4 */

Bitmath Case (prep)
---------------------------------
In the case of prep_compound_gigantic_page the bitmath is being computed
   0xffffffff8134f17d <+13>:    mov    %rdi,%r12
   0xffffffff8134f180 <+16>:    push   %rbp
   0xffffffff8134f181 <+17>:    mov    $0x1,%ebp
   0xffffffff8134f186 <+22>:    shl    %cl,%ebp
   0xffffffff8134f188 <+24>:    neg    %ecx
   0xffffffff8134f18a <+26>:    push   %rbx
   0xffffffff8134f18b <+27>:    and    %ebp,%ecx
   0xffffffff8134f18d <+29>:    mov    %sil,0x51(%rdi)
   0xffffffff8134f191 <+33>:    mov    %ecx,0x5c(%rdi) //set page[1].compound_nr

Now to break down the approach with the branch:

Branch Case (destroy)
---------------------------------
  No branch utilized to determine the following instructions.
   0xffffffff813507bc <+236>:    movb   $0x0,0x51(%rbp)
   0xffffffff813507c0 <+240>:    movl   $0x0,0x5c(%rbp)

Branch  Case (prep)
---------------------------------
The branch is being computed with the introduction of a cmovne instruction.
   0xffffffff8134f15d <+13>:    mov    %rdi,%r12
   0xffffffff8134f160 <+16>:    push   %rbp
   0xffffffff8134f161 <+17>:    mov    $0x1,%ebp
   0xffffffff8134f166 <+22>:    shl    %cl,%ebp
   0xffffffff8134f168 <+24>:    test   %esi,%esi             //test
   0xffffffff8134f16a <+26>:    push   %rbx
   0xffffffff8134f16b <+27>:    cmovne %ebp,%ecx     //branch evaluation
   0xffffffff8134f16e <+30>:    mov    %sil,0x51(%rdi)
   0xffffffff8134f172 <+34>:    mov    %ecx,0x5c(%rdi)

So it looks like in the destruction of compound pages we'll see no
gain or loss between the bitmath and branch approaches.
However, in the prep case we may see some performance loss if/when THP
utilizes this path, due to the branch and the loss of CPU
parallelization that the bitmath approach can achieve.

Cheers,
-- Nico



>
Nico Pache Dec. 15, 2022, 9:38 p.m. UTC | #9
To expand a little more on the analysis:
I computed the latency/throughput between <+24> and <+27> using
Intel's manual (Appendix D):

The bitmath solution shows a total latency of 2.5 with a throughput of 0.5.
The branch solution shows a total latency of 4 and a throughput of 1.5.

Given this is not a tight loop, and the next instruction requires the
computed data, better (lower) latency is the more important factor here.

Just wanted to add that little piece :)
 -- Nico

Matthew Wilcox Dec. 15, 2022, 9:47 p.m. UTC | #10
On Thu, Dec 15, 2022 at 02:38:28PM -0700, Nico Pache wrote:
> To expand a little more on the analysis:
> I computed the latency/throughput between <+24> and <+27> using
> intel's manual (APPENDIX D):
> 
> The bitmath solutions shows a total latency of 2.5 with a Throughput of 0.5.
> The branch solution show a total latency of 4 and throughput of 1.5.
> 
> Given this is not a tight loop, and the next instruction is requiring
> the data computed, better (lower) latency is the more ideal situation.
> 
> Just wanted to add that little piece :)

I appreciate how hard you're working on this, but it really is straining
at gnats ;-)  For a modern cpu, the most important thing is cache misses
and avoiding dirtying cachelines.  Cycle counting isn't that important
when an L3 cache miss takes 2000 (or more) cycles.
Nico Pache Dec. 15, 2022, 10:02 p.m. UTC | #11
On Thu, Dec 15, 2022 at 2:47 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Thu, Dec 15, 2022 at 02:38:28PM -0700, Nico Pache wrote:
> > To expand a little more on the analysis:
> > I computed the latency/throughput between <+24> and <+27> using
> > intel's manual (APPENDIX D):
> >
> > The bitmath solutions shows a total latency of 2.5 with a Throughput of 0.5.
> > The branch solution show a total latency of 4 and throughput of 1.5.
> >
> > Given this is not a tight loop, and the next instruction is requiring
> > the data computed, better (lower) latency is the more ideal situation.
> >
> > Just wanted to add that little piece :)
>
> I appreciate how hard you're working on this, but it really is straining
> at gnats ;-)  For a modern cpu, the most important thing is cache misses
> and avoiding dirtying cachelines.  Cycle counting isn't that important
> when an L3 cache miss takes 2000 (or more) cycles.

Haha, yeah, I figured as much once I saw the results, but I wanted to share anyway.

We have HPC systems with TiBs of memory, so sometimes gnats matter ;p
The 2-3 extra cycles may turn into 2 million extra cycles on a 2TiB
system full of THPs -- I guess that's not a significant amount of
cycles either, in the grand scheme of things.
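
(Back of the envelope, assuming 2MiB THPs: 2TiB / 2MiB = ~1M THPs, so
2-3 extra cycles apiece works out to roughly 2-3 million cycles in total.)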

Cheers,
-- Nico

>

Patch

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6a05a3bc0a28..9510f6294706 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -938,9 +938,13 @@  static inline int head_compound_pincount(struct page *head)
 
 static inline void set_compound_order(struct page *page, unsigned int order)
 {
+	unsigned long shift = (1U << order);
 	page[1].compound_order = order;
 #ifdef CONFIG_64BIT
-	page[1].compound_nr = 1U << order;
+	// Branchless conditional:
+	// order  > 0 --> compound_nr = shift
+	// order == 0 --> compound_nr = 0
+	page[1].compound_nr = shift ^ (-order  ^ shift) & shift;
 #endif
 }
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 3d9f4abec17c..706dec43a6a2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1344,9 +1344,6 @@  static void __destroy_compound_gigantic_page(struct page *page,
 	}
 
 	set_compound_order(page, 0);
-#ifdef CONFIG_64BIT
-	page[1].compound_nr = 0;
-#endif
 	__ClearPageHead(page);
 }
 
@@ -1865,9 +1862,6 @@  static bool __prep_compound_gigantic_page(struct page *page, unsigned int order,
 		__ClearPageReserved(p);
 	}
 	set_compound_order(page, 0);
-#ifdef CONFIG_64BIT
-	page[1].compound_nr = 0;
-#endif
 	__ClearPageHead(page);
 	return false;
 }