diff mbox series

[RFC,v11,09/14] mm: page_frag: use __alloc_pages() to replace alloc_pages_node()

Message ID 20240719093338.55117-10-linyunsheng@huawei.com (mailing list archive)
State Superseded
Headers show
Series Replace page_frag with page_frag_cache for sk_page_frag() | expand

Checks

Context Check Description
bpf/vmtest-bpf-next-PR success PR summary
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-0 success Logs for Lint
bpf/vmtest-bpf-next-VM_Test-5 success Logs for aarch64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-2 success Logs for Unittests
bpf/vmtest-bpf-next-VM_Test-3 success Logs for Validate matrix.py
bpf/vmtest-bpf-next-VM_Test-4 success Logs for aarch64-gcc / build / build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for aarch64-gcc / test (test_verifier, false, 360) / test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-10 success Logs for aarch64-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-11 success Logs for s390x-gcc / build / build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-12 success Logs for s390x-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-13 success Logs for s390x-gcc / test (test_maps, false, 360) / test_maps on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-16 success Logs for s390x-gcc / test (test_verifier, false, 360) / test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-17 success Logs for s390x-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-18 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-19 success Logs for x86_64-gcc / build / build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-20 success Logs for x86_64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-21 success Logs for x86_64-gcc / test (test_maps, false, 360) / test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-22 success Logs for x86_64-gcc / test (test_progs, false, 360) / test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-23 success Logs for x86_64-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-24 success Logs for x86_64-gcc / test (test_progs_no_alu32_parallel, true, 30) / test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-25 success Logs for x86_64-gcc / test (test_progs_parallel, true, 30) / test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-26 success Logs for x86_64-gcc / test (test_verifier, false, 360) / test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-27 success Logs for x86_64-gcc / veristat / veristat on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-28 success Logs for x86_64-llvm-17 / build / build for x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-29 success Logs for x86_64-llvm-17 / build-release / build for x86_64 with llvm-17-O2
bpf/vmtest-bpf-next-VM_Test-30 success Logs for x86_64-llvm-17 / test (test_maps, false, 360) / test_maps on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-33 success Logs for x86_64-llvm-17 / test (test_verifier, false, 360) / test_verifier on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-34 success Logs for x86_64-llvm-17 / veristat
bpf/vmtest-bpf-next-VM_Test-35 success Logs for x86_64-llvm-18 / build / build for x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-36 success Logs for x86_64-llvm-18 / build-release / build for x86_64 with llvm-18-O2
bpf/vmtest-bpf-next-VM_Test-37 success Logs for x86_64-llvm-18 / test (test_maps, false, 360) / test_maps on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-41 success Logs for x86_64-llvm-18 / test (test_verifier, false, 360) / test_verifier on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-42 success Logs for x86_64-llvm-18 / veristat
bpf/vmtest-bpf-next-VM_Test-6 success Logs for aarch64-gcc / test (test_maps, false, 360) / test_maps on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-7 success Logs for aarch64-gcc / test (test_progs, false, 360) / test_progs on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-8 success Logs for aarch64-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-14 success Logs for s390x-gcc / test (test_progs, false, 360) / test_progs on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-15 success Logs for s390x-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-31 success Logs for x86_64-llvm-17 / test (test_progs, false, 360) / test_progs on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-32 success Logs for x86_64-llvm-17 / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-38 success Logs for x86_64-llvm-18 / test (test_progs, false, 360) / test_progs on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-39 success Logs for x86_64-llvm-18 / test (test_progs_cpuv4, false, 360) / test_progs_cpuv4 on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-40 success Logs for x86_64-llvm-18 / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with llvm-18
netdev/series_format success Posting correctly formatted
netdev/tree_selection success Guessed tree name to be net-next, async
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 661 this patch: 661
netdev/build_tools success No tools touched, skip
netdev/cc_maintainers success CCed 2 of 2 maintainers
netdev/build_clang success Errors and warnings before: 662 this patch: 662
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 662 this patch: 662
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 14 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Yunsheng Lin July 19, 2024, 9:33 a.m. UTC
There are more new APIs calling __page_frag_cache_refill() in
this patchset, which may prevent the compiler from inlining
__page_frag_cache_refill() into __page_frag_alloc_va_align().

Losing that inlining seems to cause some noticeable performance
degradation on an arm64 system with 64K PAGE_SIZE after adding a
new API calling __page_frag_cache_refill().

There is about a 24-byte binary size increase for
__page_frag_cache_refill() and __page_frag_alloc_va_align() on an
arm64 system with 64K PAGE_SIZE. Going by the gdb disassembly,
the binary size can be reduced by more than 100 bytes by using
__alloc_pages() to replace alloc_pages_node(), as the latter does
some checking for nid being NUMA_NO_NODE that is unnecessary
here, especially while page_frag is still part of the mm system.

CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 mm/page_frag_cache.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
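
For reference, here is a sketch of the two helpers the patch bypasses,
paraphrased from the include/linux/gfp.h of that era (simplified: the
_noprof/alloc-tagging wrappers are elided, so names and exact signatures
are approximate):

static inline struct page *__alloc_pages_node(int nid, gfp_t gfp_mask,
					      unsigned int order)
{
	/* sanity checks on a caller-supplied nid */
	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
	warn_if_node_offline(nid, gfp_mask);

	return __alloc_pages(gfp_mask, order, nid, NULL);
}

static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
					    unsigned int order)
{
	/* NUMA_NO_NODE is resolved here, but the checks above still run */
	if (nid == NUMA_NO_NODE)
		nid = numa_mem_id();

	return __alloc_pages_node(nid, gfp_mask, order);
}

Calling __alloc_pages(gfp, order, numa_mem_id(), NULL) directly skips the
VM_BUG_ON() and warn_if_node_offline() work, which is redundant for a nid
that numa_mem_id() itself just produced.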

Comments

Alexander Duyck July 21, 2024, 9:41 p.m. UTC | #1
On Fri, 2024-07-19 at 17:33 +0800, Yunsheng Lin wrote:
> There are more new APIs calling __page_frag_cache_refill() in
> this patchset, which may prevent the compiler from inlining
> __page_frag_cache_refill() into __page_frag_alloc_va_align().
> 
> Losing that inlining seems to cause some noticeable performance
> degradation on an arm64 system with 64K PAGE_SIZE after adding a
> new API calling __page_frag_cache_refill().
> 
> There is about a 24-byte binary size increase for
> __page_frag_cache_refill() and __page_frag_alloc_va_align() on an
> arm64 system with 64K PAGE_SIZE. Going by the gdb disassembly,
> the binary size can be reduced by more than 100 bytes by using
> __alloc_pages() to replace alloc_pages_node(), as the latter does
> some checking for nid being NUMA_NO_NODE that is unnecessary
> here, especially while page_frag is still part of the mm system.
> 
> CC: Alexander Duyck <alexander.duyck@gmail.com>
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> ---
>  mm/page_frag_cache.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
> index d9c9cad17af7..3f162e9d23ba 100644
> --- a/mm/page_frag_cache.c
> +++ b/mm/page_frag_cache.c
> @@ -59,11 +59,11 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
>  #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
>  	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) |  __GFP_COMP |
>  		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
> -	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
> -				PAGE_FRAG_CACHE_MAX_ORDER);
> +	page = __alloc_pages(gfp_mask, PAGE_FRAG_CACHE_MAX_ORDER,
> +			     numa_mem_id(), NULL);
>  #endif
>  	if (unlikely(!page)) {
> -		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
> +		page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
>  		if (unlikely(!page)) {
>  			memset(nc, 0, sizeof(*nc));
>  			return NULL;

So if I am understanding correctly this is basically just stripping the
checks that were being performed since they aren't really needed to
verify the output of numa_mem_id.

Rather than changing the code here, it might make more sense to update
alloc_pages_node_noprof to move the lines from
__alloc_pages_node_noprof into it. Then you could put the VM_BUG_ON and
warn_if_node_offline into an else statement which would cause them to
be automatically stripped for this and all other callers. The benefit
would likely be much more significant and may be worthy of being
accepted on its own merit without being a part of this patch set as I
would imagine it would show slight gains in terms of performance and
binary size by dropping the unnecessary instructions.
Yunsheng Lin July 24, 2024, 12:54 p.m. UTC | #2
On 2024/7/22 5:41, Alexander H Duyck wrote:

...

>>  	if (unlikely(!page)) {
>> -		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
>> +		page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
>>  		if (unlikely(!page)) {
>>  			memset(nc, 0, sizeof(*nc));
>>  			return NULL;
> 
> So if I am understanding correctly this is basically just stripping the
> checks that were being performed since they aren't really needed to
> verify the output of numa_mem_id.
> 
> Rather than changing the code here, it might make more sense to update
> alloc_pages_node_noprof to move the lines from
> __alloc_pages_node_noprof into it. Then you could put the VM_BUG_ON and
> warn_if_node_offline into an else statement which would cause them to
> be automatically stripped for this and all other callers. The benefit

I suppose you meant something like below:

@@ -290,10 +290,14 @@ struct folio *__folio_alloc_node_noprof(gfp_t gfp, unsigned int order, int nid)
 static inline struct page *alloc_pages_node_noprof(int nid, gfp_t gfp_mask,
                                                   unsigned int order)
 {
-       if (nid == NUMA_NO_NODE)
+       if (nid == NUMA_NO_NODE) {
                nid = numa_mem_id();
+       } else {
+               VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
+               warn_if_node_offline(nid, gfp_mask);
+       }

-       return __alloc_pages_node_noprof(nid, gfp_mask, order);
+       return __alloc_pages_noprof(gfp_mask, order, nid, NULL);
 }


> would likely be much more significant and may be worthy of being
> accepted on its own merit without being a part of this patch set as I
> would imagine it would show slight gains in terms of performance and
> binary size by dropping the unnecessary instructions.

Below is the result. It does reduce the binary size of
__page_frag_alloc_align() significantly as expected, but it also
increases the size of other functions, which seem to be passing a
runtime nid, so the trick above doesn't work for them. I am not
sure if the overall reduction is significant enough to justify
the change; that seems to depend on how many future callers pass
a runtime nid to the alloc_pages_node() related APIs.

[linyunsheng@localhost net-next]$ ./scripts/bloat-o-meter vmlinux.org vmlinux
add/remove: 1/2 grow/shrink: 13/8 up/down: 160/-256 (-96)
Function                                     old     new   delta
bpf_map_alloc_pages                          708     764     +56
its_probe_one                               2836    2860     +24
iommu_dma_alloc                              984    1008     +24
__iommu_dma_alloc_noncontiguous.constprop    1180    1192     +12
e843419@0f3f_00011fb1_4348                     -       8      +8
its_vpe_irq_domain_deactivate                312     316      +4
its_vpe_irq_domain_alloc                    1492    1496      +4
its_irq_domain_free                          440     444      +4
iommu_dma_map_sg                            1328    1332      +4
dpaa_eth_probe                              5524    5528      +4
dpaa2_eth_xdp_xmit                           676     680      +4
dpaa2_eth_open                               564     568      +4
dma_direct_get_required_mask                 116     120      +4
__dma_direct_alloc_pages.constprop           656     660      +4
its_vpe_set_affinity                         928     924      -4
its_send_single_command                      340     336      -4
its_alloc_table_entry                        456     452      -4
dpaa_bp_seed                                 232     228      -4
arm_64_lpae_alloc_pgtable_s1                 680     676      -4
__arm_lpae_alloc_pages                       900     896      -4
e843419@0473_00005079_16ec                     8       -      -8
e843419@0189_00001c33_1c8                      8       -      -8
ringbuf_map_alloc                            612     600     -12
__page_frag_alloc_align                      740     536    -204
Total: Before=30306836, After=30306740, chg -0.00%
Alexander Duyck July 24, 2024, 3:03 p.m. UTC | #3
On Wed, Jul 24, 2024 at 5:55 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>
> On 2024/7/22 5:41, Alexander H Duyck wrote:
>
> ...
>
> >>      if (unlikely(!page)) {
> >> -            page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
> >> +            page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
> >>              if (unlikely(!page)) {
> >>                      memset(nc, 0, sizeof(*nc));
> >>                      return NULL;
> >
> > So if I am understanding correctly this is basically just stripping the
> > checks that were being performed since they aren't really needed to
> > verify the output of numa_mem_id.
> >
> > Rather than changing the code here, it might make more sense to update
> > alloc_pages_node_noprof to move the lines from
> > __alloc_pages_node_noprof into it. Then you could put the VM_BUG_ON and
> > warn_if_node_offline into an else statement which would cause them to
> > be automatically stripped for this and all other callers. The benefit
>
> I suppose you meant something like below:
>
> @@ -290,10 +290,14 @@ struct folio *__folio_alloc_node_noprof(gfp_t gfp, unsigned int order, int nid)
>  static inline struct page *alloc_pages_node_noprof(int nid, gfp_t gfp_mask,
>                                                    unsigned int order)
>  {
> -       if (nid == NUMA_NO_NODE)
> +       if (nid == NUMA_NO_NODE) {
>                 nid = numa_mem_id();
> +       } else {
> +               VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
> +               warn_if_node_offline(nid, gfp_mask);
> +       }
>
> -       return __alloc_pages_node_noprof(nid, gfp_mask, order);
> +       return __alloc_pages_noprof(gfp_mask, order, nid, NULL);
>  }

Yes, that is more or less what I was thinking.

> > would likely be much more significant and may be worthy of being
> > accepted on its own merit without being a part of this patch set as I
> > would imagine it would show slight gains in terms of performance and
> > binary size by dropping the unnecessary instructions.
>
> Below is the result. It does reduce the binary size of
> __page_frag_alloc_align() significantly as expected, but it also
> increases the size of other functions, which seem to be passing a
> runtime nid, so the trick above doesn't work for them. I am not
> sure if the overall reduction is significant enough to justify
> the change; that seems to depend on how many future callers pass
> a runtime nid to the alloc_pages_node() related APIs.
>
> [linyunsheng@localhost net-next]$ ./scripts/bloat-o-meter vmlinux.org vmlinux
> add/remove: 1/2 grow/shrink: 13/8 up/down: 160/-256 (-96)
> Function                                     old     new   delta
> bpf_map_alloc_pages                          708     764     +56
> its_probe_one                               2836    2860     +24
> iommu_dma_alloc                              984    1008     +24
> __iommu_dma_alloc_noncontiguous.constprop    1180    1192     +12
> e843419@0f3f_00011fb1_4348                     -       8      +8
> its_vpe_irq_domain_deactivate                312     316      +4
> its_vpe_irq_domain_alloc                    1492    1496      +4
> its_irq_domain_free                          440     444      +4
> iommu_dma_map_sg                            1328    1332      +4
> dpaa_eth_probe                              5524    5528      +4
> dpaa2_eth_xdp_xmit                           676     680      +4
> dpaa2_eth_open                               564     568      +4
> dma_direct_get_required_mask                 116     120      +4
> __dma_direct_alloc_pages.constprop           656     660      +4
> its_vpe_set_affinity                         928     924      -4
> its_send_single_command                      340     336      -4
> its_alloc_table_entry                        456     452      -4
> dpaa_bp_seed                                 232     228      -4
> arm_64_lpae_alloc_pgtable_s1                 680     676      -4
> __arm_lpae_alloc_pages                       900     896      -4
> e843419@0473_00005079_16ec                     8       -      -8
> e843419@0189_00001c33_1c8                      8       -      -8
> ringbuf_map_alloc                            612     600     -12
> __page_frag_alloc_align                      740     536    -204
> Total: Before=30306836, After=30306740, chg -0.00%

I'm assuming the compiler must have uninlined
__alloc_pages_node_noprof in the previous version of things for the
cases where it is causing an increase in the code size.

One alternative approach we could look at doing would be to just add
the following to the start of the function:
if (__builtin_constant_p(nid) && nid == NUMA_NO_NODE)
        return __alloc_pages_noprof(gfp_mask, order, numa_mem_id(), NULL);

That should yield the best result as it essentially skips over the
problematic code at compile time for the constant case, otherwise the
code should be fully stripped so it shouldn't add any additional
overhead.
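
Applied to the unmodified helper, the suggestion would read roughly as
below (a sketch over the same simplified gfp.h definitions as earlier;
the exact placement among the *_noprof wrappers is an assumption):

static inline struct page *alloc_pages_node_noprof(int nid, gfp_t gfp_mask,
						   unsigned int order)
{
	/* Compile-time constant NUMA_NO_NODE: go straight to the core
	 * allocator, since numa_mem_id() needs no validation and the
	 * whole branch folds away at compile time.
	 */
	if (__builtin_constant_p(nid) && nid == NUMA_NO_NODE)
		return __alloc_pages_noprof(gfp_mask, order, numa_mem_id(),
					    NULL);

	/* Runtime nid (or a constant real node id): unchanged behaviour,
	 * including the VM_BUG_ON()/warn_if_node_offline() checks.
	 */
	if (nid == NUMA_NO_NODE)
		nid = numa_mem_id();

	return __alloc_pages_node_noprof(nid, gfp_mask, order);
}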
Yunsheng Lin July 25, 2024, 12:19 p.m. UTC | #4
On 2024/7/24 23:03, Alexander Duyck wrote:
> On Wed, Jul 24, 2024 at 5:55 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>>
>> ...
> 
> ...
> 
> I'm assuming the compiler must have uninlined
> __alloc_pages_node_noprof in the previous version of things for the
> cases where it is causing an increase in the code size.
> 
> One alternative approach we could look at doing would be to just add
> the following to the start of the function:
> if (__builtin_constant_p(nid) && nid == NUMA_NO_NODE)
>         return __alloc_pages_noprof(gfp_mask, order, numa_mem_id(), NULL);
> 
> That should yield the best result as it essentially skips over the
> problematic code at compile time for the constant case, otherwise the
> code should be fully stripped so it shouldn't add any additional
> overhead.

Just tried it; it seems to be more complicated than expected too.
For example, the above change seems to cause alloc_slab_page() to
be inlined into new_slab(), along with other inlining/uninlining
changes that are hard to explain.

[linyunsheng@localhost net-next]$ ./scripts/bloat-o-meter vmlinux.org vmlinux
add/remove: 1/2 grow/shrink: 16/11 up/down: 432/-536 (-104)
Function                                     old     new   delta
new_slab                                     808    1124    +316
its_probe_one                               2836    2876     +40
dpaa2_eth_set_dist_key                      1096    1112     +16
e843419@0f3f_00011fb1_4348                     -       8      +8
rx_default_dqrr                             2776    2780      +4
pcpu_unmap_pages                             356     360      +4
its_vpe_irq_domain_alloc                    1492    1496      +4
iommu_dma_init_fq                            520     524      +4
iommu_dma_alloc                              984     988      +4
hns3_nic_net_timeout                         704     708      +4
hns3_init_all_ring                          1168    1172      +4
hns3_clear_all_ring                          372     376      +4
enetc_refill_rx_ring                         448     452      +4
enetc_free_rxtx_rings                        276     280      +4
dpaa2_eth_xdp_xmit                           676     680      +4
dpaa2_eth_rx                                1716    1720      +4
___slab_alloc                               2120    2124      +4
pcpu_free_pages.constprop                    236     232      -4
its_alloc_table_entry                        456     452      -4
hns3_reset_notify_init_enet                  628     624      -4
dpaa_cleanup_tx_fd                           556     552      -4
dpaa_bp_seed                                 232     228      -4
blk_update_request                           944     940      -4
blk_execute_rq                               540     536      -4
arm_64_lpae_alloc_pgtable_s1                 680     676      -4
__kmalloc_large_node                         340     336      -4
__arm_lpae_unmap                            1588    1584      -4
e843419@0473_00005079_16ec                     8       -      -8
__page_frag_alloc_align                      740     536    -204
alloc_slab_page                              284       -    -284
Total: Before=30306836, After=30306732, chg -0.00%
Alexander Duyck Aug. 14, 2024, 6:34 p.m. UTC | #5
On Thu, 2024-07-25 at 20:19 +0800, Yunsheng Lin wrote:
> On 2024/7/24 23:03, Alexander Duyck wrote:
> > ...
> 
> Just tried it; it seems to be more complicated than expected too.
> For example, the above change seems to cause alloc_slab_page() to
> be inlined into new_slab(), along with other inlining/uninlining
> changes that are hard to explain.
> 
> [linyunsheng@localhost net-next]$ ./scripts/bloat-o-meter vmlinux.org vmlinux
> add/remove: 1/2 grow/shrink: 16/11 up/down: 432/-536 (-104)
> Function                                     old     new   delta
> new_slab                                     808    1124    +316
> its_probe_one                               2836    2876     +40
> dpaa2_eth_set_dist_key                      1096    1112     +16
> e843419@0f3f_00011fb1_4348                     -       8      +8
> rx_default_dqrr                             2776    2780      +4
> pcpu_unmap_pages                             356     360      +4
> its_vpe_irq_domain_alloc                    1492    1496      +4
> iommu_dma_init_fq                            520     524      +4
> iommu_dma_alloc                              984     988      +4
> hns3_nic_net_timeout                         704     708      +4
> hns3_init_all_ring                          1168    1172      +4
> hns3_clear_all_ring                          372     376      +4
> enetc_refill_rx_ring                         448     452      +4
> enetc_free_rxtx_rings                        276     280      +4
> dpaa2_eth_xdp_xmit                           676     680      +4
> dpaa2_eth_rx                                1716    1720      +4
> ___slab_alloc                               2120    2124      +4
> pcpu_free_pages.constprop                    236     232      -4
> its_alloc_table_entry                        456     452      -4
> hns3_reset_notify_init_enet                  628     624      -4
> dpaa_cleanup_tx_fd                           556     552      -4
> dpaa_bp_seed                                 232     228      -4
> blk_update_request                           944     940      -4
> blk_execute_rq                               540     536      -4
> arm_64_lpae_alloc_pgtable_s1                 680     676      -4
> __kmalloc_large_node                         340     336      -4
> __arm_lpae_unmap                            1588    1584      -4
> e843419@0473_00005079_16ec                     8       -      -8
> __page_frag_alloc_align                      740     536    -204
> alloc_slab_page                              284       -    -284
> Total: Before=30306836, After=30306732, chg -0.00%

One interesting similarity between alloc_slab_page() and
__page_frag_alloc_align() is that they both attempt a higher-order
allocation and then fall back to a lower order.

I wonder if we couldn't somehow consolidate the checks and make it
so that we have a single function that provides a page of a size
within a given range.
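
One possible shape for that, sketched as a hypothetical helper (the name,
signature and gfp handling are illustrative only, not an existing kernel
API; the gfp mask for the high-order attempt mirrors the one used by
__page_frag_cache_refill() in the patch below):

static struct page *alloc_pages_range_order(gfp_t gfp, unsigned int min_order,
					    unsigned int max_order,
					    unsigned int *order)
{
	struct page *page;

	/* Opportunistic high-order attempt: no direct reclaim, no retry,
	 * no warning, so failure is cheap and we fall through quickly.
	 */
	if (max_order > min_order) {
		gfp_t gfp_high = (gfp & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
				 __GFP_NOWARN | __GFP_NORETRY |
				 __GFP_NOMEMALLOC;

		page = __alloc_pages(gfp_high, max_order, numa_mem_id(), NULL);
		if (page) {
			*order = max_order;
			return page;
		}
	}

	/* Fall back to the smallest acceptable order with the caller's
	 * original gfp flags.
	 */
	*order = min_order;
	return __alloc_pages(gfp, min_order, numa_mem_id(), NULL);
}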

Patch

diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index d9c9cad17af7..3f162e9d23ba 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -59,11 +59,11 @@  static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
 	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) |  __GFP_COMP |
 		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
-	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
-				PAGE_FRAG_CACHE_MAX_ORDER);
+	page = __alloc_pages(gfp_mask, PAGE_FRAG_CACHE_MAX_ORDER,
+			     numa_mem_id(), NULL);
 #endif
 	if (unlikely(!page)) {
-		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+		page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
 		if (unlikely(!page)) {
 			memset(nc, 0, sizeof(*nc));
 			return NULL;