
[net,1/2] mm: Use fixed constant in page_frag_alloc instead of size + 1

Message ID 20190215224412.16881.89296.stgit@localhost.localdomain (mailing list archive)
State New, archived
Series Address recent issues found in netdev page_frag_alloc usage

Commit Message

Alexander Duyck Feb. 15, 2019, 10:44 p.m. UTC
From: Alexander Duyck <alexander.h.duyck@linux.intel.com>

This patch replaces the size + 1 value, introduced with the recent fix for
1-byte allocs, with a constant value.

The idea here is to reduce code overhead as the previous logic would have
to read size into a register, then increment it, and write it back to
whatever field was being used. By using a constant we can avoid those
memory reads and arithmetic operations in favor of just encoding the
maximum value into the operation itself.
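
As a rough illustration outside the kernel (a simplified struct, not the real
page_frag_cache), the difference is between:

	struct frag_cache {
		unsigned int pagecnt_bias;
	};

	/* variable bias: load 'size' into a register, add 1, store */
	void reset_bias_var(struct frag_cache *nc, unsigned int size)
	{
		nc->pagecnt_bias = size + 1;
	}

	#define FRAG_MAX	32768

	/* constant bias: "FRAG_MAX + 1" folds into a single immediate store */
	void reset_bias_const(struct frag_cache *nc)
	{
		nc->pagecnt_bias = FRAG_MAX + 1;
	}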

Fixes: 2c2ade81741c ("mm: page_alloc: fix ref bias in page_frag_alloc() for 1-byte allocs")
Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
 mm/page_alloc.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

Comments

Vlastimil Babka Feb. 17, 2023, 9:30 a.m. UTC | #1
On 2/15/19 23:44, Alexander Duyck wrote:
> From: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> 
> This patch replaces the size + 1 value, introduced with the recent fix for
> 1-byte allocs, with a constant value.
> 
> The idea here is to reduce code overhead as the previous logic would have
> to read size into a register, then increment it, and write it back to
> whatever field was being used. By using a constant we can avoid those
> memory reads and arithmetic operations in favor of just encoding the
> maximum value into the operation itself.
> 
> Fixes: 2c2ade81741c ("mm: page_alloc: fix ref bias in page_frag_alloc() for 1-byte allocs")
> Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> ---
>  mm/page_alloc.c |    8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index ebb35e4d0d90..37ed14ad0b59 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4857,11 +4857,11 @@ void *page_frag_alloc(struct page_frag_cache *nc,
>  		/* Even if we own the page, we do not use atomic_set().
>  		 * This would break get_page_unless_zero() users.
>  		 */
> -		page_ref_add(page, size);
> +		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);

But can't this value theoretically be too low when PAGE_SIZE >
PAGE_FRAG_CACHE_MAX_SIZE? For example, on architectures with a 64kB page
size, where PAGE_FRAG_CACHE_MAX_SIZE is 32kB?

Maybe impossible to exploit in practice thanks to the minimum alignment, but
still IMHO we should be using the larger of PAGE_FRAG_CACHE_MAX_SIZE and
PAGE_SIZE, which would still be a build-time constant and so would not defeat
the optimization.
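
Something like this untested sketch, perhaps (PAGE_FRAG_CACHE_BIAS_MAX is
just a placeholder name):

#define PAGE_FRAG_CACHE_BIAS_MAX				\
	(PAGE_FRAG_CACHE_MAX_SIZE > PAGE_SIZE ?			\
	 PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE)

	/* bias covers one fragment per byte of whichever is larger */
	page_ref_add(page, PAGE_FRAG_CACHE_BIAS_MAX);
	nc->pagecnt_bias = PAGE_FRAG_CACHE_BIAS_MAX + 1;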

>  
>  		/* reset page count bias and offset to start of new frag */
>  		nc->pfmemalloc = page_is_pfmemalloc(page);
> -		nc->pagecnt_bias = size + 1;
> +		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
>  		nc->offset = size;
>  	}
>  
> @@ -4877,10 +4877,10 @@ void *page_frag_alloc(struct page_frag_cache *nc,
>  		size = nc->size;
>  #endif
>  		/* OK, page count is 0, we can safely set it */
> -		set_page_count(page, size + 1);
> +		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
>  
>  		/* reset page count bias and offset to start of new frag */
> -		nc->pagecnt_bias = size + 1;
> +		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
>  		offset = size - fragsz;
>  	}
>  
>
Vlastimil Babka March 20, 2023, 3:14 p.m. UTC | #2
On 2/17/23 10:30, Vlastimil Babka wrote:
> On 2/15/19 23:44, Alexander Duyck wrote:
>> From: Alexander Duyck <alexander.h.duyck@linux.intel.com>
>> 
>> This patch replaces the size + 1 value, introduced with the recent fix for
>> 1-byte allocs, with a constant value.
>> 
>> The idea here is to reduce code overhead as the previous logic would have
>> to read size into a register, then increment it, and write it back to
>> whatever field was being used. By using a constant we can avoid those
>> memory reads and arithmetic operations in favor of just encoding the
>> maximum value into the operation itself.
>> 
>> Fixes: 2c2ade81741c ("mm: page_alloc: fix ref bias in page_frag_alloc() for 1-byte allocs")
>> Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
>> ---
>>  mm/page_alloc.c |    8 ++++----
>>  1 file changed, 4 insertions(+), 4 deletions(-)
>> 
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index ebb35e4d0d90..37ed14ad0b59 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -4857,11 +4857,11 @@ void *page_frag_alloc(struct page_frag_cache *nc,
>>  		/* Even if we own the page, we do not use atomic_set().
>>  		 * This would break get_page_unless_zero() users.
>>  		 */
>> -		page_ref_add(page, size);
>> +		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
> 
> But can't this value theoretically be too low when PAGE_SIZE >
> PAGE_FRAG_CACHE_MAX_SIZE? For example, on architectures with a 64kB page
> size, where PAGE_FRAG_CACHE_MAX_SIZE is 32kB?

Never mind, PAGE_FRAG_CACHE_MAX_SIZE would be 64kB because

#define PAGE_FRAG_CACHE_MAX_SIZE        __ALIGN_MASK(32768, ~PAGE_MASK)
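
For reference, assuming the usual definition

	#define __ALIGN_MASK(x, mask)	(((x) + (mask)) & ~(mask))

the expansion with 64kB pages (PAGE_SHIFT == 16, so ~PAGE_MASK == 0xffff) is:

	__ALIGN_MASK(32768, 0xffff) == (0x8000 + 0xffff) & ~0xffff
	                            == 0x17fff & ~0xffff
	                            == 0x10000 == 64kB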

So all is fine, sorry for the noise.

Patch

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ebb35e4d0d90..37ed14ad0b59 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4857,11 +4857,11 @@  void *page_frag_alloc(struct page_frag_cache *nc,
 		/* Even if we own the page, we do not use atomic_set().
 		 * This would break get_page_unless_zero() users.
 		 */
-		page_ref_add(page, size);
+		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
 
 		/* reset page count bias and offset to start of new frag */
 		nc->pfmemalloc = page_is_pfmemalloc(page);
-		nc->pagecnt_bias = size + 1;
+		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
 		nc->offset = size;
 	}
 
@@ -4877,10 +4877,10 @@  void *page_frag_alloc(struct page_frag_cache *nc,
 		size = nc->size;
 #endif
 		/* OK, page count is 0, we can safely set it */
-		set_page_count(page, size + 1);
+		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
 
 		/* reset page count bias and offset to start of new frag */
-		nc->pagecnt_bias = size + 1;
+		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
 		offset = size - fragsz;
 	}