[RFC] put page to pcp->lists[] tail if it is not on the same node

Message ID 20181019043303.s5axhjfb2v2lzsr3@master (mailing list archive)
State New, archived
Series [RFC] put page to pcp->lists[] tail if it is not on the same node

Commit Message

Wei Yang Oct. 19, 2018, 4:33 a.m. UTC

Masters,

While reading the code, I came up with this idea.

    If we add some NUMA-node awareness to pcp->lists[], we may get
    better performance.

The idea is simple:

    Put pages from other nodes at the tail of pcp->lists[], because we
    allocate from the head and free from the tail.
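
To be concrete, this is the behavior I rely on (a simplified sketch of
the order-0 fast paths in mm/page_alloc.c around this time; details
trimmed, not the exact code):

    /* allocation: __rmqueue_pcplist() takes pages from the list head */
    page = list_first_entry(list, struct page, lru);
    list_del(&page->lru);
    pcp->count--;

    /* drain to buddy: free_pcppages_bulk() takes pages from the tail */
    page = list_last_entry(list, struct page, lru);
    list_del(&page->lru);

So a page placed at the tail is among the first to be returned to the
buddy allocator and the last to be handed out again.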

Since my desktop has just one NUMA node, I couldn't test the effect. I
just ran a kernel build test to see whether it would degrade the
current kernel. The results do not look bad.

                    make -j4 bzImage
           base-line:
           
           real    6m15.947s        
           user    21m14.481s       
           sys     2m34.407s        
           
           real    6m16.089s        
           user    21m18.295s       
           sys     2m35.551s        
           
           real    6m16.239s        
           user    21m17.590s       
           sys     2m35.252s        
           
           patched:
           
           real    6m14.558s
           user    21m18.374s
           sys     2m33.143s
           
           real    6m14.606s
           user    21m14.969s
           sys     2m32.039s
           
           real    6m15.264s
           user    21m16.698s
           sys     2m33.024s

Sorry for sending this without a real justification; I hope it does
not make you uncomfortable. I would be very glad if you could suggest
some verification I could do.

Below is my testing patch; I look forward to your comments.

From 2f9a99521068dfe7ec98ea39f73649226d9a837b Mon Sep 17 00:00:00 2001
From: Wei Yang <richard.weiyang@gmail.com>
Date: Fri, 19 Oct 2018 11:37:09 +0800
Subject: [PATCH] mm: put page to pcp->lists[] tail if it is not on the same
 node

pcp->lists[] is used to allocate and free order-0 pages, and a list
belonging to a CPU on node A could contain pages from node B.

If we put pages from the same node at the list head and pages from
other nodes at the list tail, this would increase the chance of
allocating a page from the same node and of freeing pages from other
nodes back to the buddy system first.

On a 64-bit machine, the size of per_cpu_pages will not increase
because of alignment. The newly added field *node* fits in the same
cache line as *count*, which minimizes the performance impact.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 include/linux/mmzone.h |  1 +
 mm/page_alloc.c        | 30 +++++++++++++++++++++---------
 2 files changed, 22 insertions(+), 9 deletions(-)

Comments

Mel Gorman Oct. 19, 2018, 8:38 a.m. UTC | #1
On Fri, Oct 19, 2018 at 04:33:03AM +0000, Wei Yang wrote:
> 
> Masters,
> 
> While reading the code, I came up with this idea.
> 
>     If we add some NUMA-node awareness to pcp->lists[], we may get
>     better performance.
> 

Why?

> The idea is simple:
> 
>     Put pages from other nodes at the tail of pcp->lists[], because we
>     allocate from the head and free from the tail.
> 

Pages from remote nodes are not placed on local lists. Even in the slab
context, such objects are placed on alien caches which have special
handling.

> Since my desktop has just one NUMA node, I couldn't test the effect.

I suspect it would eventually cause a crash or at least weirdness as the
page zone ids would not match due to different nodes.

> Sorry for sending this without a real justification; I hope it does
> not make you uncomfortable. I would be very glad if you could suggest
> some verification I could do.
> 
> Below is my testing patch; I look forward to your comments.
> 

I commend you for trying to understand how the page allocator works,
but I suggest you take a step back, pick a workload that is of interest,
and profile it to see where the hot spots are; that may pinpoint where
an improvement can be made.
Vlastimil Babka Oct. 19, 2018, 1:43 p.m. UTC | #2
On 10/19/18 6:33 AM, Wei Yang wrote:
> @@ -2763,7 +2764,14 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn)
>  	}
>  
>  	pcp = &this_cpu_ptr(zone->pageset)->pcp;
> -	list_add(&page->lru, &pcp->lists[migratetype]);

My impression is that you think there's only one pcp per cpu. But the
"pcp" here is already specific to the zone (and thus node) of the page
being freed. So it doesn't matter whether we put the page at the list
head or tail. For allocation we already typically prefer local nodes,
thus local zones, thus pcp's containing only local pages.
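
To illustrate the nesting (simplified from include/linux/mmzone.h of
this era; vmstat fields omitted):

    struct per_cpu_pages {
            int count;      /* number of pages in the lists */
            int high;       /* high watermark, emptying needed */
            int batch;      /* chunk size for buddy add/remove */
            /* one list per migratetype, holding pages of this pcp's zone */
            struct list_head lists[MIGRATE_PCPTYPES];
    };

    struct per_cpu_pageset {
            struct per_cpu_pages pcp;
            /* ... NUMA/vmstat bookkeeping ... */
    };

    struct zone {
            /* ... */
            struct per_cpu_pageset __percpu *pageset; /* one per cpu, per zone */
            /* ... */
    };

Freeing goes through zone = page_zone(page) first and only then picks
this cpu's instance of that zone's pageset, so a freed page can never
end up on another zone's pcp lists.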

> +	/*
> +	 * If the page has the same node_id as this cpu, put the page at head.
> +	 * Otherwise, put at the end.
> +	 */
> +	if (page_node == pcp->node)

So this should in fact be always true due to what I explained above.

Otherwise I second the recommendation from Mel.

Cheers,
Vlastimil
Wei Yang Oct. 20, 2018, 12:54 a.m. UTC | #3
On Fri, Oct 19, 2018 at 09:38:18AM +0100, Mel Gorman wrote:
>On Fri, Oct 19, 2018 at 04:33:03AM +0000, Wei Yang wrote:
>> 
>> Masters,
>> 
>> While reading the code, I came up with this idea.
>> 
>>     If we add some NUMA-node awareness to pcp->lists[], we may get
>>     better performance.
>> 
>
>Why?
>
>> The idea is simple:
>> 
>>     Put pages from other nodes at the tail of pcp->lists[], because we
>>     allocate from the head and free from the tail.
>> 
>
>Pages from remote nodes are not placed on local lists. Even in the slab
>context, such objects are placed on alien caches which have special
>handling.
>

Hmm... ok, I need to read the code again.

>> Since my desktop has just one NUMA node, I couldn't test the effect.
>
>I suspect it would eventually cause a crash or at least weirdness as the
>page zone ids would not match due to different nodes.
>
>> Sorry for sending this without a real justification; I hope it does
>> not make you uncomfortable. I would be very glad if you could suggest
>> some verification I could do.
>> 
>> Below is my testing patch; I look forward to your comments.
>> 
>
>I commend you for trying to understand how the page allocator works,
>but I suggest you take a step back, pick a workload that is of interest,
>and profile it to see where the hot spots are; that may pinpoint where
>an improvement can be made.
>

Thanks for your words.

>-- 
>Mel Gorman
>SUSE Labs
Wei Yang Oct. 20, 2018, 1:38 a.m. UTC | #4
On Fri, Oct 19, 2018 at 03:43:29PM +0200, Vlastimil Babka wrote:
>On 10/19/18 6:33 AM, Wei Yang wrote:
>> @@ -2763,7 +2764,14 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn)
>>  	}
>>  
>>  	pcp = &this_cpu_ptr(zone->pageset)->pcp;
>> -	list_add(&page->lru, &pcp->lists[migratetype]);
>
>My impression is that you think there's only one pcp per cpu. But the
>"pcp" here is already specific to the zone (and thus node) of the page
>being freed. So it doesn't matter whether we put the page at the list
>head or tail. For allocation we already typically prefer local nodes,
>thus local zones, thus pcp's containing only local pages.
>

Your guess is right. :-)

I took a look at the code:

    zone->pageset = alloc_percpu(struct per_cpu_pageset);

each zone has its own pageset.
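
Setup does roughly the following (a sketch of setup_zone_pageset() from
memory, so details may differ):

    zone->pageset = alloc_percpu(struct per_cpu_pageset);
    for_each_possible_cpu(cpu)
            zone_pageset_init(zone, cpu);

i.e. every zone carries one pcp instance for every possible cpu,
including cpus on other nodes.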

This means just a portion of the pagesets see heavy use on a
multi-node system, since a zone belongs to just one node. Could we
allocate or initialize just that part? Maybe it is too small a win to
be worth polishing.

Well, I am still lost on when we would allocate a page from a remote
node. Let me try to understand. :-)

>> +	/*
>> +	 * If the page has the same node_id as this cpu, put the page at head.
>> +	 * Otherwise, put at the end.
>> +	 */
>> +	if (page_node == pcp->node)
>
>So this should in fact be always true due to what I explained above.
>
>Otherwise I second the recommendation from Mel.
>

Sure, I have to say you are right.

BTW, is there a channel less formal than the mailing list for raising
questions or discussion? Reading the code alone is not that exciting,
and sometimes when I get an idea or run into confusion, I would really
like to chat with someone to understand why things are the way they
are.

The mailing list doesn't seem like the proper channel; maybe IRC is
the proper way?

>Cheers,
>Vlastimil
Wei Yang Oct. 20, 2018, 4:10 p.m. UTC | #5
On Fri, Oct 19, 2018 at 03:43:29PM +0200, Vlastimil Babka wrote:
>On 10/19/18 6:33 AM, Wei Yang wrote:
>> @@ -2763,7 +2764,14 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn)
>>  	}
>>  
>>  	pcp = &this_cpu_ptr(zone->pageset)->pcp;
>> -	list_add(&page->lru, &pcp->lists[migratetype]);
>
>My impression is that you think there's only one pcp per cpu. But the
>"pcp" here is already specific to the zone (and thus node) of the page
>being freed. So it doesn't matter whether we put the page at the list
>head or tail. For allocation we already typically prefer local nodes,
>thus local zones, thus pcp's containing only local pages.
>
>> +	/*
>> +	 * If the page has the same node_id as this cpu, put the page at head.
>> +	 * Otherwise, put at the end.
>> +	 */
>> +	if (page_node == pcp->node)
>
>So this should in fact be always true due to what I explained above.

Vlastimil,

After looking at the code, I got some new understanding of the pcp
pages, which may be a little different from yours.

Every zone has a per_cpu_pageset for each cpu, and the pages put into
a per_cpu_pageset are either on the same node as that *cpu* or on a
different node.

So this comparison (page_node == pcp->node) would be always true or
always false for any particular per_cpu_pageset.
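
To make it concrete, the free path looks up the pcp through the page's
own zone (a minimal sketch of free_unref_page_commit(), trimmed):

    struct zone *zone = page_zone(page);     /* zone of the page being freed */
    struct per_cpu_pages *pcp;

    pcp = &this_cpu_ptr(zone->pageset)->pcp; /* this cpu's pcp *of that zone* */
    list_add(&page->lru, &pcp->lists[migratetype]);

Every page on a given pcp's lists[] therefore comes from that pcp's
zone, so the comparison cannot change from page to page.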

Well, one thing is for sure: putting a page at the tail will not
improve locality.

>
>Otherwise I second the recommendation from Mel.
>
>Cheers,
>Vlastimil
Wei Yang Oct. 20, 2018, 4:33 p.m. UTC | #6
On Fri, Oct 19, 2018 at 09:38:18AM +0100, Mel Gorman wrote:
>On Fri, Oct 19, 2018 at 04:33:03AM +0000, Wei Yang wrote:
>> 
>> Masters,
>> 
>> While reading the code, I came up with this idea.
>> 
>>     If we add some NUMA-node awareness to pcp->lists[], we may get
>>     better performance.
>> 
>
>Why?
>
>> The idea is simple:
>> 
>>     Put pages from other nodes at the tail of pcp->lists[], because we
>>     allocate from the head and free from the tail.
>> 
>
>Pages from remote nodes are not placed on local lists. Even in the slab
>context, such objects are placed on alien caches which have special
>handling.
>

Hmm... I am not sure I get your point correctly.

As I mentioned in my reply to Vlastimil, every zone has a
per_cpu_pageset for each cpu, and the per_cpu_pagesets of one zone
only contain pages from that zone. This means some per_cpu_pagesets
hold pages with the same node id as their cpu, while others do not.

I don't get your point about the slab context. Do they use a different
list instead of pcp->lists[]? If you could give me a hint, I may catch up.

>> Since my desktop has just one NUMA node, I couldn't test the effect.
>
>I suspect it would eventually cause a crash or at least weirdness as the
>page zone ids would not match due to different nodes.
>

If my analysis is correct, there are only two possible relationships
between the node_id of the pages in a pcp and the pcp's node_id:
either they are the same or they are not.

Let me have a try with a qemu-emulated NUMA system. :-)
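
Something along these lines should do it (qemu options quoted from
memory, so treat this as a rough guide rather than an exact recipe):

    qemu-system-x86_64 -enable-kvm -smp 4 -m 4096 \
            -numa node,nodeid=0,cpus=0,mem=1024 \
            -numa node,nodeid=1,cpus=1,mem=1024 \
            -numa node,nodeid=2,cpus=2,mem=1024 \
            -numa node,nodeid=3,cpus=3,mem=1024 \
            -kernel arch/x86/boot/bzImage \
            -append "root=/dev/sda console=ttyS0"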

>> Sorry for sending this without a real justification; I hope it does
>> not make you uncomfortable. I would be very glad if you could suggest
>> some verification I could do.
>> 
>> Below is my testing patch; I look forward to your comments.
>> 
>
>I commend you for trying to understand how the page allocator works,
>but I suggest you take a step back, pick a workload that is of interest,
>and profile it to see where the hot spots are; that may pinpoint where
>an improvement can be made.
>
>-- 
>Mel Gorman
>SUSE Labs
Wei Yang Oct. 21, 2018, 2:36 a.m. UTC | #7
On Sat, Oct 20, 2018 at 04:33:18PM +0000, Wei Yang wrote:
>>
>>I suspect it would eventually cause a crash or at least weirdness as the
>>page zone ids would not match due to different nodes.
>>
>
>If my analysis is correct, there are only two possible relationships
>between the node_id of the pages in a pcp and the pcp's node_id:
>either they are the same or they are not.
>
>Let me have a try with a qemu-emulated NUMA system. :-)
>

I just ran an emulated system with 4 NUMA nodes in qemu, and the
kernel with this change looks good.

Nothing to celebrate; I just wanted to keep you informed.
Mel Gorman Oct. 21, 2018, 12:12 p.m. UTC | #8
On Sat, Oct 20, 2018 at 04:33:18PM +0000, Wei Yang wrote:
> >Pages from remote nodes are not placed on local lists. Even in the slab
> >context, such objects are placed on alien caches which have special
> >handling.
> >
> 
> Hmm... I am not sure I get your point correctly.
> 

The point is that one list should not contain a mix of pages belonging
to different nodes or zones, or it'll result in unexpected behaviour.
If you are just shuffling the ordering of pages in the list, it needs
justification as to why that makes sense.
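
For example, when the pcp is drained, free_pcppages_bulk() hands every
page back to the buddy allocator tagged with the pcp's own zone
(roughly; the real code batches this under zone->lock):

    __free_one_page(page, page_to_pfn(page), zone, 0, mt);

A page from a different zone sitting on that list would be merged into
the wrong zone's free area.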
Wei Yang Oct. 22, 2018, 1:24 a.m. UTC | #9
On Sun, Oct 21, 2018 at 01:12:51PM +0100, Mel Gorman wrote:
>On Sat, Oct 20, 2018 at 04:33:18PM +0000, Wei Yang wrote:
>> >Pages from remote nodes are not placed on local lists. Even in the slab
>> >context, such objects are placed on alien caches which have special
>> >handling.
>> >
>> 
>> Hmm... I am not sure I get your point correctly.
>> 
>
>The point is that one list should not contain a mix of pages belonging
>to different nodes or zones, or it'll result in unexpected behaviour.
>If you are just shuffling the ordering of pages in the list, it needs
>justification as to why that makes sense.
>

Yep, you are right :-)

>-- 
>Mel Gorman
>SUSE Labs

Patch

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 5138efde11ae..27ce071bc99c 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -272,6 +272,7 @@  enum zone_watermarks {
 #define high_wmark_pages(z) (z->watermark[WMARK_HIGH])
 
 struct per_cpu_pages {
+	int node;               /* node id of this cpu */
 	int count;		/* number of pages in the list */
 	int high;		/* high watermark, emptying needed */
 	int batch;		/* chunk size for buddy add/remove */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a398eafbae46..c7a27e461602 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2741,6 +2741,7 @@  static bool free_unref_page_prepare(struct page *page, unsigned long pfn)
 static void free_unref_page_commit(struct page *page, unsigned long pfn)
 {
 	struct zone *zone = page_zone(page);
+	int page_node = page_to_nid(page);
 	struct per_cpu_pages *pcp;
 	int migratetype;
 
@@ -2763,7 +2764,14 @@  static void free_unref_page_commit(struct page *page, unsigned long pfn)
 	}
 
 	pcp = &this_cpu_ptr(zone->pageset)->pcp;
-	list_add(&page->lru, &pcp->lists[migratetype]);
+	/*
+	 * If the page has the same node_id as this cpu, put the page at head.
+	 * Otherwise, put at the end.
+	 */
+	if (page_node == pcp->node)
+		list_add(&page->lru, &pcp->lists[migratetype]);
+	else
+		list_add_tail(&page->lru, &pcp->lists[migratetype]);
 	pcp->count++;
 	if (pcp->count >= pcp->high) {
 		unsigned long batch = READ_ONCE(pcp->batch);
@@ -5615,7 +5623,7 @@  static int zone_batchsize(struct zone *zone)
  * exist).
  */
 static void pageset_update(struct per_cpu_pages *pcp, unsigned long high,
-		unsigned long batch)
+			   unsigned long batch, int node_id)
 {
        /* start with a fail safe value for batch */
 	pcp->batch = 1;
@@ -5626,12 +5634,14 @@  static void pageset_update(struct per_cpu_pages *pcp, unsigned long high,
 	smp_wmb();
 
 	pcp->batch = batch;
+	pcp->node = node_id;
 }
 
 /* a companion to pageset_set_high() */
-static void pageset_set_batch(struct per_cpu_pageset *p, unsigned long batch)
+static void pageset_set_batch(struct per_cpu_pageset *p, unsigned long batch,
+			      int node_id)
 {
-	pageset_update(&p->pcp, 6 * batch, max(1UL, 1 * batch));
+	pageset_update(&p->pcp, 6 * batch, max(1UL, 1 * batch), node_id);
 }
 
 static void pageset_init(struct per_cpu_pageset *p)
@@ -5650,7 +5660,7 @@  static void pageset_init(struct per_cpu_pageset *p)
 static void setup_pageset(struct per_cpu_pageset *p, unsigned long batch)
 {
 	pageset_init(p);
-	pageset_set_batch(p, batch);
+	pageset_set_batch(p, batch, 0);
 }
 
 /*
@@ -5658,13 +5668,13 @@  static void setup_pageset(struct per_cpu_pageset *p, unsigned long batch)
  * to the value high for the pageset p.
  */
 static void pageset_set_high(struct per_cpu_pageset *p,
-				unsigned long high)
+				unsigned long high, int node_id)
 {
 	unsigned long batch = max(1UL, high / 4);
 	if ((high / 4) > (PAGE_SHIFT * 8))
 		batch = PAGE_SHIFT * 8;
 
-	pageset_update(&p->pcp, high, batch);
+	pageset_update(&p->pcp, high, batch, node_id);
 }
 
 static void pageset_set_high_and_batch(struct zone *zone,
@@ -5673,9 +5683,11 @@  static void pageset_set_high_and_batch(struct zone *zone,
 	if (percpu_pagelist_fraction)
 		pageset_set_high(pcp,
 			(zone->managed_pages /
-				percpu_pagelist_fraction));
+				percpu_pagelist_fraction),
+			zone->zone_pgdat->node_id);
 	else
-		pageset_set_batch(pcp, zone_batchsize(zone));
+		pageset_set_batch(pcp, zone_batchsize(zone),
+				  zone->zone_pgdat->node_id);
 }
 
 static void __meminit zone_pageset_init(struct zone *zone, int cpu)