Message ID | 20181019043303.s5axhjfb2v2lzsr3@master (mailing list archive) |
---|---|
State | New, archived |
Series | [RFC] put page to pcp->lists[] tail if it is not on the same node |
On Fri, Oct 19, 2018 at 04:33:03AM +0000, Wei Yang wrote:
> node
> Reply-To: Wei Yang <richard.weiyang@gmail.com>
>
> Masters,
>
> While reading the code, this idea came to me.
>
> If we add some NUMA-node intelligence to pcp->lists[], we may
> get better performance.
>

Why?

> The idea is simple:
>
> Put pages from other nodes at the tail of pcp->lists[], because we
> allocate from the head and free from the tail.
>

Pages from remote nodes are not placed on local lists. Even in the slab
context, such objects are placed on alien caches which have special
handling.

> Since my desktop has just one NUMA node, I couldn't test the effect.

I suspect it would eventually cause a crash or at least weirdness as the
page zone ids would not match due to different nodes.

> Sorry for sending this without a real justification. I hope this will not
> make you uncomfortable. I would be very glad if you could suggest some
> verifications that I could do.
>
> Below is my testing patch; I look forward to your comments.
>

I commend you for trying to understand how the page allocator works, but I
suggest you take a step back, pick a workload that is of interest, and
profile it to see where the hot spots are that may pinpoint where an
improvement can be made.
On 10/19/18 6:33 AM, Wei Yang wrote:
> @@ -2763,7 +2764,14 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn)
> 	}
>
> 	pcp = &this_cpu_ptr(zone->pageset)->pcp;
> -	list_add(&page->lru, &pcp->lists[migratetype]);

My impression is that you think there's only one pcp per cpu. But the
"pcp" here is already specific to the zone (and thus node) of the page
being freed. So it doesn't matter whether we put the page at the head or
the tail of the list. For allocation we already typically prefer local
nodes, thus local zones, thus pcp's containing only local pages.

> +	/*
> +	 * If the page has the same node_id as this cpu, put the page at head.
> +	 * Otherwise, put at the end.
> +	 */
> +	if (page_node == pcp->node)

So this should in fact be always true due to what I explained above.

Otherwise I second the recommendation from Mel.

Cheers,
Vlastimil
On Fri, Oct 19, 2018 at 09:38:18AM +0100, Mel Gorman wrote:
>On Fri, Oct 19, 2018 at 04:33:03AM +0000, Wei Yang wrote:
>> node
>> Reply-To: Wei Yang <richard.weiyang@gmail.com>
>>
>> Masters,
>>
>> While reading the code, this idea came to me.
>>
>> If we add some NUMA-node intelligence to pcp->lists[], we may
>> get better performance.
>>
>
>Why?
>
>> The idea is simple:
>>
>> Put pages from other nodes at the tail of pcp->lists[], because we
>> allocate from the head and free from the tail.
>>
>
>Pages from remote nodes are not placed on local lists. Even in the slab
>context, such objects are placed on alien caches which have special
>handling.
>

Hmm... ok, I need to read the code again.

>> Since my desktop has just one NUMA node, I couldn't test the effect.
>
>I suspect it would eventually cause a crash or at least weirdness as the
>page zone ids would not match due to different nodes.
>
>> Sorry for sending this without a real justification. I hope this will not
>> make you uncomfortable. I would be very glad if you could suggest some
>> verifications that I could do.
>>
>> Below is my testing patch; I look forward to your comments.
>>
>
>I commend you for trying to understand how the page allocator works, but I
>suggest you take a step back, pick a workload that is of interest, and
>profile it to see where the hot spots are that may pinpoint where an
>improvement can be made.
>

Thanks for your advice.

>--
>Mel Gorman
>SUSE Labs
On Fri, Oct 19, 2018 at 03:43:29PM +0200, Vlastimil Babka wrote:
>On 10/19/18 6:33 AM, Wei Yang wrote:
>> @@ -2763,7 +2764,14 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn)
>> 	}
>>
>> 	pcp = &this_cpu_ptr(zone->pageset)->pcp;
>> -	list_add(&page->lru, &pcp->lists[migratetype]);
>
>My impression is that you think there's only one pcp per cpu. But the
>"pcp" here is already specific to the zone (and thus node) of the page
>being freed. So it doesn't matter whether we put the page at the head or
>the tail of the list. For allocation we already typically prefer local
>nodes, thus local zones, thus pcp's containing only local pages.
>

Your guess is right. :-)

I took a look at the code:

    zone->pageset = alloc_percpu(struct per_cpu_pageset);

Each zone has its own pageset, allocated for every cpu. This means just a
portion of the pagesets are used on a multi-node system, since a cpu
belongs to just one node. Could we allocate or initialize just that
portion? Maybe the saving is too small to be worth the effort.

Well, I am not clear on when we would allocate a page from a remote node.
Let me try to understand. :-)

>> +	/*
>> +	 * If the page has the same node_id as this cpu, put the page at head.
>> +	 * Otherwise, put at the end.
>> +	 */
>> +	if (page_node == pcp->node)
>
>So this should in fact be always true due to what I explained above.
>
>Otherwise I second the recommendation from Mel.
>

Sure, I have to say you are right.

BTW, is there another channel, less formal than the mailing list, to raise
questions or start discussions? Reading the code alone is not that
exciting, and sometimes when I get an idea or run into confusion, I am
really willing to chat with someone to understand why things are the way
they are. The mailing list does not seem like the proper channel; maybe
IRC is?

>Cheers,
>Vlastimil
On Fri, Oct 19, 2018 at 03:43:29PM +0200, Vlastimil Babka wrote:
>On 10/19/18 6:33 AM, Wei Yang wrote:
>> @@ -2763,7 +2764,14 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn)
>> 	}
>>
>> 	pcp = &this_cpu_ptr(zone->pageset)->pcp;
>> -	list_add(&page->lru, &pcp->lists[migratetype]);
>
>My impression is that you think there's only one pcp per cpu. But the
>"pcp" here is already specific to the zone (and thus node) of the page
>being freed. So it doesn't matter whether we put the page at the head or
>the tail of the list. For allocation we already typically prefer local
>nodes, thus local zones, thus pcp's containing only local pages.
>
>> +	/*
>> +	 * If the page has the same node_id as this cpu, put the page at head.
>> +	 * Otherwise, put at the end.
>> +	 */
>> +	if (page_node == pcp->node)
>
>So this should in fact be always true due to what I explained above.

Vlastimil,

After looking at the code, I got some new understanding of the pcp pages,
which may be a little different from yours.

Every zone has a per_cpu_pageset for each cpu, and the pages put on a
per_cpu_pageset are either on the same node as that *cpu* or on a
different node. So the comparison (page_node == pcp->node) will always be
either true or false for a particular per_cpu_pageset.

Well, one thing is for sure: putting a page at the tail will not improve
locality.

>
>Otherwise I second the recommendation from Mel.
>
>Cheers,
>Vlastimil
On Fri, Oct 19, 2018 at 09:38:18AM +0100, Mel Gorman wrote:
>On Fri, Oct 19, 2018 at 04:33:03AM +0000, Wei Yang wrote:
>> node
>> Reply-To: Wei Yang <richard.weiyang@gmail.com>
>>
>> Masters,
>>
>> While reading the code, this idea came to me.
>>
>> If we add some NUMA-node intelligence to pcp->lists[], we may
>> get better performance.
>>
>
>Why?
>
>> The idea is simple:
>>
>> Put pages from other nodes at the tail of pcp->lists[], because we
>> allocate from the head and free from the tail.
>>
>
>Pages from remote nodes are not placed on local lists. Even in the slab
>context, such objects are placed on alien caches which have special
>handling.
>

Hmm... I am not sure I get your point correctly.

As I mentioned in my reply to Vlastimil, every zone has a per_cpu_pageset
for each cpu, and a zone's per_cpu_pagesets will only contain pages from
that zone. This means some per_cpu_pagesets hold pages with the same node
id as their cpu, while others do not.

I don't get your point about the slab context. Do they use a different
list instead of pcp->lists[]? If you could give me some hints, I may
catch up.

>> Since my desktop has just one NUMA node, I couldn't test the effect.
>
>I suspect it would eventually cause a crash or at least weirdness as the
>page zone ids would not match due to different nodes.
>

If my analysis is correct, there are only two possible relationships
between the node_id of the pages in a pcp and the pcp's node_id: they are
either the same or not.

Let me have a try with a qemu-emulated NUMA system. :-)

>> Sorry for sending this without a real justification. I hope this will not
>> make you uncomfortable. I would be very glad if you could suggest some
>> verifications that I could do.
>>
>> Below is my testing patch; I look forward to your comments.
>>
>
>I commend you for trying to understand how the page allocator works, but I
>suggest you take a step back, pick a workload that is of interest, and
>profile it to see where the hot spots are that may pinpoint where an
>improvement can be made.
>
>--
>Mel Gorman
>SUSE Labs
On Sat, Oct 20, 2018 at 04:33:18PM +0000, Wei Yang wrote:
>>
>>I suspect it would eventually cause a crash or at least weirdness as the
>>page zone ids would not match due to different nodes.
>>
>
>If my analysis is correct, there are only two possible relationships
>between the node_id of the pages in a pcp and the pcp's node_id: they are
>either the same or not.
>
>Let me have a try with a qemu-emulated NUMA system. :-)
>

I just ran an emulated system with 4 NUMA nodes in qemu, and the kernel
with this change looks good.

Nothing to be excited about; I just wanted to keep you informed.
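[Editorial aside: the message above does not include the qemu command line. A sketch of how such a 4-node guest could be emulated is shown below; the kernel image path, memory sizes, and cpu assignments are placeholders, and the legacy `-numa node,...,mem=` syntax is what qemu accepted around this time:]

```shell
# Boot a guest with 4 emulated NUMA nodes (hypothetical invocation;
# adjust paths, sizes, and cpu ranges to the machine at hand).
qemu-system-x86_64 -enable-kvm -m 4G -smp 8 \
    -numa node,nodeid=0,cpus=0-1,mem=1G \
    -numa node,nodeid=1,cpus=2-3,mem=1G \
    -numa node,nodeid=2,cpus=4-5,mem=1G \
    -numa node,nodeid=3,cpus=6-7,mem=1G \
    -kernel bzImage -append "console=ttyS0" -nographic

# Inside the guest, confirm the emulated topology before testing:
numactl --hardware
```

Note that a clean boot only shows the patch does not crash; it says nothing about whether the reordering helps, which is why profiling a real workload (as Mel suggested) would still be needed.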
On Sat, Oct 20, 2018 at 04:33:18PM +0000, Wei Yang wrote:
> >Pages from remote nodes are not placed on local lists. Even in the slab
> >context, such objects are placed on alien caches which have special
> >handling.
> >
>
> Hmm... I am not sure I get your point correctly.
>

The point is that one list should not contain a mix of pages belonging to
different nodes or zones, or it'll result in unexpected behaviour. If you
are just shuffling the ordering of pages in the list, it needs
justification as to why that makes sense.
On Sun, Oct 21, 2018 at 01:12:51PM +0100, Mel Gorman wrote:
>On Sat, Oct 20, 2018 at 04:33:18PM +0000, Wei Yang wrote:
>> >Pages from remote nodes are not placed on local lists. Even in the slab
>> >context, such objects are placed on alien caches which have special
>> >handling.
>> >
>>
>> Hmm... I am not sure I get your point correctly.
>>
>
>The point is that one list should not contain a mix of pages belonging to
>different nodes or zones, or it'll result in unexpected behaviour. If you
>are just shuffling the ordering of pages in the list, it needs
>justification as to why that makes sense.
>

Yep, you are right. :-)

>--
>Mel Gorman
>SUSE Labs
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 5138efde11ae..27ce071bc99c 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -272,6 +272,7 @@ enum zone_watermarks {
 #define high_wmark_pages(z) (z->watermark[WMARK_HIGH])
 
 struct per_cpu_pages {
+	int node;	/* node id of this cpu */
 	int count;	/* number of pages in the list */
 	int high;	/* high watermark, emptying needed */
 	int batch;	/* chunk size for buddy add/remove */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a398eafbae46..c7a27e461602 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2741,6 +2741,7 @@ static bool free_unref_page_prepare(struct page *page, unsigned long pfn)
 static void free_unref_page_commit(struct page *page, unsigned long pfn)
 {
 	struct zone *zone = page_zone(page);
+	int page_node = page_to_nid(page);
 	struct per_cpu_pages *pcp;
 	int migratetype;
@@ -2763,7 +2764,14 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn)
 	}
 
 	pcp = &this_cpu_ptr(zone->pageset)->pcp;
-	list_add(&page->lru, &pcp->lists[migratetype]);
+	/*
+	 * If the page has the same node_id as this cpu, put the page at head.
+	 * Otherwise, put at the end.
+	 */
+	if (page_node == pcp->node)
+		list_add(&page->lru, &pcp->lists[migratetype]);
+	else
+		list_add_tail(&page->lru, &pcp->lists[migratetype]);
 	pcp->count++;
 	if (pcp->count >= pcp->high) {
 		unsigned long batch = READ_ONCE(pcp->batch);
@@ -5615,7 +5623,7 @@ static int zone_batchsize(struct zone *zone)
  * exist).
  */
 static void pageset_update(struct per_cpu_pages *pcp, unsigned long high,
-		unsigned long batch)
+		unsigned long batch, int node_id)
 {
 	/* start with a fail safe value for batch */
 	pcp->batch = 1;
@@ -5626,12 +5634,14 @@ static void pageset_update(struct per_cpu_pages *pcp, unsigned long high,
 	smp_wmb();
 
 	pcp->batch = batch;
+	pcp->node = node_id;
 }
 
 /* a companion to pageset_set_high() */
-static void pageset_set_batch(struct per_cpu_pageset *p, unsigned long batch)
+static void pageset_set_batch(struct per_cpu_pageset *p, unsigned long batch,
+				int node_id)
 {
-	pageset_update(&p->pcp, 6 * batch, max(1UL, 1 * batch));
+	pageset_update(&p->pcp, 6 * batch, max(1UL, 1 * batch), node_id);
 }
 
 static void pageset_init(struct per_cpu_pageset *p)
@@ -5650,7 +5660,7 @@ static void pageset_init(struct per_cpu_pageset *p)
 static void setup_pageset(struct per_cpu_pageset *p, unsigned long batch)
 {
 	pageset_init(p);
-	pageset_set_batch(p, batch);
+	pageset_set_batch(p, batch, 0);
 }
 
 /*
@@ -5658,13 +5668,13 @@ static void setup_pageset(struct per_cpu_pageset *p, unsigned long batch)
  * to the value high for the pageset p.
  */
 static void pageset_set_high(struct per_cpu_pageset *p,
-				unsigned long high)
+				unsigned long high, int node_id)
 {
 	unsigned long batch = max(1UL, high / 4);
 	if ((high / 4) > (PAGE_SHIFT * 8))
 		batch = PAGE_SHIFT * 8;
 
-	pageset_update(&p->pcp, high, batch);
+	pageset_update(&p->pcp, high, batch, node_id);
 }
 
 static void pageset_set_high_and_batch(struct zone *zone,
@@ -5673,9 +5683,11 @@
 {
 	if (percpu_pagelist_fraction)
 		pageset_set_high(pcp,
 			(zone->managed_pages /
-				percpu_pagelist_fraction));
+				percpu_pagelist_fraction),
+			zone->zone_pgdat->node_id);
 	else
-		pageset_set_batch(pcp, zone_batchsize(zone));
+		pageset_set_batch(pcp, zone_batchsize(zone),
+			zone->zone_pgdat->node_id);
 }
 
 static void __meminit zone_pageset_init(struct zone *zone, int cpu)