| Message ID | 20220412001319.7462-1-richard.weiyang@gmail.com (mailing list archive) |
| --- | --- |
| State | New |
| Series | [v3] mm/page_alloc: add same penalty is enough to get round-robin order |
On 4/12/22 02:13, Wei Yang wrote:
> To make the node order round-robin within the same distance group, we add
> a penalty to the first node we pick in each round.
>
> To get a round-robin order within the same distance group, we don't need
> to decrease the penalty since:
>
> * find_next_best_node() always iterates nodes in the same order
> * distance matters more than penalty in find_next_best_node()
> * among nodes with the same distance, the first one is picked
>
> So it is fine to apply the same penalty when we get the first node in the
> same distance group. Since we only add a constant of 1 to the node
> penalty, it is not necessary to multiply by MAX_NODE_LOAD for preference.
>
> [vbabka@suse.cz: suggests to remove MAX_NODE_LOAD]
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> CC: Vlastimil Babka <vbabka@suse.cz>
> CC: David Hildenbrand <david@redhat.com>
> CC: Oscar Salvador <osalvador@suse.de>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> v3: merge into a single patch
> v2: adjust constant penalty to 1
> ---
> mm/page_alloc.c | 9 +++------
> 1 file changed, 3 insertions(+), 6 deletions(-)
[snip]
On 12.04.22 02:13, Wei Yang wrote:
> To make the node order round-robin within the same distance group, we add
> a penalty to the first node we pick in each round.
>
> To get a round-robin order within the same distance group, we don't need
> to decrease the penalty since:
>
> * find_next_best_node() always iterates nodes in the same order
> * distance matters more than penalty in find_next_best_node()
> * among nodes with the same distance, the first one is picked
>
> So it is fine to apply the same penalty when we get the first node in the
> same distance group. Since we only add a constant of 1 to the node
> penalty, it is not necessary to multiply by MAX_NODE_LOAD for preference.
>
> [vbabka@suse.cz: suggests to remove MAX_NODE_LOAD]
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> CC: Vlastimil Babka <vbabka@suse.cz>
> CC: David Hildenbrand <david@redhat.com>
> CC: Oscar Salvador <osalvador@suse.de>
> ---
> v3: merge into a single patch
> v2: adjust constant penalty to 1
> ---

Acked-by: David Hildenbrand <david@redhat.com>
On Tue, Apr 12, 2022 at 12:13:19AM +0000, Wei Yang wrote:
> To make the node order round-robin within the same distance group, we add
> a penalty to the first node we pick in each round.
>
> To get a round-robin order within the same distance group, we don't need
> to decrease the penalty since:
>
> * find_next_best_node() always iterates nodes in the same order
> * distance matters more than penalty in find_next_best_node()
> * among nodes with the same distance, the first one is picked
>
> So it is fine to apply the same penalty when we get the first node in the
> same distance group. Since we only add a constant of 1 to the node
> penalty, it is not necessary to multiply by MAX_NODE_LOAD for preference.
>
> [vbabka@suse.cz: suggests to remove MAX_NODE_LOAD]
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> CC: Vlastimil Babka <vbabka@suse.cz>
> CC: David Hildenbrand <david@redhat.com>
> CC: Oscar Salvador <osalvador@suse.de>

Acked-by: Oscar Salvador <osalvador@suse.de>

> ---
> v3: merge into a single patch
> v2: adjust constant penalty to 1
> ---
> mm/page_alloc.c | 9 +++------
> 1 file changed, 3 insertions(+), 6 deletions(-)
[snip]
> --
> 2.33.1
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5d71b8dcb5f4..0334c06a0a47 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6170,7 +6170,6 @@ int numa_zonelist_order_handler(struct ctl_table *table, int write,
 }
 
 
-#define MAX_NODE_LOAD (nr_online_nodes)
 static int node_load[MAX_NUMNODES];
 
 /**
@@ -6217,7 +6216,7 @@ int find_next_best_node(int node, nodemask_t *used_node_mask)
 			val += PENALTY_FOR_NODE_WITH_CPUS;
 
 		/* Slight preference for less loaded node */
-		val *= (MAX_NODE_LOAD*MAX_NUMNODES);
+		val *= MAX_NUMNODES;
 		val += node_load[n];
 
 		if (val < min_val) {
@@ -6283,13 +6282,12 @@ static void build_thisnode_zonelists(pg_data_t *pgdat)
 static void build_zonelists(pg_data_t *pgdat)
 {
 	static int node_order[MAX_NUMNODES];
-	int node, load, nr_nodes = 0;
+	int node, nr_nodes = 0;
 	nodemask_t used_mask = NODE_MASK_NONE;
 	int local_node, prev_node;
 
 	/* NUMA-aware ordering of nodes */
 	local_node = pgdat->node_id;
-	load = nr_online_nodes;
 	prev_node = local_node;
 
 	memset(node_order, 0, sizeof(node_order));
@@ -6301,11 +6299,10 @@ static void build_zonelists(pg_data_t *pgdat)
 		 */
 		if (node_distance(local_node, node) !=
 		    node_distance(local_node, prev_node))
-			node_load[node] += load;
+			node_load[node] += 1;
 
 		node_order[nr_nodes++] = node;
 		prev_node = node;
-		load--;
 	}
 
 	build_zonelists_in_node_order(pgdat, node_order, nr_nodes);
To make the node order round-robin within the same distance group, we add
a penalty to the first node we pick in each round.

To get a round-robin order within the same distance group, we don't need
to decrease the penalty since:

* find_next_best_node() always iterates nodes in the same order
* distance matters more than penalty in find_next_best_node()
* among nodes with the same distance, the first one is picked

So it is fine to apply the same penalty when we get the first node in the
same distance group. Since we only add a constant of 1 to the node
penalty, it is not necessary to multiply by MAX_NODE_LOAD for preference.

[vbabka@suse.cz: suggests to remove MAX_NODE_LOAD]

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
CC: Vlastimil Babka <vbabka@suse.cz>
CC: David Hildenbrand <david@redhat.com>
CC: Oscar Salvador <osalvador@suse.de>
---
v3: merge into a single patch
v2: adjust constant penalty to 1
---
 mm/page_alloc.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)