Message ID | 20200326222445.18781-1-richard.weiyang@gmail.com (mailing list archive)
---|---
State | New, archived
Series | [1/2] mm/page_alloc.c: leverage compiler to zero out used_mask
On Thu, Mar 26, 2020 at 10:24:44PM +0000, Wei Yang wrote:
> Since we always clear used_mask before getting node order, we can
> leverage compiler to do this instead of at run time.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>  mm/page_alloc.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 0e823bca3f2f..2144b6ceb119 100644
> +++ b/mm/page_alloc.c
> @@ -5587,14 +5587,13 @@ static void build_zonelists(pg_data_t *pgdat)
> {
> 	static int node_order[MAX_NUMNODES];
> 	int node, load, nr_nodes = 0;
> -	nodemask_t used_mask;
> +	nodemask_t used_mask = {.bits = {0}};

If this style is to be done it should just be '= {}';

This case demonstrates why the popular '= {0}' idiom is not such a
good idea, as it only works if the first member is an integral type.

Jason
On 26.03.20 23:24, Wei Yang wrote:
> Since we always clear used_mask before getting node order, we can
> leverage compiler to do this instead of at run time.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> ---
>  mm/page_alloc.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 0e823bca3f2f..2144b6ceb119 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5587,14 +5587,13 @@ static void build_zonelists(pg_data_t *pgdat)
> {
> 	static int node_order[MAX_NUMNODES];
> 	int node, load, nr_nodes = 0;
> -	nodemask_t used_mask;
> +	nodemask_t used_mask = {.bits = {0}};
> 	int local_node, prev_node;
>
> 	/* NUMA-aware ordering of nodes */
> 	local_node = pgdat->node_id;
> 	load = nr_online_nodes;
> 	prev_node = local_node;
> -	nodes_clear(used_mask);
>
> 	memset(node_order, 0, sizeof(node_order));
> 	while ((node = find_next_best_node(local_node, &used_mask)) >= 0) {
>

t480s: ~/git/linux default_online_type $ git grep "nodemask_t " | grep "="
arch/x86/mm/numa.c:	nodemask_t reserved_nodemask = NODE_MASK_NONE;
arch/x86/mm/numa_emulation.c:	nodemask_t physnode_mask = numa_nodes_parsed;
arch/x86/mm/numa_emulation.c:	nodemask_t physnode_mask = numa_nodes_parsed;
arch/x86/mm/numa_emulation.c:	nodemask_t physnode_mask = numa_nodes_parsed;
drivers/acpi/numa/srat.c:static nodemask_t nodes_found_map = NODE_MASK_NONE;
kernel/irq/affinity.c:	nodemask_t nodemsk = NODE_MASK_NONE;
kernel/sched/fair.c:	nodemask_t max_group = NODE_MASK_NONE;
mm/memory_hotplug.c:	nodemask_t nmask = node_states[N_MEMORY];
mm/mempolicy.c:	nodemask_t mems = cpuset_mems_allowed(current);
mm/mempolicy.c:	nodemask_t nodes = NODE_MASK_NONE;
mm/oom_kill.c:	const nodemask_t *mask = oc->nodemask;
mm/page_alloc.c:nodemask_t node_states[NR_NODE_STATES] __read_mostly = {
mm/page_alloc.c:	nodemask_t saved_node_state = node_states[N_MEMORY];

Should this be NODE_MASK_NONE?
On Fri, Mar 27, 2020 at 10:32:45AM +0100, David Hildenbrand wrote:
>On 26.03.20 23:24, Wei Yang wrote:
>> Since we always clear used_mask before getting node order, we can
>> leverage compiler to do this instead of at run time.
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> ---
>>  mm/page_alloc.c | 3 +--
>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 0e823bca3f2f..2144b6ceb119 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -5587,14 +5587,13 @@ static void build_zonelists(pg_data_t *pgdat)
>> {
>> 	static int node_order[MAX_NUMNODES];
>> 	int node, load, nr_nodes = 0;
>> -	nodemask_t used_mask;
>> +	nodemask_t used_mask = {.bits = {0}};
>> 	int local_node, prev_node;
>>
>> 	/* NUMA-aware ordering of nodes */
>> 	local_node = pgdat->node_id;
>> 	load = nr_online_nodes;
>> 	prev_node = local_node;
>> -	nodes_clear(used_mask);
>>
>> 	memset(node_order, 0, sizeof(node_order));
>> 	while ((node = find_next_best_node(local_node, &used_mask)) >= 0) {
>>
>
>t480s: ~/git/linux default_online_type $ git grep "nodemask_t " | grep "="
>arch/x86/mm/numa.c:	nodemask_t reserved_nodemask = NODE_MASK_NONE;
>arch/x86/mm/numa_emulation.c:	nodemask_t physnode_mask = numa_nodes_parsed;
>arch/x86/mm/numa_emulation.c:	nodemask_t physnode_mask = numa_nodes_parsed;
>arch/x86/mm/numa_emulation.c:	nodemask_t physnode_mask = numa_nodes_parsed;
>drivers/acpi/numa/srat.c:static nodemask_t nodes_found_map = NODE_MASK_NONE;
>kernel/irq/affinity.c:	nodemask_t nodemsk = NODE_MASK_NONE;
>kernel/sched/fair.c:	nodemask_t max_group = NODE_MASK_NONE;
>mm/memory_hotplug.c:	nodemask_t nmask = node_states[N_MEMORY];
>mm/mempolicy.c:	nodemask_t mems = cpuset_mems_allowed(current);
>mm/mempolicy.c:	nodemask_t nodes = NODE_MASK_NONE;
>mm/oom_kill.c:	const nodemask_t *mask = oc->nodemask;
>mm/page_alloc.c:nodemask_t node_states[NR_NODE_STATES] __read_mostly = {
>mm/page_alloc.c:	nodemask_t saved_node_state = node_states[N_MEMORY];
>
>Should this be NODE_MASK_NONE?

Thanks, this is a gcc extension. Learned something new.

I would update this in v2.

>
>--
>Thanks,
>
>David / dhildenb
On Thu, Mar 26, 2020 at 07:36:04PM -0300, Jason Gunthorpe wrote:
>On Thu, Mar 26, 2020 at 10:24:44PM +0000, Wei Yang wrote:
>> Since we always clear used_mask before getting node order, we can
>> leverage compiler to do this instead of at run time.
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>>  mm/page_alloc.c | 3 +--
>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 0e823bca3f2f..2144b6ceb119 100644
>> +++ b/mm/page_alloc.c
>> @@ -5587,14 +5587,13 @@ static void build_zonelists(pg_data_t *pgdat)
>> {
>> 	static int node_order[MAX_NUMNODES];
>> 	int node, load, nr_nodes = 0;
>> -	nodemask_t used_mask;
>> +	nodemask_t used_mask = {.bits = {0}};
>
>If this style is to be done it should just be '= {}';
>
>This case demonstrates why the popular '= {0}' idiom is not such a
>good idea, as it only works if the first member is an integral type.
>

Thanks for your comment. I think David found a better solution.

>Jason
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0e823bca3f2f..2144b6ceb119 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5587,14 +5587,13 @@ static void build_zonelists(pg_data_t *pgdat)
 {
 	static int node_order[MAX_NUMNODES];
 	int node, load, nr_nodes = 0;
-	nodemask_t used_mask;
+	nodemask_t used_mask = {.bits = {0}};
 	int local_node, prev_node;
 
 	/* NUMA-aware ordering of nodes */
 	local_node = pgdat->node_id;
 	load = nr_online_nodes;
 	prev_node = local_node;
-	nodes_clear(used_mask);
 
 	memset(node_order, 0, sizeof(node_order));
 	while ((node = find_next_best_node(local_node, &used_mask)) >= 0) {
Since we always clear used_mask before getting node order, we can
leverage compiler to do this instead of at run time.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 mm/page_alloc.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)