Message ID | 20220825181210.284283-9-vschneid@redhat.com (mailing list archive) |
---|---|
State | Superseded |
Series | sched, net: NUMA-aware CPU spreading interface |
On 8/25/2022 9:12 PM, Valentin Schneider wrote:
> The recently introduced sched_numa_hop_mask() exposes cpumasks of CPUs
> reachable within a given distance budget, but this means each successive
> cpumask is a superset of the previous one.
>
> Code wanting to allocate one item per CPU (e.g. IRQs) at increasing
> distances would thus need to allocate a temporary cpumask to note which
> CPUs have already been visited. This can be prevented by leveraging
> for_each_cpu_andnot() - package all that logic into one ugl^D fancy macro.
>
> Signed-off-by: Valentin Schneider <vschneid@redhat.com>
> ---
>  include/linux/topology.h | 37 +++++++++++++++++++++++++++++++++++++
>  1 file changed, 37 insertions(+)
>
> diff --git a/include/linux/topology.h b/include/linux/topology.h
> index 13b82b83e547..6c671dc3252c 100644
> --- a/include/linux/topology.h
> +++ b/include/linux/topology.h
> @@ -254,5 +254,42 @@ static inline const struct cpumask *sched_numa_hop_mask(int node, int hops)
>  }
>  #endif /* CONFIG_NUMA */
>
> +/**
> + * for_each_numa_hop_cpu - iterate over CPUs by increasing NUMA distance,
> + *                         starting from a given node.
> + * @cpu: the iteration variable.
> + * @node: the NUMA node to start the search from.
> + *
> + * Requires rcu_lock to be held.
> + * Careful: this is a double loop, 'break' won't work as expected.
> + *
> + *
> + * Implementation notes:
> + *
> + * Providing it is valid, the mask returned by
> + *   sched_numa_hop_mask(node, hops+1)
> + * is a superset of the one returned by
> + *   sched_numa_hop_mask(node, hops)
> + * which may not be that useful for drivers that try to spread things out and
> + * want to visit a CPU not more than once.
> + *
> + * To accommodate for that, we use for_each_cpu_andnot() to iterate over the cpus
> + * of sched_numa_hop_mask(node, hops+1) with the CPUs of
> + * sched_numa_hop_mask(node, hops) removed, IOW we only iterate over CPUs
> + * a given distance away (rather than *up to* a given distance).
> + *
> + * hops=0 forces us to play silly games: we pass cpu_none_mask to
> + * for_each_cpu_andnot(), which turns it into for_each_cpu().
> + */
> +#define for_each_numa_hop_cpu(cpu, node) \
> +	for (struct { const struct cpumask *curr, *prev; int hops; } __v = \
> +		     { sched_numa_hop_mask(node, 0), NULL, 0 }; \
> +	     !IS_ERR_OR_NULL(__v.curr); \
> +	     __v.hops++, \
> +	     __v.prev = __v.curr, \
> +	     __v.curr = sched_numa_hop_mask(node, __v.hops)) \
> +		for_each_cpu_andnot(cpu, \
> +				    __v.curr, \
> +				    __v.hops ? __v.prev : cpu_none_mask)
>

Hiding two nested loops together in one for_each_* macro leads to
unexpected behavior for the standard usage of 'break/continue'.

	for_each_numa_hop_cpu(cpu, node) {
		if (condition)
			break; <== will terminate the inner loop only, but it's
			       invisible to the human developer/reviewer.
	}

These bugs will not be easy to spot in code review.

> #endif /* _LINUX_TOPOLOGY_H */
On 05/09/22 12:46, Tariq Toukan wrote:
> On 8/25/2022 9:12 PM, Valentin Schneider wrote:
>> The recently introduced sched_numa_hop_mask() exposes cpumasks of CPUs
>> reachable within a given distance budget, but this means each successive
>> cpumask is a superset of the previous one.
>>
>> Code wanting to allocate one item per CPU (e.g. IRQs) at increasing
>> distances would thus need to allocate a temporary cpumask to note which
>> CPUs have already been visited. This can be prevented by leveraging
>> for_each_cpu_andnot() - package all that logic into one ugl^D fancy macro.
>>
>> Signed-off-by: Valentin Schneider <vschneid@redhat.com>
>> ---
>>  include/linux/topology.h | 37 +++++++++++++++++++++++++++++++++++++
>>  1 file changed, 37 insertions(+)
>>
>> diff --git a/include/linux/topology.h b/include/linux/topology.h
>> index 13b82b83e547..6c671dc3252c 100644
>> --- a/include/linux/topology.h
>> +++ b/include/linux/topology.h
>> @@ -254,5 +254,42 @@ static inline const struct cpumask *sched_numa_hop_mask(int node, int hops)
>>  }
>>  #endif /* CONFIG_NUMA */
>>
>> +/**
>> + * for_each_numa_hop_cpu - iterate over CPUs by increasing NUMA distance,
>> + *                         starting from a given node.
>> + * @cpu: the iteration variable.
>> + * @node: the NUMA node to start the search from.
>> + *
>> + * Requires rcu_lock to be held.
>> + * Careful: this is a double loop, 'break' won't work as expected.
>> + *
>> + *
>> + * Implementation notes:
>> + *
>> + * Providing it is valid, the mask returned by
>> + *   sched_numa_hop_mask(node, hops+1)
>> + * is a superset of the one returned by
>> + *   sched_numa_hop_mask(node, hops)
>> + * which may not be that useful for drivers that try to spread things out and
>> + * want to visit a CPU not more than once.
>> + *
>> + * To accommodate for that, we use for_each_cpu_andnot() to iterate over the cpus
>> + * of sched_numa_hop_mask(node, hops+1) with the CPUs of
>> + * sched_numa_hop_mask(node, hops) removed, IOW we only iterate over CPUs
>> + * a given distance away (rather than *up to* a given distance).
>> + *
>> + * hops=0 forces us to play silly games: we pass cpu_none_mask to
>> + * for_each_cpu_andnot(), which turns it into for_each_cpu().
>> + */
>> +#define for_each_numa_hop_cpu(cpu, node) \
>> +	for (struct { const struct cpumask *curr, *prev; int hops; } __v = \
>> +		     { sched_numa_hop_mask(node, 0), NULL, 0 }; \
>> +	     !IS_ERR_OR_NULL(__v.curr); \
>> +	     __v.hops++, \
>> +	     __v.prev = __v.curr, \
>> +	     __v.curr = sched_numa_hop_mask(node, __v.hops)) \
>> +		for_each_cpu_andnot(cpu, \
>> +				    __v.curr, \
>> +				    __v.hops ? __v.prev : cpu_none_mask)
>>
>
> Hiding two nested loops together in one for_each_* macro leads to
> unexpected behavior for the standard usage of 'break/continue'.
>
> 	for_each_numa_hop_cpu(cpu, node) {
> 		if (condition)
> 			break; <== will terminate the inner loop only, but it's
> 			       invisible to the human developer/reviewer.
> 	}
>
> These bugs will not be easy to spot in code review.
>

Yeah, it's not ideal, but I *did* attach a comment warning about it.
Spreading this pattern for sure isn't great, but there *is* a precedent
with for_each_process_thread().

>> #endif /* _LINUX_TOPOLOGY_H */
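To make the pitfall being discussed concrete, here is a minimal sketch of a
caller (the function and variable names are hypothetical, not from the
series): a plain 'break' only ends the current hop's inner loop, so a caller
that wants to stop after a given number of CPUs needs a goto, or a flag that
is checked by both loops.

#include <linux/cpumask.h>
#include <linux/rcupdate.h>
#include <linux/topology.h>

/* Hypothetical example: visit at most @want CPUs, closest hops first. */
static void spread_first_n_cpus(int node, unsigned int want)
{
	unsigned int cpu, done = 0;

	rcu_read_lock();	/* for_each_numa_hop_cpu() requires RCU */
	for_each_numa_hop_cpu(cpu, node) {
		if (done == want)
			goto out; /* 'break' here would only end the current hop */
		/* ... bind one IRQ/queue/item to @cpu ... */
		done++;
	}
out:
	rcu_read_unlock();
}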
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 13b82b83e547..6c671dc3252c 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -254,5 +254,42 @@ static inline const struct cpumask *sched_numa_hop_mask(int node, int hops)
 }
 #endif /* CONFIG_NUMA */
 
+/**
+ * for_each_numa_hop_cpu - iterate over CPUs by increasing NUMA distance,
+ *                         starting from a given node.
+ * @cpu: the iteration variable.
+ * @node: the NUMA node to start the search from.
+ *
+ * Requires rcu_lock to be held.
+ * Careful: this is a double loop, 'break' won't work as expected.
+ *
+ *
+ * Implementation notes:
+ *
+ * Providing it is valid, the mask returned by
+ *   sched_numa_hop_mask(node, hops+1)
+ * is a superset of the one returned by
+ *   sched_numa_hop_mask(node, hops)
+ * which may not be that useful for drivers that try to spread things out and
+ * want to visit a CPU not more than once.
+ *
+ * To accommodate for that, we use for_each_cpu_andnot() to iterate over the cpus
+ * of sched_numa_hop_mask(node, hops+1) with the CPUs of
+ * sched_numa_hop_mask(node, hops) removed, IOW we only iterate over CPUs
+ * a given distance away (rather than *up to* a given distance).
+ *
+ * hops=0 forces us to play silly games: we pass cpu_none_mask to
+ * for_each_cpu_andnot(), which turns it into for_each_cpu().
+ */
+#define for_each_numa_hop_cpu(cpu, node) \
+	for (struct { const struct cpumask *curr, *prev; int hops; } __v = \
+		     { sched_numa_hop_mask(node, 0), NULL, 0 }; \
+	     !IS_ERR_OR_NULL(__v.curr); \
+	     __v.hops++, \
+	     __v.prev = __v.curr, \
+	     __v.curr = sched_numa_hop_mask(node, __v.hops)) \
+		for_each_cpu_andnot(cpu, \
+				    __v.curr, \
+				    __v.hops ? __v.prev : cpu_none_mask)
 
 #endif /* _LINUX_TOPOLOGY_H */
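For readers who find the struct-declared-in-a-for-statement form hard to
follow, the macro is roughly equivalent to the open-coded double loop below
(a sketch only, with hypothetical names in place of the macro's __v fields;
error handling is the same IS_ERR_OR_NULL() check the macro uses).

/* Sketch: roughly what for_each_numa_hop_cpu(cpu, node) expands to. */
static void walk_cpus_by_hop(int node)
{
	const struct cpumask *curr, *prev = NULL;
	unsigned int cpu;
	int hops = 0;

	rcu_read_lock();
	for (curr = sched_numa_hop_mask(node, 0);
	     !IS_ERR_OR_NULL(curr);
	     prev = curr, curr = sched_numa_hop_mask(node, ++hops)) {
		/* CPUs exactly @hops away: current mask minus the previous one */
		for_each_cpu_andnot(cpu, curr, hops ? prev : cpu_none_mask) {
			/* ... visit @cpu ... */
		}
	}
	rcu_read_unlock();
}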
The recently introduced sched_numa_hop_mask() exposes cpumasks of CPUs
reachable within a given distance budget, but this means each successive
cpumask is a superset of the previous one.

Code wanting to allocate one item per CPU (e.g. IRQs) at increasing
distances would thus need to allocate a temporary cpumask to note which
CPUs have already been visited. This can be prevented by leveraging
for_each_cpu_andnot() - package all that logic into one ugl^D fancy macro.

Signed-off-by: Valentin Schneider <vschneid@redhat.com>
---
 include/linux/topology.h | 37 +++++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)
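For comparison, the "temporary cpumask" pattern the changelog says a driver
would otherwise need might look roughly like this (a hypothetical sketch, not
code from the series; it assumes a sleepable caller for the cpumask
allocation).

#include <linux/cpumask.h>
#include <linux/err.h>
#include <linux/gfp.h>
#include <linux/rcupdate.h>
#include <linux/topology.h>

/* Hypothetical example: visit each CPU once, by increasing distance. */
static int visit_cpus_by_distance(int node)
{
	const struct cpumask *mask;
	cpumask_var_t visited;
	unsigned int cpu;
	int hops = 0;

	if (!zalloc_cpumask_var(&visited, GFP_KERNEL))
		return -ENOMEM;

	rcu_read_lock();
	for (mask = sched_numa_hop_mask(node, 0);
	     !IS_ERR_OR_NULL(mask);
	     mask = sched_numa_hop_mask(node, ++hops)) {
		for_each_cpu(cpu, mask) {
			if (cpumask_test_and_set_cpu(cpu, visited))
				continue; /* already seen at a smaller distance */
			/* ... visit @cpu ... */
		}
	}
	rcu_read_unlock();

	free_cpumask_var(visited);
	return 0;
}

The macro folds the visited-CPU bookkeeping into for_each_cpu_andnot(),
avoiding both the temporary allocation and the per-CPU test-and-set.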