| Message ID | 20200625223443.2684-4-nitesh@redhat.com (mailing list archive) |
| --- | --- |
| State | Not Applicable, archived |
| Series | Preventing job distribution to isolated CPUs |
On Thu, Jun 25, 2020 at 06:34:43PM -0400, Nitesh Narayan Lal wrote:
> From: Alex Belits <abelits@marvell.com>
>
> With the existing implementation of store_rps_map(), packets are queued
> in the receive path on the backlog queues of other CPUs irrespective of
> whether they are isolated or not. This could add a latency overhead to
> any RT workload that is running on the same CPU.
>
> Ensure that store_rps_map() only uses available housekeeping CPUs for
> storing the rps_map.
>
> Signed-off-by: Alex Belits <abelits@marvell.com>
> Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com>

Dave, ACK if I route this?
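For context, RPS steers each incoming flow to a backlog CPU by hashing the flow and indexing into the rps_map written through sysfs (the kernel's get_rps_cpu() scales the hash with reciprocal_scale()). Below is a minimal user-space sketch of that selection step, not the kernel code itself; the map contents and CPU numbers are hypothetical, and a uint16_t array stands in for struct rps_map:

#include <stdint.h>
#include <stdio.h>

/* Simplified model of RPS CPU selection: the flow hash is scaled
 * into [0, len) and used to index the stored map, so any isolated
 * CPU left in the map can be handed backlog work. */
static unsigned int pick_rps_cpu(const uint16_t *cpus, unsigned int len,
				 uint32_t hash)
{
	/* reciprocal_scale(hash, len) is ((u64)hash * len) >> 32 */
	return cpus[((uint64_t)hash * len) >> 32];
}

int main(void)
{
	/* Hypothetical map: CPU 5 could be an isolated CPU. */
	uint16_t rps_map[] = { 0, 2, 5 };
	uint32_t flow_hash = 0xdeadbeef;

	printf("flow steered to CPU %u\n",
	       pick_rps_cpu(rps_map, 3, flow_hash));
	return 0;
}

Because the lookup is a plain index into the stored map, filtering isolated CPUs out of the map at store time is sufficient to keep receive backlog work off them.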
From: Peter Zijlstra <peterz@infradead.org>
Date: Fri, 26 Jun 2020 13:14:01 +0200

> On Thu, Jun 25, 2020 at 06:34:43PM -0400, Nitesh Narayan Lal wrote:
>> From: Alex Belits <abelits@marvell.com>
>>
>> With the existing implementation of store_rps_map(), packets are queued
>> in the receive path on the backlog queues of other CPUs irrespective of
>> whether they are isolated or not. This could add a latency overhead to
>> any RT workload that is running on the same CPU.
>>
>> Ensure that store_rps_map() only uses available housekeeping CPUs for
>> storing the rps_map.
>>
>> Signed-off-by: Alex Belits <abelits@marvell.com>
>> Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com>
>
> Dave, ACK if I route this?

No problem:

Acked-by: David S. Miller <davem@davemloft.net>
diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
index e353b822bb15..677868fea316 100644
--- a/net/core/net-sysfs.c
+++ b/net/core/net-sysfs.c
@@ -11,6 +11,7 @@
 #include <linux/if_arp.h>
 #include <linux/slab.h>
 #include <linux/sched/signal.h>
+#include <linux/sched/isolation.h>
 #include <linux/nsproxy.h>
 #include <net/sock.h>
 #include <net/net_namespace.h>
@@ -741,7 +742,7 @@ static ssize_t store_rps_map(struct netdev_rx_queue *queue,
 {
 	struct rps_map *old_map, *map;
 	cpumask_var_t mask;
-	int err, cpu, i;
+	int err, cpu, i, hk_flags;
 	static DEFINE_MUTEX(rps_map_mutex);
 
 	if (!capable(CAP_NET_ADMIN))
@@ -756,6 +757,13 @@ static ssize_t store_rps_map(struct netdev_rx_queue *queue,
 		return err;
 	}
 
+	hk_flags = HK_FLAG_DOMAIN | HK_FLAG_WQ;
+	cpumask_and(mask, mask, housekeeping_cpumask(hk_flags));
+	if (cpumask_empty(mask)) {
+		free_cpumask_var(mask);
+		return -EINVAL;
+	}
+
 	map = kzalloc(max_t(unsigned int,
 			    RPS_MAP_SIZE(cpumask_weight(mask)),
 			    L1_CACHE_BYTES), GFP_KERNEL);
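The effect of the added block can be modeled outside the kernel: the requested CPU mask is intersected with the housekeeping mask, and a request that leaves no housekeeping CPUs is rejected rather than silently emptied. A minimal sketch under those assumptions, with plain 64-bit words standing in for cpumask_t and a hypothetical CPU layout:

#include <stdint.h>
#include <stdio.h>

/* Model of the patch's filtering step: AND the requested mask with
 * the housekeeping mask; reject the store if nothing remains
 * (the kernel returns -EINVAL in that case). */
static int filter_rps_mask(uint64_t requested, uint64_t housekeeping,
			   uint64_t *effective)
{
	uint64_t masked = requested & housekeeping;

	if (!masked)
		return -1;
	*effective = masked;
	return 0;
}

int main(void)
{
	/* Hypothetical layout: CPUs 0-3 housekeeping, CPUs 4-7 isolated. */
	uint64_t housekeeping = 0x0f;
	uint64_t effective;

	/* Mixed request: isolated CPUs 4-5 are silently dropped. */
	if (filter_rps_mask(0x33, housekeeping, &effective) == 0)
		printf("effective rps mask: 0x%llx\n",
		       (unsigned long long)effective);

	/* All-isolated request: rejected, mirroring -EINVAL. */
	if (filter_rps_mask(0x30, housekeeping, &effective) != 0)
		printf("rejected: no housekeeping CPUs in mask\n");
	return 0;
}

Rejecting an all-isolated mask with -EINVAL, instead of installing an empty map, makes the conflict with the isolation configuration visible to the administrator.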