diff mbox series

[net-next] octeontx2-pf: Use only non-isolated cpus in irq affinity

Message ID 20220725094402.21203-1-gakula@marvell.com (mailing list archive)
State Changes Requested
Delegated to: Netdev Maintainers
Headers show
Series [net-next] octeontx2-pf: Use only non-isolated cpus in irq affinity | expand

Checks

Context Check Description
netdev/tree_selection success Clearly marked for net-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success Link
netdev/cover_letter success Single patches do not need cover letters
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 0 this patch: 0
netdev/cc_maintainers warning 3 maintainers not CCed: hkelam@marvell.com sbhatta@marvell.com pabeni@redhat.com
netdev/build_clang success Errors and warnings before: 0 this patch: 0
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 7 this patch: 7
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 36 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Geetha sowjanya July 25, 2022, 9:44 a.m. UTC
This patch excludes isolated CPUs from the CPU list
when setting up TX/RX queue interrupt affinity

Signed-off-by: Geetha sowjanya <gakula@marvell.com>
Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
---
 .../ethernet/marvell/octeontx2/nic/otx2_common.c    | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

Comments

Jakub Kicinski July 27, 2022, 3:08 a.m. UTC | #1
On Mon, 25 Jul 2022 15:14:02 +0530 Geetha sowjanya wrote:
> This patch excludes isolated CPUs from the CPU list
> when setting up TX/RX queue interrupt affinity
> 
> Signed-off-by: Geetha sowjanya <gakula@marvell.com>
> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>

Hm, housekeeping_cpumask() looks barely used by drivers,
do you have any references to discussions indicating that drivers
are expected to pay attention to it? Really seems like something
that the core should take care of.

Tariq, thoughts?
Tariq Toukan July 27, 2022, 5:28 a.m. UTC | #2
On 7/27/2022 6:08 AM, Jakub Kicinski wrote:
> On Mon, 25 Jul 2022 15:14:02 +0530 Geetha sowjanya wrote:
>> This patch excludes isolated CPUs from the CPU list
>> when setting up TX/RX queue interrupt affinity
>>
>> Signed-off-by: Geetha sowjanya <gakula@marvell.com>
>> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
> 
> Hm, housekeeping_cpumask() looks barely used by drivers,
> do you have any references to discussions indicating that drivers
> are expected to pay attention to it? Really seems like something
> that the core should take care of.
> 
> Tariq, thoughts?

I agree.
IMO this logic best fits inside the new sched API I proposed last week 
(pending Ack...), transparent to driver.

Find here:
https://lore.kernel.org/all/20220719162339.23865-2-tariqt@nvidia.com/
Tariq Toukan Aug. 1, 2022, 6:55 a.m. UTC | #3
On 7/27/2022 10:03 AM, Sunil Kovvuri wrote:
> 
> 
> On Wed, Jul 27, 2022 at 11:01 AM Tariq Toukan <ttoukan.linux@gmail.com> wrote:
> 
> 
> 
>     On 7/27/2022 6:08 AM, Jakub Kicinski wrote:
>      > On Mon, 25 Jul 2022 15:14:02 +0530 Geetha sowjanya wrote:
>      >> This patch excludes isolated CPUs from the CPU list
>      >> when setting up TX/RX queue interrupt affinity
>      >>
>      >> Signed-off-by: Geetha sowjanya <gakula@marvell.com>
>      >> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
>      >
>      > Hm, housekeeping_cpumask() looks barely used by drivers,
>      > do you have any references to discussions indicating that drivers
>      > are expected to pay attention to it? Really seems like something
>      > that the core should take care of.
>      >
>      > Tariq, thoughts?
> 
>     I agree.
>     IMO this logic best fits inside the new sched API I proposed last week
>     (pending Ack...), transparent to driver.
> 
>     Find here:
>     https://lore.kernel.org/all/20220719162339.23865-2-tariqt@nvidia.com/
> 
> 
> You mean
> 
> +static bool sched_cpus_spread_by_distance(int node, u16 *cpus, int ncpus)
> +{
> ...
> +	cpumask_copy(cpumask, cpu_online_mask);
> 
> Change cpu_online_mask here to a mask which gives the non-isolated cores mask?
> 

Yes that was the intention.
However, on second thought, I'm not sure this is a good idea.

In some cases, the device driver is isolated out in favor of other
higher-priority tasks, while in other cases the device driver's
processing is itself the high-priority task and is pinned to those
isolated CPUs for best performance.
Since the CPU spread usually affects affinity hints and NUMA-aware
allocations, your patch might cause a degradation if always applied.

Patch

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index fb8db5888d2f..9886a02dd756 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -8,6 +8,7 @@ 
 #include <linux/interrupt.h>
 #include <linux/pci.h>
 #include <net/tso.h>
+#include <linux/sched/isolation.h>
 
 #include "otx2_reg.h"
 #include "otx2_common.h"
@@ -1657,9 +1658,16 @@  void otx2_set_cints_affinity(struct otx2_nic *pfvf)
 {
 	struct otx2_hw *hw = &pfvf->hw;
 	int vec, cpu, irq, cint;
+	cpumask_var_t mask;
+
+	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
+		return;
+
+	cpumask_and(mask, cpu_online_mask,
+		    housekeeping_cpumask(HK_TYPE_DOMAIN));
+	cpu = cpumask_first(mask);
 
 	vec = hw->nix_msixoff + NIX_LF_CINT_VEC_START;
-	cpu = cpumask_first(cpu_online_mask);
 
 	/* CQ interrupts */
 	for (cint = 0; cint < pfvf->hw.cint_cnt; cint++, vec++) {
@@ -1671,10 +1679,11 @@  void otx2_set_cints_affinity(struct otx2_nic *pfvf)
 		irq = pci_irq_vector(pfvf->pdev, vec);
 		irq_set_affinity_hint(irq, hw->affinity_mask[vec]);
 
-		cpu = cpumask_next(cpu, cpu_online_mask);
+		cpu = cpumask_next(cpu, mask);
 		if (unlikely(cpu >= nr_cpu_ids))
 			cpu = 0;
 	}
+	free_cpumask_var(mask);
 }
 
 u16 otx2_get_max_mtu(struct otx2_nic *pfvf)