From patchwork Mon Jan 13 17:10:42 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ahmed Zaki <ahmed.zaki@intel.com>
X-Patchwork-Id: 13937808
X-Patchwork-Delegate: kuba@kernel.org
From: Ahmed Zaki <ahmed.zaki@intel.com>
To: netdev@vger.kernel.org
Cc: intel-wired-lan@lists.osuosl.org, andrew+netdev@lunn.ch, edumazet@google.com,
	kuba@kernel.org, horms@kernel.org, pabeni@redhat.com, davem@davemloft.net,
	michael.chan@broadcom.com, tariqt@nvidia.com, anthony.l.nguyen@intel.com,
	przemyslaw.kitszel@intel.com,
	jdamato@fastly.com, shayd@nvidia.com, akpm@linux-foundation.org,
	shayagr@amazon.com, kalesh-anakkur.purayil@broadcom.com,
	pavan.chebbi@broadcom.com, yury.norov@gmail.com, darinzon@amazon.com,
	Ahmed Zaki <ahmed.zaki@intel.com>
Subject: [PATCH net-next v5 6/6] idpf: use napi's irq affinity
Date: Mon, 13 Jan 2025 10:10:42 -0700
Message-ID: <20250113171042.158123-7-ahmed.zaki@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250113171042.158123-1-ahmed.zaki@intel.com>
References: <20250113171042.158123-1-ahmed.zaki@intel.com>
Precedence: bulk
X-Mailing-List: netdev@vger.kernel.org

Delete the driver CPU affinity info and use the core's napi config
instead.

Signed-off-by: Ahmed Zaki <ahmed.zaki@intel.com>
---
 drivers/net/ethernet/intel/idpf/idpf_lib.c  |  1 +
 drivers/net/ethernet/intel/idpf/idpf_txrx.c | 22 +++++++--------------
 drivers/net/ethernet/intel/idpf/idpf_txrx.h |  6 ++----
 3 files changed, 10 insertions(+), 19 deletions(-)

diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
index b4fbb99bfad2..d54be068f53f 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
@@ -814,6 +814,7 @@ static int idpf_cfg_netdev(struct idpf_vport *vport)
 	netdev->hw_features |= dflt_features | offloads;
 	netdev->hw_enc_features |= dflt_features | offloads;
 	idpf_set_ethtool_ops(netdev);
+	netif_enable_irq_affinity(netdev);
 	SET_NETDEV_DEV(netdev, &adapter->pdev->dev);
 
 	/* carrier off on init to avoid Tx hangs */
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index 2fa9c36e33c9..f6b5b45a061c 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -3554,8 +3554,6 @@ void idpf_vport_intr_rel(struct idpf_vport *vport)
 		q_vector->tx = NULL;
 		kfree(q_vector->rx);
 		q_vector->rx = NULL;
-
-		free_cpumask_var(q_vector->affinity_mask);
 	}
 
 	kfree(vport->q_vectors);
@@ -3582,8 +3580,6 @@ static void idpf_vport_intr_rel_irq(struct idpf_vport *vport)
 		vidx = vport->q_vector_idxs[vector];
 		irq_num = adapter->msix_entries[vidx].vector;
 
-		/* clear the affinity_mask in the IRQ descriptor */
-		irq_set_affinity_hint(irq_num, NULL);
 		kfree(free_irq(irq_num, q_vector));
 	}
 }
@@ -3771,8 +3767,6 @@ static int idpf_vport_intr_req_irq(struct idpf_vport *vport)
 				   "Request_irq failed, error: %d\n", err);
 			goto free_q_irqs;
 		}
-		/* assign the mask for this irq */
-		irq_set_affinity_hint(irq_num, q_vector->affinity_mask);
 	}
 
 	return 0;
@@ -4184,7 +4178,8 @@ static int idpf_vport_intr_init_vec_idx(struct idpf_vport *vport)
 static void idpf_vport_intr_napi_add_all(struct idpf_vport *vport)
 {
 	int (*napi_poll)(struct napi_struct *napi, int budget);
-	u16 v_idx;
+	u16 v_idx, qv_idx;
+	int irq_num;
 
 	if (idpf_is_queue_model_split(vport->txq_model))
 		napi_poll = idpf_vport_splitq_napi_poll;
@@ -4193,12 +4188,12 @@ static void idpf_vport_intr_napi_add_all(struct idpf_vport *vport)
 
 	for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) {
 		struct idpf_q_vector *q_vector = &vport->q_vectors[v_idx];
+		qv_idx = vport->q_vector_idxs[v_idx];
+		irq_num = vport->adapter->msix_entries[qv_idx].vector;
 
-		netif_napi_add(vport->netdev, &q_vector->napi, napi_poll);
-
-		/* only set affinity_mask if the CPU is online */
-		if (cpu_online(v_idx))
-			cpumask_set_cpu(v_idx, q_vector->affinity_mask);
+		netif_napi_add_config(vport->netdev, &q_vector->napi,
+				      napi_poll, v_idx);
+		netif_napi_set_irq(&q_vector->napi, irq_num);
 	}
 }

@@ -4242,9 +4237,6 @@ int idpf_vport_intr_alloc(struct idpf_vport *vport)
 		q_vector->rx_intr_mode = IDPF_ITR_DYNAMIC;
 		q_vector->rx_itr_idx = VIRTCHNL2_ITR_IDX_0;
 
-		if (!zalloc_cpumask_var(&q_vector->affinity_mask, GFP_KERNEL))
-			goto error;
-
 		q_vector->tx = kcalloc(txqs_per_vector, sizeof(*q_vector->tx),
 				       GFP_KERNEL);
 		if (!q_vector->tx)
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index 0f71a6f5557b..13251f63c7c3 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -401,7 +401,6 @@ struct idpf_intr_reg {
  * @rx_intr_mode: Dynamic ITR or not
  * @rx_itr_idx: RX ITR index
  * @v_idx: Vector index
- * @affinity_mask: CPU affinity mask
  */
 struct idpf_q_vector {
 	__cacheline_group_begin_aligned(read_mostly);
@@ -438,13 +437,12 @@ struct idpf_q_vector {
 	__cacheline_group_begin_aligned(cold);
 	u16 v_idx;
 
-	cpumask_var_t affinity_mask;
 	__cacheline_group_end_aligned(cold);
 };
 libeth_cacheline_set_assert(struct idpf_q_vector, 120,
 			    24 + sizeof(struct napi_struct) +
 			    2 * sizeof(struct dim),
-			    8 + sizeof(cpumask_var_t));
+			    8);
 
 struct idpf_rx_queue_stats {
 	u64_stats_t packets;
@@ -940,7 +938,7 @@ static inline int idpf_q_vector_to_mem(const struct idpf_q_vector *q_vector)
 	if (!q_vector)
 		return NUMA_NO_NODE;
 
-	cpu = cpumask_first(q_vector->affinity_mask);
+	cpu = cpumask_first(&q_vector->napi.config->affinity_mask);
 
 	return cpu < nr_cpu_ids ? cpu_to_mem(cpu) : NUMA_NO_NODE;
 }
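
For context, the driver-side pattern this patch converts idpf to looks
roughly like the sketch below. It is an illustration only, not part of the
patch: the my_*() names are hypothetical, while netif_enable_irq_affinity()
(added earlier in this series), netif_napi_add_config() and
netif_napi_set_irq() are the core calls that appear in the diff above.

/* Illustrative sketch -- hypothetical driver names, assuming the core
 * per-NAPI IRQ affinity API introduced earlier in this series.
 */
#include <linux/netdevice.h>

static int my_napi_poll(struct napi_struct *napi, int budget)
{
	int work_done = 0;

	/* clean the rings here ... */
	if (work_done < budget)
		napi_complete_done(napi, work_done);

	return work_done;
}

/* Called once per netdev, before register_netdev(): let the core manage
 * IRQ affinity from the per-NAPI config instead of driver cpumasks.
 */
static void my_cfg_netdev(struct net_device *netdev)
{
	netif_enable_irq_affinity(netdev);
}

/* Called per queue vector: bind the NAPI instance to a persistent config
 * slot (index) and tell the core which IRQ backs it. No more
 * irq_set_affinity_hint()/affinity_mask bookkeeping in the driver.
 */
static void my_napi_setup(struct net_device *netdev, struct napi_struct *napi,
			  u16 index, int irq)
{
	netif_napi_add_config(netdev, napi, my_napi_poll, index);
	netif_napi_set_irq(napi, irq);
}

The index passed to netif_napi_add_config() ties the NAPI to a per-queue
config slot that outlives the napi instance, so the affinity state the core
maintains persists across teardown and re-add of the queue vectors.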