From patchwork Tue Dec 10 00:26:25 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ahmed Zaki <ahmed.zaki@intel.com>
X-Patchwork-Id: 13900554
X-Patchwork-Delegate: kuba@kernel.org
From: Ahmed Zaki <ahmed.zaki@intel.com>
To: netdev@vger.kernel.org
Cc: intel-wired-lan@lists.osuosl.org, andrew+netdev@lunn.ch,
    edumazet@google.com, kuba@kernel.org, pabeni@redhat.com,
    davem@davemloft.net, michael.chan@broadcom.com, tariqt@nvidia.com,
    anthony.l.nguyen@intel.com, przemyslaw.kitszel@intel.com,
    Ahmed Zaki <ahmed.zaki@intel.com>
Subject: [PATCH v1 net-next 5/6] ice: use napi's irq affinity
Date: Mon, 9 Dec 2024 17:26:25 -0700
Message-ID: <20241210002626.366878-6-ahmed.zaki@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241210002626.366878-1-ahmed.zaki@intel.com>
References: <20241210002626.366878-1-ahmed.zaki@intel.com>
Precedence: bulk
X-Mailing-List: netdev@vger.kernel.org

Delete the driver CPU affinity info and use the core's napi config
instead.

Signed-off-by: Ahmed Zaki <ahmed.zaki@intel.com>
---
 drivers/net/ethernet/intel/ice/ice.h      |  3 --
 drivers/net/ethernet/intel/ice/ice_base.c |  7 ++--
 drivers/net/ethernet/intel/ice/ice_lib.c  |  9 ++---
 drivers/net/ethernet/intel/ice/ice_main.c | 44 -----------------------
 4 files changed, 4 insertions(+), 59 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 2f5d6f974185..0db665a6b38a 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -477,9 +477,6 @@ struct ice_q_vector {
 	struct ice_ring_container rx;
 	struct ice_ring_container tx;
 
-	cpumask_t affinity_mask;
-	struct irq_affinity_notify affinity_notify;
-
 	struct ice_channel *ch;
 
 	char name[ICE_INT_NAME_STR_LEN];
diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index 82a9cd4ec7ae..48a6e1068223 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -147,10 +147,6 @@ static int ice_vsi_alloc_q_vector(struct ice_vsi *vsi, u16 v_idx)
 	q_vector->reg_idx = q_vector->irq.index;
 	q_vector->vf_reg_idx = q_vector->irq.index;
 
-	/* only set affinity_mask if the CPU is online */
-	if (cpu_online(v_idx))
-		cpumask_set_cpu(v_idx, &q_vector->affinity_mask);
-
 	/* This will not be called in the driver load path because the netdev
 	 * will not be created yet. All other cases with register the NAPI
 	 * handler here (i.e. resume, reset/rebuild, etc.)
@@ -276,7 +272,8 @@ static void ice_cfg_xps_tx_ring(struct ice_tx_ring *ring)
 	if (test_and_set_bit(ICE_TX_XPS_INIT_DONE, ring->xps_state))
 		return;
 
-	netif_set_xps_queue(ring->netdev, &ring->q_vector->affinity_mask,
+	netif_set_xps_queue(ring->netdev,
+			    &ring->q_vector->napi.config->affinity_mask,
 			    ring->q_index);
 }
 
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index ff91e70f596f..91fd4271129d 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -2589,12 +2589,6 @@ void ice_vsi_free_irq(struct ice_vsi *vsi)
 		    vsi->q_vectors[i]->num_ring_rx))
 			continue;
 
-		/* clear the affinity notifier in the IRQ descriptor */
-		if (!IS_ENABLED(CONFIG_RFS_ACCEL))
-			irq_set_affinity_notifier(irq_num, NULL);
-
-		/* clear the affinity_hint in the IRQ descriptor */
-		irq_update_affinity_hint(irq_num, NULL);
 		synchronize_irq(irq_num);
 		devm_free_irq(ice_pf_to_dev(pf), irq_num, vsi->q_vectors[i]);
 	}
@@ -2735,7 +2729,8 @@ void ice_vsi_set_napi_queues(struct ice_vsi *vsi)
 	ice_for_each_q_vector(vsi, v_idx) {
 		struct ice_q_vector *q_vector = vsi->q_vectors[v_idx];
 
-		netif_napi_set_irq(&q_vector->napi, q_vector->irq.virq, 0);
+		netif_napi_set_irq(&q_vector->napi,
+				   q_vector->irq.virq, NAPIF_F_IRQ_AFFINITY);
 	}
 }
 
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 1eaa4428fd24..93150d9ad565 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -2504,34 +2504,6 @@ int ice_schedule_reset(struct ice_pf *pf, enum ice_reset_req reset)
 	return 0;
 }
 
-/**
- * ice_irq_affinity_notify - Callback for affinity changes
- * @notify: context as to what irq was changed
- * @mask: the new affinity mask
- *
- * This is a callback function used by the irq_set_affinity_notifier function
- * so that we may register to receive changes to the irq affinity masks.
- */
-static void
-ice_irq_affinity_notify(struct irq_affinity_notify *notify,
-			const cpumask_t *mask)
-{
-	struct ice_q_vector *q_vector =
-		container_of(notify, struct ice_q_vector, affinity_notify);
-
-	cpumask_copy(&q_vector->affinity_mask, mask);
-}
-
-/**
- * ice_irq_affinity_release - Callback for affinity notifier release
- * @ref: internal core kernel usage
- *
- * This is a callback function used by the irq_set_affinity_notifier function
- * to inform the current notification subscriber that they will no longer
- * receive notifications.
- */
-static void ice_irq_affinity_release(struct kref __always_unused *ref) {}
-
 /**
  * ice_vsi_ena_irq - Enable IRQ for the given VSI
  * @vsi: the VSI being configured
@@ -2595,19 +2567,6 @@ static int ice_vsi_req_irq_msix(struct ice_vsi *vsi, char *basename)
 				err);
 			goto free_q_irqs;
 		}
-
-		/* register for affinity change notifications */
-		if (!IS_ENABLED(CONFIG_RFS_ACCEL)) {
-			struct irq_affinity_notify *affinity_notify;
-
-			affinity_notify = &q_vector->affinity_notify;
-			affinity_notify->notify = ice_irq_affinity_notify;
-			affinity_notify->release = ice_irq_affinity_release;
-			irq_set_affinity_notifier(irq_num, affinity_notify);
-		}
-
-		/* assign the mask for this irq */
-		irq_update_affinity_hint(irq_num, &q_vector->affinity_mask);
 	}
 
 	err = ice_set_cpu_rx_rmap(vsi);
@@ -2623,9 +2582,6 @@ static int ice_vsi_req_irq_msix(struct ice_vsi *vsi, char *basename)
 free_q_irqs:
 	while (vector--) {
 		irq_num = vsi->q_vectors[vector]->irq.virq;
-		if (!IS_ENABLED(CONFIG_RFS_ACCEL))
-			irq_set_affinity_notifier(irq_num, NULL);
-		irq_update_affinity_hint(irq_num, NULL);
 		devm_free_irq(dev, irq_num, &vsi->q_vectors[vector]);
 	}
 	return err;
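
For readers skimming the series, here is a minimal sketch of the driver-side pattern this
patch switches to. The three-argument netif_napi_set_irq() and the NAPIF_F_IRQ_AFFINITY
flag are the interfaces proposed earlier in this series (not necessarily their final
mainline form), and the my_drv_* names are hypothetical; treat it as an illustration, not
ice code.

/* Illustrative sketch only: hand per-queue IRQ affinity over to the NAPI
 * core instead of keeping a driver-private affinity_mask plus
 * irq_update_affinity_hint()/irq_set_affinity_notifier() bookkeeping.
 * NAPIF_F_IRQ_AFFINITY and the flags argument to netif_napi_set_irq()
 * come from this series; my_drv_* names are made up for the example.
 */
#include <linux/netdevice.h>

struct my_drv_q_vector {
	struct napi_struct napi;
	int irq;			/* Linux IRQ number backing this vector */
};

static int my_drv_poll(struct napi_struct *napi, int budget)
{
	int work_done = 0;

	/* clean Tx/Rx rings here ... */

	if (work_done < budget)
		napi_complete_done(napi, work_done);
	return work_done;
}

static void my_drv_setup_vector(struct net_device *netdev,
				struct my_drv_q_vector *qv, int q_idx)
{
	/* Persistent per-queue NAPI config lets the core remember the
	 * affinity mask across queue/IRQ teardown and re-creation.
	 */
	netif_napi_add_config(netdev, &qv->napi, my_drv_poll, q_idx);

	/* Associate the IRQ with the NAPI and ask the core to manage its
	 * CPU affinity, replacing the driver-side mask tracking removed
	 * by this patch.
	 */
	netif_napi_set_irq(&qv->napi, qv->irq, NAPIF_F_IRQ_AFFINITY);
}

Once the core owns the mask, the driver can still read it through
napi.config->affinity_mask, which is how the ice_cfg_xps_tx_ring() hunk above keeps
programming XPS.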