From patchwork Tue Oct 24 11:09:15 2023
X-Patchwork-Submitter: Michal Swiatkowski
X-Patchwork-Id: 13434264
X-Patchwork-Delegate: kuba@kernel.org
From: Michal Swiatkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, piotr.raczynski@intel.com, wojciech.drewek@intel.com, marcin.szycik@intel.com, jacob.e.keller@intel.com, przemyslaw.kitszel@intel.com, jesse.brandeburg@intel.com, Michal Swiatkowski
Subject: [PATCH iwl-next v1 01/15] ice: rename switchdev to eswitch
Date: Tue, 24 Oct 2023 13:09:15 +0200
Message-ID: <20231024110929.19423-2-michal.swiatkowski@linux.intel.com>
In-Reply-To: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com>
References: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com>

Eswitch is already used as a prefix for the related functions, so the main structure storing all eswitch data should also be named eswitch instead of ice_switchdev_info. Rename it. Also rename switchdev to eswitch wherever the context is not about eswitch mode. ::uplink_netdev was changed to netdev for simplicity; there is no other netdev in the function's scope, so the meaning remains obvious.
Reviewed-by: Wojciech Drewek Reviewed-by: Piotr Raczynski Reviewed-by: Jacob Keller Signed-off-by: Michal Swiatkowski --- drivers/net/ethernet/intel/ice/ice.h | 6 +- drivers/net/ethernet/intel/ice/ice_eswitch.c | 63 ++++++++++--------- .../net/ethernet/intel/ice/ice_eswitch_br.c | 12 ++-- drivers/net/ethernet/intel/ice/ice_tc_lib.c | 4 +- 4 files changed, 43 insertions(+), 42 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index 351e0d36df44..6c59ca86d959 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -522,7 +522,7 @@ enum ice_misc_thread_tasks { ICE_MISC_THREAD_NBITS /* must be last */ }; -struct ice_switchdev_info { +struct ice_eswitch { struct ice_vsi *control_vsi; struct ice_vsi *uplink_vsi; struct ice_esw_br_offloads *br_offloads; @@ -637,7 +637,7 @@ struct ice_pf { struct ice_link_default_override_tlv link_dflt_override; struct ice_lag *lag; /* Link Aggregation information */ - struct ice_switchdev_info switchdev; + struct ice_eswitch eswitch; struct ice_esw_br_port *br_port; #define ICE_INVALID_AGG_NODE_ID 0 @@ -846,7 +846,7 @@ static inline struct ice_vsi *ice_find_vsi(struct ice_pf *pf, u16 vsi_num) */ static inline bool ice_is_switchdev_running(struct ice_pf *pf) { - return pf->switchdev.is_running; + return pf->eswitch.is_running; } #define ICE_FD_STAT_CTR_BLOCK_COUNT 256 diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c index a655d499abfa..e7f1e53314d7 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch.c +++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c @@ -16,12 +16,12 @@ * @vf: pointer to VF struct * * This function adds advanced rule that forwards packets with - * VF's VSI index to the corresponding switchdev ctrl VSI queue. + * VF's VSI index to the corresponding eswitch ctrl VSI queue. */ static int ice_eswitch_add_vf_sp_rule(struct ice_pf *pf, struct ice_vf *vf) { - struct ice_vsi *ctrl_vsi = pf->switchdev.control_vsi; + struct ice_vsi *ctrl_vsi = pf->eswitch.control_vsi; struct ice_adv_rule_info rule_info = { 0 }; struct ice_adv_lkup_elem *list; struct ice_hw *hw = &pf->hw; @@ -59,7 +59,7 @@ ice_eswitch_add_vf_sp_rule(struct ice_pf *pf, struct ice_vf *vf) * @vf: pointer to the VF struct * * Delete the advanced rule that was used to forward packets with the VF's VSI - * index to the corresponding switchdev ctrl VSI queue. + * index to the corresponding eswitch ctrl VSI queue. 
*/ static void ice_eswitch_del_vf_sp_rule(struct ice_vf *vf) { @@ -70,7 +70,7 @@ static void ice_eswitch_del_vf_sp_rule(struct ice_vf *vf) } /** - * ice_eswitch_setup_env - configure switchdev HW filters + * ice_eswitch_setup_env - configure eswitch HW filters * @pf: pointer to PF struct * * This function adds HW filters configuration specific for switchdev @@ -78,18 +78,18 @@ static void ice_eswitch_del_vf_sp_rule(struct ice_vf *vf) */ static int ice_eswitch_setup_env(struct ice_pf *pf) { - struct ice_vsi *uplink_vsi = pf->switchdev.uplink_vsi; - struct net_device *uplink_netdev = uplink_vsi->netdev; - struct ice_vsi *ctrl_vsi = pf->switchdev.control_vsi; + struct ice_vsi *uplink_vsi = pf->eswitch.uplink_vsi; + struct ice_vsi *ctrl_vsi = pf->eswitch.control_vsi; + struct net_device *netdev = uplink_vsi->netdev; struct ice_vsi_vlan_ops *vlan_ops; bool rule_added = false; ice_remove_vsi_fltr(&pf->hw, uplink_vsi->idx); - netif_addr_lock_bh(uplink_netdev); - __dev_uc_unsync(uplink_netdev, NULL); - __dev_mc_unsync(uplink_netdev, NULL); - netif_addr_unlock_bh(uplink_netdev); + netif_addr_lock_bh(netdev); + __dev_uc_unsync(netdev, NULL); + __dev_mc_unsync(netdev, NULL); + netif_addr_unlock_bh(netdev); if (ice_vsi_add_vlan_zero(uplink_vsi)) goto err_def_rx; @@ -132,10 +132,10 @@ static int ice_eswitch_setup_env(struct ice_pf *pf) } /** - * ice_eswitch_remap_rings_to_vectors - reconfigure rings of switchdev ctrl VSI + * ice_eswitch_remap_rings_to_vectors - reconfigure rings of eswitch ctrl VSI * @pf: pointer to PF struct * - * In switchdev number of allocated Tx/Rx rings is equal. + * In eswitch number of allocated Tx/Rx rings is equal. * * This function fills q_vectors structures associated with representor and * move each ring pairs to port representor netdevs. 
Each port representor @@ -144,7 +144,7 @@ static int ice_eswitch_setup_env(struct ice_pf *pf) */ static void ice_eswitch_remap_rings_to_vectors(struct ice_pf *pf) { - struct ice_vsi *vsi = pf->switchdev.control_vsi; + struct ice_vsi *vsi = pf->eswitch.control_vsi; int q_id; ice_for_each_txq(vsi, q_id) { @@ -189,7 +189,7 @@ static void ice_eswitch_remap_rings_to_vectors(struct ice_pf *pf) /** * ice_eswitch_release_reprs - clear PR VSIs configuration * @pf: poiner to PF struct - * @ctrl_vsi: pointer to switchdev control VSI + * @ctrl_vsi: pointer to eswitch control VSI */ static void ice_eswitch_release_reprs(struct ice_pf *pf, struct ice_vsi *ctrl_vsi) @@ -223,7 +223,7 @@ ice_eswitch_release_reprs(struct ice_pf *pf, struct ice_vsi *ctrl_vsi) */ static int ice_eswitch_setup_reprs(struct ice_pf *pf) { - struct ice_vsi *ctrl_vsi = pf->switchdev.control_vsi; + struct ice_vsi *ctrl_vsi = pf->eswitch.control_vsi; int max_vsi_num = 0; struct ice_vf *vf; unsigned int bkt; @@ -359,7 +359,7 @@ ice_eswitch_port_start_xmit(struct sk_buff *skb, struct net_device *netdev) } /** - * ice_eswitch_set_target_vsi - set switchdev context in Tx context descriptor + * ice_eswitch_set_target_vsi - set eswitch context in Tx context descriptor * @skb: pointer to send buffer * @off: pointer to offload struct */ @@ -382,7 +382,7 @@ ice_eswitch_set_target_vsi(struct sk_buff *skb, } /** - * ice_eswitch_release_env - clear switchdev HW filters + * ice_eswitch_release_env - clear eswitch HW filters * @pf: pointer to PF struct * * This function removes HW filters configuration specific for switchdev @@ -390,8 +390,8 @@ ice_eswitch_set_target_vsi(struct sk_buff *skb, */ static void ice_eswitch_release_env(struct ice_pf *pf) { - struct ice_vsi *uplink_vsi = pf->switchdev.uplink_vsi; - struct ice_vsi *ctrl_vsi = pf->switchdev.control_vsi; + struct ice_vsi *uplink_vsi = pf->eswitch.uplink_vsi; + struct ice_vsi *ctrl_vsi = pf->eswitch.control_vsi; struct ice_vsi_vlan_ops *vlan_ops; vlan_ops = ice_get_compat_vsi_vlan_ops(uplink_vsi); @@ -407,7 +407,7 @@ static void ice_eswitch_release_env(struct ice_pf *pf) } /** - * ice_eswitch_vsi_setup - configure switchdev control VSI + * ice_eswitch_vsi_setup - configure eswitch control VSI * @pf: pointer to PF structure * @pi: pointer to port_info structure */ @@ -486,12 +486,12 @@ static int ice_eswitch_enable_switchdev(struct ice_pf *pf) return -EINVAL; } - pf->switchdev.control_vsi = ice_eswitch_vsi_setup(pf, pf->hw.port_info); - if (!pf->switchdev.control_vsi) + pf->eswitch.control_vsi = ice_eswitch_vsi_setup(pf, pf->hw.port_info); + if (!pf->eswitch.control_vsi) return -ENODEV; - ctrl_vsi = pf->switchdev.control_vsi; - pf->switchdev.uplink_vsi = uplink_vsi; + ctrl_vsi = pf->eswitch.control_vsi; + pf->eswitch.uplink_vsi = uplink_vsi; if (ice_eswitch_setup_env(pf)) goto err_vsi; @@ -526,12 +526,12 @@ static int ice_eswitch_enable_switchdev(struct ice_pf *pf) } /** - * ice_eswitch_disable_switchdev - disable switchdev resources + * ice_eswitch_disable_switchdev - disable eswitch resources * @pf: pointer to PF structure */ static void ice_eswitch_disable_switchdev(struct ice_pf *pf) { - struct ice_vsi *ctrl_vsi = pf->switchdev.control_vsi; + struct ice_vsi *ctrl_vsi = pf->eswitch.control_vsi; ice_eswitch_napi_disable(pf); ice_eswitch_br_offloads_deinit(pf); @@ -625,7 +625,7 @@ void ice_eswitch_release(struct ice_pf *pf) return; ice_eswitch_disable_switchdev(pf); - pf->switchdev.is_running = false; + pf->eswitch.is_running = false; } /** @@ -636,14 +636,15 @@ int 
ice_eswitch_configure(struct ice_pf *pf) { int status; - if (pf->eswitch_mode == DEVLINK_ESWITCH_MODE_LEGACY || pf->switchdev.is_running) + if (pf->eswitch_mode == DEVLINK_ESWITCH_MODE_LEGACY || + pf->eswitch.is_running) return 0; status = ice_eswitch_enable_switchdev(pf); if (status) return status; - pf->switchdev.is_running = true; + pf->eswitch.is_running = true; return 0; } @@ -693,7 +694,7 @@ void ice_eswitch_stop_all_tx_queues(struct ice_pf *pf) */ int ice_eswitch_rebuild(struct ice_pf *pf) { - struct ice_vsi *ctrl_vsi = pf->switchdev.control_vsi; + struct ice_vsi *ctrl_vsi = pf->eswitch.control_vsi; int status; ice_eswitch_napi_disable(pf); diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c index 6ae0269bdf73..16bbcaca8fda 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c +++ b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c @@ -947,7 +947,7 @@ ice_eswitch_br_vf_repr_port_init(struct ice_esw_br *bridge, static int ice_eswitch_br_uplink_port_init(struct ice_esw_br *bridge, struct ice_pf *pf) { - struct ice_vsi *vsi = pf->switchdev.uplink_vsi; + struct ice_vsi *vsi = pf->eswitch.uplink_vsi; struct ice_esw_br_port *br_port; int err; @@ -1185,7 +1185,7 @@ ice_eswitch_br_port_event(struct notifier_block *nb, static void ice_eswitch_br_offloads_dealloc(struct ice_pf *pf) { - struct ice_esw_br_offloads *br_offloads = pf->switchdev.br_offloads; + struct ice_esw_br_offloads *br_offloads = pf->eswitch.br_offloads; ASSERT_RTNL(); @@ -1194,7 +1194,7 @@ ice_eswitch_br_offloads_dealloc(struct ice_pf *pf) ice_eswitch_br_deinit(br_offloads, br_offloads->bridge); - pf->switchdev.br_offloads = NULL; + pf->eswitch.br_offloads = NULL; kfree(br_offloads); } @@ -1205,14 +1205,14 @@ ice_eswitch_br_offloads_alloc(struct ice_pf *pf) ASSERT_RTNL(); - if (pf->switchdev.br_offloads) + if (pf->eswitch.br_offloads) return ERR_PTR(-EEXIST); br_offloads = kzalloc(sizeof(*br_offloads), GFP_KERNEL); if (!br_offloads) return ERR_PTR(-ENOMEM); - pf->switchdev.br_offloads = br_offloads; + pf->eswitch.br_offloads = br_offloads; br_offloads->pf = pf; return br_offloads; @@ -1223,7 +1223,7 @@ ice_eswitch_br_offloads_deinit(struct ice_pf *pf) { struct ice_esw_br_offloads *br_offloads; - br_offloads = pf->switchdev.br_offloads; + br_offloads = pf->eswitch.br_offloads; if (!br_offloads) return; diff --git a/drivers/net/ethernet/intel/ice/ice_tc_lib.c b/drivers/net/ethernet/intel/ice/ice_tc_lib.c index 0e75fc6b3c06..770d3a085af8 100644 --- a/drivers/net/ethernet/intel/ice/ice_tc_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_tc_lib.c @@ -653,7 +653,7 @@ static int ice_tc_setup_redirect_action(struct net_device *filter_dev, ice_tc_is_dev_uplink(target_dev)) { repr = ice_netdev_to_repr(filter_dev); - fltr->dest_vsi = repr->src_vsi->back->switchdev.uplink_vsi; + fltr->dest_vsi = repr->src_vsi->back->eswitch.uplink_vsi; fltr->direction = ICE_ESWITCH_FLTR_EGRESS; } else if (ice_tc_is_dev_uplink(filter_dev) && ice_is_port_repr_netdev(target_dev)) { @@ -743,7 +743,7 @@ ice_eswitch_add_tc_fltr(struct ice_vsi *vsi, struct ice_tc_flower_fltr *fltr) rule_info.sw_act.src = hw->pf_id; rule_info.flags_info.act = ICE_SINGLE_ACT_LB_ENABLE; } else if (fltr->direction == ICE_ESWITCH_FLTR_EGRESS && - fltr->dest_vsi == vsi->back->switchdev.uplink_vsi) { + fltr->dest_vsi == vsi->back->eswitch.uplink_vsi) { /* VF to Uplink */ rule_info.sw_act.flag |= ICE_FLTR_TX; rule_info.sw_act.src = vsi->idx; From patchwork Tue Oct 24 11:09:16 2023 Content-Type: text/plain; charset="utf-8" 
X-Patchwork-Submitter: Michal Swiatkowski
X-Patchwork-Id: 13434265
X-Patchwork-Delegate: kuba@kernel.org
From: Michal Swiatkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, piotr.raczynski@intel.com, wojciech.drewek@intel.com, marcin.szycik@intel.com, jacob.e.keller@intel.com, przemyslaw.kitszel@intel.com, jesse.brandeburg@intel.com, Michal Swiatkowski
Subject: [PATCH iwl-next v1 02/15] ice: remove redundant max_vsi_num variable
Date: Tue, 24 Oct 2023 13:09:16 +0200
Message-ID: <20231024110929.19423-3-michal.swiatkowski@linux.intel.com>
In-Reply-To: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com>
References: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com>

The max_vsi_num variable is a leftover from a previous implementation and was accidentally never removed. Do it now.
Commit that has removed it: commit c1e5da5dd465 ("ice: improve switchdev's slow-path") Reviewed-by: Wojciech Drewek Reviewed-by: Piotr Raczynski Reviewed-by: Jacob Keller Signed-off-by: Michal Swiatkowski --- drivers/net/ethernet/intel/ice/ice_eswitch.c | 4 ---- 1 file changed, 4 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c index e7f1e53314d7..fd8d59f4d97d 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch.c +++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c @@ -224,7 +224,6 @@ ice_eswitch_release_reprs(struct ice_pf *pf, struct ice_vsi *ctrl_vsi) static int ice_eswitch_setup_reprs(struct ice_pf *pf) { struct ice_vsi *ctrl_vsi = pf->eswitch.control_vsi; - int max_vsi_num = 0; struct ice_vf *vf; unsigned int bkt; @@ -267,9 +266,6 @@ static int ice_eswitch_setup_reprs(struct ice_pf *pf) goto err; } - if (max_vsi_num < vsi->vsi_num) - max_vsi_num = vsi->vsi_num; - netif_napi_add(vf->repr->netdev, &vf->repr->q_vector->napi, ice_napi_poll); From patchwork Tue Oct 24 11:09:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Swiatkowski X-Patchwork-Id: 13434266 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 92C63266B3 for ; Tue, 24 Oct 2023 11:34:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="Om3nEWGG" Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.9]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 077F3D79 for ; Tue, 24 Oct 2023 04:34:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698147287; x=1729683287; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=uNyGsXLuPM6ACKgLWe6U53AUDwuyxhPZp6uE5dhElZc=; b=Om3nEWGG5+qyQCF8oF/osCWfaUmvw11/7BKzpDStx9rNAg/623eaN/jm d/cnVSmIvFWy1ehuYoiERuLBX9KkKIkUlhkLeQem0/MbfCwymNjrGBGSc vdKdM+Qrj6WQf+Kw/e1qc6/Le99xWZ8/dU+x42nqaFS/LAe/l/WGRrzq0 dJdVRzjmQni/9qr7cxT417/9psJqG0osVGVoVjuab2QRChYTdyzzrHOEv EK8yYrivUTXSyFv5K3yperQezjXzGUwxwmb50TN9IMLKDzBvhXN9PswXz fFI4Ghs7WbdqDsMrcx+HTDAtSD1FJ9THGXJ60EyeC60eSeWg36So4/G3j A==; X-IronPort-AV: E=McAfee;i="6600,9927,10872"; a="5660525" X-IronPort-AV: E=Sophos;i="6.03,247,1694761200"; d="scan'208";a="5660525" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by orvoesa101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Oct 2023 04:34:47 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.03,247,1694761200"; d="scan'208";a="6145979" Received: from wasp.igk.intel.com ([10.102.20.192]) by orviesa001.jf.intel.com with ESMTP; 24 Oct 2023 04:33:27 -0700 From: Michal Swiatkowski To: intel-wired-lan@lists.osuosl.org Cc: netdev@vger.kernel.org, piotr.raczynski@intel.com, wojciech.drewek@intel.com, marcin.szycik@intel.com, jacob.e.keller@intel.com, przemyslaw.kitszel@intel.com, jesse.brandeburg@intel.com, Michal Swiatkowski Subject: [PATCH iwl-next v1 03/15] ice: remove unused control VSI parameter Date: Tue, 24 Oct 2023 13:09:17 +0200 Message-ID: <20231024110929.19423-4-michal.swiatkowski@linux.intel.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: 
<20231024110929.19423-1-michal.swiatkowski@linux.intel.com> References: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org It isn't used in ice_eswitch_release_reprs(). Probably leftover. Remove it. Commit that has removed usage of ctrl_vsi: commit c1e5da5dd465 ("ice: improve switchdev's slow-path") Reviewed-by: Wojciech Drewek Reviewed-by: Piotr Raczynski Reviewed-by: Jacob Keller Signed-off-by: Michal Swiatkowski --- drivers/net/ethernet/intel/ice/ice_eswitch.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c index fd8d59f4d97d..a862681c0f64 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch.c +++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c @@ -189,10 +189,9 @@ static void ice_eswitch_remap_rings_to_vectors(struct ice_pf *pf) /** * ice_eswitch_release_reprs - clear PR VSIs configuration * @pf: poiner to PF struct - * @ctrl_vsi: pointer to eswitch control VSI */ static void -ice_eswitch_release_reprs(struct ice_pf *pf, struct ice_vsi *ctrl_vsi) +ice_eswitch_release_reprs(struct ice_pf *pf) { struct ice_vf *vf; unsigned int bkt; @@ -286,7 +285,7 @@ static int ice_eswitch_setup_reprs(struct ice_pf *pf) return 0; err: - ice_eswitch_release_reprs(pf, ctrl_vsi); + ice_eswitch_release_reprs(pf); return -ENODEV; } @@ -532,7 +531,7 @@ static void ice_eswitch_disable_switchdev(struct ice_pf *pf) ice_eswitch_napi_disable(pf); ice_eswitch_br_offloads_deinit(pf); ice_eswitch_release_env(pf); - ice_eswitch_release_reprs(pf, ctrl_vsi); + ice_eswitch_release_reprs(pf); ice_vsi_release(ctrl_vsi); ice_repr_rem_from_all_vfs(pf); } From patchwork Tue Oct 24 11:09:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Swiatkowski X-Patchwork-Id: 13434267 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3C140266B4 for ; Tue, 24 Oct 2023 11:34:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="SzvM+1pX" Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.9]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 417C0128 for ; Tue, 24 Oct 2023 04:34:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698147289; x=1729683289; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=JQWkstSUlgGY3Gc/QGH2k/VG3fCYrtBoeCE/haNRHMo=; b=SzvM+1pXWcLSSuUpZ1itOnp2Y2ZkcVy15vk5PF3DF5FaiSFYJdGn5rBJ FpPXSxNKB4Lj8xZu9UMCkvqvLRMRf1XzZnPN8nLRWEKMjTYwnPESZeQwY v95Uo02kJkIPGehhQelSTZRTGviAwmWNb9AvU/qUz3WApPgQIV70dqWvy t4Wg2N8B3McEY4133Usxayp1oSOBAwROdpW6jiYILLPEGMQLDm3tO9gaf vpIeduz0R5kkWy8Nh2UlItYNxnMkgvevac97qJNrW8iS98ZPQIiT+S3kX z8qqorl0PXhBqdqUxtwFm6vXVM+onEq4XQ+OdD8cUD1rbfWPLao72DgRB A==; X-IronPort-AV: E=McAfee;i="6600,9927,10872"; a="5660535" X-IronPort-AV: E=Sophos;i="6.03,247,1694761200"; d="scan'208";a="5660535" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by orvoesa101.jf.intel.com with 
ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Oct 2023 04:34:49 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.03,247,1694761200"; d="scan'208";a="6146004" Received: from wasp.igk.intel.com ([10.102.20.192]) by orviesa001.jf.intel.com with ESMTP; 24 Oct 2023 04:33:29 -0700 From: Michal Swiatkowski To: intel-wired-lan@lists.osuosl.org Cc: netdev@vger.kernel.org, piotr.raczynski@intel.com, wojciech.drewek@intel.com, marcin.szycik@intel.com, jacob.e.keller@intel.com, przemyslaw.kitszel@intel.com, jesse.brandeburg@intel.com, Michal Swiatkowski Subject: [PATCH iwl-next v1 04/15] ice: track q_id in representor Date: Tue, 24 Oct 2023 13:09:18 +0200 Message-ID: <20231024110929.19423-5-michal.swiatkowski@linux.intel.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com> References: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Previously queue index of control plane VSI used by port representor was always id of VF. If we want to allow adding port representors for different devices we have to track queue index in the port representor structure. Reviewed-by: Wojciech Drewek Reviewed-by: Piotr Raczynski Reviewed-by: Jacob Keller Signed-off-by: Michal Swiatkowski Tested-by: Sujai Buvaneswaran --- drivers/net/ethernet/intel/ice/ice_eswitch.c | 2 +- drivers/net/ethernet/intel/ice/ice_repr.c | 1 + drivers/net/ethernet/intel/ice/ice_repr.h | 1 + 3 files changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c index a862681c0f64..119185564450 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch.c +++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c @@ -38,7 +38,7 @@ ice_eswitch_add_vf_sp_rule(struct ice_pf *pf, struct ice_vf *vf) rule_info.sw_act.vsi_handle = ctrl_vsi->idx; rule_info.sw_act.fltr_act = ICE_FWD_TO_Q; rule_info.sw_act.fwd_id.q_id = hw->func_caps.common_cap.rxq_first_id + - ctrl_vsi->rxq_map[vf->vf_id]; + ctrl_vsi->rxq_map[vf->repr->q_id]; rule_info.flags_info.act |= ICE_SINGLE_ACT_LB_ENABLE; rule_info.flags_info.act_valid = true; rule_info.tun_type = ICE_SW_TUN_AND_NON_TUN; diff --git a/drivers/net/ethernet/intel/ice/ice_repr.c b/drivers/net/ethernet/intel/ice/ice_repr.c index c686ac0935eb..a2dc216c964f 100644 --- a/drivers/net/ethernet/intel/ice/ice_repr.c +++ b/drivers/net/ethernet/intel/ice/ice_repr.c @@ -306,6 +306,7 @@ static int ice_repr_add(struct ice_vf *vf) repr->src_vsi = vsi; repr->vf = vf; + repr->q_id = vf->vf_id; vf->repr = repr; np = netdev_priv(repr->netdev); np->repr = repr; diff --git a/drivers/net/ethernet/intel/ice/ice_repr.h b/drivers/net/ethernet/intel/ice/ice_repr.h index e1ee2d2c1d2d..f350273b8874 100644 --- a/drivers/net/ethernet/intel/ice/ice_repr.h +++ b/drivers/net/ethernet/intel/ice/ice_repr.h @@ -13,6 +13,7 @@ struct ice_repr { struct net_device *netdev; struct metadata_dst *dst; struct ice_esw_br_port *br_port; + int q_id; #ifdef CONFIG_ICE_SWITCHDEV /* info about slow path rule */ struct ice_rule_query_data sp_rule; From patchwork Tue Oct 24 11:09:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Swiatkowski X-Patchwork-Id: 13434268 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher 
From: Michal Swiatkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, piotr.raczynski@intel.com, wojciech.drewek@intel.com, marcin.szycik@intel.com, jacob.e.keller@intel.com, przemyslaw.kitszel@intel.com, jesse.brandeburg@intel.com, Michal Swiatkowski
Subject: [PATCH iwl-next v1 05/15] ice: use repr instead of vf->repr
Date: Tue, 24 Oct 2023 13:09:19 +0200
Message-ID: <20231024110929.19423-6-michal.swiatkowski@linux.intel.com>
In-Reply-To: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com>
References: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com>

Extract a local repr variable from vf->repr, as it is used often in ice_repr_rem(). Remove the meaningless clearing of the q_vector and netdev pointers, since kfree() is called on the repr pointer anyway.
Reviewed-by: Przemek Kitszel Reviewed-by: Wojciech Drewek Reviewed-by: Jacob Keller Signed-off-by: Michal Swiatkowski --- drivers/net/ethernet/intel/ice/ice_repr.c | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_repr.c b/drivers/net/ethernet/intel/ice/ice_repr.c index a2dc216c964f..903a3385eacb 100644 --- a/drivers/net/ethernet/intel/ice/ice_repr.c +++ b/drivers/net/ethernet/intel/ice/ice_repr.c @@ -355,16 +355,16 @@ static int ice_repr_add(struct ice_vf *vf) */ static void ice_repr_rem(struct ice_vf *vf) { - if (!vf->repr) + struct ice_repr *repr = vf->repr; + + if (!repr) return; - kfree(vf->repr->q_vector); - vf->repr->q_vector = NULL; - unregister_netdev(vf->repr->netdev); + kfree(repr->q_vector); + unregister_netdev(repr->netdev); ice_devlink_destroy_vf_port(vf); - free_netdev(vf->repr->netdev); - vf->repr->netdev = NULL; - kfree(vf->repr); + free_netdev(repr->netdev); + kfree(repr); vf->repr = NULL; ice_virtchnl_set_dflt_ops(vf); From patchwork Tue Oct 24 11:09:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Swiatkowski X-Patchwork-Id: 13434269 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 50A4C266A8 for ; Tue, 24 Oct 2023 11:34:56 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="H+dnWgSW" Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.9]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 31212D68 for ; Tue, 24 Oct 2023 04:34:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698147294; x=1729683294; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=pLazSz8ytqNlUTioQJrSz6Xd1zBasX3ytkydNEXXFt0=; b=H+dnWgSWNFK6NP5QwjEIjczXngh5gOEWags0OvdhCzbGOs6d+X6f8ppj jPtnEUWMahqe2WFLllE528gP0dI+S+9gIooxqVq55FtkUrexlQQhrk9HB 16GngAyvugoAGMXVhzlOoZ9JYfCgYRnUQGodYzEtQYztqC0/1ijFjGSYe /CUr+UST0XAL/2AEmnq7C/lNW+P/ZKPzLMu9qBd+I9FI8R3bkw8YG3wDX A1IiSC81DlEObh6aX2qWyWhsETXHKkdR+ptmaR2Nn4KdSQq8JqMC14UIk xUzNT8WO/lXQ8hT/A4ITpEVj0IjMB0MEQWNAFODS/VFbBoARN1O4QLBC3 A==; X-IronPort-AV: E=McAfee;i="6600,9927,10872"; a="5660544" X-IronPort-AV: E=Sophos;i="6.03,247,1694761200"; d="scan'208";a="5660544" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by orvoesa101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Oct 2023 04:34:54 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.03,247,1694761200"; d="scan'208";a="6146065" Received: from wasp.igk.intel.com ([10.102.20.192]) by orviesa001.jf.intel.com with ESMTP; 24 Oct 2023 04:33:34 -0700 From: Michal Swiatkowski To: intel-wired-lan@lists.osuosl.org Cc: netdev@vger.kernel.org, piotr.raczynski@intel.com, wojciech.drewek@intel.com, marcin.szycik@intel.com, jacob.e.keller@intel.com, przemyslaw.kitszel@intel.com, jesse.brandeburg@intel.com, Michal Swiatkowski Subject: [PATCH iwl-next v1 06/15] ice: track port representors in xarray Date: Tue, 24 Oct 2023 13:09:20 +0200 Message-ID: <20231024110929.19423-7-michal.swiatkowski@linux.intel.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com> 
References: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com>

Instead of assuming that each VF has a pointer to its port representor, store the representors in an xarray. This will allow adding port representors for other device types. Drop the VF reference where it is used only to get the port representor and get it from the xarray instead. The functions are no longer VF-specific, so rename them. Track the id assigned by the xarray in the port representor structure. That id can't be used as ::q_id, because it stays fixed for the port representor's lifetime, while ::q_id can change after other port representors are added or removed. A side effect of removing the VF pointer is losing the VF MAC information used in unrolling; store it in the port representor as the parent MAC.

Reviewed-by: Piotr Raczynski
Reviewed-by: Wojciech Drewek
Signed-off-by: Michal Swiatkowski
Tested-by: Sujai Buvaneswaran
---
 drivers/net/ethernet/intel/ice/ice.h | 1 +
 drivers/net/ethernet/intel/ice/ice_eswitch.c | 182 +++++++++----------
 drivers/net/ethernet/intel/ice/ice_main.c | 2 +
 drivers/net/ethernet/intel/ice/ice_repr.c | 8 +
 drivers/net/ethernet/intel/ice/ice_repr.h | 2 +
 5 files changed, 94 insertions(+), 101 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 6c59ca86d959..597bdb6945c6 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -526,6 +526,7 @@ struct ice_eswitch {
 	struct ice_vsi *control_vsi;
 	struct ice_vsi *uplink_vsi;
 	struct ice_esw_br_offloads *br_offloads;
+	struct xarray reprs;
 	bool is_running;
 };
diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c
index 119185564450..a6b528bc2023 100644
--- a/drivers/net/ethernet/intel/ice/ice_eswitch.c
+++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c
@@ -11,15 +11,15 @@
 #include "ice_tc_lib.h"

 /**
- * ice_eswitch_add_vf_sp_rule - add adv rule with VF's VSI index
+ * ice_eswitch_add_sp_rule - add adv rule with device's VSI index
 * @pf: pointer to PF struct
- * @vf: pointer to VF struct
+ * @repr: pointer to the repr struct
 *
 * This function adds advanced rule that forwards packets with
- * VF's VSI index to the corresponding eswitch ctrl VSI queue.
+ * device's VSI index to the corresponding eswitch ctrl VSI queue.
*/ static int -ice_eswitch_add_vf_sp_rule(struct ice_pf *pf, struct ice_vf *vf) +ice_eswitch_add_sp_rule(struct ice_pf *pf, struct ice_repr *repr) { struct ice_vsi *ctrl_vsi = pf->eswitch.control_vsi; struct ice_adv_rule_info rule_info = { 0 }; @@ -38,35 +38,32 @@ ice_eswitch_add_vf_sp_rule(struct ice_pf *pf, struct ice_vf *vf) rule_info.sw_act.vsi_handle = ctrl_vsi->idx; rule_info.sw_act.fltr_act = ICE_FWD_TO_Q; rule_info.sw_act.fwd_id.q_id = hw->func_caps.common_cap.rxq_first_id + - ctrl_vsi->rxq_map[vf->repr->q_id]; + ctrl_vsi->rxq_map[repr->q_id]; rule_info.flags_info.act |= ICE_SINGLE_ACT_LB_ENABLE; rule_info.flags_info.act_valid = true; rule_info.tun_type = ICE_SW_TUN_AND_NON_TUN; - rule_info.src_vsi = vf->lan_vsi_idx; + rule_info.src_vsi = repr->src_vsi->idx; err = ice_add_adv_rule(hw, list, lkups_cnt, &rule_info, - &vf->repr->sp_rule); + &repr->sp_rule); if (err) - dev_err(ice_pf_to_dev(pf), "Unable to add VF slow-path rule in switchdev mode for VF %d", - vf->vf_id); + dev_err(ice_pf_to_dev(pf), "Unable to add slow-path rule in switchdev mode"); kfree(list); return err; } /** - * ice_eswitch_del_vf_sp_rule - delete adv rule with VF's VSI index - * @vf: pointer to the VF struct + * ice_eswitch_del_sp_rule - delete adv rule with device's VSI index + * @pf: pointer to the PF struct + * @repr: pointer to the repr struct * - * Delete the advanced rule that was used to forward packets with the VF's VSI - * index to the corresponding eswitch ctrl VSI queue. + * Delete the advanced rule that was used to forward packets with the device's + * VSI index to the corresponding eswitch ctrl VSI queue. */ -static void ice_eswitch_del_vf_sp_rule(struct ice_vf *vf) +static void ice_eswitch_del_sp_rule(struct ice_pf *pf, struct ice_repr *repr) { - if (!vf->repr) - return; - - ice_rem_adv_rule_by_id(&vf->pf->hw, &vf->repr->sp_rule); + ice_rem_adv_rule_by_id(&pf->hw, &repr->sp_rule); } /** @@ -193,26 +190,24 @@ static void ice_eswitch_remap_rings_to_vectors(struct ice_pf *pf) static void ice_eswitch_release_reprs(struct ice_pf *pf) { - struct ice_vf *vf; - unsigned int bkt; - - lockdep_assert_held(&pf->vfs.table_lock); + struct ice_repr *repr; + unsigned long id; - ice_for_each_vf(pf, bkt, vf) { - struct ice_vsi *vsi = vf->repr->src_vsi; + xa_for_each(&pf->eswitch.reprs, id, repr) { + struct ice_vsi *vsi = repr->src_vsi; - /* Skip VFs that aren't configured */ - if (!vf->repr->dst) + /* Skip representors that aren't configured */ + if (!repr->dst) continue; ice_vsi_update_security(vsi, ice_vsi_ctx_set_antispoof); - metadata_dst_free(vf->repr->dst); - vf->repr->dst = NULL; - ice_eswitch_del_vf_sp_rule(vf); - ice_fltr_add_mac_and_broadcast(vsi, vf->hw_lan_addr, + metadata_dst_free(repr->dst); + repr->dst = NULL; + ice_eswitch_del_sp_rule(pf, repr); + ice_fltr_add_mac_and_broadcast(vsi, repr->parent_mac, ICE_FWD_TO_VSI); - netif_napi_del(&vf->repr->q_vector->napi); + netif_napi_del(&repr->q_vector->napi); } } @@ -223,56 +218,53 @@ ice_eswitch_release_reprs(struct ice_pf *pf) static int ice_eswitch_setup_reprs(struct ice_pf *pf) { struct ice_vsi *ctrl_vsi = pf->eswitch.control_vsi; - struct ice_vf *vf; - unsigned int bkt; - - lockdep_assert_held(&pf->vfs.table_lock); + struct ice_repr *repr; + unsigned long id; - ice_for_each_vf(pf, bkt, vf) { - struct ice_vsi *vsi = vf->repr->src_vsi; + xa_for_each(&pf->eswitch.reprs, id, repr) { + struct ice_vsi *vsi = repr->src_vsi; ice_remove_vsi_fltr(&pf->hw, vsi->idx); - vf->repr->dst = metadata_dst_alloc(0, METADATA_HW_PORT_MUX, - GFP_KERNEL); - if (!vf->repr->dst) 
{ - ice_fltr_add_mac_and_broadcast(vsi, vf->hw_lan_addr, + repr->dst = metadata_dst_alloc(0, METADATA_HW_PORT_MUX, + GFP_KERNEL); + if (!repr->dst) { + ice_fltr_add_mac_and_broadcast(vsi, repr->parent_mac, ICE_FWD_TO_VSI); goto err; } - if (ice_eswitch_add_vf_sp_rule(pf, vf)) { - ice_fltr_add_mac_and_broadcast(vsi, vf->hw_lan_addr, + if (ice_eswitch_add_sp_rule(pf, repr)) { + ice_fltr_add_mac_and_broadcast(vsi, repr->parent_mac, ICE_FWD_TO_VSI); goto err; } if (ice_vsi_update_security(vsi, ice_vsi_ctx_clear_antispoof)) { - ice_fltr_add_mac_and_broadcast(vsi, vf->hw_lan_addr, + ice_fltr_add_mac_and_broadcast(vsi, repr->parent_mac, ICE_FWD_TO_VSI); - ice_eswitch_del_vf_sp_rule(vf); - metadata_dst_free(vf->repr->dst); - vf->repr->dst = NULL; + ice_eswitch_del_sp_rule(pf, repr); + metadata_dst_free(repr->dst); + repr->dst = NULL; goto err; } if (ice_vsi_add_vlan_zero(vsi)) { - ice_fltr_add_mac_and_broadcast(vsi, vf->hw_lan_addr, + ice_fltr_add_mac_and_broadcast(vsi, repr->parent_mac, ICE_FWD_TO_VSI); - ice_eswitch_del_vf_sp_rule(vf); - metadata_dst_free(vf->repr->dst); - vf->repr->dst = NULL; + ice_eswitch_del_sp_rule(pf, repr); + metadata_dst_free(repr->dst); + repr->dst = NULL; ice_vsi_update_security(vsi, ice_vsi_ctx_set_antispoof); goto err; } - netif_napi_add(vf->repr->netdev, &vf->repr->q_vector->napi, + netif_napi_add(repr->netdev, &repr->q_vector->napi, ice_napi_poll); - netif_keep_dst(vf->repr->netdev); + netif_keep_dst(repr->netdev); } - ice_for_each_vf(pf, bkt, vf) { - struct ice_repr *repr = vf->repr; + xa_for_each(&pf->eswitch.reprs, id, repr) { struct ice_vsi *vsi = repr->src_vsi; struct metadata_dst *dst; @@ -291,7 +283,7 @@ static int ice_eswitch_setup_reprs(struct ice_pf *pf) } /** - * ice_eswitch_update_repr - reconfigure VF port representor + * ice_eswitch_update_repr - reconfigure port representor * @vsi: VF VSI for which port representor is configured */ void ice_eswitch_update_repr(struct ice_vsi *vsi) @@ -420,47 +412,41 @@ ice_eswitch_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi) /** * ice_eswitch_napi_del - remove NAPI handle for all port representors - * @pf: pointer to PF structure + * @reprs: xarray of reprs */ -static void ice_eswitch_napi_del(struct ice_pf *pf) +static void ice_eswitch_napi_del(struct xarray *reprs) { - struct ice_vf *vf; - unsigned int bkt; - - lockdep_assert_held(&pf->vfs.table_lock); + struct ice_repr *repr; + unsigned long id; - ice_for_each_vf(pf, bkt, vf) - netif_napi_del(&vf->repr->q_vector->napi); + xa_for_each(reprs, id, repr) + netif_napi_del(&repr->q_vector->napi); } /** * ice_eswitch_napi_enable - enable NAPI for all port representors - * @pf: pointer to PF structure + * @reprs: xarray of reprs */ -static void ice_eswitch_napi_enable(struct ice_pf *pf) +static void ice_eswitch_napi_enable(struct xarray *reprs) { - struct ice_vf *vf; - unsigned int bkt; - - lockdep_assert_held(&pf->vfs.table_lock); + struct ice_repr *repr; + unsigned long id; - ice_for_each_vf(pf, bkt, vf) - napi_enable(&vf->repr->q_vector->napi); + xa_for_each(reprs, id, repr) + napi_enable(&repr->q_vector->napi); } /** * ice_eswitch_napi_disable - disable NAPI for all port representors - * @pf: pointer to PF structure + * @reprs: xarray of reprs */ -static void ice_eswitch_napi_disable(struct ice_pf *pf) +static void ice_eswitch_napi_disable(struct xarray *reprs) { - struct ice_vf *vf; - unsigned int bkt; - - lockdep_assert_held(&pf->vfs.table_lock); + struct ice_repr *repr; + unsigned long id; - ice_for_each_vf(pf, bkt, vf) - 
napi_disable(&vf->repr->q_vector->napi); + xa_for_each(reprs, id, repr) + napi_disable(&repr->q_vector->napi); } /** @@ -505,7 +491,7 @@ static int ice_eswitch_enable_switchdev(struct ice_pf *pf) if (ice_eswitch_br_offloads_init(pf)) goto err_br_offloads; - ice_eswitch_napi_enable(pf); + ice_eswitch_napi_enable(&pf->eswitch.reprs); return 0; @@ -528,7 +514,7 @@ static void ice_eswitch_disable_switchdev(struct ice_pf *pf) { struct ice_vsi *ctrl_vsi = pf->eswitch.control_vsi; - ice_eswitch_napi_disable(pf); + ice_eswitch_napi_disable(&pf->eswitch.reprs); ice_eswitch_br_offloads_deinit(pf); ice_eswitch_release_env(pf); ice_eswitch_release_reprs(pf); @@ -561,6 +547,7 @@ ice_eswitch_mode_set(struct devlink *devlink, u16 mode, case DEVLINK_ESWITCH_MODE_LEGACY: dev_info(ice_pf_to_dev(pf), "PF %d changed eswitch mode to legacy", pf->hw.pf_id); + xa_destroy(&pf->eswitch.reprs); NL_SET_ERR_MSG_MOD(extack, "Changed eswitch mode to legacy"); break; case DEVLINK_ESWITCH_MODE_SWITCHDEV: @@ -573,6 +560,7 @@ ice_eswitch_mode_set(struct devlink *devlink, u16 mode, dev_info(ice_pf_to_dev(pf), "PF %d changed eswitch mode to switchdev", pf->hw.pf_id); + xa_init_flags(&pf->eswitch.reprs, XA_FLAGS_ALLOC); NL_SET_ERR_MSG_MOD(extack, "Changed eswitch mode to switchdev"); break; } @@ -649,18 +637,14 @@ int ice_eswitch_configure(struct ice_pf *pf) */ static void ice_eswitch_start_all_tx_queues(struct ice_pf *pf) { - struct ice_vf *vf; - unsigned int bkt; - - lockdep_assert_held(&pf->vfs.table_lock); + struct ice_repr *repr; + unsigned long id; if (test_bit(ICE_DOWN, pf->state)) return; - ice_for_each_vf(pf, bkt, vf) { - if (vf->repr) - ice_repr_start_tx_queues(vf->repr); - } + xa_for_each(&pf->eswitch.reprs, id, repr) + ice_repr_start_tx_queues(repr); } /** @@ -669,18 +653,14 @@ static void ice_eswitch_start_all_tx_queues(struct ice_pf *pf) */ void ice_eswitch_stop_all_tx_queues(struct ice_pf *pf) { - struct ice_vf *vf; - unsigned int bkt; - - lockdep_assert_held(&pf->vfs.table_lock); + struct ice_repr *repr; + unsigned long id; if (test_bit(ICE_DOWN, pf->state)) return; - ice_for_each_vf(pf, bkt, vf) { - if (vf->repr) - ice_repr_stop_tx_queues(vf->repr); - } + xa_for_each(&pf->eswitch.reprs, id, repr) + ice_repr_stop_tx_queues(repr); } /** @@ -692,8 +672,8 @@ int ice_eswitch_rebuild(struct ice_pf *pf) struct ice_vsi *ctrl_vsi = pf->eswitch.control_vsi; int status; - ice_eswitch_napi_disable(pf); - ice_eswitch_napi_del(pf); + ice_eswitch_napi_disable(&pf->eswitch.reprs); + ice_eswitch_napi_del(&pf->eswitch.reprs); status = ice_eswitch_setup_env(pf); if (status) @@ -711,7 +691,7 @@ int ice_eswitch_rebuild(struct ice_pf *pf) if (status) return status; - ice_eswitch_napi_enable(pf); + ice_eswitch_napi_enable(&pf->eswitch.reprs); ice_eswitch_start_all_tx_queues(pf); return 0; diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 66095e9b094e..cb0ff015647f 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -4702,6 +4702,8 @@ static void ice_deinit_features(struct ice_pf *pf) ice_ptp_release(pf); if (test_bit(ICE_FLAG_DPLL, pf->flags)) ice_dpll_deinit(pf); + if (pf->eswitch_mode == DEVLINK_ESWITCH_MODE_SWITCHDEV) + xa_destroy(&pf->eswitch.reprs); } static void ice_init_wakeup(struct ice_pf *pf) diff --git a/drivers/net/ethernet/intel/ice/ice_repr.c b/drivers/net/ethernet/intel/ice/ice_repr.c index 903a3385eacb..e56c59a304ef 100644 --- a/drivers/net/ethernet/intel/ice/ice_repr.c +++ 
b/drivers/net/ethernet/intel/ice/ice_repr.c @@ -318,6 +318,11 @@ static int ice_repr_add(struct ice_vf *vf) } repr->q_vector = q_vector; + err = xa_alloc(&vf->pf->eswitch.reprs, &repr->id, repr, + xa_limit_32b, GFP_KERNEL); + if (err) + goto err_xa_alloc; + err = ice_devlink_create_vf_port(vf); if (err) goto err_devlink; @@ -338,6 +343,8 @@ static int ice_repr_add(struct ice_vf *vf) err_netdev: ice_devlink_destroy_vf_port(vf); err_devlink: + xa_erase(&vf->pf->eswitch.reprs, repr->id); +err_xa_alloc: kfree(repr->q_vector); vf->repr->q_vector = NULL; err_alloc_q_vector: @@ -363,6 +370,7 @@ static void ice_repr_rem(struct ice_vf *vf) kfree(repr->q_vector); unregister_netdev(repr->netdev); ice_devlink_destroy_vf_port(vf); + xa_erase(&vf->pf->eswitch.reprs, repr->id); free_netdev(repr->netdev); kfree(repr); vf->repr = NULL; diff --git a/drivers/net/ethernet/intel/ice/ice_repr.h b/drivers/net/ethernet/intel/ice/ice_repr.h index f350273b8874..735cb556c620 100644 --- a/drivers/net/ethernet/intel/ice/ice_repr.h +++ b/drivers/net/ethernet/intel/ice/ice_repr.h @@ -14,6 +14,8 @@ struct ice_repr { struct metadata_dst *dst; struct ice_esw_br_port *br_port; int q_id; + u32 id; + u8 parent_mac[ETH_ALEN]; #ifdef CONFIG_ICE_SWITCHDEV /* info about slow path rule */ struct ice_rule_query_data sp_rule; From patchwork Tue Oct 24 11:09:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Swiatkowski X-Patchwork-Id: 13434270 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 34884262BF for ; Tue, 24 Oct 2023 11:34:58 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="WsQqXVUT" Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.9]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A0812D7B for ; Tue, 24 Oct 2023 04:34:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698147296; x=1729683296; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=P8Uxdqm7LCe9w0pzTpg2S+dVJ+WoFhyJhxi4rgKgiew=; b=WsQqXVUTgP6+At1fSGo/wHqeYI6qexu8nRUBHdY5a0zuJCmzt86ZCU02 6mch5GPnIsxYUFlD/wFhs3nrVbHH/IWkCT3Fv4Bi8i7gZqJcFDKaeN+2i w+NprL90u0TMI4c7eQJyTHOdWUiWXtK/t5928//RGuXIkLlgT5EtVJs91 uYyo84qhXz6Wn88lZoZDl1OOU0orCGjL0y/WHBtGjbgLqL2BlL7ZtROKw hKgVQYVI2BoNd/XmcDT4ee/tuIuiBj6naOVzL5SZF36LpqVrjQJuayWX7 bwh5iGjKQCCQdzpJ5o/3tBsZCqqE7X2M4e5NIEXVcSJmIh48rvzLoxisR A==; X-IronPort-AV: E=McAfee;i="6600,9927,10872"; a="5660546" X-IronPort-AV: E=Sophos;i="6.03,247,1694761200"; d="scan'208";a="5660546" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by orvoesa101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Oct 2023 04:34:56 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.03,247,1694761200"; d="scan'208";a="6146087" Received: from wasp.igk.intel.com ([10.102.20.192]) by orviesa001.jf.intel.com with ESMTP; 24 Oct 2023 04:33:37 -0700 From: Michal Swiatkowski To: intel-wired-lan@lists.osuosl.org Cc: netdev@vger.kernel.org, piotr.raczynski@intel.com, wojciech.drewek@intel.com, marcin.szycik@intel.com, jacob.e.keller@intel.com, przemyslaw.kitszel@intel.com, jesse.brandeburg@intel.com, Michal Swiatkowski Subject: 
[PATCH iwl-next v1 07/15] ice: remove VF pointer reference in eswitch code Date: Tue, 24 Oct 2023 13:09:21 +0200 Message-ID: <20231024110929.19423-8-michal.swiatkowski@linux.intel.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com> References: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Make eswitch code generic by removing VF pointer reference in functions. It is needed to support eswitch mode for other type of devices. Previously queue id used for Rx was based on VF number. Use ::q_id saved in port representor instead. After adding or removing port representor ::q_id value can change. It isn't good idea to iterate over representors list using this value. Use xa_find starting from the first one instead to get next port representor to remap. The number of port representors has to be equal to ::num_rx/tx_q. Warn if it isn't true. Reviewed-by: Przemek Kitszel Reviewed-by: Wojciech Drewek Reviewed-by: Jacob Keller Signed-off-by: Michal Swiatkowski Tested-by: Sujai Buvaneswaran --- drivers/net/ethernet/intel/ice/ice_eswitch.c | 39 ++++++++++---------- drivers/net/ethernet/intel/ice/ice_eswitch.h | 5 ++- drivers/net/ethernet/intel/ice/ice_repr.c | 1 + drivers/net/ethernet/intel/ice/ice_vf_lib.c | 2 +- 4 files changed, 25 insertions(+), 22 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c index a6b528bc2023..66cbe2c80fea 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch.c +++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c @@ -47,7 +47,8 @@ ice_eswitch_add_sp_rule(struct ice_pf *pf, struct ice_repr *repr) err = ice_add_adv_rule(hw, list, lkups_cnt, &rule_info, &repr->sp_rule); if (err) - dev_err(ice_pf_to_dev(pf), "Unable to add slow-path rule in switchdev mode"); + dev_err(ice_pf_to_dev(pf), "Unable to add slow-path rule for eswitch for PR %d", + repr->id); kfree(list); return err; @@ -142,6 +143,7 @@ static int ice_eswitch_setup_env(struct ice_pf *pf) static void ice_eswitch_remap_rings_to_vectors(struct ice_pf *pf) { struct ice_vsi *vsi = pf->eswitch.control_vsi; + unsigned long repr_id = 0; int q_id; ice_for_each_txq(vsi, q_id) { @@ -149,13 +151,14 @@ static void ice_eswitch_remap_rings_to_vectors(struct ice_pf *pf) struct ice_tx_ring *tx_ring; struct ice_rx_ring *rx_ring; struct ice_repr *repr; - struct ice_vf *vf; - vf = ice_get_vf_by_id(pf, q_id); - if (WARN_ON(!vf)) - continue; + repr = xa_find(&pf->eswitch.reprs, &repr_id, U32_MAX, + XA_PRESENT); + if (WARN_ON(!repr)) + break; - repr = vf->repr; + repr_id += 1; + repr->q_id = q_id; q_vector = repr->q_vector; tx_ring = vsi->tx_rings[q_id]; rx_ring = vsi->rx_rings[q_id]; @@ -178,8 +181,6 @@ static void ice_eswitch_remap_rings_to_vectors(struct ice_pf *pf) rx_ring->q_vector = q_vector; rx_ring->next = NULL; rx_ring->netdev = repr->netdev; - - ice_put_vf(vf); } } @@ -284,20 +285,17 @@ static int ice_eswitch_setup_reprs(struct ice_pf *pf) /** * ice_eswitch_update_repr - reconfigure port representor - * @vsi: VF VSI for which port representor is configured + * @repr: pointer to repr struct + * @vsi: VSI for which port representor is configured */ -void ice_eswitch_update_repr(struct ice_vsi *vsi) +void ice_eswitch_update_repr(struct ice_repr *repr, struct ice_vsi *vsi) { struct ice_pf *pf = vsi->back; - struct ice_repr *repr; - struct ice_vf *vf; 
int ret; if (!ice_is_switchdev_running(pf)) return; - vf = vsi->vf; - repr = vf->repr; repr->src_vsi = vsi; repr->dst->u.port_info.port_id = vsi->vsi_num; @@ -306,9 +304,10 @@ void ice_eswitch_update_repr(struct ice_vsi *vsi) ret = ice_vsi_update_security(vsi, ice_vsi_ctx_clear_antispoof); if (ret) { - ice_fltr_add_mac_and_broadcast(vsi, vf->hw_lan_addr, ICE_FWD_TO_VSI); - dev_err(ice_pf_to_dev(pf), "Failed to update VF %d port representor", - vsi->vf->vf_id); + ice_fltr_add_mac_and_broadcast(vsi, repr->parent_mac, + ICE_FWD_TO_VSI); + dev_err(ice_pf_to_dev(pf), "Failed to update VSI of port representor %d", + repr->id); } } @@ -340,7 +339,7 @@ ice_eswitch_port_start_xmit(struct sk_buff *skb, struct net_device *netdev) skb_dst_drop(skb); dst_hold((struct dst_entry *)repr->dst); skb_dst_set(skb, (struct dst_entry *)repr->dst); - skb->queue_mapping = repr->vf->vf_id; + skb->queue_mapping = repr->q_id; return ice_start_xmit(skb, netdev); } @@ -486,7 +485,7 @@ static int ice_eswitch_enable_switchdev(struct ice_pf *pf) ice_eswitch_remap_rings_to_vectors(pf); if (ice_vsi_open(ctrl_vsi)) - goto err_setup_reprs; + goto err_vsi_open; if (ice_eswitch_br_offloads_init(pf)) goto err_br_offloads; @@ -497,6 +496,8 @@ static int ice_eswitch_enable_switchdev(struct ice_pf *pf) err_br_offloads: ice_vsi_close(ctrl_vsi); +err_vsi_open: + ice_eswitch_release_reprs(pf); err_setup_reprs: ice_repr_rem_from_all_vfs(pf); err_repr_add: diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.h b/drivers/net/ethernet/intel/ice/ice_eswitch.h index b18bf83a2f5b..f43db1cce3ad 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch.h +++ b/drivers/net/ethernet/intel/ice/ice_eswitch.h @@ -17,7 +17,7 @@ ice_eswitch_mode_set(struct devlink *devlink, u16 mode, struct netlink_ext_ack *extack); bool ice_is_eswitch_mode_switchdev(struct ice_pf *pf); -void ice_eswitch_update_repr(struct ice_vsi *vsi); +void ice_eswitch_update_repr(struct ice_repr *repr, struct ice_vsi *vsi); void ice_eswitch_stop_all_tx_queues(struct ice_pf *pf); @@ -34,7 +34,8 @@ static inline void ice_eswitch_set_target_vsi(struct sk_buff *skb, struct ice_tx_offload_params *off) { } -static inline void ice_eswitch_update_repr(struct ice_vsi *vsi) { } +static inline void +ice_eswitch_update_repr(struct ice_repr *repr, struct ice_vsi *vsi) { } static inline int ice_eswitch_configure(struct ice_pf *pf) { diff --git a/drivers/net/ethernet/intel/ice/ice_repr.c b/drivers/net/ethernet/intel/ice/ice_repr.c index e56c59a304ef..77cc77ab826a 100644 --- a/drivers/net/ethernet/intel/ice/ice_repr.c +++ b/drivers/net/ethernet/intel/ice/ice_repr.c @@ -336,6 +336,7 @@ static int ice_repr_add(struct ice_vf *vf) if (err) goto err_netdev; + ether_addr_copy(repr->parent_mac, vf->hw_lan_addr); ice_virtchnl_set_repr_ops(vf); return 0; diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.c b/drivers/net/ethernet/intel/ice/ice_vf_lib.c index aca1f2ea5034..462ee9fdf815 100644 --- a/drivers/net/ethernet/intel/ice/ice_vf_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.c @@ -928,7 +928,7 @@ int ice_reset_vf(struct ice_vf *vf, u32 flags) goto out_unlock; } - ice_eswitch_update_repr(vsi); + ice_eswitch_update_repr(vf->repr, vsi); /* if the VF has been reset allow it to come up again */ ice_mbx_clear_malvf(&vf->mbx_info); From patchwork Tue Oct 24 11:09:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Swiatkowski X-Patchwork-Id: 13434271 X-Patchwork-Delegate: kuba@kernel.org Received: from 
lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4850D262B6 for ; Tue, 24 Oct 2023 11:35:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="nKQcXG89" Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.9]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4A32CD68 for ; Tue, 24 Oct 2023 04:34:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698147299; x=1729683299; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=AP+y12GRoQalT5VERNzTbm35WHbO6P/KzFlLfUGsmYM=; b=nKQcXG89XtAeNBqoF76N91btaEObeCofLNYhs9itKEP0qUcMiSu0Mt6q 0tfvq/dyyWDTOx+R4F+2VG6B3czkC+Qmb3z5giBKGMmW+TErCkGzfHpsx FLxFhcgjmF5ZXwakAWFe0+oIDb3rjuTbZtdTWDtq3fIKpU8PlyPtXxAes TLxzJa1BPzxSmB0hDs5zzrRtfFhFY4zMh8nIpc+hoGlvrf/V6QwtPiVMC As2wv8A2uyQLusDSWljf/l+stCryLi9IuHg/VL6r4GsuYcC6XjhaTfT0t iYWru1rgo4AHQIq00uA/i0ZMuTIucjvlg4nCTrlQzJzKuUIzOqpY4dub2 w==; X-IronPort-AV: E=McAfee;i="6600,9927,10872"; a="5660549" X-IronPort-AV: E=Sophos;i="6.03,247,1694761200"; d="scan'208";a="5660549" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by orvoesa101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Oct 2023 04:34:59 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.03,247,1694761200"; d="scan'208";a="6146112" Received: from wasp.igk.intel.com ([10.102.20.192]) by orviesa001.jf.intel.com with ESMTP; 24 Oct 2023 04:33:39 -0700 From: Michal Swiatkowski To: intel-wired-lan@lists.osuosl.org Cc: netdev@vger.kernel.org, piotr.raczynski@intel.com, wojciech.drewek@intel.com, marcin.szycik@intel.com, jacob.e.keller@intel.com, przemyslaw.kitszel@intel.com, jesse.brandeburg@intel.com, Michal Swiatkowski Subject: [PATCH iwl-next v1 08/15] ice: make representor code generic Date: Tue, 24 Oct 2023 13:09:22 +0200 Message-ID: <20231024110929.19423-9-michal.swiatkowski@linux.intel.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com> References: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Representor code needs to be independent from specific device type, like in this case VF. Make generic add / remove representor function and specific add VF / rem VF function. New device types will follow this scheme. In bridge offload code there is a need to get representor pointer based on VSI. Implement helper function to achieve that. 
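To make the generic/specific split described above concrete, here is a minimal user-space sketch of the layering: a generic repr_add() that only needs a VSI and a MAC address, plus a thin VF wrapper that extracts both from the VF and delegates. The names loosely mirror ice_repr_add()/ice_repr_add_vf() from this patch, but every type below is a simplified stand-in; this is an illustrative model, not driver code.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct vsi { int idx; };
struct vf  { int vf_id; struct vsi vsi; unsigned char mac[6]; };

struct repr {
	int id;
	struct vsi *src_vsi;
	unsigned char parent_mac[6];
};

/* generic part: knows nothing about VFs, only about a VSI and a MAC */
static struct repr *repr_add(struct vsi *src_vsi, const unsigned char *mac)
{
	static int next_id = 1;
	struct repr *r = calloc(1, sizeof(*r));

	if (!r)
		return NULL;
	r->id = next_id++;
	r->src_vsi = src_vsi;
	memcpy(r->parent_mac, mac, sizeof(r->parent_mac));
	return r;
}

/* device-specific wrapper: pulls the VSI and MAC out of the VF and delegates */
static struct repr *repr_add_vf(struct vf *vf)
{
	return repr_add(&vf->vsi, vf->mac);
}

int main(void)
{
	struct vf vf = { .vf_id = 0, .vsi = { .idx = 5 },
			 .mac = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 } };
	struct repr *r = repr_add_vf(&vf);

	if (r)
		printf("repr %d represents VSI %d\n", r->id, r->src_vsi->idx);
	free(r);
	return 0;
}

New device types would only need their own thin wrapper around the generic function, which is the point of the refactor.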
Reviewed-by: Piotr Raczynski Reviewed-by: Wojciech Drewek Signed-off-by: Michal Swiatkowski Tested-by: Sujai Buvaneswaran --- drivers/net/ethernet/intel/ice/ice_eswitch.c | 9 +- drivers/net/ethernet/intel/ice/ice_eswitch.h | 4 +- .../net/ethernet/intel/ice/ice_eswitch_br.c | 10 +- drivers/net/ethernet/intel/ice/ice_lib.c | 10 +- drivers/net/ethernet/intel/ice/ice_repr.c | 184 ++++++++++-------- drivers/net/ethernet/intel/ice/ice_repr.h | 2 + drivers/net/ethernet/intel/ice/ice_vf_lib.c | 2 +- drivers/net/ethernet/intel/ice/ice_vf_lib.h | 2 +- 8 files changed, 131 insertions(+), 92 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c index 66cbe2c80fea..67231e43ffa6 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch.c +++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c @@ -285,17 +285,22 @@ static int ice_eswitch_setup_reprs(struct ice_pf *pf) /** * ice_eswitch_update_repr - reconfigure port representor - * @repr: pointer to repr struct + * @repr_id: representor ID * @vsi: VSI for which port representor is configured */ -void ice_eswitch_update_repr(struct ice_repr *repr, struct ice_vsi *vsi) +void ice_eswitch_update_repr(unsigned long repr_id, struct ice_vsi *vsi) { struct ice_pf *pf = vsi->back; + struct ice_repr *repr; int ret; if (!ice_is_switchdev_running(pf)) return; + repr = xa_load(&pf->eswitch.reprs, repr_id); + if (!repr) + return; + repr->src_vsi = vsi; repr->dst->u.port_info.port_id = vsi->vsi_num; diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.h b/drivers/net/ethernet/intel/ice/ice_eswitch.h index f43db1cce3ad..ff110bd9fc4c 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch.h +++ b/drivers/net/ethernet/intel/ice/ice_eswitch.h @@ -17,7 +17,7 @@ ice_eswitch_mode_set(struct devlink *devlink, u16 mode, struct netlink_ext_ack *extack); bool ice_is_eswitch_mode_switchdev(struct ice_pf *pf); -void ice_eswitch_update_repr(struct ice_repr *repr, struct ice_vsi *vsi); +void ice_eswitch_update_repr(unsigned long repr_id, struct ice_vsi *vsi); void ice_eswitch_stop_all_tx_queues(struct ice_pf *pf); @@ -35,7 +35,7 @@ ice_eswitch_set_target_vsi(struct sk_buff *skb, struct ice_tx_offload_params *off) { } static inline void -ice_eswitch_update_repr(struct ice_repr *repr, struct ice_vsi *vsi) { } +ice_eswitch_update_repr(unsigned long repr_id, struct ice_vsi *vsi) { } static inline int ice_eswitch_configure(struct ice_pf *pf) { diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c index 16bbcaca8fda..ac5beecd028b 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c +++ b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c @@ -893,10 +893,14 @@ ice_eswitch_br_port_deinit(struct ice_esw_br *bridge, ice_eswitch_br_fdb_entry_delete(bridge, fdb_entry); } - if (br_port->type == ICE_ESWITCH_BR_UPLINK_PORT && vsi->back) + if (br_port->type == ICE_ESWITCH_BR_UPLINK_PORT && vsi->back) { vsi->back->br_port = NULL; - else if (vsi->vf && vsi->vf->repr) - vsi->vf->repr->br_port = NULL; + } else { + struct ice_repr *repr = ice_repr_get_by_vsi(vsi); + + if (repr) + repr->br_port = NULL; + } xa_erase(&bridge->ports, br_port->vsi_idx); ice_eswitch_br_port_vlans_flush(br_port); diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 4b1e56396293..ae4b4220e1bb 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -519,16 +519,14 @@ static irqreturn_t 
ice_eswitch_msix_clean_rings(int __always_unused irq, void *d { struct ice_q_vector *q_vector = (struct ice_q_vector *)data; struct ice_pf *pf = q_vector->vsi->back; - struct ice_vf *vf; - unsigned int bkt; + struct ice_repr *repr; + unsigned long id; if (!q_vector->tx.tx_ring && !q_vector->rx.rx_ring) return IRQ_HANDLED; - rcu_read_lock(); - ice_for_each_vf_rcu(pf, bkt, vf) - napi_schedule(&vf->repr->q_vector->napi); - rcu_read_unlock(); + xa_for_each(&pf->eswitch.reprs, id, repr) + napi_schedule(&repr->q_vector->napi); return IRQ_HANDLED; } diff --git a/drivers/net/ethernet/intel/ice/ice_repr.c b/drivers/net/ethernet/intel/ice/ice_repr.c index 77cc77ab826a..fce25472d053 100644 --- a/drivers/net/ethernet/intel/ice/ice_repr.c +++ b/drivers/net/ethernet/intel/ice/ice_repr.c @@ -14,7 +14,7 @@ */ static int ice_repr_get_sw_port_id(struct ice_repr *repr) { - return repr->vf->pf->hw.port_info->lport; + return repr->src_vsi->back->hw.port_info->lport; } /** @@ -35,7 +35,7 @@ ice_repr_get_phys_port_name(struct net_device *netdev, char *buf, size_t len) return -EOPNOTSUPP; res = snprintf(buf, len, "pf%dvfr%d", ice_repr_get_sw_port_id(repr), - repr->vf->vf_id); + repr->id); if (res <= 0) return -EOPNOTSUPP; return 0; @@ -279,24 +279,72 @@ ice_repr_reg_netdev(struct net_device *netdev) } /** - * ice_repr_add - add representor for VF - * @vf: pointer to VF structure + * ice_repr_rem - remove representor from VF + * @reprs: xarray storing representors + * @repr: pointer to representor structure */ -static int ice_repr_add(struct ice_vf *vf) +static void ice_repr_rem(struct xarray *reprs, struct ice_repr *repr) +{ + xa_erase(reprs, repr->id); + kfree(repr->q_vector); + free_netdev(repr->netdev); + kfree(repr); +} + +static void ice_repr_rem_vf(struct ice_vf *vf) +{ + struct ice_repr *repr = xa_load(&vf->pf->eswitch.reprs, vf->repr_id); + + if (!repr) + return; + + unregister_netdev(repr->netdev); + ice_repr_rem(&vf->pf->eswitch.reprs, repr); + ice_devlink_destroy_vf_port(vf); + ice_virtchnl_set_dflt_ops(vf); +} + +/** + * ice_repr_rem_from_all_vfs - remove port representor for all VFs + * @pf: pointer to PF structure + */ +void ice_repr_rem_from_all_vfs(struct ice_pf *pf) +{ + struct devlink *devlink; + struct ice_vf *vf; + unsigned int bkt; + + lockdep_assert_held(&pf->vfs.table_lock); + + ice_for_each_vf(pf, bkt, vf) + ice_repr_rem_vf(vf); + + /* since all port representors are destroyed, there is + * no point in keeping the nodes + */ + devlink = priv_to_devlink(pf); + devl_lock(devlink); + devl_rate_nodes_destroy(devlink); + devl_unlock(devlink); +} + +/** + * ice_repr_add - add representor for generic VSI + * @pf: pointer to PF structure + * @src_vsi: pointer to VSI structure of device to represent + * @parent_mac: device MAC address + */ +static struct ice_repr * +ice_repr_add(struct ice_pf *pf, struct ice_vsi *src_vsi, const u8 *parent_mac) { struct ice_q_vector *q_vector; struct ice_netdev_priv *np; struct ice_repr *repr; - struct ice_vsi *vsi; int err; - vsi = ice_get_vf_vsi(vf); - if (!vsi) - return -EINVAL; - repr = kzalloc(sizeof(*repr), GFP_KERNEL); if (!repr) - return -ENOMEM; + return ERR_PTR(-ENOMEM); repr->netdev = alloc_etherdev(sizeof(struct ice_netdev_priv)); if (!repr->netdev) { @@ -304,10 +352,7 @@ static int ice_repr_add(struct ice_vf *vf) goto err_alloc; } - repr->src_vsi = vsi; - repr->vf = vf; - repr->q_id = vf->vf_id; - vf->repr = repr; + repr->src_vsi = src_vsi; np = netdev_priv(repr->netdev); np->repr = repr; @@ -318,14 +363,47 @@ static int ice_repr_add(struct ice_vf *vf) 
} repr->q_vector = q_vector; - err = xa_alloc(&vf->pf->eswitch.reprs, &repr->id, repr, - xa_limit_32b, GFP_KERNEL); + err = xa_alloc(&pf->eswitch.reprs, &repr->id, repr, + XA_LIMIT(1, INT_MAX), GFP_KERNEL); if (err) goto err_xa_alloc; + repr->q_id = repr->id; + + ether_addr_copy(repr->parent_mac, parent_mac); + + return repr; + +err_xa_alloc: + kfree(repr->q_vector); +err_alloc_q_vector: + free_netdev(repr->netdev); +err_alloc: + kfree(repr); + return ERR_PTR(err); +} + +static int ice_repr_add_vf(struct ice_vf *vf) +{ + struct ice_repr *repr; + struct ice_vsi *vsi; + int err; + + vsi = ice_get_vf_vsi(vf); + if (!vsi) + return -EINVAL; err = ice_devlink_create_vf_port(vf); if (err) - goto err_devlink; + return err; + + repr = ice_repr_add(vf->pf, vsi, vf->hw_lan_addr); + if (IS_ERR(repr)) { + err = PTR_ERR(repr); + goto err_repr_add; + } + + vf->repr_id = repr->id; + repr->vf = vf; repr->netdev->min_mtu = ETH_MIN_MTU; repr->netdev->max_mtu = ICE_MAX_MTU; @@ -336,73 +414,17 @@ static int ice_repr_add(struct ice_vf *vf) if (err) goto err_netdev; - ether_addr_copy(repr->parent_mac, vf->hw_lan_addr); ice_virtchnl_set_repr_ops(vf); return 0; err_netdev: + ice_repr_rem(&vf->pf->eswitch.reprs, repr); +err_repr_add: ice_devlink_destroy_vf_port(vf); -err_devlink: - xa_erase(&vf->pf->eswitch.reprs, repr->id); -err_xa_alloc: - kfree(repr->q_vector); - vf->repr->q_vector = NULL; -err_alloc_q_vector: - free_netdev(repr->netdev); - repr->netdev = NULL; -err_alloc: - kfree(repr); - vf->repr = NULL; return err; } -/** - * ice_repr_rem - remove representor from VF - * @vf: pointer to VF structure - */ -static void ice_repr_rem(struct ice_vf *vf) -{ - struct ice_repr *repr = vf->repr; - - if (!repr) - return; - - kfree(repr->q_vector); - unregister_netdev(repr->netdev); - ice_devlink_destroy_vf_port(vf); - xa_erase(&vf->pf->eswitch.reprs, repr->id); - free_netdev(repr->netdev); - kfree(repr); - vf->repr = NULL; - - ice_virtchnl_set_dflt_ops(vf); -} - -/** - * ice_repr_rem_from_all_vfs - remove port representor for all VFs - * @pf: pointer to PF structure - */ -void ice_repr_rem_from_all_vfs(struct ice_pf *pf) -{ - struct devlink *devlink; - struct ice_vf *vf; - unsigned int bkt; - - lockdep_assert_held(&pf->vfs.table_lock); - - ice_for_each_vf(pf, bkt, vf) - ice_repr_rem(vf); - - /* since all port representors are destroyed, there is - * no point in keeping the nodes - */ - devlink = priv_to_devlink(pf); - devl_lock(devlink); - devl_rate_nodes_destroy(devlink); - devl_unlock(devlink); -} - /** * ice_repr_add_for_all_vfs - add port representor for all VFs * @pf: pointer to PF structure @@ -417,7 +439,7 @@ int ice_repr_add_for_all_vfs(struct ice_pf *pf) lockdep_assert_held(&pf->vfs.table_lock); ice_for_each_vf(pf, bkt, vf) { - err = ice_repr_add(vf); + err = ice_repr_add_vf(vf); if (err) goto err; } @@ -437,6 +459,14 @@ int ice_repr_add_for_all_vfs(struct ice_pf *pf) return err; } +struct ice_repr *ice_repr_get_by_vsi(struct ice_vsi *vsi) +{ + if (!vsi->vf) + return NULL; + + return xa_load(&vsi->back->eswitch.reprs, vsi->vf->repr_id); +} + /** * ice_repr_start_tx_queues - start Tx queues of port representor * @repr: pointer to repr structure diff --git a/drivers/net/ethernet/intel/ice/ice_repr.h b/drivers/net/ethernet/intel/ice/ice_repr.h index 735cb556c620..a3cd256d82b7 100644 --- a/drivers/net/ethernet/intel/ice/ice_repr.h +++ b/drivers/net/ethernet/intel/ice/ice_repr.h @@ -32,4 +32,6 @@ void ice_repr_set_traffic_vsi(struct ice_repr *repr, struct ice_vsi *vsi); struct ice_repr *ice_netdev_to_repr(struct 
net_device *netdev); bool ice_is_port_repr_netdev(const struct net_device *netdev); + +struct ice_repr *ice_repr_get_by_vsi(struct ice_vsi *vsi); #endif diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.c b/drivers/net/ethernet/intel/ice/ice_vf_lib.c index 462ee9fdf815..68f9de0a7a8f 100644 --- a/drivers/net/ethernet/intel/ice/ice_vf_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.c @@ -928,7 +928,7 @@ int ice_reset_vf(struct ice_vf *vf, u32 flags) goto out_unlock; } - ice_eswitch_update_repr(vf->repr, vsi); + ice_eswitch_update_repr(vf->repr_id, vsi); /* if the VF has been reset allow it to come up again */ ice_mbx_clear_malvf(&vf->mbx_info); diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.h b/drivers/net/ethernet/intel/ice/ice_vf_lib.h index 93c774f2f437..35866553f288 100644 --- a/drivers/net/ethernet/intel/ice/ice_vf_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.h @@ -130,7 +130,7 @@ struct ice_vf { struct ice_mdd_vf_events mdd_tx_events; DECLARE_BITMAP(opcodes_allowlist, VIRTCHNL_OP_MAX); - struct ice_repr *repr; + unsigned long repr_id; const struct ice_virtchnl_ops *virtchnl_ops; const struct ice_vf_ops *vf_ops; From patchwork Tue Oct 24 11:09:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Swiatkowski X-Patchwork-Id: 13434272 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D83AE262B8 for ; Tue, 24 Oct 2023 11:35:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="NuYN9AIi" Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.9]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A773DD7A for ; Tue, 24 Oct 2023 04:35:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698147302; x=1729683302; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=9SZ9VIt19nuY75ONe6pBJXwTW3GSNQB+O88Sp7ZJIak=; b=NuYN9AIi5d99XPXqeZn7qjA5y2iCddF8rrpaTTPSz+n5cCd8awMbVExf My1k0pk13aniQbISDHTEvRbVq9fHSeXhnDkMi/k5FjiKsXI1PN8H8BQnI 0yv4MZwtm0NI+6hSH0l3ejJ1hHuZUnq0VZguYykpGapPsCWfbh0+MZx7x WYNK99WPxoIzpKB9AZh5oeE0TTImLAB75YyUfdImY1xB8fw2vONB/ptUW /mvohFY8Ce+BgqyEdLToxLPLTmPDTrd3humyapRkjDrwFXrXYLrJyYVV9 yA+JnEXBoeJUtM/K3fPJTFa9zB8VlfS3tKzUuSahF4YNDKaaiNv7AAe7m w==; X-IronPort-AV: E=McAfee;i="6600,9927,10872"; a="5660551" X-IronPort-AV: E=Sophos;i="6.03,247,1694761200"; d="scan'208";a="5660551" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by orvoesa101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Oct 2023 04:35:01 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.03,247,1694761200"; d="scan'208";a="6146143" Received: from wasp.igk.intel.com ([10.102.20.192]) by orviesa001.jf.intel.com with ESMTP; 24 Oct 2023 04:33:42 -0700 From: Michal Swiatkowski To: intel-wired-lan@lists.osuosl.org Cc: netdev@vger.kernel.org, piotr.raczynski@intel.com, wojciech.drewek@intel.com, marcin.szycik@intel.com, jacob.e.keller@intel.com, przemyslaw.kitszel@intel.com, jesse.brandeburg@intel.com, Michal Swiatkowski Subject: [PATCH iwl-next v1 09/15] ice: return pointer to representor Date: Tue, 24 Oct 2023 13:09:23 +0200 Message-ID: 
<20231024110929.19423-10-michal.swiatkowski@linux.intel.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com> References: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org In follow up patches it will be easier to obtain created port representor pointer instead of the id. Without it the pattern from eswitch side will look like: - create PR - get PR based on the id Reviewed-by: Przemek Kitszel Reviewed-by: Wojciech Drewek Reviewed-by: Jacob Keller Signed-off-by: Michal Swiatkowski Tested-by: Sujai Buvaneswaran Tested-by: Sujai Buvaneswaran --- drivers/net/ethernet/intel/ice/ice_repr.c | 17 ++++++++++------- 1 file changed, 10 insertions(+), 7 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_repr.c b/drivers/net/ethernet/intel/ice/ice_repr.c index fce25472d053..b29a3d010780 100644 --- a/drivers/net/ethernet/intel/ice/ice_repr.c +++ b/drivers/net/ethernet/intel/ice/ice_repr.c @@ -382,7 +382,7 @@ ice_repr_add(struct ice_pf *pf, struct ice_vsi *src_vsi, const u8 *parent_mac) return ERR_PTR(err); } -static int ice_repr_add_vf(struct ice_vf *vf) +static struct ice_repr *ice_repr_add_vf(struct ice_vf *vf) { struct ice_repr *repr; struct ice_vsi *vsi; @@ -390,11 +390,11 @@ static int ice_repr_add_vf(struct ice_vf *vf) vsi = ice_get_vf_vsi(vf); if (!vsi) - return -EINVAL; + return ERR_PTR(-ENOENT); err = ice_devlink_create_vf_port(vf); if (err) - return err; + return ERR_PTR(err); repr = ice_repr_add(vf->pf, vsi, vf->hw_lan_addr); if (IS_ERR(repr)) { @@ -416,13 +416,13 @@ static int ice_repr_add_vf(struct ice_vf *vf) ice_virtchnl_set_repr_ops(vf); - return 0; + return repr; err_netdev: ice_repr_rem(&vf->pf->eswitch.reprs, repr); err_repr_add: ice_devlink_destroy_vf_port(vf); - return err; + return ERR_PTR(err); } /** @@ -432,6 +432,7 @@ static int ice_repr_add_vf(struct ice_vf *vf) int ice_repr_add_for_all_vfs(struct ice_pf *pf) { struct devlink *devlink; + struct ice_repr *repr; struct ice_vf *vf; unsigned int bkt; int err; @@ -439,9 +440,11 @@ int ice_repr_add_for_all_vfs(struct ice_pf *pf) lockdep_assert_held(&pf->vfs.table_lock); ice_for_each_vf(pf, bkt, vf) { - err = ice_repr_add_vf(vf); - if (err) + repr = ice_repr_add_vf(vf); + if (IS_ERR(repr)) { + err = PTR_ERR(repr); goto err; + } } /* only export if ADQ and DCB disabled */ From patchwork Tue Oct 24 11:09:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Swiatkowski X-Patchwork-Id: 13434273 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A67B726E14 for ; Tue, 24 Oct 2023 11:35:05 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="WJ4DGqZQ" Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.9]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 301DA128 for ; Tue, 24 Oct 2023 04:35:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698147304; x=1729683304; h=from:to:cc:subject:date:message-id:in-reply-to: 
references:mime-version:content-transfer-encoding; bh=3bhAqaROnfOK6CfL0o9BQvxvGocbUJeZ7jqOkT9I820=; b=WJ4DGqZQTnbmHqY00dy+wSN+5luofcfwoINyOcbGB0F0nX6u3cusV2YZ aqCDrEyBRibyOCHKDp/S0uy43Y+5pHJPdoTU3iuII3yBQllwMONQCBgm/ k7ElTsHiGrFpQX+dZ3KdDyFSlgmWSz6eeNkVVSOqASBJlSOMs8wfh82vx mL9jjR0UcKng1lCPF0t3Z/ig50jX+Z+SsPjRtCgfSBZVYn1bTeqJCjZ5j eYTSoTYpv6DKfDW31Xf9VFKqqp7lOapo6BjrA2La0zGak3PB4draSexQA a3G2hzOtPi8i5ARUQh5XyEfW50Xtp3EzZ6Ah9JvwoIHHtjQSpiP60WuDv A==; X-IronPort-AV: E=McAfee;i="6600,9927,10872"; a="5660554" X-IronPort-AV: E=Sophos;i="6.03,247,1694761200"; d="scan'208";a="5660554" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by orvoesa101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Oct 2023 04:35:04 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.03,247,1694761200"; d="scan'208";a="6146177" Received: from wasp.igk.intel.com ([10.102.20.192]) by orviesa001.jf.intel.com with ESMTP; 24 Oct 2023 04:33:44 -0700 From: Michal Swiatkowski To: intel-wired-lan@lists.osuosl.org Cc: netdev@vger.kernel.org, piotr.raczynski@intel.com, wojciech.drewek@intel.com, marcin.szycik@intel.com, jacob.e.keller@intel.com, przemyslaw.kitszel@intel.com, jesse.brandeburg@intel.com, Michal Swiatkowski Subject: [PATCH iwl-next v1 10/15] ice: allow changing SWITCHDEV_CTRL VSI queues Date: Tue, 24 Oct 2023 13:09:24 +0200 Message-ID: <20231024110929.19423-11-michal.swiatkowski@linux.intel.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com> References: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Implement mechanism to change number of queues for SWITCHDEV_CTRL VSI type. Value from ::req_txq/rxq will be written to ::alloc_txq/rxq after calling ice_vsi_rebuild(). Reviewed-by: Piotr Raczynski Reviewed-by: Wojciech Drewek Reviewed-by: Jacob Keller Signed-off-by: Michal Swiatkowski Tested-by: Sujai Buvaneswaran --- drivers/net/ethernet/intel/ice/ice_lib.c | 13 ++++++++++--- 1 file changed, 10 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index ae4b4220e1bb..85a8cb28a489 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -212,11 +212,18 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi) vsi->alloc_txq)); break; case ICE_VSI_SWITCHDEV_CTRL: - /* The number of queues for ctrl VSI is equal to number of VFs. + /* The number of queues for ctrl VSI is equal to number of PRs * Each ring is associated to the corresponding VF_PR netdev. 
+ * Tx and Rx rings are always equal */ - vsi->alloc_txq = ice_get_num_vfs(pf); - vsi->alloc_rxq = vsi->alloc_txq; + if (vsi->req_txq && vsi->req_rxq) { + vsi->alloc_txq = vsi->req_txq; + vsi->alloc_rxq = vsi->req_rxq; + } else { + vsi->alloc_txq = 1; + vsi->alloc_rxq = 1; + } + vsi->num_q_vectors = 1; break; case ICE_VSI_VF: From patchwork Tue Oct 24 11:09:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Swiatkowski X-Patchwork-Id: 13434274 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A246A266D1 for ; Tue, 24 Oct 2023 11:35:08 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="UMRbjVlm" Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.9]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7CA6CD68 for ; Tue, 24 Oct 2023 04:35:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698147306; x=1729683306; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=UDza0CtWUshxrTl/FeH1wXYRvzIQIlSrAmdeUD85wSM=; b=UMRbjVlmcpQLewJI1WtOGBlamdMI1w/0n043ZET3y0KuQjwOm6oa+ZkE qmIlg4gBXes+HLk8UKEP90p3RS4B77syLM6Bp8gwa++1FDAuQyfoy35KX 3b/9B+fqLYfEZ2pEekOi7V5wfhmZUEIzM2FSrRe/aIEBJBJcfLzPSEQKw gkV4QwknfxsD3Iti+gsvGHmFUJi1f7zExEsR0eQEHTx5MKQLi3DRgen2W gaDXF3KW1UBgdaE5a69YO+BXEmODRrM0xQ35jGAll1xbdrB+jtiZpsKUu SeRDIYyCJxqgIWbHYx0OWzjv7fV4Lem4MxtHCUBFZHMZ9roR2htuqcLRg A==; X-IronPort-AV: E=McAfee;i="6600,9927,10872"; a="5660558" X-IronPort-AV: E=Sophos;i="6.03,247,1694761200"; d="scan'208";a="5660558" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by orvoesa101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Oct 2023 04:35:06 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.03,247,1694761200"; d="scan'208";a="6146195" Received: from wasp.igk.intel.com ([10.102.20.192]) by orviesa001.jf.intel.com with ESMTP; 24 Oct 2023 04:33:46 -0700 From: Michal Swiatkowski To: intel-wired-lan@lists.osuosl.org Cc: netdev@vger.kernel.org, piotr.raczynski@intel.com, wojciech.drewek@intel.com, marcin.szycik@intel.com, jacob.e.keller@intel.com, przemyslaw.kitszel@intel.com, jesse.brandeburg@intel.com, Michal Swiatkowski Subject: [PATCH iwl-next v1 11/15] ice: set Tx topology every time new repr is added Date: Tue, 24 Oct 2023 13:09:25 +0200 Message-ID: <20231024110929.19423-12-michal.swiatkowski@linux.intel.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com> References: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org It is needed to track correct Tx topology. Update it every time new representor is created or remove node in case of removing corresponding representor. Still clear all node when removing switchdev mode as part of Tx topology isn't related only to representors. Also clear ::rate_note value to prevent skipping this node next time Tx topology is created. 
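As an illustration of the clearing step mentioned above, the sketch below walks a scheduler tree and resets every ::rate_node pointer so that the next Tx topology export does not skip nodes that were exported before. It mirrors the ice_clear_rate_nodes() helper added in the diff that follows, but the node type here is a simplified stand-in and the program is only a model.

#include <stdio.h>

#define MAX_CHILDREN 4

struct sched_node {
	void *rate_node;                  /* previously exported devlink rate node, or NULL */
	int num_children;
	struct sched_node *children[MAX_CHILDREN];
};

/* walk the tree and forget every previously exported rate node */
static void clear_rate_nodes(struct sched_node *node)
{
	node->rate_node = NULL;

	for (int i = 0; i < node->num_children; i++)
		clear_rate_nodes(node->children[i]);
}

int main(void)
{
	int dummy_a, dummy_b;
	struct sched_node leaf = { .rate_node = &dummy_a };
	struct sched_node root = {
		.rate_node = &dummy_b,
		.num_children = 1,
		.children = { &leaf },
	};

	clear_rate_nodes(&root);
	printf("root=%p leaf=%p\n", root.rate_node, leaf.rate_node);
	return 0;
}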
Reviewed-by: Piotr Raczynski Reviewed-by: Wojciech Drewek Reviewed-by: Jacob Keller Signed-off-by: Michal Swiatkowski Tested-by: Sujai Buvaneswaran --- drivers/net/ethernet/intel/ice/ice_devlink.c | 29 ++++++++++++++++++++ drivers/net/ethernet/intel/ice/ice_devlink.h | 1 + drivers/net/ethernet/intel/ice/ice_eswitch.c | 9 ++++++ drivers/net/ethernet/intel/ice/ice_repr.c | 27 +++++++++++++----- 4 files changed, 59 insertions(+), 7 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_devlink.c b/drivers/net/ethernet/intel/ice/ice_devlink.c index 80dc5445b50d..f4e24d11ebd0 100644 --- a/drivers/net/ethernet/intel/ice/ice_devlink.c +++ b/drivers/net/ethernet/intel/ice/ice_devlink.c @@ -810,6 +810,10 @@ static void ice_traverse_tx_tree(struct devlink *devlink, struct ice_sched_node struct ice_vf *vf; int i; + if (node->rate_node) + /* already added, skip to the next */ + goto traverse_children; + if (node->parent == tc_node) { /* create root node */ rate_node = devl_rate_node_create(devlink, node, node->name, NULL); @@ -831,6 +835,7 @@ static void ice_traverse_tx_tree(struct devlink *devlink, struct ice_sched_node if (rate_node && !IS_ERR(rate_node)) node->rate_node = rate_node; +traverse_children: for (i = 0; i < node->num_children; i++) ice_traverse_tx_tree(devlink, node->children[i], tc_node, pf); } @@ -861,6 +866,30 @@ int ice_devlink_rate_init_tx_topology(struct devlink *devlink, struct ice_vsi *v return 0; } +static void ice_clear_rate_nodes(struct ice_sched_node *node) +{ + node->rate_node = NULL; + + for (int i = 0; i < node->num_children; i++) + ice_clear_rate_nodes(node->children[i]); +} + +/** + * ice_devlink_rate_clear_tx_topology - clear node->rate_node + * @vsi: main vsi struct + * + * Clear rate_node to cleanup creation of Tx topology. 
+ * + */ +void ice_devlink_rate_clear_tx_topology(struct ice_vsi *vsi) +{ + struct ice_port_info *pi = vsi->port_info; + + mutex_lock(&pi->sched_lock); + ice_clear_rate_nodes(pi->root->children[0]); + mutex_unlock(&pi->sched_lock); +} + /** * ice_set_object_tx_share - sets node scheduling parameter * @pi: devlink struct instance diff --git a/drivers/net/ethernet/intel/ice/ice_devlink.h b/drivers/net/ethernet/intel/ice/ice_devlink.h index 6ec96779f52e..d291c0e2e17b 100644 --- a/drivers/net/ethernet/intel/ice/ice_devlink.h +++ b/drivers/net/ethernet/intel/ice/ice_devlink.h @@ -20,5 +20,6 @@ void ice_devlink_destroy_regions(struct ice_pf *pf); int ice_devlink_rate_init_tx_topology(struct devlink *devlink, struct ice_vsi *vsi); void ice_tear_down_devlink_rate_tree(struct ice_pf *pf); +void ice_devlink_rate_clear_tx_topology(struct ice_vsi *vsi); #endif /* _ICE_DEVLINK_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c index 67231e43ffa6..db70a62429e3 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch.c +++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c @@ -519,6 +519,7 @@ static int ice_eswitch_enable_switchdev(struct ice_pf *pf) static void ice_eswitch_disable_switchdev(struct ice_pf *pf) { struct ice_vsi *ctrl_vsi = pf->eswitch.control_vsi; + struct devlink *devlink = priv_to_devlink(pf); ice_eswitch_napi_disable(&pf->eswitch.reprs); ice_eswitch_br_offloads_deinit(pf); @@ -526,6 +527,14 @@ static void ice_eswitch_disable_switchdev(struct ice_pf *pf) ice_eswitch_release_reprs(pf); ice_vsi_release(ctrl_vsi); ice_repr_rem_from_all_vfs(pf); + + /* since all port representors are destroyed, there is + * no point in keeping the nodes + */ + ice_devlink_rate_clear_tx_topology(ice_get_main_vsi(pf)); + devl_lock(devlink); + devl_rate_nodes_destroy(devlink); + devl_unlock(devlink); } /** diff --git a/drivers/net/ethernet/intel/ice/ice_repr.c b/drivers/net/ethernet/intel/ice/ice_repr.c index b29a3d010780..fa36cc932c5f 100644 --- a/drivers/net/ethernet/intel/ice/ice_repr.c +++ b/drivers/net/ethernet/intel/ice/ice_repr.c @@ -278,6 +278,13 @@ ice_repr_reg_netdev(struct net_device *netdev) return register_netdev(netdev); } +static void ice_repr_remove_node(struct devlink_port *devlink_port) +{ + devl_lock(devlink_port->devlink); + devl_rate_leaf_destroy(devlink_port); + devl_unlock(devlink_port->devlink); +} + /** * ice_repr_rem - remove representor from VF * @reprs: xarray storing representors @@ -298,6 +305,7 @@ static void ice_repr_rem_vf(struct ice_vf *vf) if (!repr) return; + ice_repr_remove_node(&repr->vf->devlink_port); unregister_netdev(repr->netdev); ice_repr_rem(&vf->pf->eswitch.reprs, repr); ice_devlink_destroy_vf_port(vf); @@ -310,7 +318,6 @@ static void ice_repr_rem_vf(struct ice_vf *vf) */ void ice_repr_rem_from_all_vfs(struct ice_pf *pf) { - struct devlink *devlink; struct ice_vf *vf; unsigned int bkt; @@ -318,14 +325,19 @@ void ice_repr_rem_from_all_vfs(struct ice_pf *pf) ice_for_each_vf(pf, bkt, vf) ice_repr_rem_vf(vf); +} + +static void ice_repr_set_tx_topology(struct ice_pf *pf) +{ + struct devlink *devlink; + + /* only export if ADQ and DCB disabled and eswitch enabled*/ + if (ice_is_adq_active(pf) || ice_is_dcb_active(pf) || + !ice_is_switchdev_running(pf)) + return; - /* since all port representors are destroyed, there is - * no point in keeping the nodes - */ devlink = priv_to_devlink(pf); - devl_lock(devlink); - devl_rate_nodes_destroy(devlink); - devl_unlock(devlink); + ice_devlink_rate_init_tx_topology(devlink, 
ice_get_main_vsi(pf)); } /** @@ -415,6 +427,7 @@ static struct ice_repr *ice_repr_add_vf(struct ice_vf *vf) goto err_netdev; ice_virtchnl_set_repr_ops(vf); + ice_repr_set_tx_topology(vf->pf); return repr; From patchwork Tue Oct 24 11:09:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Swiatkowski X-Patchwork-Id: 13434275 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 146A9266A4 for ; Tue, 24 Oct 2023 11:35:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="T1VAN4L/" Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.9]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E28F4D7A for ; Tue, 24 Oct 2023 04:35:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698147309; x=1729683309; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=OUcJvgJWXQWcfwHNkkMbqspEbu1ht2Z3DPxFpq8iD7c=; b=T1VAN4L/DMAHlBAfmbMQcsZ3Rf6CnV1vEwZGK8ZUeTg9ET5Efx2gRZH6 bZER1bWgfIB5czuyZxMSo181zBzhcaXaoumZTbJa/UmwH9BGrEk4XnFKt rfa2z7MsJ0IsTIQWSaiYyNKQ2/ZWJQWkmmg/VOoG+8VIjVVZsmRelrlKu dR/kSMfzUx9/GZlceJgCvjCVYRoq3leD1Hff6iN/Rdo9v4uZDyKjZWrkC iEqeprOBuc+lMLI/SOjZ0/gbo7Z7zTOJZ0HX+Oj+EOhMfy2JJaFDyRtJr mc80kqJ1xHcDSZ2XRAml4CiIGb0/6gZ11axq8jqItqiRi+4m/lB4c2nDj w==; X-IronPort-AV: E=McAfee;i="6600,9927,10872"; a="5660564" X-IronPort-AV: E=Sophos;i="6.03,247,1694761200"; d="scan'208";a="5660564" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by orvoesa101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Oct 2023 04:35:09 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.03,247,1694761200"; d="scan'208";a="6146223" Received: from wasp.igk.intel.com ([10.102.20.192]) by orviesa001.jf.intel.com with ESMTP; 24 Oct 2023 04:33:49 -0700 From: Michal Swiatkowski To: intel-wired-lan@lists.osuosl.org Cc: netdev@vger.kernel.org, piotr.raczynski@intel.com, wojciech.drewek@intel.com, marcin.szycik@intel.com, jacob.e.keller@intel.com, przemyslaw.kitszel@intel.com, jesse.brandeburg@intel.com, Michal Swiatkowski Subject: [PATCH iwl-next v1 12/15] ice: realloc VSI stats arrays Date: Tue, 24 Oct 2023 13:09:26 +0200 Message-ID: <20231024110929.19423-13-michal.swiatkowski@linux.intel.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com> References: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Previously only case when queues amount is lower was covered. Implement realloc for case when queues amount is higher than previous one. Use krealloc() function and zero new allocated elements. It has to be done before ice_vsi_def_cfg(), because stats element for ring is set there. 
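The reallocation rule described above can be modelled in plain user-space C: when shrinking, free the per-ring entries past the new count; when growing, reallocate the pointer array and zero only the new tail; on allocation failure, keep the old array untouched (the patch itself uses krealloc_array() with __GFP_ZERO for this). The ring_stats type and realloc_stats() helper below are hypothetical stand-ins, not the driver code.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct ring_stats { unsigned long pkts; };

static int realloc_stats(struct ring_stats ***stats, unsigned int prev_q,
			 unsigned int req_q)
{
	struct ring_stats **new_stats;
	unsigned int i;

	/* shrinking: drop per-ring entries that no longer have a ring */
	for (i = req_q; i < prev_q; i++) {
		free((*stats)[i]);
		(*stats)[i] = NULL;
	}

	new_stats = realloc(*stats, req_q * sizeof(*new_stats));
	if (!new_stats)
		return -1;	/* old array stays valid on failure */

	/* growing: zero only the newly added slots */
	if (req_q > prev_q)
		memset(&new_stats[prev_q], 0,
		       (req_q - prev_q) * sizeof(*new_stats));

	*stats = new_stats;
	return 0;
}

int main(void)
{
	struct ring_stats **stats = calloc(2, sizeof(*stats));

	if (!stats)
		return 1;
	stats[0] = calloc(1, sizeof(**stats));
	stats[1] = calloc(1, sizeof(**stats));

	if (realloc_stats(&stats, 2, 4) == 0) {
		/* the grown slots are guaranteed to be NULL */
		printf("new slots: %p %p\n",
		       (void *)stats[2], (void *)stats[3]);
		for (unsigned int i = 0; i < 4; i++)
			free(stats[i]);
	} else {
		free(stats[0]);
		free(stats[1]);
	}
	free(stats);
	return 0;
}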
Reviewed-by: Wojciech Drewek Signed-off-by: Michal Swiatkowski Tested-by: Sujai Buvaneswaran --- drivers/net/ethernet/intel/ice/ice_lib.c | 58 ++++++++++++++++-------- 1 file changed, 39 insertions(+), 19 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 85a8cb28a489..d826b5afa143 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -3076,27 +3076,26 @@ ice_vsi_rebuild_set_coalesce(struct ice_vsi *vsi, } /** - * ice_vsi_realloc_stat_arrays - Frees unused stat structures + * ice_vsi_realloc_stat_arrays - Frees unused stat structures or alloc new ones * @vsi: VSI pointer - * @prev_txq: Number of Tx rings before ring reallocation - * @prev_rxq: Number of Rx rings before ring reallocation */ -static void -ice_vsi_realloc_stat_arrays(struct ice_vsi *vsi, int prev_txq, int prev_rxq) +static int +ice_vsi_realloc_stat_arrays(struct ice_vsi *vsi) { + u16 req_txq = vsi->req_txq ? vsi->req_txq : vsi->alloc_txq; + u16 req_rxq = vsi->req_rxq ? vsi->req_rxq : vsi->alloc_rxq; + struct ice_ring_stats **tx_ring_stats; + struct ice_ring_stats **rx_ring_stats; struct ice_vsi_stats *vsi_stat; struct ice_pf *pf = vsi->back; + u16 prev_txq = vsi->alloc_txq; + u16 prev_rxq = vsi->alloc_rxq; int i; - if (!prev_txq || !prev_rxq) - return; - if (vsi->type == ICE_VSI_CHNL) - return; - vsi_stat = pf->vsi_stats[vsi->idx]; - if (vsi->num_txq < prev_txq) { - for (i = vsi->num_txq; i < prev_txq; i++) { + if (req_txq < prev_txq) { + for (i = req_txq; i < prev_txq; i++) { if (vsi_stat->tx_ring_stats[i]) { kfree_rcu(vsi_stat->tx_ring_stats[i], rcu); WRITE_ONCE(vsi_stat->tx_ring_stats[i], NULL); @@ -3104,14 +3103,36 @@ ice_vsi_realloc_stat_arrays(struct ice_vsi *vsi, int prev_txq, int prev_rxq) } } - if (vsi->num_rxq < prev_rxq) { - for (i = vsi->num_rxq; i < prev_rxq; i++) { + tx_ring_stats = vsi_stat->rx_ring_stats; + vsi_stat->tx_ring_stats = + krealloc_array(vsi_stat->tx_ring_stats, req_txq, + sizeof(*vsi_stat->tx_ring_stats), + GFP_KERNEL | __GFP_ZERO); + if (!vsi_stat->tx_ring_stats) { + vsi_stat->tx_ring_stats = tx_ring_stats; + return -ENOMEM; + } + + if (req_rxq < prev_rxq) { + for (i = req_rxq; i < prev_rxq; i++) { if (vsi_stat->rx_ring_stats[i]) { kfree_rcu(vsi_stat->rx_ring_stats[i], rcu); WRITE_ONCE(vsi_stat->rx_ring_stats[i], NULL); } } } + + rx_ring_stats = vsi_stat->rx_ring_stats; + vsi_stat->rx_ring_stats = + krealloc_array(vsi_stat->rx_ring_stats, req_rxq, + sizeof(*vsi_stat->rx_ring_stats), + GFP_KERNEL | __GFP_ZERO); + if (!vsi_stat->rx_ring_stats) { + vsi_stat->rx_ring_stats = rx_ring_stats; + return -ENOMEM; + } + + return 0; } /** @@ -3128,9 +3149,9 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, u32 vsi_flags) { struct ice_vsi_cfg_params params = {}; struct ice_coalesce_stored *coalesce; - int ret, prev_txq, prev_rxq; int prev_num_q_vectors = 0; struct ice_pf *pf; + int ret; if (!vsi) return -EINVAL; @@ -3149,8 +3170,9 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, u32 vsi_flags) prev_num_q_vectors = ice_vsi_rebuild_get_coalesce(vsi, coalesce); - prev_txq = vsi->num_txq; - prev_rxq = vsi->num_rxq; + ret = ice_vsi_realloc_stat_arrays(vsi); + if (ret) + goto err_vsi_cfg; ice_vsi_decfg(vsi); ret = ice_vsi_cfg_def(vsi, ¶ms); @@ -3168,8 +3190,6 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, u32 vsi_flags) return ice_schedule_reset(pf, ICE_RESET_PFR); } - ice_vsi_realloc_stat_arrays(vsi, prev_txq, prev_rxq); - ice_vsi_rebuild_set_coalesce(vsi, coalesce, prev_num_q_vectors); kfree(coalesce); From 
patchwork Tue Oct 24 11:09:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Swiatkowski X-Patchwork-Id: 13434276 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 653E1266A4 for ; Tue, 24 Oct 2023 11:35:14 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="I8iFJFpS" Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.9]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 91ED8128 for ; Tue, 24 Oct 2023 04:35:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698147311; x=1729683311; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=eCxmgDFuGCrotH7garTUMJPe+dvYbOM9E+IHGurnswA=; b=I8iFJFpStSVz4IL2hn/+aEP8kLeNNzQkUNJsjj3NuGXr1xvIeqbBmT2X jlnuDrDh4O7FN5InRIwv+0oBodFcXFDPNZNnh3XkF0LXbcoKppqkJPpPT 5ScIlXkPsqnrvPfEDwZf4BawmJQ2ScRkL9MqcXt49TQJZy7vVfBmbwyZi mbvIBCr3a5JU7G1exu4dH9QMRLIhhCiiugLl3GGx9F0cYXYccid2hUFBm +nYwzBySfq3pJCliCSsFBsgmLgL0bWxUj8d+1wrYlQOQ4ZoiZKUb9LB+u 9yDAKYFSlSl4i6DQoiEZ5lINs9hV7p+S2puWAVyXyzEHQX+JQ3T2uO+kJ Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10872"; a="5660568" X-IronPort-AV: E=Sophos;i="6.03,247,1694761200"; d="scan'208";a="5660568" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by orvoesa101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Oct 2023 04:35:11 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.03,247,1694761200"; d="scan'208";a="6146255" Received: from wasp.igk.intel.com ([10.102.20.192]) by orviesa001.jf.intel.com with ESMTP; 24 Oct 2023 04:33:51 -0700 From: Michal Swiatkowski To: intel-wired-lan@lists.osuosl.org Cc: netdev@vger.kernel.org, piotr.raczynski@intel.com, wojciech.drewek@intel.com, marcin.szycik@intel.com, jacob.e.keller@intel.com, przemyslaw.kitszel@intel.com, jesse.brandeburg@intel.com, Michal Swiatkowski Subject: [PATCH iwl-next v1 13/15] ice: add VF representors one by one Date: Tue, 24 Oct 2023 13:09:27 +0200 Message-ID: <20231024110929.19423-14-michal.swiatkowski@linux.intel.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com> References: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Implement adding representors one by one. Always set switchdev environment when first representor is being added and clear environment when last one is being removed. Basic switchdev configuration remains the same. Code related to creating and configuring representor was changed. Instead of setting whole representors in one function handle only one representor in setup function. The same with removing representors. Stop representors when new one is being added or removed. Stop means, disabling napi, stopping traffic and removing slow path rule. It is needed because ::q_id will change after remapping, so each representor will need new rule. When representor are stopped rebuild control plane VSI with one more or one less queue. 
One more if new representor is being added, one less if representor is being removed. Bridge port is removed during unregister_netdev() call on PR, so there is no need to call it from driver side. After that do remap new queues to correct vector. At the end start all representors (napi enable, start queues, add slow path rule). Reviewed-by: Piotr Raczynski Reviewed-by: Wojciech Drewek Signed-off-by: Michal Swiatkowski Tested-by: Sujai Buvaneswaran --- drivers/net/ethernet/intel/ice/ice_eswitch.c | 351 +++++++++++-------- drivers/net/ethernet/intel/ice/ice_eswitch.h | 13 +- drivers/net/ethernet/intel/ice/ice_repr.c | 85 +---- drivers/net/ethernet/intel/ice/ice_repr.h | 4 +- drivers/net/ethernet/intel/ice/ice_sriov.c | 17 +- 5 files changed, 228 insertions(+), 242 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c index db70a62429e3..de5744aa5c2a 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch.c +++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c @@ -10,6 +10,24 @@ #include "ice_devlink.h" #include "ice_tc_lib.h" +/** + * ice_eswitch_del_sp_rules - delete adv rules added on PRs + * @pf: pointer to the PF struct + * + * Delete all advanced rules that were used to forward packets with the + * device's VSI index to the corresponding eswitch ctrl VSI queue. + */ +static void ice_eswitch_del_sp_rules(struct ice_pf *pf) +{ + struct ice_repr *repr; + unsigned long id; + + xa_for_each(&pf->eswitch.reprs, id, repr) { + if (repr->sp_rule.rid) + ice_rem_adv_rule_by_id(&pf->hw, &repr->sp_rule); + } +} + /** * ice_eswitch_add_sp_rule - add adv rule with device's VSI index * @pf: pointer to PF struct @@ -18,8 +36,7 @@ * This function adds advanced rule that forwards packets with * device's VSI index to the corresponding eswitch ctrl VSI queue. */ -static int -ice_eswitch_add_sp_rule(struct ice_pf *pf, struct ice_repr *repr) +static int ice_eswitch_add_sp_rule(struct ice_pf *pf, struct ice_repr *repr) { struct ice_vsi *ctrl_vsi = pf->eswitch.control_vsi; struct ice_adv_rule_info rule_info = { 0 }; @@ -54,17 +71,22 @@ ice_eswitch_add_sp_rule(struct ice_pf *pf, struct ice_repr *repr) return err; } -/** - * ice_eswitch_del_sp_rule - delete adv rule with device's VSI index - * @pf: pointer to the PF struct - * @repr: pointer to the repr struct - * - * Delete the advanced rule that was used to forward packets with the device's - * VSI index to the corresponding eswitch ctrl VSI queue. - */ -static void ice_eswitch_del_sp_rule(struct ice_pf *pf, struct ice_repr *repr) +static int +ice_eswitch_add_sp_rules(struct ice_pf *pf) { - ice_rem_adv_rule_by_id(&pf->hw, &repr->sp_rule); + struct ice_repr *repr; + unsigned long id; + int err; + + xa_for_each(&pf->eswitch.reprs, id, repr) { + err = ice_eswitch_add_sp_rule(pf, repr); + if (err) { + ice_eswitch_del_sp_rules(pf); + return err; + } + } + + return 0; } /** @@ -131,7 +153,7 @@ static int ice_eswitch_setup_env(struct ice_pf *pf) /** * ice_eswitch_remap_rings_to_vectors - reconfigure rings of eswitch ctrl VSI - * @pf: pointer to PF struct + * @eswitch: pointer to eswitch struct * * In eswitch number of allocated Tx/Rx rings is equal. * @@ -140,9 +162,9 @@ static int ice_eswitch_setup_env(struct ice_pf *pf) * will have dedicated 1 Tx/Rx ring pair, so number of rings pair is equal to * number of VFs. 
*/ -static void ice_eswitch_remap_rings_to_vectors(struct ice_pf *pf) +static void ice_eswitch_remap_rings_to_vectors(struct ice_eswitch *eswitch) { - struct ice_vsi *vsi = pf->eswitch.control_vsi; + struct ice_vsi *vsi = eswitch->control_vsi; unsigned long repr_id = 0; int q_id; @@ -152,7 +174,7 @@ static void ice_eswitch_remap_rings_to_vectors(struct ice_pf *pf) struct ice_rx_ring *rx_ring; struct ice_repr *repr; - repr = xa_find(&pf->eswitch.reprs, &repr_id, U32_MAX, + repr = xa_find(&eswitch->reprs, &repr_id, U32_MAX, XA_PRESENT); if (WARN_ON(!repr)) break; @@ -185,100 +207,70 @@ static void ice_eswitch_remap_rings_to_vectors(struct ice_pf *pf) } /** - * ice_eswitch_release_reprs - clear PR VSIs configuration + * ice_eswitch_release_repr - clear PR VSI configuration * @pf: poiner to PF struct + * @repr: pointer to PR */ static void -ice_eswitch_release_reprs(struct ice_pf *pf) +ice_eswitch_release_repr(struct ice_pf *pf, struct ice_repr *repr) { - struct ice_repr *repr; - unsigned long id; + struct ice_vsi *vsi = repr->src_vsi; - xa_for_each(&pf->eswitch.reprs, id, repr) { - struct ice_vsi *vsi = repr->src_vsi; - - /* Skip representors that aren't configured */ - if (!repr->dst) - continue; + /* Skip representors that aren't configured */ + if (!repr->dst) + return; - ice_vsi_update_security(vsi, ice_vsi_ctx_set_antispoof); - metadata_dst_free(repr->dst); - repr->dst = NULL; - ice_eswitch_del_sp_rule(pf, repr); - ice_fltr_add_mac_and_broadcast(vsi, repr->parent_mac, - ICE_FWD_TO_VSI); + ice_vsi_update_security(vsi, ice_vsi_ctx_set_antispoof); + metadata_dst_free(repr->dst); + repr->dst = NULL; + ice_fltr_add_mac_and_broadcast(vsi, repr->parent_mac, + ICE_FWD_TO_VSI); - netif_napi_del(&repr->q_vector->napi); - } + netif_napi_del(&repr->q_vector->napi); } /** - * ice_eswitch_setup_reprs - configure port reprs to run in switchdev mode + * ice_eswitch_setup_repr - configure PR to run in switchdev mode * @pf: pointer to PF struct + * @repr: pointer to PR struct */ -static int ice_eswitch_setup_reprs(struct ice_pf *pf) +static int ice_eswitch_setup_repr(struct ice_pf *pf, struct ice_repr *repr) { struct ice_vsi *ctrl_vsi = pf->eswitch.control_vsi; - struct ice_repr *repr; - unsigned long id; - - xa_for_each(&pf->eswitch.reprs, id, repr) { - struct ice_vsi *vsi = repr->src_vsi; - - ice_remove_vsi_fltr(&pf->hw, vsi->idx); - repr->dst = metadata_dst_alloc(0, METADATA_HW_PORT_MUX, - GFP_KERNEL); - if (!repr->dst) { - ice_fltr_add_mac_and_broadcast(vsi, repr->parent_mac, - ICE_FWD_TO_VSI); - goto err; - } + struct ice_vsi *vsi = repr->src_vsi; + struct metadata_dst *dst; - if (ice_eswitch_add_sp_rule(pf, repr)) { - ice_fltr_add_mac_and_broadcast(vsi, repr->parent_mac, - ICE_FWD_TO_VSI); - goto err; - } + ice_remove_vsi_fltr(&pf->hw, vsi->idx); + repr->dst = metadata_dst_alloc(0, METADATA_HW_PORT_MUX, + GFP_KERNEL); + if (!repr->dst) + goto err_add_mac_fltr; - if (ice_vsi_update_security(vsi, ice_vsi_ctx_clear_antispoof)) { - ice_fltr_add_mac_and_broadcast(vsi, repr->parent_mac, - ICE_FWD_TO_VSI); - ice_eswitch_del_sp_rule(pf, repr); - metadata_dst_free(repr->dst); - repr->dst = NULL; - goto err; - } + if (ice_vsi_update_security(vsi, ice_vsi_ctx_clear_antispoof)) + goto err_dst_free; - if (ice_vsi_add_vlan_zero(vsi)) { - ice_fltr_add_mac_and_broadcast(vsi, repr->parent_mac, - ICE_FWD_TO_VSI); - ice_eswitch_del_sp_rule(pf, repr); - metadata_dst_free(repr->dst); - repr->dst = NULL; - ice_vsi_update_security(vsi, ice_vsi_ctx_set_antispoof); - goto err; - } + if (ice_vsi_add_vlan_zero(vsi)) + goto 
err_update_security; - netif_napi_add(repr->netdev, &repr->q_vector->napi, - ice_napi_poll); + netif_napi_add(repr->netdev, &repr->q_vector->napi, + ice_napi_poll); - netif_keep_dst(repr->netdev); - } + netif_keep_dst(repr->netdev); - xa_for_each(&pf->eswitch.reprs, id, repr) { - struct ice_vsi *vsi = repr->src_vsi; - struct metadata_dst *dst; - - dst = repr->dst; - dst->u.port_info.port_id = vsi->vsi_num; - dst->u.port_info.lower_dev = repr->netdev; - ice_repr_set_traffic_vsi(repr, ctrl_vsi); - } + dst = repr->dst; + dst->u.port_info.port_id = vsi->vsi_num; + dst->u.port_info.lower_dev = repr->netdev; + ice_repr_set_traffic_vsi(repr, ctrl_vsi); return 0; -err: - ice_eswitch_release_reprs(pf); +err_update_security: + ice_vsi_update_security(vsi, ice_vsi_ctx_set_antispoof); +err_dst_free: + metadata_dst_free(repr->dst); + repr->dst = NULL; +err_add_mac_fltr: + ice_fltr_add_mac_and_broadcast(vsi, repr->parent_mac, ICE_FWD_TO_VSI); return -ENODEV; } @@ -481,31 +473,14 @@ static int ice_eswitch_enable_switchdev(struct ice_pf *pf) if (ice_eswitch_setup_env(pf)) goto err_vsi; - if (ice_repr_add_for_all_vfs(pf)) - goto err_repr_add; - - if (ice_eswitch_setup_reprs(pf)) - goto err_setup_reprs; - - ice_eswitch_remap_rings_to_vectors(pf); - - if (ice_vsi_open(ctrl_vsi)) - goto err_vsi_open; - if (ice_eswitch_br_offloads_init(pf)) goto err_br_offloads; - ice_eswitch_napi_enable(&pf->eswitch.reprs); + pf->eswitch.is_running = true; return 0; err_br_offloads: - ice_vsi_close(ctrl_vsi); -err_vsi_open: - ice_eswitch_release_reprs(pf); -err_setup_reprs: - ice_repr_rem_from_all_vfs(pf); -err_repr_add: ice_eswitch_release_env(pf); err_vsi: ice_vsi_release(ctrl_vsi); @@ -519,22 +494,12 @@ static int ice_eswitch_enable_switchdev(struct ice_pf *pf) static void ice_eswitch_disable_switchdev(struct ice_pf *pf) { struct ice_vsi *ctrl_vsi = pf->eswitch.control_vsi; - struct devlink *devlink = priv_to_devlink(pf); - ice_eswitch_napi_disable(&pf->eswitch.reprs); ice_eswitch_br_offloads_deinit(pf); ice_eswitch_release_env(pf); - ice_eswitch_release_reprs(pf); ice_vsi_release(ctrl_vsi); - ice_repr_rem_from_all_vfs(pf); - - /* since all port representors are destroyed, there is - * no point in keeping the nodes - */ - ice_devlink_rate_clear_tx_topology(ice_get_main_vsi(pf)); - devl_lock(devlink); - devl_rate_nodes_destroy(devlink); - devl_unlock(devlink); + + pf->eswitch.is_running = false; } /** @@ -613,39 +578,6 @@ bool ice_is_eswitch_mode_switchdev(struct ice_pf *pf) return pf->eswitch_mode == DEVLINK_ESWITCH_MODE_SWITCHDEV; } -/** - * ice_eswitch_release - cleanup eswitch - * @pf: pointer to PF structure - */ -void ice_eswitch_release(struct ice_pf *pf) -{ - if (pf->eswitch_mode == DEVLINK_ESWITCH_MODE_LEGACY) - return; - - ice_eswitch_disable_switchdev(pf); - pf->eswitch.is_running = false; -} - -/** - * ice_eswitch_configure - configure eswitch - * @pf: pointer to PF structure - */ -int ice_eswitch_configure(struct ice_pf *pf) -{ - int status; - - if (pf->eswitch_mode == DEVLINK_ESWITCH_MODE_LEGACY || - pf->eswitch.is_running) - return 0; - - status = ice_eswitch_enable_switchdev(pf); - if (status) - return status; - - pf->eswitch.is_running = true; - return 0; -} - /** * ice_eswitch_start_all_tx_queues - start Tx queues of all port representors * @pf: pointer to PF structure @@ -678,6 +610,20 @@ void ice_eswitch_stop_all_tx_queues(struct ice_pf *pf) ice_repr_stop_tx_queues(repr); } +static void ice_eswitch_stop_reprs(struct ice_pf *pf) +{ + ice_eswitch_del_sp_rules(pf); + ice_eswitch_stop_all_tx_queues(pf); + 
ice_eswitch_napi_disable(&pf->eswitch.reprs); +} + +static void ice_eswitch_start_reprs(struct ice_pf *pf) +{ + ice_eswitch_napi_enable(&pf->eswitch.reprs); + ice_eswitch_start_all_tx_queues(pf); + ice_eswitch_add_sp_rules(pf); +} + /** * ice_eswitch_rebuild - rebuild eswitch * @pf: pointer to PF structure @@ -694,11 +640,7 @@ int ice_eswitch_rebuild(struct ice_pf *pf) if (status) return status; - status = ice_eswitch_setup_reprs(pf); - if (status) - return status; - - ice_eswitch_remap_rings_to_vectors(pf); + ice_eswitch_remap_rings_to_vectors(&pf->eswitch); ice_replay_tc_fltrs(pf); @@ -711,3 +653,102 @@ int ice_eswitch_rebuild(struct ice_pf *pf) return 0; } + +static void +ice_eswitch_cp_change_queues(struct ice_eswitch *eswitch, int change) +{ + struct ice_vsi *cp = eswitch->control_vsi; + + ice_vsi_close(cp); + + cp->req_txq = cp->alloc_txq + change; + cp->req_rxq = cp->alloc_rxq + change; + ice_vsi_rebuild(cp, ICE_VSI_FLAG_NO_INIT); + ice_eswitch_remap_rings_to_vectors(eswitch); + + ice_vsi_open(cp); +} + +int +ice_eswitch_attach(struct ice_pf *pf, struct ice_vf *vf) +{ + struct ice_repr *repr; + int change = 1; + int err; + + if (pf->eswitch_mode == DEVLINK_ESWITCH_MODE_LEGACY) + return 0; + + if (xa_empty(&pf->eswitch.reprs)) { + err = ice_eswitch_enable_switchdev(pf); + if (err) + return err; + /* Control plane VSI is created with 1 queue as default */ + change = 0; + } + + ice_eswitch_stop_reprs(pf); + + repr = ice_repr_add_vf(vf); + if (IS_ERR(repr)) + goto err_create_repr; + + err = ice_eswitch_setup_repr(pf, repr); + if (err) + goto err_setup_repr; + + err = xa_alloc(&pf->eswitch.reprs, &repr->id, repr, + XA_LIMIT(1, INT_MAX), GFP_KERNEL); + if (err) + goto err_xa_alloc; + + vf->repr_id = repr->id; + + ice_eswitch_cp_change_queues(&pf->eswitch, change); + ice_eswitch_start_reprs(pf); + + return 0; + +err_xa_alloc: + ice_eswitch_release_repr(pf, repr); +err_setup_repr: + ice_repr_rem_vf(repr); +err_create_repr: + if (xa_empty(&pf->eswitch.reprs)) + ice_eswitch_disable_switchdev(pf); + ice_eswitch_start_reprs(pf); + + return err; +} + +void ice_eswitch_detach(struct ice_pf *pf, struct ice_vf *vf) +{ + struct ice_repr *repr = xa_load(&pf->eswitch.reprs, vf->repr_id); + struct devlink *devlink = priv_to_devlink(pf); + + if (!repr) + return; + + ice_eswitch_stop_reprs(pf); + xa_erase(&pf->eswitch.reprs, repr->id); + + if (xa_empty(&pf->eswitch.reprs)) + ice_eswitch_disable_switchdev(pf); + else + ice_eswitch_cp_change_queues(&pf->eswitch, -1); + + ice_eswitch_release_repr(pf, repr); + ice_repr_rem_vf(repr); + + if (xa_empty(&pf->eswitch.reprs)) { + /* since all port representors are destroyed, there is + * no point in keeping the nodes + */ + ice_devlink_rate_clear_tx_topology(ice_get_main_vsi(pf)); + devl_lock(devlink); + devl_rate_nodes_destroy(devlink); + devl_unlock(devlink); + } else { + ice_eswitch_start_reprs(pf); + } +} diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.h b/drivers/net/ethernet/intel/ice/ice_eswitch.h index ff110bd9fc4c..59d51c0d14e5 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch.h +++ b/drivers/net/ethernet/intel/ice/ice_eswitch.h @@ -7,8 +7,9 @@ #include #ifdef CONFIG_ICE_SWITCHDEV -void ice_eswitch_release(struct ice_pf *pf); -int ice_eswitch_configure(struct ice_pf *pf); +void ice_eswitch_detach(struct ice_pf *pf, struct ice_vf *vf); +int +ice_eswitch_attach(struct ice_pf *pf, struct ice_vf *vf); int ice_eswitch_rebuild(struct ice_pf *pf); int ice_eswitch_mode_get(struct devlink *devlink, u16 *mode); @@ -26,7 +27,13 @@ void 
ice_eswitch_set_target_vsi(struct sk_buff *skb, netdev_tx_t ice_eswitch_port_start_xmit(struct sk_buff *skb, struct net_device *netdev); #else /* CONFIG_ICE_SWITCHDEV */ -static inline void ice_eswitch_release(struct ice_pf *pf) { } +static inline void ice_eswitch_detach(struct ice_pf *pf, struct ice_vf *vf) { } + +static inline int +ice_eswitch_attach(struct ice_pf *pf, struct ice_vf *vf) +{ + return -EOPNOTSUPP; +} static inline void ice_eswitch_stop_all_tx_queues(struct ice_pf *pf) { } diff --git a/drivers/net/ethernet/intel/ice/ice_repr.c b/drivers/net/ethernet/intel/ice/ice_repr.c index fa36cc932c5f..5f30fb131f74 100644 --- a/drivers/net/ethernet/intel/ice/ice_repr.c +++ b/drivers/net/ethernet/intel/ice/ice_repr.c @@ -287,44 +287,26 @@ static void ice_repr_remove_node(struct devlink_port *devlink_port) /** * ice_repr_rem - remove representor from VF - * @reprs: xarray storing representors * @repr: pointer to representor structure */ -static void ice_repr_rem(struct xarray *reprs, struct ice_repr *repr) +static void ice_repr_rem(struct ice_repr *repr) { - xa_erase(reprs, repr->id); kfree(repr->q_vector); free_netdev(repr->netdev); kfree(repr); } -static void ice_repr_rem_vf(struct ice_vf *vf) -{ - struct ice_repr *repr = xa_load(&vf->pf->eswitch.reprs, vf->repr_id); - - if (!repr) - return; - - ice_repr_remove_node(&repr->vf->devlink_port); - unregister_netdev(repr->netdev); - ice_repr_rem(&vf->pf->eswitch.reprs, repr); - ice_devlink_destroy_vf_port(vf); - ice_virtchnl_set_dflt_ops(vf); -} - /** - * ice_repr_rem_from_all_vfs - remove port representor for all VFs - * @pf: pointer to PF structure + * ice_repr_rem_vf - remove representor from VF + * @repr: pointer to representor structure */ -void ice_repr_rem_from_all_vfs(struct ice_pf *pf) +void ice_repr_rem_vf(struct ice_repr *repr) { - struct ice_vf *vf; - unsigned int bkt; - - lockdep_assert_held(&pf->vfs.table_lock); - - ice_for_each_vf(pf, bkt, vf) - ice_repr_rem_vf(vf); + ice_repr_remove_node(&repr->vf->devlink_port); + unregister_netdev(repr->netdev); + ice_devlink_destroy_vf_port(repr->vf); + ice_virtchnl_set_dflt_ops(repr->vf); + ice_repr_rem(repr); } static void ice_repr_set_tx_topology(struct ice_pf *pf) @@ -374,19 +356,12 @@ ice_repr_add(struct ice_pf *pf, struct ice_vsi *src_vsi, const u8 *parent_mac) goto err_alloc_q_vector; } repr->q_vector = q_vector; - - err = xa_alloc(&pf->eswitch.reprs, &repr->id, repr, - XA_LIMIT(1, INT_MAX), GFP_KERNEL); - if (err) - goto err_xa_alloc; repr->q_id = repr->id; ether_addr_copy(repr->parent_mac, parent_mac); return repr; -err_xa_alloc: - kfree(repr->q_vector); err_alloc_q_vector: free_netdev(repr->netdev); err_alloc: @@ -394,7 +369,7 @@ ice_repr_add(struct ice_pf *pf, struct ice_vsi *src_vsi, const u8 *parent_mac) return ERR_PTR(err); } -static struct ice_repr *ice_repr_add_vf(struct ice_vf *vf) +struct ice_repr *ice_repr_add_vf(struct ice_vf *vf) { struct ice_repr *repr; struct ice_vsi *vsi; @@ -414,7 +389,6 @@ static struct ice_repr *ice_repr_add_vf(struct ice_vf *vf) goto err_repr_add; } - vf->repr_id = repr->id; repr->vf = vf; repr->netdev->min_mtu = ETH_MIN_MTU; @@ -432,49 +406,12 @@ static struct ice_repr *ice_repr_add_vf(struct ice_vf *vf) return repr; err_netdev: - ice_repr_rem(&vf->pf->eswitch.reprs, repr); + ice_repr_rem(repr); err_repr_add: ice_devlink_destroy_vf_port(vf); return ERR_PTR(err); } -/** - * ice_repr_add_for_all_vfs - add port representor for all VFs - * @pf: pointer to PF structure - */ -int ice_repr_add_for_all_vfs(struct ice_pf *pf) -{ - struct devlink 
*devlink; - struct ice_repr *repr; - struct ice_vf *vf; - unsigned int bkt; - int err; - - lockdep_assert_held(&pf->vfs.table_lock); - - ice_for_each_vf(pf, bkt, vf) { - repr = ice_repr_add_vf(vf); - if (IS_ERR(repr)) { - err = PTR_ERR(repr); - goto err; - } - } - - /* only export if ADQ and DCB disabled */ - if (ice_is_adq_active(pf) || ice_is_dcb_active(pf)) - return 0; - - devlink = priv_to_devlink(pf); - ice_devlink_rate_init_tx_topology(devlink, ice_get_main_vsi(pf)); - - return 0; - -err: - ice_repr_rem_from_all_vfs(pf); - - return err; -} - struct ice_repr *ice_repr_get_by_vsi(struct ice_vsi *vsi) { if (!vsi->vf) diff --git a/drivers/net/ethernet/intel/ice/ice_repr.h b/drivers/net/ethernet/intel/ice/ice_repr.h index a3cd256d82b7..f9aede315716 100644 --- a/drivers/net/ethernet/intel/ice/ice_repr.h +++ b/drivers/net/ethernet/intel/ice/ice_repr.h @@ -22,8 +22,8 @@ struct ice_repr { #endif }; -int ice_repr_add_for_all_vfs(struct ice_pf *pf); -void ice_repr_rem_from_all_vfs(struct ice_pf *pf); +struct ice_repr *ice_repr_add_vf(struct ice_vf *vf); +void ice_repr_rem_vf(struct ice_repr *repr); void ice_repr_start_tx_queues(struct ice_repr *repr); void ice_repr_stop_tx_queues(struct ice_repr *repr); diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c index 2a5e6616cc0a..51f5f420d632 100644 --- a/drivers/net/ethernet/intel/ice/ice_sriov.c +++ b/drivers/net/ethernet/intel/ice/ice_sriov.c @@ -174,11 +174,10 @@ void ice_free_vfs(struct ice_pf *pf) mutex_lock(&vfs->table_lock); - ice_eswitch_release(pf); - ice_for_each_vf(pf, bkt, vf) { mutex_lock(&vf->cfg_lock); + ice_eswitch_detach(pf, vf); ice_dis_vf_qs(vf); if (test_bit(ICE_VF_STATE_INIT, vf->vf_states)) { @@ -614,6 +613,14 @@ static int ice_start_vfs(struct ice_pf *pf) goto teardown; } + retval = ice_eswitch_attach(pf, vf); + if (retval) { + dev_err(ice_pf_to_dev(pf), "Failed to attach VF %d to eswitch, error %d", + vf->vf_id, retval); + ice_vf_vsi_release(vf); + goto teardown; + } + set_bit(ICE_VF_STATE_INIT, vf->vf_states); ice_ena_vf_mappings(vf); wr32(hw, VFGEN_RSTAT(vf->vf_id), VIRTCHNL_VFR_VFACTIVE); @@ -932,12 +939,6 @@ static int ice_ena_vfs(struct ice_pf *pf, u16 num_vfs) clear_bit(ICE_VF_DIS, pf->state); - ret = ice_eswitch_configure(pf); - if (ret) { - dev_err(dev, "Failed to configure eswitch, err %d\n", ret); - goto err_unroll_sriov; - } - /* rearm global interrupts */ if (test_and_clear_bit(ICE_OICR_INTR_DIS, pf->state)) ice_irq_dynamic_ena(hw, NULL, NULL); From patchwork Tue Oct 24 11:09:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Swiatkowski X-Patchwork-Id: 13434277 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DFA6D262A6 for ; Tue, 24 Oct 2023 11:35:15 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="HEDlyugo" Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.9]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 34FF1D79 for ; Tue, 24 Oct 2023 04:35:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698147314; x=1729683314; h=from:to:cc:subject:date:message-id:in-reply-to: 
references:mime-version:content-transfer-encoding; bh=TMvyGmG61yrdJNFhvJa2O2xbJbO7TvBcJk5g0EOCods=; b=HEDlyugo6DZmG0Vaz8yFoUgLqPxGqGLNLcfSPykoSVywX9kMmDzD/05R iDESGKxdowS/nW+fkL9ejuh6w3k7H6me5YORfoxx7uTp2HfL4E5lBBr1L U+n9mJqYccoXMvTGTjFgJSKT/mhg78v8qrtFCq9l1Hlx7jGjBsaBaQzQF r8P80QaIMzQONMpiBjRziilx638c9SDsBrZiEMdzARoKTwz06LflBwEtK juda1GH9NT1xXfNaNOMtgBk+zWOqxB8zdklBbnW7E/UHGIvrn+s/8sDKG 54hZTFsE4O/F8Gg0UxGDoo8tB9z1EeS0bsV+pav1dWvGyjW2Hvq6Y32lG g==; X-IronPort-AV: E=McAfee;i="6600,9927,10872"; a="5660574" X-IronPort-AV: E=Sophos;i="6.03,247,1694761200"; d="scan'208";a="5660574" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by orvoesa101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Oct 2023 04:35:14 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.03,247,1694761200"; d="scan'208";a="6146293" Received: from wasp.igk.intel.com ([10.102.20.192]) by orviesa001.jf.intel.com with ESMTP; 24 Oct 2023 04:33:54 -0700 From: Michal Swiatkowski To: intel-wired-lan@lists.osuosl.org Cc: netdev@vger.kernel.org, piotr.raczynski@intel.com, wojciech.drewek@intel.com, marcin.szycik@intel.com, jacob.e.keller@intel.com, przemyslaw.kitszel@intel.com, jesse.brandeburg@intel.com, Michal Swiatkowski Subject: [PATCH iwl-next v1 14/15] ice: adjust switchdev rebuild path Date: Tue, 24 Oct 2023 13:09:28 +0200 Message-ID: <20231024110929.19423-15-michal.swiatkowski@linux.intel.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com> References: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org
There is no need for dedicated functions in the rebuild path. Reuse the existing implementation: remove all representors, which as a result also tears down the switchdev environment. It will be recreated in the device rebuild path; for example, when VFs are added back, their port representors are created along with them. Rebuild the control plane VSI with the ICE_VSI_FLAG_INIT flag set before removing the representors, so that the VSI is reinitialized in hardware after the reset.
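For orientation, the rebuild flow described above reduces to the shape below. This is a condensed, commented sketch of the ice_eswitch_rebuild() added in the diff that follows; it is not a standalone compilable unit and assumes the kernel context and the helpers introduced earlier in this series (ice_eswitch_detach(), the pf->eswitch.reprs xarray).

/* Sketch: rebuild path after this patch. The control plane VSI is
 * reinitialized in hardware first, then every port representor is
 * detached; representors are recreated later, when the VFs themselves
 * are rebuilt and re-attached.
 */
int ice_eswitch_rebuild(struct ice_pf *pf)
{
	struct ice_repr *repr;
	unsigned long id;
	int err;

	if (!ice_is_switchdev_running(pf))
		return 0;	/* legacy mode: nothing to rebuild */

	/* ICE_VSI_FLAG_INIT: reinit the VSI in HW after the reset */
	err = ice_vsi_rebuild(pf->eswitch.control_vsi, ICE_VSI_FLAG_INIT);
	if (err)
		return err;

	/* detaching the last representor also tears down the switchdev
	 * environment inside ice_eswitch_detach()
	 */
	xa_for_each(&pf->eswitch.reprs, id, repr)
		ice_eswitch_detach(pf, repr->vf);

	return 0;
}

The detach loop is what makes a dedicated rebuild helper unnecessary: the same per-VF attach/detach path used at SR-IOV enable/disable time now covers the reset case as well.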
Reviewed-by: Wojciech Drewek Signed-off-by: Michal Swiatkowski Tested-by: Sujai Buvaneswaran --- drivers/net/ethernet/intel/ice/ice_eswitch.c | 66 +++++++------------- drivers/net/ethernet/intel/ice/ice_main.c | 4 +- drivers/net/ethernet/intel/ice/ice_vf_lib.c | 7 +-- 3 files changed, 28 insertions(+), 49 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c index de5744aa5c2a..9ff4fe4fb133 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch.c +++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c @@ -406,19 +406,6 @@ ice_eswitch_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi) return ice_vsi_setup(pf, ¶ms); } -/** - * ice_eswitch_napi_del - remove NAPI handle for all port representors - * @reprs: xarray of reprs - */ -static void ice_eswitch_napi_del(struct xarray *reprs) -{ - struct ice_repr *repr; - unsigned long id; - - xa_for_each(reprs, id, repr) - netif_napi_del(&repr->q_vector->napi); -} - /** * ice_eswitch_napi_enable - enable NAPI for all port representors * @reprs: xarray of reprs @@ -624,36 +611,6 @@ static void ice_eswitch_start_reprs(struct ice_pf *pf) ice_eswitch_add_sp_rules(pf); } -/** - * ice_eswitch_rebuild - rebuild eswitch - * @pf: pointer to PF structure - */ -int ice_eswitch_rebuild(struct ice_pf *pf) -{ - struct ice_vsi *ctrl_vsi = pf->eswitch.control_vsi; - int status; - - ice_eswitch_napi_disable(&pf->eswitch.reprs); - ice_eswitch_napi_del(&pf->eswitch.reprs); - - status = ice_eswitch_setup_env(pf); - if (status) - return status; - - ice_eswitch_remap_rings_to_vectors(&pf->eswitch); - - ice_replay_tc_fltrs(pf); - - status = ice_vsi_open(ctrl_vsi); - if (status) - return status; - - ice_eswitch_napi_enable(&pf->eswitch.reprs); - ice_eswitch_start_all_tx_queues(pf); - - return 0; -} - static void ice_eswitch_cp_change_queues(struct ice_eswitch *eswitch, int change) { @@ -752,3 +709,26 @@ void ice_eswitch_detach(struct ice_pf *pf, struct ice_vf *vf) ice_eswitch_start_reprs(pf); } } + +/** + * ice_eswitch_rebuild - rebuild eswitch + * @pf: pointer to PF structure + */ +int ice_eswitch_rebuild(struct ice_pf *pf) +{ + struct ice_repr *repr; + unsigned long id; + int err; + + if (!ice_is_switchdev_running(pf)) + return 0; + + err = ice_vsi_rebuild(pf->eswitch.control_vsi, ICE_VSI_FLAG_INIT); + if (err) + return err; + + xa_for_each(&pf->eswitch.reprs, id, repr) + ice_eswitch_detach(pf, repr->vf); + + return 0; +} diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index cb0ff015647f..58d2a6267918 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -7412,9 +7412,9 @@ static void ice_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type) ice_ptp_cfg_timestamp(pf, true); } - err = ice_vsi_rebuild_by_type(pf, ICE_VSI_SWITCHDEV_CTRL); + err = ice_eswitch_rebuild(pf); if (err) { - dev_err(dev, "Switchdev CTRL VSI rebuild failed: %d\n", err); + dev_err(dev, "Switchdev rebuild failed: %d\n", err); goto err_vsi_rebuild; } diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.c b/drivers/net/ethernet/intel/ice/ice_vf_lib.c index 68f9de0a7a8f..d2a99a20c4ad 100644 --- a/drivers/net/ethernet/intel/ice/ice_vf_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.c @@ -760,6 +760,7 @@ void ice_reset_all_vfs(struct ice_pf *pf) ice_for_each_vf(pf, bkt, vf) { mutex_lock(&vf->cfg_lock); + ice_eswitch_detach(pf, vf); vf->driver_caps = 0; ice_vc_set_default_allowlist(vf); @@ -775,13 +776,11 @@ void 
ice_reset_all_vfs(struct ice_pf *pf) ice_vf_rebuild_vsi(vf); ice_vf_post_vsi_rebuild(vf); + ice_eswitch_attach(pf, vf); + mutex_unlock(&vf->cfg_lock); } - if (ice_is_eswitch_mode_switchdev(pf)) - if (ice_eswitch_rebuild(pf)) - dev_warn(dev, "eswitch rebuild failed\n"); - ice_flush(hw); clear_bit(ICE_VF_DIS, pf->state);
From patchwork Tue Oct 24 11:09:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Swiatkowski X-Patchwork-Id: 13434278 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5743A262AB for ; Tue, 24 Oct 2023 11:35:18 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="QWTS8td/" Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.9]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A685FAC for ; Tue, 24 Oct 2023 04:35:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698147317; x=1729683317; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=EAMD0jNiRaewglGSb4G3nZUn8vvQClD5emTa18/MaeY=; b=QWTS8td/R9FIqld01zQjRoUYFwXwaDI8eWH3SR82ABgFzfahmJ1OiPaQ hrUhnuaPhd8Fk5WjnPKfyCnCBgsFCKj+PSmdTSx35gNi05jfUixhWRjAk HItA1wfY5RUcRytBkBrUDLZgASF6B18buWB9lxGbGEf7FTz+jgUOg56EV r8sgG1jE/bD5ZX0t/yFBKBtq+grvn2H91nTfQ/VlYrc6Od8I3vqe61JiB Uh8l+rYJ4kDMgv8+fr6H195eTYI+4TQQpJlSr5vpp9PQYfj0YRjvC1h3Q BoD8bAMUY6DNq3ZFqQf3L+6uX5oes9ZKJS0Ld57zbhYj2QqV3UPj1C3B9 g==; X-IronPort-AV: E=McAfee;i="6600,9927,10872"; a="5660579" X-IronPort-AV: E=Sophos;i="6.03,247,1694761200"; d="scan'208";a="5660579" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by orvoesa101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Oct 2023 04:35:16 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.03,247,1694761200"; d="scan'208";a="6146314" Received: from wasp.igk.intel.com ([10.102.20.192]) by orviesa001.jf.intel.com with ESMTP; 24 Oct 2023 04:33:56 -0700 From: Michal Swiatkowski To: intel-wired-lan@lists.osuosl.org Cc: netdev@vger.kernel.org, piotr.raczynski@intel.com, wojciech.drewek@intel.com, marcin.szycik@intel.com, jacob.e.keller@intel.com, przemyslaw.kitszel@intel.com, jesse.brandeburg@intel.com, Michal Swiatkowski Subject: [PATCH iwl-next v1 15/15] ice: reserve number of CP queues Date: Tue, 24 Oct 2023 13:09:29 +0200 Message-ID: <20231024110929.19423-16-michal.swiatkowski@linux.intel.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com> References: <20231024110929.19423-1-michal.swiatkowski@linux.intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org
Rebuilding the control plane (CP) VSI each time a port representor is created drastically increases the time needed to create the maximum number of VFs. Add a function that reserves the required number of CP queues up front to deal with this problem. Use the same function to decrease the number of queues when VFs are removed. Assume that the caller of ice_eswitch_reserve_cp_queues() will also call ice_eswitch_attach()/ice_eswitch_detach() the correct number of times. Adding port representors one by one is still useful for the VF reset routine.
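The resulting calling convention is easiest to see at the SR-IOV call sites. The sketch below is illustrative only: example_enable_vfs() and example_disable_vfs() are hypothetical wrappers with error handling and locking omitted, while the ice_* calls are the ones added or reused by this patch (see the ice_sriov.c hunks below).

/* Hypothetical wrappers condensing the ice_sriov.c changes in this patch:
 * reserve the target queue count once, then attach or detach the
 * representors one by one.
 */
static int example_enable_vfs(struct ice_pf *pf, u16 num_vfs)
{
	struct ice_vf *vf;
	unsigned int bkt;

	/* grow the control plane VSI once, for all VFs being created */
	ice_eswitch_reserve_cp_queues(pf, num_vfs);

	ice_for_each_vf(pf, bkt, vf)
		ice_eswitch_attach(pf, vf);	/* one representor per VF */

	return 0;
}

static void example_disable_vfs(struct ice_pf *pf)
{
	struct ice_vf *vf;
	unsigned int bkt;

	/* shrink by the number of VFs going away */
	ice_eswitch_reserve_cp_queues(pf, -ice_get_num_vfs(pf));

	ice_for_each_vf(pf, bkt, vf)
		ice_eswitch_detach(pf, vf);
}

The VF reset path intentionally keeps the unreserved, one-by-one behaviour: detaching and re-attaching a single representor there only changes the queue count by one, and the CP VSI is rebuilt when needed.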
Reviewed-by: Wojciech Drewek Signed-off-by: Michal Swiatkowski Tested-by: Sujai Buvaneswaran --- drivers/net/ethernet/intel/ice/ice.h | 6 +++ drivers/net/ethernet/intel/ice/ice_eswitch.c | 52 +++++++++++++++++--- drivers/net/ethernet/intel/ice/ice_eswitch.h | 4 ++ drivers/net/ethernet/intel/ice/ice_sriov.c | 3 ++ 4 files changed, 58 insertions(+), 7 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index 597bdb6945c6..cd7dcd0fa7f2 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -528,6 +528,12 @@ struct ice_eswitch { struct ice_esw_br_offloads *br_offloads; struct xarray reprs; bool is_running; + /* struct to allow cp queues management optimization */ + struct { + int to_reach; + int value; + bool is_reaching; + } qs; }; struct ice_agg_node {
diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c index 9ff4fe4fb133..3f80e2081e5d 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch.c +++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c @@ -176,7 +176,7 @@ static void ice_eswitch_remap_rings_to_vectors(struct ice_eswitch *eswitch) repr = xa_find(&eswitch->reprs, &repr_id, U32_MAX, XA_PRESENT); - if (WARN_ON(!repr)) + if (!repr) break; repr_id += 1; @@ -455,6 +455,8 @@ static int ice_eswitch_enable_switchdev(struct ice_pf *pf) return -ENODEV; ctrl_vsi = pf->eswitch.control_vsi; + /* cp VSI is created with 1 queue as default */ + pf->eswitch.qs.value = 1; pf->eswitch.uplink_vsi = uplink_vsi; if (ice_eswitch_setup_env(pf)) @@ -487,6 +489,7 @@ static void ice_eswitch_disable_switchdev(struct ice_pf *pf) ice_vsi_release(ctrl_vsi); pf->eswitch.is_running = false; + pf->eswitch.qs.is_reaching = false; } /** @@ -615,15 +618,33 @@ static void ice_eswitch_cp_change_queues(struct ice_eswitch *eswitch, int change) { struct ice_vsi *cp = eswitch->control_vsi; + int queues = 0; + + if (eswitch->qs.is_reaching) { + if (eswitch->qs.to_reach >= eswitch->qs.value + change) { + queues = eswitch->qs.to_reach; + eswitch->qs.is_reaching = false; + } else { + queues = 0; + } + } else if ((change > 0 && cp->alloc_txq <= eswitch->qs.value) || + change < 0) { + queues = cp->alloc_txq + change; + } - ice_vsi_close(cp); + if (queues) { + cp->req_txq = queues; + cp->req_rxq = queues; + ice_vsi_close(cp); + ice_vsi_rebuild(cp, ICE_VSI_FLAG_NO_INIT); + ice_vsi_open(cp); + } else if (!change) { + /* change == 0 means that VSI wasn't open, open it here */ + ice_vsi_open(cp); + } - cp->req_txq = cp->alloc_txq + change; - cp->req_rxq = cp->alloc_rxq + change; - ice_vsi_rebuild(cp, ICE_VSI_FLAG_NO_INIT); + eswitch->qs.value += change; ice_eswitch_remap_rings_to_vectors(eswitch); - - ice_vsi_open(cp); } int @@ -641,6 +662,7 @@ ice_eswitch_attach(struct ice_pf *pf, struct ice_vf *vf) if (err) return err; /* Control plane VSI is created with 1 queue as default */ + pf->eswitch.qs.to_reach -= 1; change = 0; } @@ -732,3 +754,19 @@ int ice_eswitch_rebuild(struct ice_pf *pf) return 0; } + +/** + * ice_eswitch_reserve_cp_queues - reserve control plane VSI queues + * @pf: pointer to PF structure + * @change: how many more (or fewer) queues are needed + * + * Remember to call ice_eswitch_attach/detach() "change" times.
+ */ +void ice_eswitch_reserve_cp_queues(struct ice_pf *pf, int change) +{ + if (pf->eswitch.qs.value + change < 0) + return; + + pf->eswitch.qs.to_reach = pf->eswitch.qs.value + change; + pf->eswitch.qs.is_reaching = true; +} diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.h b/drivers/net/ethernet/intel/ice/ice_eswitch.h index 59d51c0d14e5..1a288a03a79a 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch.h +++ b/drivers/net/ethernet/intel/ice/ice_eswitch.h @@ -26,6 +26,7 @@ void ice_eswitch_set_target_vsi(struct sk_buff *skb, struct ice_tx_offload_params *off); netdev_tx_t ice_eswitch_port_start_xmit(struct sk_buff *skb, struct net_device *netdev); +void ice_eswitch_reserve_cp_queues(struct ice_pf *pf, int change); #else /* CONFIG_ICE_SWITCHDEV */ static inline void ice_eswitch_detach(struct ice_pf *pf, struct ice_vf *vf) { } @@ -76,5 +77,8 @@ ice_eswitch_port_start_xmit(struct sk_buff *skb, struct net_device *netdev) { return NETDEV_TX_BUSY; } + +static inline void +ice_eswitch_reserve_cp_queues(struct ice_pf *pf, int change) { } #endif /* CONFIG_ICE_SWITCHDEV */ #endif /* _ICE_ESWITCH_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c index 51f5f420d632..5a45bd5ce6ad 100644 --- a/drivers/net/ethernet/intel/ice/ice_sriov.c +++ b/drivers/net/ethernet/intel/ice/ice_sriov.c @@ -172,6 +172,8 @@ void ice_free_vfs(struct ice_pf *pf) else dev_warn(dev, "VFs are assigned - not disabling SR-IOV\n"); + ice_eswitch_reserve_cp_queues(pf, -ice_get_num_vfs(pf)); + mutex_lock(&vfs->table_lock); ice_for_each_vf(pf, bkt, vf) { @@ -930,6 +932,7 @@ static int ice_ena_vfs(struct ice_pf *pf, u16 num_vfs) goto err_unroll_sriov; } + ice_eswitch_reserve_cp_queues(pf, num_vfs); ret = ice_start_vfs(pf); if (ret) { dev_err(dev, "Failed to start %d VFs, err %d\n", num_vfs, ret);
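To make the accounting above concrete, the following self-contained user-space model mirrors the decision logic of ice_eswitch_cp_change_queues() and ice_eswitch_reserve_cp_queues(). It is a simplified illustration, not driver code: the model_* names are invented, the VSI close/rebuild/open sequence is reduced to a counter, and the initial switchdev bring-up corner case (change == 0) as well as ring-to-vector remapping are ignored.

#include <stdbool.h>
#include <stdio.h>

struct model_cp_vsi {
	int alloc_txq;		/* queues currently allocated in "hardware" */
};

struct model_eswitch {
	struct model_cp_vsi cp;
	int to_reach;		/* queue count a pending reservation aims for */
	int value;		/* queues logically in use by representors */
	bool is_reaching;	/* a reservation is pending */
	int rebuilds;		/* number of (expensive) CP VSI rebuilds */
};

/* Mirrors ice_eswitch_reserve_cp_queues(). */
static void model_reserve(struct model_eswitch *esw, int change)
{
	if (esw->value + change < 0)
		return;

	esw->to_reach = esw->value + change;
	esw->is_reaching = true;
}

/* Mirrors the queue-count decision in ice_eswitch_cp_change_queues(). */
static void model_change_queues(struct model_eswitch *esw, int change)
{
	int queues = 0;

	if (esw->is_reaching) {
		/* jump straight to the reserved size on the first change */
		if (esw->to_reach >= esw->value + change) {
			queues = esw->to_reach;
			esw->is_reaching = false;
		}
	} else if ((change > 0 && esw->cp.alloc_txq <= esw->value) ||
		   change < 0) {
		queues = esw->cp.alloc_txq + change;
	}

	if (queues) {
		esw->cp.alloc_txq = queues;	/* stands in for close/rebuild/open */
		esw->rebuilds++;
	}

	esw->value += change;
}

int main(void)
{
	struct model_eswitch esw = { .cp = { .alloc_txq = 1 }, .value = 1 };
	int i, num_vfs = 8;

	model_reserve(&esw, num_vfs);		/* reserve once ... */
	for (i = 0; i < num_vfs; i++)
		model_change_queues(&esw, 1);	/* ... then one attach per VF */

	/* prints queues=9 rebuilds=1: a single rebuild to the reserved size */
	printf("queues=%d rebuilds=%d\n", esw.cp.alloc_txq, esw.rebuilds);
	return 0;
}

Without the model_reserve() call, every model_change_queues(&esw, 1) would grow alloc_txq by one and bump the rebuild counter each time, which is exactly the per-representor rebuild cost this patch avoids.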