From patchwork Mon Oct 9 09:07:08 2023
X-Patchwork-Submitter: Mateusz Polchlopek
X-Patchwork-Id: 13413173
X-Patchwork-Delegate: kuba@kernel.org
From: Mateusz Polchlopek
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, Raj Victor, Michal Wilczynski, Przemek Kitszel, Mateusz Polchlopek
Subject: [Intel-wired-lan] [PATCH iwl-next v3 2/5] ice: Adjust the VSI/Aggregator layers
Date: Mon, 9 Oct 2023 05:07:08 -0400
Message-Id: <20231009090711.136777-3-mateusz.polchlopek@intel.com>
In-Reply-To: <20231009090711.136777-1-mateusz.polchlopek@intel.com>
References: <20231009090711.136777-1-mateusz.polchlopek@intel.com>
X-Mailer: git-send-email 2.38.1
X-Mailing-List: netdev@vger.kernel.org

From: Raj Victor

Adjust the VSI/Aggregator layers based on the number of logical layers
supported by the FW. Currently the VSI and Aggregator layers are fixed,
based on the 9-layer scheduler tree layout.

For performance reasons the number of layers in the scheduler tree is
changing from 9 to 5. This requires a readjustment of the
VSI/Aggregator layer values.
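For reference, a minimal standalone sketch of the resulting layer arithmetic
follows. It is illustrative only (not driver code) and assumes the offset
values defined in ice_sched.h (ICE_QGRP_LAYER_OFFSET 2, ICE_VSI_LAYER_OFFSET 3,
ICE_AGG_LAYER_OFFSET 4) and a sw_entry_point_layer of 1:

#include <stdio.h>

/* Offsets assumed to match ice_sched.h; sw_entry_point_layer assumed to be 1. */
#define ICE_SCHED_5_LAYERS	5
#define ICE_SCHED_9_LAYERS	9
#define ICE_QGRP_LAYER_OFFSET	2
#define ICE_VSI_LAYER_OFFSET	3
#define ICE_AGG_LAYER_OFFSET	4

/* Mirrors the reworked ice_sched_get_vsi_layer() logic */
static unsigned int vsi_layer(unsigned int num_layers, unsigned int sw_entry)
{
	if (num_layers == ICE_SCHED_9_LAYERS)
		return num_layers - ICE_VSI_LAYER_OFFSET;	/* 9 - 3 = 6 */
	if (num_layers == ICE_SCHED_5_LAYERS)
		/* queue group and VSI layers coincide in the 5-layer tree */
		return num_layers - ICE_QGRP_LAYER_OFFSET;	/* 5 - 2 = 3 */
	return sw_entry;
}

/* Mirrors the reworked ice_sched_get_agg_layer() logic */
static unsigned int agg_layer(unsigned int num_layers, unsigned int sw_entry)
{
	if (num_layers == ICE_SCHED_9_LAYERS)
		return num_layers - ICE_AGG_LAYER_OFFSET;	/* 9 - 4 = 5 */
	return sw_entry;	/* no dedicated aggregator layer otherwise */
}

int main(void)
{
	printf("9-layer tree: VSI layer %u, aggregator layer %u\n",
	       vsi_layer(9, 1), agg_layer(9, 1));
	printf("5-layer tree: VSI layer %u, aggregator layer %u\n",
	       vsi_layer(5, 1), agg_layer(5, 1));
	return 0;
}

In the 5-layer tree the VSI and queue-group layers collapse into the same
level, which is why ice_sched_get_free_qparent() below can return the VSI
node directly when qgrp_layer == vsi_layer.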
Signed-off-by: Raj Victor
Co-developed-by: Michal Wilczynski
Signed-off-by: Michal Wilczynski
Reviewed-by: Przemek Kitszel
Signed-off-by: Mateusz Polchlopek
---
 drivers/net/ethernet/intel/ice/ice_sched.c | 37 +++++++++++-----------
 1 file changed, 19 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c
index c0533d7b66b9..5a65cf8a1806 100644
--- a/drivers/net/ethernet/intel/ice/ice_sched.c
+++ b/drivers/net/ethernet/intel/ice/ice_sched.c
@@ -1140,12 +1140,11 @@ u8 ice_sched_get_vsi_layer(struct ice_hw *hw)
 	 * 5 or less sw_entry_point_layer
 	 */
 	/* calculate the VSI layer based on number of layers. */
-	if (hw->num_tx_sched_layers > ICE_VSI_LAYER_OFFSET + 1) {
-		u8 layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
-
-		if (layer > hw->sw_entry_point_layer)
-			return layer;
-	}
+	if (hw->num_tx_sched_layers == ICE_SCHED_9_LAYERS)
+		return hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
+	else if (hw->num_tx_sched_layers == ICE_SCHED_5_LAYERS)
+		/* qgroup and VSI layers are same */
+		return hw->num_tx_sched_layers - ICE_QGRP_LAYER_OFFSET;
 	return hw->sw_entry_point_layer;
 }
 
@@ -1162,13 +1161,10 @@ u8 ice_sched_get_agg_layer(struct ice_hw *hw)
 	 * 7 or less sw_entry_point_layer
 	 */
 	/* calculate the aggregator layer based on number of layers. */
-	if (hw->num_tx_sched_layers > ICE_AGG_LAYER_OFFSET + 1) {
-		u8 layer = hw->num_tx_sched_layers - ICE_AGG_LAYER_OFFSET;
-
-		if (layer > hw->sw_entry_point_layer)
-			return layer;
-	}
-	return hw->sw_entry_point_layer;
+	if (hw->num_tx_sched_layers == ICE_SCHED_9_LAYERS)
+		return hw->num_tx_sched_layers - ICE_AGG_LAYER_OFFSET;
+	else
+		return hw->sw_entry_point_layer;
 }
 
 /**
@@ -1523,10 +1519,11 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
 {
 	struct ice_sched_node *vsi_node, *qgrp_node;
 	struct ice_vsi_ctx *vsi_ctx;
+	u8 qgrp_layer, vsi_layer;
 	u16 max_children;
-	u8 qgrp_layer;
 
 	qgrp_layer = ice_sched_get_qgrp_layer(pi->hw);
+	vsi_layer = ice_sched_get_vsi_layer(pi->hw);
 	max_children = pi->hw->max_children[qgrp_layer];
 
 	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
@@ -1537,6 +1534,12 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
 	if (!vsi_node)
 		return NULL;
 
+	/* If the queue group and VSI layer are same then queues
+	 * are all attached directly to VSI
+	 */
+	if (qgrp_layer == vsi_layer)
+		return vsi_node;
+
 	/* get the first queue group node from VSI sub-tree */
 	qgrp_node = ice_sched_get_first_node(pi, vsi_node, qgrp_layer);
 	while (qgrp_node) {
@@ -3220,7 +3223,7 @@ ice_sched_add_rl_profile(struct ice_port_info *pi,
 	u8 profile_type;
 	int status;
 
-	if (layer_num >= ICE_AQC_TOPO_MAX_LEVEL_NUM)
+	if (!pi || layer_num >= pi->hw->num_tx_sched_layers)
 		return NULL;
 	switch (rl_type) {
 	case ICE_MIN_BW:
@@ -3236,8 +3239,6 @@ ice_sched_add_rl_profile(struct ice_port_info *pi,
 		return NULL;
 	}
 
-	if (!pi)
-		return NULL;
 	hw = pi->hw;
 	list_for_each_entry(rl_prof_elem, &pi->rl_prof_list[layer_num],
 			    list_entry)
@@ -3467,7 +3468,7 @@ ice_sched_rm_rl_profile(struct ice_port_info *pi, u8 layer_num, u8 profile_type,
 	struct ice_aqc_rl_profile_info *rl_prof_elem;
 	int status = 0;
 
-	if (layer_num >= ICE_AQC_TOPO_MAX_LEVEL_NUM)
+	if (layer_num >= pi->hw->num_tx_sched_layers)
 		return -EINVAL;
 	/* Check the existing list for RL profile */
 	list_for_each_entry(rl_prof_elem, &pi->rl_prof_list[layer_num],