From patchwork Wed Aug 16 03:33:49 2023
X-Patchwork-Submitter: Wenjun Wu
X-Patchwork-Id: 13354534
From: Wenjun Wu
To: intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org
Cc: xuejun.zhang@intel.com, madhu.chittim@intel.com,
	qi.z.zhang@intel.com, anthony.l.nguyen@intel.com, Wenjun Wu
Subject: [PATCH iwl-next v3 1/5] virtchnl: support queue rate limit and
 quanta size configuration
Date: Wed, 16 Aug 2023 11:33:49 +0800
Message-Id: <20230816033353.94565-2-wenjun1.wu@intel.com>
In-Reply-To: <20230816033353.94565-1-wenjun1.wu@intel.com>
References: <20230727021021.961119-1-wenjun1.wu@intel.com>
 <20230816033353.94565-1-wenjun1.wu@intel.com>

This patch adds new virtchnl opcodes and structures for rate limit and
quanta size configuration, including:

1. VIRTCHNL_OP_CONFIG_QUEUE_BW, to configure the maximum bandwidth of
   each VF queue.
2. VIRTCHNL_OP_CONFIG_QUANTA, to configure the quanta size per queue.
3. VIRTCHNL_OP_GET_QOS_CAPS, with which the VF queries the current QoS
   configuration, such as the enabled TCs, arbiter type, UP-to-TC mapping
   and VSI node bandwidth. This configuration was previously set by DCB
   and the PF, and represents the potential QoS capability of the VF. The
   VF can use it as a reference when configuring its queue-to-TC mapping.

Signed-off-by: Wenjun Wu
---
 include/linux/avf/virtchnl.h | 119 +++++++++++++++++++++++++++++++++++
 1 file changed, 119 insertions(+)

diff --git a/include/linux/avf/virtchnl.h b/include/linux/avf/virtchnl.h
index d0807ad43f93..0132c002ca06 100644
--- a/include/linux/avf/virtchnl.h
+++ b/include/linux/avf/virtchnl.h
@@ -84,6 +84,9 @@ enum virtchnl_rx_hsplit {
 	VIRTCHNL_RX_HSPLIT_SPLIT_SCTP    = 8,
 };
 
+enum virtchnl_bw_limit_type {
+	VIRTCHNL_BW_SHAPER = 0,
+};
 /* END GENERIC DEFINES */
 
 /* Opcodes for VF-PF communication. These are placed in the v_opcode field
@@ -145,6 +148,11 @@ enum virtchnl_ops {
 	VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2 = 55,
 	VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2 = 56,
 	VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2 = 57,
+	/* opcode 58 - 65 are reserved */
+	VIRTCHNL_OP_GET_QOS_CAPS = 66,
+	/* opcode 68 through 111 are reserved */
+	VIRTCHNL_OP_CONFIG_QUEUE_BW = 112,
+	VIRTCHNL_OP_CONFIG_QUANTA = 113,
 	VIRTCHNL_OP_MAX,
 };
 
@@ -253,6 +261,7 @@ VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vsi_resource);
 #define VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC	BIT(26)
 #define VIRTCHNL_VF_OFFLOAD_ADV_RSS_PF		BIT(27)
 #define VIRTCHNL_VF_OFFLOAD_FDIR_PF		BIT(28)
+#define VIRTCHNL_VF_OFFLOAD_QOS			BIT(29)
 
 #define VF_BASE_MODE_OFFLOADS (VIRTCHNL_VF_OFFLOAD_L2 | \
 			       VIRTCHNL_VF_OFFLOAD_VLAN | \
@@ -1377,6 +1386,85 @@ struct virtchnl_fdir_del {
 
 VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_fdir_del);
 
+struct virtchnl_shaper_bw {
+	/* Unit is Kbps */
+	u32 committed;
+	u32 peak;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_shaper_bw);
+
+/* VIRTCHNL_OP_GET_QOS_CAPS
+ * VF sends this message to get its QoS Caps, such as
+ * TC number, Arbiter and Bandwidth.
+ */
+struct virtchnl_qos_cap_elem {
+	u8 tc_num;
+	u8 tc_prio;
+#define VIRTCHNL_ABITER_STRICT	0
+#define VIRTCHNL_ABITER_ETS	2
+	u8 arbiter;
+#define VIRTCHNL_STRICT_WEIGHT	1
+	u8 weight;
+	enum virtchnl_bw_limit_type type;
+	union {
+		struct virtchnl_shaper_bw shaper;
+		u8 pad2[32];
+	};
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_qos_cap_elem);
+
+struct virtchnl_qos_cap_list {
+	u16 vsi_id;
+	u16 num_elem;
+	struct virtchnl_qos_cap_elem cap[];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(4, virtchnl_qos_cap_list);
+#define virtchnl_qos_cap_list_LEGACY_SIZEOF	44
+
+/* VIRTCHNL_OP_CONFIG_QUEUE_BW */
+struct virtchnl_queue_bw {
+	u16 queue_id;
+	u8 tc;
+	u8 pad;
+	struct virtchnl_shaper_bw shaper;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_bw);
+
+struct virtchnl_queues_bw_cfg {
+	u16 vsi_id;
+	u16 num_queues;
+	struct virtchnl_queue_bw cfg[];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(4, virtchnl_queues_bw_cfg);
+#define virtchnl_queues_bw_cfg_LEGACY_SIZEOF	16
+
+enum virtchnl_queue_type {
+	VIRTCHNL_QUEUE_TYPE_TX		= 0,
+	VIRTCHNL_QUEUE_TYPE_RX		= 1,
+};
+
+/* structure to specify a chunk of contiguous queues */
+struct virtchnl_queue_chunk {
+	/* see enum virtchnl_queue_type */
+	s32 type;
+	u16 start_queue_id;
+	u16 num_queues;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_queue_chunk);
+
+struct virtchnl_quanta_cfg {
+	u16 quanta_size;
+	struct virtchnl_queue_chunk queue_select;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_quanta_cfg);
+
 #define __vss_byone(p, member, count, old)				      \
 	(struct_size(p, member, count) + (old - 1 - struct_size(p, member, 0)))
 
@@ -1399,6 +1487,8 @@ VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_fdir_del);
 	__vss(virtchnl_vlan_filter_list_v2, __vss_byelem, p, m, c),	      \
 	__vss(virtchnl_tc_info, __vss_byelem, p, m, c),			      \
 	__vss(virtchnl_rdma_qvlist_info, __vss_byelem, p, m, c),	      \
+	__vss(virtchnl_qos_cap_list, __vss_byelem, p, m, c),		      \
+	__vss(virtchnl_queues_bw_cfg, __vss_byelem, p, m, c),		      \
 	__vss(virtchnl_rss_key, __vss_byone, p, m, c),			      \
 	__vss(virtchnl_rss_lut, __vss_byone, p, m, c))
 
@@ -1595,6 +1685,35 @@ virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode,
 	case VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2:
 		valid_len = sizeof(struct virtchnl_vlan_setting);
 		break;
+	case VIRTCHNL_OP_GET_QOS_CAPS:
+		break;
+	case VIRTCHNL_OP_CONFIG_QUEUE_BW:
+		valid_len = virtchnl_queues_bw_cfg_LEGACY_SIZEOF;
+		if (msglen >= valid_len) {
+			struct virtchnl_queues_bw_cfg *q_bw =
+				(struct virtchnl_queues_bw_cfg *)msg;
+
+			valid_len = virtchnl_struct_size(q_bw, cfg,
+							 q_bw->num_queues);
+			if (q_bw->num_queues == 0) {
+				err_msg_format = true;
+				break;
+			}
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_QUANTA:
+		valid_len = sizeof(struct virtchnl_quanta_cfg);
+		if (msglen >= valid_len) {
+			struct virtchnl_quanta_cfg *q_quanta =
+				(struct virtchnl_quanta_cfg *)msg;
+
+			if (q_quanta->quanta_size == 0 ||
+			    q_quanta->queue_select.num_queues == 0) {
+				err_msg_format = true;
+				break;
+			}
+		}
+		break;
 	/* These are always errors coming from the VF.
 	 */
 	case VIRTCHNL_OP_EVENT:
 	case VIRTCHNL_OP_UNKNOWN:

From patchwork Wed Aug 16 03:33:50 2023
X-Patchwork-Submitter: Wenjun Wu
X-Patchwork-Id: 13354535
X-Patchwork-Delegate: kuba@kernel.org
From: Wenjun Wu
To:
 intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org
Cc: xuejun.zhang@intel.com, madhu.chittim@intel.com,
	qi.z.zhang@intel.com, anthony.l.nguyen@intel.com, Wenjun Wu
Subject: [PATCH iwl-next v3 2/5] ice: Support VF queue rate limit and quanta
 size configuration
Date: Wed, 16 Aug 2023 11:33:50 +0800
Message-Id: <20230816033353.94565-3-wenjun1.wu@intel.com>
In-Reply-To: <20230816033353.94565-1-wenjun1.wu@intel.com>
References: <20230727021021.961119-1-wenjun1.wu@intel.com>
 <20230816033353.94565-1-wenjun1.wu@intel.com>

Add support for configuring a VF queue rate limit and quanta size.

For quanta size configuration, the quanta profiles are divided evenly
among the PFs, and for each port the first quanta profile is reserved
as the default. When a VF asks to set a queue's quanta size, the PF
searches for an available profile, programs its fields, and assigns
that profile to the queue.
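The per-PF profile split described above works out as follows. This is a
standalone sketch for illustration only, not driver code: the 16-profile
total mirrors GLCOMM_QUANTA_PROF_MAX_INDEX + 1 from this series, and the
`num_funcs` / `logical_pf_id` parameters stand in for the values the
driver reads from device capabilities.

```c
#include <assert.h>

#define QUANTA_PROF_COUNT 16	/* GLCOMM_QUANTA_PROF_MAX_INDEX + 1 */

/* Number of quanta profiles owned by each PF (profiles split evenly). */
static unsigned int quanta_profs_per_pf(unsigned int num_funcs)
{
	return QUANTA_PROF_COUNT / num_funcs;
}

/*
 * First profile index owned by a PF. This index is the PF's reserved
 * default profile; only the remaining profiles in the PF's range may
 * be reprogrammed for VF queue quanta sizes.
 */
static unsigned int quanta_prof_begin(unsigned int logical_pf_id,
				      unsigned int num_funcs)
{
	return logical_pf_id * quanta_profs_per_pf(num_funcs);
}
```

For example, with two active PFs each one owns eight profiles: PF 0 gets
indices 0-7 (0 reserved as its default) and PF 1 gets 8-15 (8 reserved).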
Signed-off-by: Wenjun Wu
---
 drivers/net/ethernet/intel/ice/ice.h          |   2 +
 drivers/net/ethernet/intel/ice/ice_base.c     |   2 +
 drivers/net/ethernet/intel/ice/ice_common.c   |  19 ++
 .../net/ethernet/intel/ice/ice_hw_autogen.h   |   8 +
 drivers/net/ethernet/intel/ice/ice_txrx.h     |   2 +
 drivers/net/ethernet/intel/ice/ice_type.h     |   1 +
 drivers/net/ethernet/intel/ice/ice_vf_lib.h   |   9 +
 drivers/net/ethernet/intel/ice/ice_virtchnl.c | 312 ++++++++++++++++++
 drivers/net/ethernet/intel/ice/ice_virtchnl.h |  11 +
 .../intel/ice/ice_virtchnl_allowlist.c        |   6 +
 10 files changed, 372 insertions(+)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 5d307bacf7c6..a4c9e6523fba 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -644,6 +644,8 @@ struct ice_pf {
 #define ICE_VF_AGG_NODE_ID_START	65
 #define ICE_MAX_VF_AGG_NODES		32
 	struct ice_agg_node vf_agg_node[ICE_MAX_VF_AGG_NODES];
+
+	u8 num_quanta_prof_used;
 };
 
 extern struct workqueue_struct *ice_lag_wq;
diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index 7fa43827a3f0..2b9319801dc3 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -377,6 +377,8 @@ ice_setup_tx_ctx(struct ice_tx_ring *ring, struct ice_tlan_ctx *tlan_ctx, u16 pf
 		break;
 	}
 
+	tlan_ctx->quanta_prof_idx = ring->quanta_prof_id;
+
 	tlan_ctx->tso_ena = ICE_TX_LEGACY;
 	tlan_ctx->tso_qnum = pf_q;
diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 2652e4f5c4a2..7076bc1d85ab 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -2251,6 +2251,23 @@ ice_parse_func_caps(struct ice_hw *hw, struct ice_hw_func_caps *func_p,
 	ice_recalc_port_limited_caps(hw, &func_p->common_cap);
 }
 
+/**
+ * ice_func_id_to_logical_id - map from function id to logical pf id
+ * @active_function_bitmap: active function bitmap
+ * @pf_id: function number of device
+ */
+static int ice_func_id_to_logical_id(u32 active_function_bitmap, u8 pf_id)
+{
+	u8 logical_id = 0;
+	u8 i;
+
+	for (i = 0; i < pf_id; i++)
+		if (active_function_bitmap & BIT(i))
+			logical_id++;
+
+	return logical_id;
+}
+
 /**
  * ice_parse_valid_functions_cap - Parse ICE_AQC_CAPS_VALID_FUNCTIONS caps
  * @hw: pointer to the HW struct
@@ -2268,6 +2285,8 @@ ice_parse_valid_functions_cap(struct ice_hw *hw, struct ice_hw_dev_caps *dev_p,
 	dev_p->num_funcs = hweight32(number);
 	ice_debug(hw, ICE_DBG_INIT, "dev caps: num_funcs = %d\n",
 		  dev_p->num_funcs);
+
+	hw->logical_pf_id = ice_func_id_to_logical_id(number, hw->pf_id);
 }
 
 /**
diff --git a/drivers/net/ethernet/intel/ice/ice_hw_autogen.h b/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
index 6756f3d51d14..9da94e000394 100644
--- a/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
+++ b/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
@@ -6,6 +6,14 @@
 #ifndef _ICE_HW_AUTOGEN_H_
 #define _ICE_HW_AUTOGEN_H_
 
+#define GLCOMM_QUANTA_PROF(_i)			(0x002D2D68 + ((_i) * 4))
+#define GLCOMM_QUANTA_PROF_MAX_INDEX		15
+#define GLCOMM_QUANTA_PROF_QUANTA_SIZE_S	0
+#define GLCOMM_QUANTA_PROF_QUANTA_SIZE_M	ICE_M(0x3FFF, 0)
+#define GLCOMM_QUANTA_PROF_MAX_CMD_S		16
+#define GLCOMM_QUANTA_PROF_MAX_CMD_M		ICE_M(0xFF, 16)
+#define GLCOMM_QUANTA_PROF_MAX_DESC_S		24
+#define GLCOMM_QUANTA_PROF_MAX_DESC_M		ICE_M(0x3F, 24)
 #define QTX_COMM_DBELL(_DBQM)			(0x002C0000 + ((_DBQM) * 4))
 #define QTX_COMM_HEAD(_DBQM)			(0x000E0000 + ((_DBQM) * 4))
 #define QTX_COMM_HEAD_HEAD_S			0
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
index 166413fc33f4..7e152ab5b727 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
@@ -381,6 +381,8 @@ struct ice_tx_ring {
 	u8 flags;
 	u8 dcb_tc;		/* Traffic class of ring */
 	u8 ptp_tx;
+
+	u16 quanta_prof_id;
 } ____cacheline_internodealigned_in_smp;
 
 static inline bool ice_ring_uses_build_skb(struct ice_rx_ring *ring)
diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
index f6061b508857..e9164c866315 100644
--- a/drivers/net/ethernet/intel/ice/ice_type.h
+++ b/drivers/net/ethernet/intel/ice/ice_type.h
@@ -833,6 +833,7 @@ struct ice_hw {
 	u8 revision_id;
 
 	u8 pf_id;		/* device profile info */
+	u8 logical_pf_id;
 
 	enum ice_phy_model phy_model;
 	u16 max_burst_size;	/* driver sets this value */
diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.h b/drivers/net/ethernet/intel/ice/ice_vf_lib.h
index 48fea6fa0362..a6078b583b79 100644
--- a/drivers/net/ethernet/intel/ice/ice_vf_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.h
@@ -52,6 +52,13 @@ struct ice_mdd_vf_events {
 	u16 last_printed;
 };
 
+struct ice_vf_qs_bw {
+	u16 queue_id;
+	u32 committed;
+	u32 peak;
+	u8 tc;
+};
+
 /* VF operations */
 struct ice_vf_ops {
 	enum ice_disq_rst_src reset_type;
@@ -133,6 +140,8 @@ struct ice_vf {
 
 	/* devlink port data */
 	struct devlink_port devlink_port;
+
+	struct ice_vf_qs_bw qs_bw[ICE_MAX_RSS_QS_PER_VF];
 };
 
 /* Flags for controlling behavior of ice_reset_vf */
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
index b03426ac932b..3aec6b5ad3aa 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
@@ -495,6 +495,9 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg)
 	if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_USO)
 		vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_USO;
 
+	if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_QOS)
+		vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_QOS;
+
 	vfres->num_vsis = 1;
 	/* Tx and Rx queue are equal for VF */
 	vfres->num_queue_pairs = vsi->num_txq;
@@ -985,6 +988,170 @@ static int ice_vc_config_rss_lut(struct ice_vf *vf, u8 *msg)
 				     NULL, 0);
 }
 
+/**
+ * ice_vc_get_qos_caps - Get current QoS caps from PF
+ * @vf: pointer to the VF info
+ *
+ * Get VF's QoS capabilities, such as TC number, arbiter and
+ * bandwidth from PF.
+ */
+static int ice_vc_get_qos_caps(struct ice_vf *vf)
+{
+	enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;
+	struct virtchnl_qos_cap_list *cap_list = NULL;
+	u8 tc_prio[ICE_MAX_TRAFFIC_CLASS] = { 0 };
+	struct virtchnl_qos_cap_elem *cfg = NULL;
+	struct ice_vsi_ctx *vsi_ctx;
+	struct ice_pf *pf = vf->pf;
+	struct ice_port_info *pi;
+	struct ice_vsi *vsi;
+	u8 numtc, tc;
+	u16 len = 0;
+	int ret, i;
+
+	if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) {
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto err;
+	}
+
+	vsi = ice_get_vf_vsi(vf);
+	if (!vsi) {
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto err;
+	}
+
+	pi = pf->hw.port_info;
+	numtc = vsi->tc_cfg.numtc;
+
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vf->lan_vsi_idx);
+	if (!vsi_ctx) {
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto err;
+	}
+
+	len = struct_size(cap_list, cap, numtc);
+	cap_list = kzalloc(len, GFP_KERNEL);
+	if (!cap_list) {
+		v_ret = VIRTCHNL_STATUS_ERR_NO_MEMORY;
+		len = 0;
+		goto err;
+	}
+
+	cap_list->vsi_id = vsi->vsi_num;
+	cap_list->num_elem = numtc;
+
+	/* Store the UP2TC configuration from DCB to a user priority bitmap
+	 * of each TC. Each element of tc_prio represents one TC; its bitmap
+	 * indicates which user priorities belong to that TC.
+	 */
+	for (i = 0; i < ICE_MAX_USER_PRIORITY; i++) {
+		tc = pi->qos_cfg.local_dcbx_cfg.etscfg.prio_table[i];
+		tc_prio[tc] |= BIT(i);
+	}
+
+	for (i = 0; i < numtc; i++) {
+		cfg = &cap_list->cap[i];
+		cfg->tc_num = i;
+		cfg->tc_prio = tc_prio[i];
+		cfg->arbiter = pi->qos_cfg.local_dcbx_cfg.etscfg.tsatable[i];
+		cfg->weight = VIRTCHNL_STRICT_WEIGHT;
+		cfg->type = VIRTCHNL_BW_SHAPER;
+		cfg->shaper.committed = vsi_ctx->sched.bw_t_info[i].cir_bw.bw;
+		cfg->shaper.peak = vsi_ctx->sched.bw_t_info[i].eir_bw.bw;
+	}
+
+err:
+	ret = ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_GET_QOS_CAPS, v_ret,
+				    (u8 *)cap_list, len);
+	kfree(cap_list);
+	return ret;
+}
+
+/**
+ * ice_vf_cfg_qs_bw - Configure per queue bandwidth
+ * @vf: pointer to the VF info
+ * @num_queues: number of queues to be configured
+ *
+ * Configure per queue bandwidth.
+ */
+static int ice_vf_cfg_qs_bw(struct ice_vf *vf, u16 num_queues)
+{
+	struct ice_hw *hw = &vf->pf->hw;
+	struct ice_vsi *vsi;
+	u32 p_rate;
+	int ret;
+	u16 i;
+	u8 tc;
+
+	vsi = ice_get_vf_vsi(vf);
+	if (!vsi)
+		return -EINVAL;
+
+	for (i = 0; i < num_queues; i++) {
+		p_rate = vf->qs_bw[i].peak;
+		tc = vf->qs_bw[i].tc;
+		if (p_rate) {
+			ret = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx, tc,
+					       vf->qs_bw[i].queue_id,
+					       ICE_MAX_BW, p_rate);
+		} else {
+			ret = ice_cfg_q_bw_dflt_lmt(hw->port_info, vsi->idx, tc,
+						    vf->qs_bw[i].queue_id,
+						    ICE_MAX_BW);
+		}
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+/**
+ * ice_vf_cfg_q_quanta_profile - Configure quanta profile
+ * @vf: pointer to the VF info
+ * @quanta_prof_idx: pointer to the quanta profile index
+ * @quanta_size: quanta size to be set
+ *
+ * This function chooses an available quanta profile and configures the
+ * register. The quanta profiles are divided evenly among the device's
+ * ports and are then available to that port's PF and VFs. The first
+ * profile of each PF is reserved as its default; only the quanta size
+ * of the remaining, unused profiles can be modified.
+ */
+static int ice_vf_cfg_q_quanta_profile(struct ice_vf *vf, u16 quanta_size,
+				       u16 *quanta_prof_idx)
+{
+	const u16 n_desc = calc_quanta_desc(quanta_size);
+	struct ice_hw *hw = &vf->pf->hw;
+	const u16 n_cmd = 2 * n_desc;
+	struct ice_pf *pf = vf->pf;
+	u16 per_pf, begin_id;
+	u8 n_used;
+	u32 reg;
+
+	per_pf = (GLCOMM_QUANTA_PROF_MAX_INDEX + 1) / hw->dev_caps.num_funcs;
+	begin_id = hw->logical_pf_id * per_pf;
+	n_used = pf->num_quanta_prof_used;
+
+	if (quanta_size == ICE_DFLT_QUANTA) {
+		*quanta_prof_idx = begin_id;
+	} else {
+		if (n_used < per_pf) {
+			*quanta_prof_idx = begin_id + 1 + n_used;
+			pf->num_quanta_prof_used++;
+		} else {
+			return -EINVAL;
+		}
+	}
+
+	reg = FIELD_PREP(GLCOMM_QUANTA_PROF_QUANTA_SIZE_M, quanta_size) |
+	      FIELD_PREP(GLCOMM_QUANTA_PROF_MAX_CMD_M, n_cmd) |
+	      FIELD_PREP(GLCOMM_QUANTA_PROF_MAX_DESC_M, n_desc);
+	wr32(hw, GLCOMM_QUANTA_PROF(*quanta_prof_idx), reg);
+
+	return 0;
+}
+
 /**
  * ice_vc_cfg_promiscuous_mode_msg
  * @vf: pointer to the VF info
@@ -1587,6 +1754,136 @@ static int ice_vc_cfg_irq_map_msg(struct ice_vf *vf, u8 *msg)
 				     NULL, 0);
 }
 
+/**
+ * ice_vc_cfg_q_bw - Configure per queue bandwidth
+ * @vf: pointer to the VF info
+ * @msg: pointer to the msg buffer which holds the command descriptor
+ *
+ * Configure VF queues bandwidth.
+ */
+static int ice_vc_cfg_q_bw(struct ice_vf *vf, u8 *msg)
+{
+	enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;
+	struct virtchnl_queues_bw_cfg *qbw =
+		(struct virtchnl_queues_bw_cfg *)msg;
+	struct ice_vf_qs_bw *qs_bw;
+	struct ice_vsi *vsi;
+	size_t len;
+	u16 i;
+
+	if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states) ||
+	    !ice_vc_isvalid_vsi_id(vf, qbw->vsi_id)) {
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto err;
+	}
+
+	vsi = ice_get_vf_vsi(vf);
+	if (!vsi || vsi->vsi_num != qbw->vsi_id) {
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto err;
+	}
+
+	if (qbw->num_queues > ICE_MAX_RSS_QS_PER_VF ||
+	    qbw->num_queues > min_t(u16, vsi->alloc_txq, vsi->alloc_rxq)) {
+		dev_err(ice_pf_to_dev(vf->pf), "VF-%d trying to configure more than allocated number of queues: %d\n",
+			vf->vf_id, min_t(u16, vsi->alloc_txq, vsi->alloc_rxq));
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto err;
+	}
+
+	len = sizeof(struct ice_vf_qs_bw) * qbw->num_queues;
+	qs_bw = kzalloc(len, GFP_KERNEL);
+	if (!qs_bw) {
+		v_ret = VIRTCHNL_STATUS_ERR_NO_MEMORY;
+		goto err_bw;
+	}
+
+	for (i = 0; i < qbw->num_queues; i++) {
+		qs_bw[i].queue_id = qbw->cfg[i].queue_id;
+		qs_bw[i].peak = qbw->cfg[i].shaper.peak;
+		qs_bw[i].committed = qbw->cfg[i].shaper.committed;
+		qs_bw[i].tc = qbw->cfg[i].tc;
+	}
+
+	memcpy(vf->qs_bw, qs_bw, len);
+
+err_bw:
+	kfree(qs_bw);
+
+err:
+	/* send the response to the VF */
+	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_QUEUE_BW,
+				     v_ret, NULL, 0);
+}
+
+/**
+ * ice_vc_cfg_q_quanta - Configure per queue quanta
+ * @vf: pointer to the VF info
+ * @msg: pointer to the msg buffer which holds the command descriptor
+ *
+ * Configure VF queues quanta.
+ */
+static int ice_vc_cfg_q_quanta(struct ice_vf *vf, u8 *msg)
+{
+	u16 quanta_prof_id, quanta_size, start_qid, num_queues, end_qid, i;
+	enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;
+	struct virtchnl_quanta_cfg *qquanta =
+		(struct virtchnl_quanta_cfg *)msg;
+	struct ice_vsi *vsi;
+	int ret;
+
+	start_qid = qquanta->queue_select.start_queue_id;
+	num_queues = qquanta->queue_select.num_queues;
+	quanta_size = qquanta->quanta_size;
+	end_qid = start_qid + num_queues;
+
+	if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) {
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto err;
+	}
+
+	vsi = ice_get_vf_vsi(vf);
+	if (!vsi) {
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto err;
+	}
+
+	if (end_qid > ICE_MAX_RSS_QS_PER_VF ||
+	    end_qid > min_t(u16, vsi->alloc_txq, vsi->alloc_rxq)) {
+		dev_err(ice_pf_to_dev(vf->pf), "VF-%d trying to configure more than allocated number of queues: %d\n",
+			vf->vf_id, min_t(u16, vsi->alloc_txq, vsi->alloc_rxq));
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto err;
+	}
+
+	if (quanta_size > ICE_MAX_QUANTA_SIZE ||
+	    quanta_size < ICE_MIN_QUANTA_SIZE) {
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto err;
+	}
+
+	if (quanta_size % 64) {
+		dev_err(ice_pf_to_dev(vf->pf), "quanta size should be a multiple of 64\n");
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto err;
+	}
+
+	ret = ice_vf_cfg_q_quanta_profile(vf, quanta_size,
+					  &quanta_prof_id);
+	if (ret) {
+		v_ret = VIRTCHNL_STATUS_ERR_NOT_SUPPORTED;
+		goto err;
+	}
+
+	for (i = start_qid; i < end_qid; i++)
+		vsi->tx_rings[i]->quanta_prof_id = quanta_prof_id;
+
+err:
+	/* send the response to the VF */
+	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_QUANTA,
+				     v_ret, NULL, 0);
+}
+
 /**
  * ice_vc_cfg_qs_msg
  * @vf: pointer to the VF info
@@ -1710,6 +2007,9 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
 		}
 	}
 
+	if (ice_vf_cfg_qs_bw(vf, qci->num_queue_pairs))
+		goto error_param;
+
 	/* send the response to the VF */
 	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_VSI_QUEUES,
VIRTCHNL_STATUS_SUCCESS, NULL, 0); @@ -3687,6 +3987,9 @@ static const struct ice_virtchnl_ops ice_virtchnl_dflt_ops = { .dis_vlan_stripping_v2_msg = ice_vc_dis_vlan_stripping_v2_msg, .ena_vlan_insertion_v2_msg = ice_vc_ena_vlan_insertion_v2_msg, .dis_vlan_insertion_v2_msg = ice_vc_dis_vlan_insertion_v2_msg, + .get_qos_caps = ice_vc_get_qos_caps, + .cfg_q_bw = ice_vc_cfg_q_bw, + .cfg_q_quanta = ice_vc_cfg_q_quanta, }; /** @@ -4039,6 +4342,15 @@ void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event, case VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2: err = ops->dis_vlan_insertion_v2_msg(vf, msg); break; + case VIRTCHNL_OP_GET_QOS_CAPS: + err = ops->get_qos_caps(vf); + break; + case VIRTCHNL_OP_CONFIG_QUEUE_BW: + err = ops->cfg_q_bw(vf, msg); + break; + case VIRTCHNL_OP_CONFIG_QUANTA: + err = ops->cfg_q_quanta(vf, msg); + break; case VIRTCHNL_OP_UNKNOWN: default: dev_err(dev, "Unsupported opcode %d from VF %d\n", v_opcode, diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.h b/drivers/net/ethernet/intel/ice/ice_virtchnl.h index cd747718de73..0efb9c0f669a 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl.h +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.h @@ -13,6 +13,13 @@ /* Restrict number of MAC Addr and VLAN that non-trusted VF can programmed */ #define ICE_MAX_VLAN_PER_VF 8 +#define ICE_DFLT_QUANTA 1024 +#define ICE_MAX_QUANTA_SIZE 4096 +#define ICE_MIN_QUANTA_SIZE 256 + +#define calc_quanta_desc(x) \ + max_t(u16, 12, min_t(u16, 63, (((x) + 66) / 132) * 2 + 4)) + /* MAC filters: 1 is reserved for the VF's default/perm_addr/LAA MAC, 1 for * broadcast, and 16 for additional unicast/multicast filters */ @@ -51,6 +58,10 @@ struct ice_virtchnl_ops { int (*dis_vlan_stripping_v2_msg)(struct ice_vf *vf, u8 *msg); int (*ena_vlan_insertion_v2_msg)(struct ice_vf *vf, u8 *msg); int (*dis_vlan_insertion_v2_msg)(struct ice_vf *vf, u8 *msg); + int (*get_qos_caps)(struct ice_vf *vf); + int (*cfg_q_tc_map)(struct ice_vf *vf, u8 *msg); + int 
(*cfg_q_bw)(struct ice_vf *vf, u8 *msg); + int (*cfg_q_quanta)(struct ice_vf *vf, u8 *msg); }; #ifdef CONFIG_PCI_IOV diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c index 7d547fa616fa..2e3f63a429cd 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c @@ -85,6 +85,11 @@ static const u32 fdir_pf_allowlist_opcodes[] = { VIRTCHNL_OP_ADD_FDIR_FILTER, VIRTCHNL_OP_DEL_FDIR_FILTER, }; +static const u32 tc_allowlist_opcodes[] = { + VIRTCHNL_OP_GET_QOS_CAPS, VIRTCHNL_OP_CONFIG_QUEUE_BW, + VIRTCHNL_OP_CONFIG_QUANTA, +}; + struct allowlist_opcode_info { const u32 *opcodes; size_t size; @@ -105,6 +110,7 @@ static const struct allowlist_opcode_info allowlist_opcodes[] = { ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_ADV_RSS_PF, adv_rss_pf_allowlist_opcodes), ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_FDIR_PF, fdir_pf_allowlist_opcodes), ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_VLAN_V2, vlan_v2_allowlist_opcodes), + ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_QOS, tc_allowlist_opcodes), }; /** From patchwork Wed Aug 16 03:33:51 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Wenjun Wu X-Patchwork-Id: 13354536 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A46805CBC for ; Wed, 16 Aug 2023 03:29:41 +0000 (UTC) Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 127462D47 for ; Tue, 15 Aug 2023 20:29:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1692156580; x=1723692580; h=from:to:cc:subject:date:message-id:in-reply-to: 
references:mime-version:content-transfer-encoding; bh=nrUfx1hcay1fyAatSeoZP2JhVHQ3JzJ49sL2fgGYUIw=; b=J3b/ryj+CEYt0s//4Rl9ALca3DQPo7XGkEnZPsy9fifaLmB/Y+9Kbbgc jd5wbWrqqx4BNIktE1HLHakMJkzAssG23AtTDElyb9E9QZevIEyWFf9wc pto4YqQHYnDR71U7zUbDysezB14uuQuGSYHkCumwIuJpwVzO0OrBBJQVm NvHrDrr+rZVsQTrLcc70yswaMjfIj//nqJmoskVEReyyusw0Y5XSSXWRQ VY/L3CLFA1KwF16XFvXUkecpMHqgc5PQnMppa8ivtWSy5lHIGf3TmRPzD xDuQsKJt5Gt7P/6WXCpjlbWOC3jxyPbLWNjzQQeowB2kONxNfJlAV747o g==; X-IronPort-AV: E=McAfee;i="6600,9927,10803"; a="372427775" X-IronPort-AV: E=Sophos;i="6.01,175,1684825200"; d="scan'208";a="372427775" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Aug 2023 20:29:39 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10803"; a="710958645" X-IronPort-AV: E=Sophos;i="6.01,175,1684825200"; d="scan'208";a="710958645" Received: from dpdk-wuwenjun-icelake-ii.sh.intel.com ([10.67.110.152]) by orsmga006.jf.intel.com with ESMTP; 15 Aug 2023 20:29:37 -0700 From: Wenjun Wu To: intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org Cc: xuejun.zhang@intel.com, madhu.chittim@intel.com, qi.z.zhang@intel.com, anthony.l.nguyen@intel.com Subject: [PATCH iwl-next v3 3/5] iavf: Add devlink and devlink port support Date: Wed, 16 Aug 2023 11:33:51 +0800 Message-Id: <20230816033353.94565-4-wenjun1.wu@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230816033353.94565-1-wenjun1.wu@intel.com> References: <20230727021021.961119-1-wenjun1.wu@intel.com> <20230816033353.94565-1-wenjun1.wu@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on 
lindbergh.monkeyblade.net X-Patchwork-Delegate: kuba@kernel.org From: Jun Zhang To allow user to configure queue bandwidth, devlink port support is added to support devlink port rate API. Add devlink framework registration/unregistration on iavf driver initialization and remove, and devlink port of DEVLINK_PORT_FLAVOUR_VIRTUAL is created to be associated iavf net device. Signed-off-by: Jun Zhang --- drivers/net/ethernet/intel/Kconfig | 1 + drivers/net/ethernet/intel/iavf/Makefile | 2 +- drivers/net/ethernet/intel/iavf/iavf.h | 6 ++ .../net/ethernet/intel/iavf/iavf_devlink.c | 93 +++++++++++++++++++ .../net/ethernet/intel/iavf/iavf_devlink.h | 17 ++++ drivers/net/ethernet/intel/iavf/iavf_main.c | 14 +++ 6 files changed, 132 insertions(+), 1 deletion(-) create mode 100644 drivers/net/ethernet/intel/iavf/iavf_devlink.c create mode 100644 drivers/net/ethernet/intel/iavf/iavf_devlink.h diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig index d57f70d6e4d4..5bda31c4c652 100644 --- a/drivers/net/ethernet/intel/Kconfig +++ b/drivers/net/ethernet/intel/Kconfig @@ -256,6 +256,7 @@ config I40EVF tristate "Intel(R) Ethernet Adaptive Virtual Function support" select IAVF depends on PCI_MSI + select NET_DEVLINK help This driver supports virtual functions for Intel XL710, X710, X722, XXV710, and all devices advertising support for diff --git a/drivers/net/ethernet/intel/iavf/Makefile b/drivers/net/ethernet/intel/iavf/Makefile index 9c3e45c54d01..b5d7db97ab8b 100644 --- a/drivers/net/ethernet/intel/iavf/Makefile +++ b/drivers/net/ethernet/intel/iavf/Makefile @@ -12,5 +12,5 @@ subdir-ccflags-y += -I$(src) obj-$(CONFIG_IAVF) += iavf.o iavf-objs := iavf_main.o iavf_ethtool.o iavf_virtchnl.o iavf_fdir.o \ - iavf_adv_rss.o \ + iavf_adv_rss.o iavf_devlink.o \ iavf_txrx.o iavf_common.o iavf_adminq.o iavf_client.o diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h index 85fba85fbb23..eec294b5a426 100644 --- 
a/drivers/net/ethernet/intel/iavf/iavf.h
+++ b/drivers/net/ethernet/intel/iavf/iavf.h
@@ -33,9 +33,11 @@
 #include
 #include
 #include
+#include
 #include "iavf_type.h"
 #include
+#include "iavf_devlink.h"
 #include "iavf_txrx.h"
 #include "iavf_fdir.h"
 #include "iavf_adv_rss.h"
@@ -369,6 +371,10 @@ struct iavf_adapter {
 	struct net_device *netdev;
 	struct pci_dev *pdev;

+	/* devlink & port data */
+	struct devlink *devlink;
+	struct devlink_port devlink_port;
+
 	struct iavf_hw hw; /* defined in iavf_type.h */
 	enum iavf_state_t state;

diff --git a/drivers/net/ethernet/intel/iavf/iavf_devlink.c b/drivers/net/ethernet/intel/iavf/iavf_devlink.c
new file mode 100644
index 000000000000..991d041e5922
--- /dev/null
+++ b/drivers/net/ethernet/intel/iavf/iavf_devlink.c
@@ -0,0 +1,93 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2023 Intel Corporation */
+
+#include "iavf.h"
+#include "iavf_devlink.h"
+
+static const struct devlink_ops iavf_devlink_ops = {};
+
+/**
+ * iavf_devlink_register - Register allocated devlink instance for iavf adapter
+ * @adapter: the iavf adapter to register the devlink for.
+ *
+ * Register the devlink instance associated with this iavf adapter
+ *
+ * Return: zero on success or an error code on failure.
+ */
+int iavf_devlink_register(struct iavf_adapter *adapter)
+{
+	struct device *dev = &adapter->pdev->dev;
+	struct iavf_devlink *ref;
+	struct devlink *devlink;
+
+	/* Allocate devlink instance */
+	devlink = devlink_alloc(&iavf_devlink_ops, sizeof(struct iavf_devlink),
+				dev);
+	if (!devlink)
+		return -ENOMEM;
+
+	/* Init iavf adapter devlink */
+	adapter->devlink = devlink;
+	ref = devlink_priv(devlink);
+	ref->devlink_ref = adapter;
+
+	devlink_register(devlink);
+
+	return 0;
+}
+
+/**
+ * iavf_devlink_unregister - Unregister devlink resources for iavf adapter.
+ * @adapter: the iavf adapter structure
+ *
+ * Releases resources used by devlink and cleans up associated memory.
+ */
+void iavf_devlink_unregister(struct iavf_adapter *adapter)
+{
+	devlink_unregister(adapter->devlink);
+	devlink_free(adapter->devlink);
+}
+
+/**
+ * iavf_devlink_port_register - Register devlink port for iavf adapter
+ * @adapter: the iavf adapter to register the devlink port for.
+ *
+ * Register the devlink port instance associated with this iavf adapter
+ * before the iavf adapter registers with netdevice
+ *
+ * Return: zero on success or an error code on failure.
+ */
+int iavf_devlink_port_register(struct iavf_adapter *adapter)
+{
+	struct device *dev = &adapter->pdev->dev;
+	struct devlink_port_attrs attrs = {};
+	int err;
+
+	/* Create devlink port: attr/port flavour, port index */
+	SET_NETDEV_DEVLINK_PORT(adapter->netdev, &adapter->devlink_port);
+	attrs.flavour = DEVLINK_PORT_FLAVOUR_VIRTUAL;
+	memset(&adapter->devlink_port, 0, sizeof(adapter->devlink_port));
+	devlink_port_attrs_set(&adapter->devlink_port, &attrs);
+
+	/* Register with driver specific index (device id) */
+	err = devlink_port_register(adapter->devlink, &adapter->devlink_port,
+				    adapter->hw.bus.device);
+	if (err)
+		dev_err(dev, "devlink port registration failed: %d\n", err);
+
+	return err;
+}
+
+/**
+ * iavf_devlink_port_unregister - Unregister devlink port for iavf adapter.
+ * @adapter: the iavf adapter structure
+ *
+ * Releases resources used by devlink port and registration with devlink.
+ */
+void iavf_devlink_port_unregister(struct iavf_adapter *adapter)
+{
+	if (!adapter->devlink_port.registered)
+		return;
+
+	devlink_port_unregister(&adapter->devlink_port);
+}

diff --git a/drivers/net/ethernet/intel/iavf/iavf_devlink.h b/drivers/net/ethernet/intel/iavf/iavf_devlink.h
new file mode 100644
index 000000000000..5c122278611a
--- /dev/null
+++ b/drivers/net/ethernet/intel/iavf/iavf_devlink.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2023 Intel Corporation */
+
+#ifndef _IAVF_DEVLINK_H_
+#define _IAVF_DEVLINK_H_
+
+/* iavf devlink structure pointing to iavf adapter */
+struct iavf_devlink {
+	struct iavf_adapter *devlink_ref;	/* ref to iavf adapter */
+};
+
+int iavf_devlink_register(struct iavf_adapter *adapter);
+void iavf_devlink_unregister(struct iavf_adapter *adapter);
+int iavf_devlink_port_register(struct iavf_adapter *adapter);
+void iavf_devlink_port_unregister(struct iavf_adapter *adapter);
+
+#endif /* _IAVF_DEVLINK_H_ */

diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index b23ca9d80189..1fb14f3f1ad0 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -2038,6 +2038,7 @@ static void iavf_finish_config(struct work_struct *work)
 		iavf_free_rss(adapter);
 		iavf_free_misc_irq(adapter);
 		iavf_reset_interrupt_capability(adapter);
+		iavf_devlink_port_unregister(adapter);
 		iavf_change_state(adapter, __IAVF_INIT_CONFIG_ADAPTER);

 		goto out;
@@ -2709,6 +2710,9 @@ static void iavf_init_config_adapter(struct iavf_adapter *adapter)
 	if (err)
 		goto err_sw_init;

+	if (!adapter->netdev_registered)
+		iavf_devlink_port_register(adapter);
+
 	netif_carrier_off(netdev);
 	adapter->link_up = false;
 	netif_tx_stop_all_queues(netdev);
@@ -2750,6 +2754,7 @@ static void iavf_init_config_adapter(struct iavf_adapter *adapter)
 err_mem:
 	iavf_free_rss(adapter);
 	iavf_free_misc_irq(adapter);
+	iavf_devlink_port_unregister(adapter);
 err_sw_init:
 	iavf_reset_interrupt_capability(adapter);
 err:
@@ -4996,6 +5001,12 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	/* Setup the wait queue for indicating virtchannel events */
 	init_waitqueue_head(&adapter->vc_waitqueue);

+	/* Register iavf adapter with devlink */
+	err = iavf_devlink_register(adapter);
+	if (err)
+		dev_err(&pdev->dev, "devlink registration failed: %d\n", err);
+
+	/* Keep driver interface even on devlink registration failure */
 	return 0;

 err_ioremap:
@@ -5140,6 +5151,9 @@ static void iavf_remove(struct pci_dev *pdev)
 				 err);
 	}

+	iavf_devlink_port_unregister(adapter);
+	iavf_devlink_unregister(adapter);
+
 	mutex_lock(&adapter->crit_lock);
 	dev_info(&adapter->pdev->dev, "Removing device\n");
 	iavf_change_state(adapter, __IAVF_REMOVE);

From patchwork Wed Aug 16 03:33:52 2023
X-Patchwork-Submitter: Wenjun Wu
X-Patchwork-Id: 13354537
X-Patchwork-Delegate: kuba@kernel.org
From: Wenjun Wu
To: intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org
Cc: xuejun.zhang@intel.com, madhu.chittim@intel.com, qi.z.zhang@intel.com, anthony.l.nguyen@intel.com
Subject: [PATCH iwl-next v3 4/5] iavf: Add devlink port function rate API support
Date: Wed, 16 Aug 2023 11:33:52 +0800
Message-Id: <20230816033353.94565-5-wenjun1.wu@intel.com>
In-Reply-To: <20230816033353.94565-1-wenjun1.wu@intel.com>
References: <20230727021021.961119-1-wenjun1.wu@intel.com> <20230816033353.94565-1-wenjun1.wu@intel.com>

From: Jun Zhang

To allow users to configure queue-based parameters, add devlink port function rate API support for setting node tx_max and tx_share parameters.
An iavf rate tree with a root node and queue nodes is created and registered with devlink rate when the iavf adapter is configured.

Signed-off-by: Jun Zhang

---
 .../net/ethernet/intel/iavf/iavf_devlink.c    | 258 +++++++++++++++++-
 .../net/ethernet/intel/iavf/iavf_devlink.h    |  21 ++
 drivers/net/ethernet/intel/iavf/iavf_main.c   |   7 +-
 3 files changed, 283 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/iavf/iavf_devlink.c b/drivers/net/ethernet/intel/iavf/iavf_devlink.c
index 991d041e5922..24ba3744859a 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_devlink.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_devlink.c
@@ -4,7 +4,261 @@
 #include "iavf.h"
 #include "iavf_devlink.h"

-static const struct devlink_ops iavf_devlink_ops = {};
+/**
+ * iavf_devlink_rate_init_rate_tree - export rate tree to devlink rate
+ * @adapter: iavf adapter struct instance
+ *
+ * This function builds a rate tree based on the iavf adapter configuration
+ * and exports its contents to devlink rate.
+ */
+void iavf_devlink_rate_init_rate_tree(struct iavf_adapter *adapter)
+{
+	struct iavf_devlink *dl_priv = devlink_priv(adapter->devlink);
+	struct iavf_dev_rate_node *iavf_r_node;
+	struct iavf_dev_rate_node *iavf_q_node;
+	struct devlink_rate *dl_root_node;
+	struct devlink_rate *dl_tmp_node;
+	int q_num, size, i;
+
+	if (!adapter->devlink_port.registered)
+		return;
+
+	iavf_r_node = &dl_priv->root_node;
+	memset(iavf_r_node, 0, sizeof(*iavf_r_node));
+	iavf_r_node->tx_max = adapter->link_speed;
+	strscpy(iavf_r_node->name, "iavf_root", IAVF_RATE_NODE_NAME);
+
+	devl_lock(adapter->devlink);
+	dl_root_node = devl_rate_node_create(adapter->devlink, iavf_r_node,
+					     iavf_r_node->name, NULL);
+	if (!dl_root_node || IS_ERR(dl_root_node))
+		goto err_node;
+
+	iavf_r_node->rate_node = dl_root_node;
+
+	/* Allocate queue nodes, and chain them under root */
+	q_num = adapter->num_active_queues;
+	if (q_num > 0) {
+		size = q_num * sizeof(struct iavf_dev_rate_node);
+		dl_priv->queue_nodes = kzalloc(size, GFP_KERNEL);
+		if (!dl_priv->queue_nodes)
+			goto err_node;
+
+		memset(dl_priv->queue_nodes, 0, size);
+
+		for (i = 0; i < q_num; ++i) {
+			iavf_q_node = &dl_priv->queue_nodes[i];
+			snprintf(iavf_q_node->name, IAVF_RATE_NODE_NAME,
+				 "txq_%d", i);
+			dl_tmp_node = devl_rate_node_create(adapter->devlink,
+							    iavf_q_node,
+							    iavf_q_node->name,
+							    dl_root_node);
+			if (!dl_tmp_node || IS_ERR(dl_tmp_node)) {
+				kfree(dl_priv->queue_nodes);
+				goto err_node;
+			}
+
+			iavf_q_node->rate_node = dl_tmp_node;
+			iavf_q_node->tx_max = IAVF_TX_DEFAULT;
+			iavf_q_node->tx_share = 0;
+		}
+	}
+
+	dl_priv->update_in_progress = false;
+	dl_priv->iavf_dev_rate_initialized = true;
+	devl_unlock(adapter->devlink);
+	return;
+err_node:
+	devl_rate_nodes_destroy(adapter->devlink);
+	dl_priv->iavf_dev_rate_initialized = false;
+	devl_unlock(adapter->devlink);
+}
+
+/**
+ * iavf_devlink_rate_deinit_rate_tree - Unregister rate tree with devlink rate
+ * @adapter: iavf adapter struct instance
+ *
+ * This function unregisters the current iavf rate tree registered with devlink
+ * rate and frees resources.
+ */
+void iavf_devlink_rate_deinit_rate_tree(struct iavf_adapter *adapter)
+{
+	struct iavf_devlink *dl_priv = devlink_priv(adapter->devlink);
+
+	if (!dl_priv->iavf_dev_rate_initialized)
+		return;
+
+	devl_lock(adapter->devlink);
+	devl_rate_leaf_destroy(&adapter->devlink_port);
+	devl_rate_nodes_destroy(adapter->devlink);
+	kfree(dl_priv->queue_nodes);
+	devl_unlock(adapter->devlink);
+}
+
+/**
+ * iavf_check_update_config - check if updating queue parameters is needed
+ * @adapter: iavf adapter struct instance
+ * @node: iavf rate node struct instance
+ *
+ * This function sets the queue bw & quanta size configuration if all
+ * queue parameters have been set.
+ */
+static int iavf_check_update_config(struct iavf_adapter *adapter,
+				    struct iavf_dev_rate_node *node)
+{
+	/* Update queue bw if any one of the queues have been fully updated by
+	 * user, the other queues either use the default value or the last
+	 * fully updated value
+	 */
+	if (node->tx_update_flag ==
+	    (IAVF_FLAG_TX_MAX_UPDATED | IAVF_FLAG_TX_SHARE_UPDATED)) {
+		node->tx_max = node->tx_max_temp;
+		node->tx_share = node->tx_share_temp;
+	} else {
+		return 0;
+	}
+
+	/* Reconfig queue bw only when the iavf driver is in the running state */
+	if (adapter->state != __IAVF_RUNNING)
+		return -EBUSY;
+
+	return 0;
+}
+
+/**
+ * iavf_update_queue_tx_share - sets tx min parameter
+ * @adapter: iavf adapter struct instance
+ * @node: iavf rate node struct instance
+ * @bw: bandwidth in bytes per second
+ * @extack: extended netdev ack structure
+ *
+ * This function sets min BW limit.
+ */
+static int iavf_update_queue_tx_share(struct iavf_adapter *adapter,
+				      struct iavf_dev_rate_node *node,
+				      u64 bw, struct netlink_ext_ack *extack)
+{
+	struct iavf_devlink *dl_priv = devlink_priv(adapter->devlink);
+	u64 tx_share_sum = 0;
+
+	/* Keep in kbps */
+	node->tx_share_temp = div_u64(bw, IAVF_RATE_DIV_FACTOR);
+
+	if (ADV_LINK_SUPPORT(adapter)) {
+		int i;
+
+		for (i = 0; i < adapter->num_active_queues; ++i) {
+			if (node != &dl_priv->queue_nodes[i])
+				tx_share_sum +=
+					dl_priv->queue_nodes[i].tx_share;
+			else
+				tx_share_sum += node->tx_share_temp;
+		}
+
+		if (tx_share_sum / 1000 > adapter->link_speed_mbps)
+			return -EINVAL;
+	}
+
+	node->tx_update_flag |= IAVF_FLAG_TX_SHARE_UPDATED;
+	return iavf_check_update_config(adapter, node);
+}
+
+/**
+ * iavf_update_queue_tx_max - sets tx max parameter
+ * @adapter: iavf adapter struct instance
+ * @node: iavf rate node struct instance
+ * @bw: bandwidth in bytes per second
+ * @extack: extended netdev ack structure
+ *
+ * This function sets max BW limit.
+ */
+static int iavf_update_queue_tx_max(struct iavf_adapter *adapter,
+				    struct iavf_dev_rate_node *node,
+				    u64 bw, struct netlink_ext_ack *extack)
+{
+	/* Keep in kbps */
+	node->tx_max_temp = div_u64(bw, IAVF_RATE_DIV_FACTOR);
+	if (ADV_LINK_SUPPORT(adapter)) {
+		if (node->tx_max_temp / 1000 > adapter->link_speed_mbps)
+			return -EINVAL;
+	}
+
+	node->tx_update_flag |= IAVF_FLAG_TX_MAX_UPDATED;
+
+	return iavf_check_update_config(adapter, node);
+}
+
+static int iavf_devlink_rate_node_tx_max_set(struct devlink_rate *rate_node,
+					     void *priv, u64 tx_max,
+					     struct netlink_ext_ack *extack)
+{
+	struct iavf_dev_rate_node *node = priv;
+	struct iavf_devlink *dl_priv;
+	struct iavf_adapter *adapter;
+
+	if (!node)
+		return 0;
+
+	dl_priv = devlink_priv(rate_node->devlink);
+	adapter = dl_priv->devlink_ref;
+
+	/* Check if last update is in progress */
+	if (dl_priv->update_in_progress)
+		return -EBUSY;
+
+	if (node == &dl_priv->root_node)
+		return 0;
+
+	return iavf_update_queue_tx_max(adapter, node, tx_max, extack);
+}
+
+static int iavf_devlink_rate_node_tx_share_set(struct devlink_rate *rate_node,
+					       void *priv, u64 tx_share,
+					       struct netlink_ext_ack *extack)
+{
+	struct iavf_dev_rate_node *node = priv;
+	struct iavf_devlink *dl_priv;
+	struct iavf_adapter *adapter;
+
+	if (!node)
+		return 0;
+
+	dl_priv = devlink_priv(rate_node->devlink);
+	adapter = dl_priv->devlink_ref;
+
+	/* Check if last update is in progress */
+	if (dl_priv->update_in_progress)
+		return -EBUSY;
+
+	if (node == &dl_priv->root_node)
+		return 0;
+
+	return iavf_update_queue_tx_share(adapter, node, tx_share, extack);
+}
+
+static int iavf_devlink_rate_node_del(struct devlink_rate *rate_node,
+				      void *priv,
+				      struct netlink_ext_ack *extack)
+{
+	return -EINVAL;
+}
+
+static int iavf_devlink_set_parent(struct devlink_rate *devlink_rate,
+				   struct devlink_rate *parent,
+				   void *priv, void *parent_priv,
+				   struct netlink_ext_ack *extack)
+{
+	return -EINVAL;
+}
+
+static const struct devlink_ops
iavf_devlink_ops = {
+	.rate_node_tx_share_set = iavf_devlink_rate_node_tx_share_set,
+	.rate_node_tx_max_set = iavf_devlink_rate_node_tx_max_set,
+	.rate_node_del = iavf_devlink_rate_node_del,
+	.rate_leaf_parent_set = iavf_devlink_set_parent,
+	.rate_node_parent_set = iavf_devlink_set_parent,
+};

 /**
  * iavf_devlink_register - Register allocated devlink instance for iavf adapter
@@ -30,7 +284,7 @@ int iavf_devlink_register(struct iavf_adapter *adapter)
 	adapter->devlink = devlink;
 	ref = devlink_priv(devlink);
 	ref->devlink_ref = adapter;
-
+	ref->iavf_dev_rate_initialized = false;
 	devlink_register(devlink);

 	return 0;

diff --git a/drivers/net/ethernet/intel/iavf/iavf_devlink.h b/drivers/net/ethernet/intel/iavf/iavf_devlink.h
index 5c122278611a..897ff5fc87af 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_devlink.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_devlink.h
@@ -4,14 +4,35 @@
 #ifndef _IAVF_DEVLINK_H_
 #define _IAVF_DEVLINK_H_

+#define IAVF_RATE_NODE_NAME 12
+struct iavf_dev_rate_node {
+	char name[IAVF_RATE_NODE_NAME];
+	struct devlink_rate *rate_node;
+	u8 tx_update_flag;
+#define IAVF_FLAG_TX_SHARE_UPDATED	BIT(0)
+#define IAVF_FLAG_TX_MAX_UPDATED	BIT(1)
+	u64 tx_max;
+	u64 tx_share;
+	u64 tx_max_temp;
+	u64 tx_share_temp;
+#define IAVF_RATE_DIV_FACTOR	125
+#define IAVF_TX_DEFAULT		100000
+};
+
 /* iavf devlink structure pointing to iavf adapter */
 struct iavf_devlink {
 	struct iavf_adapter *devlink_ref;	/* ref to iavf adapter */
+	struct iavf_dev_rate_node root_node;
+	struct iavf_dev_rate_node *queue_nodes;
+	bool iavf_dev_rate_initialized;
+	bool update_in_progress;
 };

 int iavf_devlink_register(struct iavf_adapter *adapter);
 void iavf_devlink_unregister(struct iavf_adapter *adapter);
 int iavf_devlink_port_register(struct iavf_adapter *adapter);
 void iavf_devlink_port_unregister(struct iavf_adapter *adapter);
+void iavf_devlink_rate_init_rate_tree(struct iavf_adapter *adapter);
+void iavf_devlink_rate_deinit_rate_tree(struct iavf_adapter *adapter);

 #endif /*
_IAVF_DEVLINK_H_ */

diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index 1fb14f3f1ad0..2aec6427d5e2 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -2038,6 +2038,7 @@ static void iavf_finish_config(struct work_struct *work)
 		iavf_free_rss(adapter);
 		iavf_free_misc_irq(adapter);
 		iavf_reset_interrupt_capability(adapter);
+		iavf_devlink_rate_deinit_rate_tree(adapter);
 		iavf_devlink_port_unregister(adapter);
 		iavf_change_state(adapter, __IAVF_INIT_CONFIG_ADAPTER);

@@ -2710,8 +2711,10 @@ static void iavf_init_config_adapter(struct iavf_adapter *adapter)
 	if (err)
 		goto err_sw_init;

-	if (!adapter->netdev_registered)
+	if (!adapter->netdev_registered) {
 		iavf_devlink_port_register(adapter);
+		iavf_devlink_rate_init_rate_tree(adapter);
+	}

 	netif_carrier_off(netdev);
 	adapter->link_up = false;
@@ -2754,6 +2757,7 @@ static void iavf_init_config_adapter(struct iavf_adapter *adapter)
 err_mem:
 	iavf_free_rss(adapter);
 	iavf_free_misc_irq(adapter);
+	iavf_devlink_rate_deinit_rate_tree(adapter);
 	iavf_devlink_port_unregister(adapter);
 err_sw_init:
 	iavf_reset_interrupt_capability(adapter);
@@ -5151,6 +5155,7 @@ static void iavf_remove(struct pci_dev *pdev)
 				 err);
 	}

+	iavf_devlink_rate_deinit_rate_tree(adapter);
 	iavf_devlink_port_unregister(adapter);
 	iavf_devlink_unregister(adapter);

From patchwork Wed Aug 16 03:33:53 2023
X-Patchwork-Submitter: Wenjun Wu
X-Patchwork-Id: 13354538
X-Patchwork-Delegate: kuba@kernel.org
From: Wenjun Wu
To: intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org
Cc: xuejun.zhang@intel.com, madhu.chittim@intel.com, qi.z.zhang@intel.com, anthony.l.nguyen@intel.com
Subject: [PATCH iwl-next v3 5/5] iavf: Add VIRTCHNL Opcodes Support for Queue bw Setting
Date: Wed, 16 Aug 2023 11:33:53 +0800
Message-Id: <20230816033353.94565-6-wenjun1.wu@intel.com>
In-Reply-To: <20230816033353.94565-1-wenjun1.wu@intel.com>
References: <20230727021021.961119-1-wenjun1.wu@intel.com> <20230816033353.94565-1-wenjun1.wu@intel.com>
From: Jun Zhang

An iavf rate tree with a root node and queue nodes is created and registered with devlink rate when the iavf adapter is configured. The user can configure the tx_max and tx_share of each queue. Once any queue has been fully updated by the user, i.e. both tx_max and tx_share have been updated for that queue, the VIRTCHNL opcodes VIRTCHNL_OP_CONFIG_QUEUE_BW and VIRTCHNL_OP_CONFIG_QUANTA are sent to the PF to configure the queues allocated to the VF, provided the PF indicates support for VIRTCHNL_VF_OFFLOAD_QOS through the VF Resource / Capability Exchange.

Signed-off-by: Jun Zhang

---
 drivers/net/ethernet/intel/iavf/iavf.h        |  14 ++
 .../net/ethernet/intel/iavf/iavf_devlink.c    |  29 +++
 .../net/ethernet/intel/iavf/iavf_devlink.h    |   1 +
 drivers/net/ethernet/intel/iavf/iavf_main.c   |  45 +++-
 .../net/ethernet/intel/iavf/iavf_virtchnl.c   | 230 +++++++++++++++++-
 5 files changed, 315 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
index eec294b5a426..27a230f58816 100644
--- a/drivers/net/ethernet/intel/iavf/iavf.h
+++ b/drivers/net/ethernet/intel/iavf/iavf.h
@@ -252,6 +252,9 @@ struct iavf_cloud_filter {
 #define IAVF_RESET_WAIT_DETECTED_COUNT	500
 #define IAVF_RESET_WAIT_COMPLETE_COUNT	2000

+#define IAVF_MAX_QOS_TC_NUM		8
+#define IAVF_DEFAULT_QUANTA_SIZE	1024
+
 /* board specific private data structure */
 struct iavf_adapter {
 	struct workqueue_struct *wq;
@@ -351,6 +354,9 @@ struct iavf_adapter {
 #define IAVF_FLAG_AQ_DISABLE_CTAG_VLAN_INSERTION	BIT_ULL(36)
 #define IAVF_FLAG_AQ_ENABLE_STAG_VLAN_INSERTION		BIT_ULL(37)
 #define IAVF_FLAG_AQ_DISABLE_STAG_VLAN_INSERTION	BIT_ULL(38)
+#define
IAVF_FLAG_AQ_CONFIGURE_QUEUES_BW		BIT_ULL(39)
+#define IAVF_FLAG_AQ_CONFIGURE_QUEUES_QUANTA_SIZE	BIT_ULL(40)
+#define IAVF_FLAG_AQ_GET_QOS_CAPS			BIT_ULL(41)

 	/* flags for processing extended capability messages during
 	 * __IAVF_INIT_EXTENDED_CAPS. Each capability exchange requires
@@ -374,6 +380,7 @@ struct iavf_adapter {
 	/* devlink & port data */
 	struct devlink *devlink;
 	struct devlink_port devlink_port;
+	bool devlink_update;

 	struct iavf_hw hw; /* defined in iavf_type.h */
@@ -423,6 +430,8 @@ struct iavf_adapter {
 			  VIRTCHNL_VF_OFFLOAD_FDIR_PF)
 #define ADV_RSS_SUPPORT(_a) ((_a)->vf_res->vf_cap_flags & \
 			     VIRTCHNL_VF_OFFLOAD_ADV_RSS_PF)
+#define QOS_ALLOWED(_a) ((_a)->vf_res->vf_cap_flags & \
+			 VIRTCHNL_VF_OFFLOAD_QOS)
 	struct virtchnl_vf_resource *vf_res; /* incl. all VSIs */
 	struct virtchnl_vsi_resource *vsi_res; /* our LAN VSI */
 	struct virtchnl_version_info pf_version;
@@ -431,6 +440,7 @@ struct iavf_adapter {
 	struct virtchnl_vlan_caps vlan_v2_caps;
 	u16 msg_enable;
 	struct iavf_eth_stats current_stats;
+	struct virtchnl_qos_cap_list *qos_caps;
 	struct iavf_vsi vsi;
 	u32 aq_wait_count;
 	/* RSS stuff */
@@ -577,6 +587,10 @@ void iavf_notify_client_message(struct iavf_vsi *vsi, u8 *msg, u16 len);
 void iavf_notify_client_l2_params(struct iavf_vsi *vsi);
 void iavf_notify_client_open(struct iavf_vsi *vsi);
 void iavf_notify_client_close(struct iavf_vsi *vsi, bool reset);
+void iavf_update_queue_config(struct iavf_adapter *adapter);
+void iavf_configure_queues_bw(struct iavf_adapter *adapter);
+void iavf_configure_queues_quanta_size(struct iavf_adapter *adapter);
+void iavf_get_qos_caps(struct iavf_adapter *adapter);
 void iavf_enable_channels(struct iavf_adapter *adapter);
 void iavf_disable_channels(struct iavf_adapter *adapter);
 void iavf_add_cloud_filter(struct iavf_adapter *adapter);

diff --git a/drivers/net/ethernet/intel/iavf/iavf_devlink.c b/drivers/net/ethernet/intel/iavf/iavf_devlink.c
index 24ba3744859a..0ab9a0a9823e 100644
---
a/drivers/net/ethernet/intel/iavf/iavf_devlink.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_devlink.c
@@ -96,6 +96,30 @@ void iavf_devlink_rate_deinit_rate_tree(struct iavf_adapter *adapter)
 	devl_unlock(adapter->devlink);
 }

+/**
+ * iavf_notify_queue_config_complete - notify updating queue completion
+ * @adapter: iavf adapter struct instance
+ *
+ * This function sets the queue configuration update status when all
+ * queue parameters have been sent to the PF.
+ */
+void iavf_notify_queue_config_complete(struct iavf_adapter *adapter)
+{
+	struct iavf_devlink *dl_priv = devlink_priv(adapter->devlink);
+	int q_num = adapter->num_active_queues;
+	int i;
+
+	/* clean up rate tree update flags */
+	for (i = 0; i < q_num; i++)
+		if (dl_priv->queue_nodes[i].tx_update_flag ==
+		    (IAVF_FLAG_TX_MAX_UPDATED | IAVF_FLAG_TX_SHARE_UPDATED)) {
+			dl_priv->queue_nodes[i].tx_update_flag = 0;
+			break;
+		}
+
+	dl_priv->update_in_progress = false;
+}
+
 /**
  * iavf_check_update_config - check if updating queue parameters needed
  * @adapter: iavf adapter struct instance
@@ -107,6 +131,8 @@ void iavf_devlink_rate_deinit_rate_tree(struct iavf_adapter *adapter)
 static int iavf_check_update_config(struct iavf_adapter *adapter,
 				    struct iavf_dev_rate_node *node)
 {
+	struct iavf_devlink *dl_priv = devlink_priv(adapter->devlink);
+
 	/* Update queue bw if any one of the queues have been fully updated by
 	 * user, the other queues either use the default value or the last
 	 * fully updated value
@@ -123,6 +149,8 @@ static int iavf_check_update_config(struct iavf_adapter *adapter,
 	if (adapter->state != __IAVF_RUNNING)
 		return -EBUSY;

+	dl_priv->update_in_progress = true;
+	iavf_update_queue_config(adapter);
 	return 0;
 }

@@ -282,6 +310,7 @@ int iavf_devlink_register(struct iavf_adapter *adapter)
 	/* Init iavf adapter devlink */
 	adapter->devlink = devlink;
+	adapter->devlink_update = false;
 	ref = devlink_priv(devlink);
 	ref->devlink_ref = adapter;
 	ref->iavf_dev_rate_initialized = false;

diff --git
a/drivers/net/ethernet/intel/iavf/iavf_devlink.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_devlink.h
@@ -34,5 +34,6 @@ int iavf_devlink_port_register(struct iavf_adapter *adapter);
 void iavf_devlink_port_unregister(struct iavf_adapter *adapter);
 void iavf_devlink_rate_init_rate_tree(struct iavf_adapter *adapter);
 void iavf_devlink_rate_deinit_rate_tree(struct iavf_adapter *adapter);
+void iavf_notify_queue_config_complete(struct iavf_adapter *adapter);

 #endif /* _IAVF_DEVLINK_H_ */

diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index 2aec6427d5e2..58795a15c09b 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -2131,6 +2131,21 @@ static int iavf_process_aq_command(struct iavf_adapter *adapter)
 		return 0;
 	}

+	if (adapter->aq_required & IAVF_FLAG_AQ_CONFIGURE_QUEUES_BW) {
+		iavf_configure_queues_bw(adapter);
+		return 0;
+	}
+
+	if (adapter->aq_required & IAVF_FLAG_AQ_GET_QOS_CAPS) {
+		iavf_get_qos_caps(adapter);
+		return 0;
+	}
+
+	if (adapter->aq_required & IAVF_FLAG_AQ_CONFIGURE_QUEUES_QUANTA_SIZE) {
+		iavf_configure_queues_quanta_size(adapter);
+		return 0;
+	}
+
 	if (adapter->aq_required & IAVF_FLAG_AQ_CONFIGURE_QUEUES) {
 		iavf_configure_queues(adapter);
 		return 0;
@@ -2713,7 +2728,9 @@ static void iavf_init_config_adapter(struct iavf_adapter *adapter)
 	if (!adapter->netdev_registered) {
 		iavf_devlink_port_register(adapter);
-		iavf_devlink_rate_init_rate_tree(adapter);
+
+		if (QOS_ALLOWED(adapter))
+			iavf_devlink_rate_init_rate_tree(adapter);
 	}

 	netif_carrier_off(netdev);
@@ -3136,6 +3153,19 @@ static void iavf_reset_task(struct work_struct *work)
 		err = iavf_reinit_interrupt_scheme(adapter, running);
 		if (err)
 			goto reset_err;
+
+		if (QOS_ALLOWED(adapter)) {
+			iavf_devlink_rate_deinit_rate_tree(adapter);
+
iavf_devlink_rate_init_rate_tree(adapter);
+		}
+	}
+
+	if (adapter->devlink_update) {
+		adapter->aq_required |= IAVF_FLAG_AQ_CONFIGURE_QUEUES_BW;
+		adapter->aq_required |= IAVF_FLAG_AQ_GET_QOS_CAPS;
+		adapter->aq_required |=
+			IAVF_FLAG_AQ_CONFIGURE_QUEUES_QUANTA_SIZE;
+		adapter->devlink_update = false;
 	}

 	if (RSS_AQ(adapter)) {
@@ -4901,7 +4931,7 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	struct net_device *netdev;
 	struct iavf_adapter *adapter = NULL;
 	struct iavf_hw *hw = NULL;
-	int err;
+	int err, len;

 	err = pci_enable_device(pdev);
 	if (err)
@@ -5005,10 +5035,18 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	/* Setup the wait queue for indicating virtchannel events */
 	init_waitqueue_head(&adapter->vc_waitqueue);

+	len = struct_size(adapter->qos_caps, cap, IAVF_MAX_QOS_TC_NUM);
+	adapter->qos_caps = kzalloc(len, GFP_KERNEL);
+	if (!adapter->qos_caps)
+		goto err_ioremap;
+
 	/* Register iavf adapter with devlink */
 	err = iavf_devlink_register(adapter);
-	if (err)
+	if (err) {
 		dev_err(&pdev->dev, "devlink registration failed: %d\n", err);
+		kfree(adapter->qos_caps);
+		goto err_ioremap;
+	}

 	/* Keep driver interface even on devlink registration failure */
 	return 0;
@@ -5158,6 +5196,7 @@ static void iavf_remove(struct pci_dev *pdev)
 	iavf_devlink_rate_deinit_rate_tree(adapter);
 	iavf_devlink_port_unregister(adapter);
 	iavf_devlink_unregister(adapter);
+	kfree(adapter->qos_caps);

 	mutex_lock(&adapter->crit_lock);
 	dev_info(&adapter->pdev->dev, "Removing device\n");

diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
index f9727e9c3d63..146f06831bd3 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
@@ -148,7 +148,8 @@ int iavf_send_vf_config_msg(struct iavf_adapter *adapter)
 	       VIRTCHNL_VF_OFFLOAD_USO |
 	       VIRTCHNL_VF_OFFLOAD_FDIR_PF |
 	       VIRTCHNL_VF_OFFLOAD_ADV_RSS_PF |
-
VIRTCHNL_VF_CAP_ADV_LINK_SPEED; + VIRTCHNL_VF_CAP_ADV_LINK_SPEED | + VIRTCHNL_VF_OFFLOAD_QOS; adapter->current_op = VIRTCHNL_OP_GET_VF_RESOURCES; adapter->aq_required &= ~IAVF_FLAG_AQ_GET_CONFIG; @@ -1465,6 +1466,209 @@ iavf_set_adapter_link_speed_from_vpe(struct iavf_adapter *adapter, adapter->link_speed = vpe->event_data.link_event.link_speed; } +/** + * iavf_get_qos_caps - get qos caps support + * @adapter: iavf adapter struct instance + * + * This function requests PF for Supported QoS Caps. + */ +void iavf_get_qos_caps(struct iavf_adapter *adapter) +{ + if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) { + /* bail because we already have a command pending */ + dev_err(&adapter->pdev->dev, + "Cannot get qos caps, command %d pending\n", + adapter->current_op); + return; + } + + adapter->current_op = VIRTCHNL_OP_GET_QOS_CAPS; + adapter->aq_required &= ~IAVF_FLAG_AQ_GET_QOS_CAPS; + iavf_send_pf_msg(adapter, VIRTCHNL_OP_GET_QOS_CAPS, NULL, 0); +} + +/** + * iavf_set_quanta_size - set quanta size of queue chunk + * @adapter: iavf adapter struct instance + * @quanta_size: quanta size in bytes + * @queue_index: starting index of queue chunk + * @num_queues: number of queues in the queue chunk + * + * This function requests PF to set quanta size of queue chunk + * starting at queue_index. 
+ */ +static void +iavf_set_quanta_size(struct iavf_adapter *adapter, u16 quanta_size, + u16 queue_index, u16 num_queues) +{ + struct virtchnl_quanta_cfg quanta_cfg; + + if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) { + /* bail because we already have a command pending */ + dev_err(&adapter->pdev->dev, + "Cannot set queue quanta size, command %d pending\n", + adapter->current_op); + return; + } + + adapter->current_op = VIRTCHNL_OP_CONFIG_QUANTA; + quanta_cfg.quanta_size = quanta_size; + quanta_cfg.queue_select.type = VIRTCHNL_QUEUE_TYPE_TX; + quanta_cfg.queue_select.start_queue_id = queue_index; + quanta_cfg.queue_select.num_queues = num_queues; + adapter->aq_required &= ~IAVF_FLAG_AQ_CONFIGURE_QUEUES_QUANTA_SIZE; + iavf_send_pf_msg(adapter, VIRTCHNL_OP_CONFIG_QUANTA, + (u8 *)&quanta_cfg, sizeof(quanta_cfg)); +} + +/** + * iavf_set_queue_bw - set bw of allocated queues + * @adapter: iavf adapter struct instance + * + * This function requests PF to set queue bw of tc0 queues + */ +static void iavf_set_queue_bw(struct iavf_adapter *adapter) +{ + struct iavf_devlink *dl_priv = devlink_priv(adapter->devlink); + struct virtchnl_queues_bw_cfg *queues_bw_cfg; + struct iavf_dev_rate_node *queue_rate; + size_t len; + int i; + + if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) { + /* bail because we already have a command pending */ + dev_err(&adapter->pdev->dev, + "Cannot set tc queue bw, command %d pending\n", + adapter->current_op); + return; + } + + len = struct_size(queues_bw_cfg, cfg, adapter->num_active_queues); + queues_bw_cfg = kzalloc(len, GFP_KERNEL); + if (!queues_bw_cfg) + return; + + queue_rate = dl_priv->queue_nodes; + queues_bw_cfg->vsi_id = adapter->vsi.id; + queues_bw_cfg->num_queues = adapter->num_active_queues; + + for (i = 0; i < queues_bw_cfg->num_queues; i++) { + queues_bw_cfg->cfg[i].queue_id = i; + queues_bw_cfg->cfg[i].shaper.peak = queue_rate[i].tx_max; + queues_bw_cfg->cfg[i].shaper.committed = + queue_rate[i].tx_share; + 
queues_bw_cfg->cfg[i].tc = 0; + } + + adapter->current_op = VIRTCHNL_OP_CONFIG_QUEUE_BW; + adapter->aq_required &= ~IAVF_FLAG_AQ_CONFIGURE_QUEUES_BW; + iavf_send_pf_msg(adapter, VIRTCHNL_OP_CONFIG_QUEUE_BW, + (u8 *)queues_bw_cfg, len); + kfree(queues_bw_cfg); +} + +/** + * iavf_set_tc_queue_bw - set bw of allocated tc/queues + * @adapter: iavf adapter struct instance + * + * This function requests PF to set queue bw of multiple tc(s) + */ +static void iavf_set_tc_queue_bw(struct iavf_adapter *adapter) +{ + struct iavf_devlink *dl_priv = devlink_priv(adapter->devlink); + struct virtchnl_queues_bw_cfg *queues_bw_cfg; + struct iavf_dev_rate_node *queue_rate; + u16 queue_to_tc[256]; + size_t len; + int q_idx; + int i, j; + u16 tc; + + if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) { + /* bail because we already have a command pending */ + dev_err(&adapter->pdev->dev, + "Cannot set tc queue bw, command %d pending\n", + adapter->current_op); + return; + } + + len = struct_size(queues_bw_cfg, cfg, adapter->num_active_queues); + queues_bw_cfg = kzalloc(len, GFP_KERNEL); + if (!queues_bw_cfg) + return; + + queue_rate = dl_priv->queue_nodes; + queues_bw_cfg->vsi_id = adapter->vsi.id; + queues_bw_cfg->num_queues = adapter->ch_config.total_qps; + + /* build tc[queue] */ + for (i = 0; i < adapter->num_tc; i++) { + for (j = 0; j < adapter->ch_config.ch_info[i].count; ++j) { + q_idx = j + adapter->ch_config.ch_info[i].offset; + queue_to_tc[q_idx] = i; + } + } + + for (i = 0; i < queues_bw_cfg->num_queues; i++) { + tc = queue_to_tc[i]; + queues_bw_cfg->cfg[i].queue_id = i; + queues_bw_cfg->cfg[i].shaper.peak = queue_rate[i].tx_max; + queues_bw_cfg->cfg[i].shaper.committed = + queue_rate[i].tx_share; + queues_bw_cfg->cfg[i].tc = tc; + } + + adapter->current_op = VIRTCHNL_OP_CONFIG_QUEUE_BW; + adapter->aq_required &= ~IAVF_FLAG_AQ_CONFIGURE_QUEUES_BW; + iavf_send_pf_msg(adapter, VIRTCHNL_OP_CONFIG_QUEUE_BW, + (u8 *)queues_bw_cfg, len); + kfree(queues_bw_cfg); +} + +/** + * 
iavf_configure_queues_bw - configure bw of allocated tc/queues + * @adapter: iavf adapter struct instance + * + * This function requests PF to configure queue bw of allocated + * tc/queues + */ +void iavf_configure_queues_bw(struct iavf_adapter *adapter) +{ + /* Set Queue bw */ + if (adapter->ch_config.state == __IAVF_TC_INVALID) + iavf_set_queue_bw(adapter); + else + iavf_set_tc_queue_bw(adapter); +} + +/** + * iavf_configure_queues_quanta_size - configure quanta size of queues + * @adapter: adapter structure + * + * Request that the PF configure quanta size of allocated queues. + **/ +void iavf_configure_queues_quanta_size(struct iavf_adapter *adapter) +{ + int quanta_size = IAVF_DEFAULT_QUANTA_SIZE; + + /* Set Queue Quanta Size to default */ + iavf_set_quanta_size(adapter, quanta_size, 0, + adapter->num_active_queues); +} + +/** + * iavf_update_queue_config - request queue configuration update + * @adapter: adapter structure + * + * Request that the PF configure queue quanta size and queue bw + * of allocated queues. 
+ **/ +void iavf_update_queue_config(struct iavf_adapter *adapter) +{ + adapter->devlink_update = true; + iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED); +} + /** * iavf_enable_channels * @adapter: adapter structure @@ -2124,6 +2328,18 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter, dev_warn(&adapter->pdev->dev, "Failed to add VLAN filter, error %s\n", iavf_stat_str(&adapter->hw, v_retval)); break; + case VIRTCHNL_OP_GET_QOS_CAPS: + dev_warn(&adapter->pdev->dev, "Failed to Get Qos CAPs, error %s\n", + iavf_stat_str(&adapter->hw, v_retval)); + break; + case VIRTCHNL_OP_CONFIG_QUANTA: + dev_warn(&adapter->pdev->dev, "Failed to Config Quanta, error %s\n", + iavf_stat_str(&adapter->hw, v_retval)); + break; + case VIRTCHNL_OP_CONFIG_QUEUE_BW: + dev_warn(&adapter->pdev->dev, "Failed to Config Queue BW, error %s\n", + iavf_stat_str(&adapter->hw, v_retval)); + break; default: dev_err(&adapter->pdev->dev, "PF returned error %d (%s) to our request %d\n", v_retval, iavf_stat_str(&adapter->hw, v_retval), @@ -2456,6 +2672,18 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter, if (!v_retval) iavf_netdev_features_vlan_strip_set(netdev, false); break; + case VIRTCHNL_OP_GET_QOS_CAPS: { + u16 len = struct_size(adapter->qos_caps, cap, + IAVF_MAX_QOS_TC_NUM); + + memcpy(adapter->qos_caps, msg, min(msglen, len)); + } + break; + case VIRTCHNL_OP_CONFIG_QUANTA: + iavf_notify_queue_config_complete(adapter); + break; + case VIRTCHNL_OP_CONFIG_QUEUE_BW: + break; default: if (adapter->current_op && (v_opcode != adapter->current_op)) dev_warn(&adapter->pdev->dev, "Expected response %d from PF, received %d\n",