From patchwork Tue Aug 22 03:39:59 2023
X-Patchwork-Submitter: Wenjun Wu
X-Patchwork-Id: 13360130
From: Wenjun Wu
To: intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org
Cc: xuejun.zhang@intel.com, madhu.chittim@intel.com, qi.z.zhang@intel.com, anthony.l.nguyen@intel.com, Wenjun Wu
Subject: [PATCH iwl-next v4 1/5] virtchnl: support queue rate limit and quanta size configuration
Date: Tue, 22 Aug 2023 11:39:59 +0800
Message-Id: <20230822034003.31628-2-wenjun1.wu@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230822034003.31628-1-wenjun1.wu@intel.com>
References: <20230727021021.961119-1-wenjun1.wu@intel.com> <20230822034003.31628-1-wenjun1.wu@intel.com>

This patch adds new virtchnl opcodes and structures for rate limit and quanta size configuration, which include:

1. VIRTCHNL_OP_CONFIG_QUEUE_BW, to configure the maximum bandwidth of each VF queue.
2. VIRTCHNL_OP_CONFIG_QUANTA, to configure the quanta size per queue.
3. VIRTCHNL_OP_GET_QOS_CAPS, with which the VF queries its current QoS configuration, such as enabled TCs, arbiter type, UP-to-TC mapping and bandwidth of the VSI node. This configuration was previously set by DCB and the PF, and represents the potential QoS capability of the VF; the VF can take it as a reference when configuring its queue-to-TC mapping.
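For reference, a VF driver could exercise the new VIRTCHNL_OP_CONFIG_QUEUE_BW opcode roughly as follows (a minimal sketch, not part of this patch: iavf_send_pf_msg() stands in for the VF driver's actual mailbox send path, and the queue/rate values are illustrative):

	struct virtchnl_queues_bw_cfg *q_bw;
	int len = virtchnl_struct_size(q_bw, cfg, 1);
	int err;

	q_bw = kzalloc(len, GFP_KERNEL);
	if (!q_bw)
		return -ENOMEM;

	q_bw->vsi_id = adapter->vsi_res->vsi_id;	/* this VF's VSI */
	q_bw->num_queues = 1;
	q_bw->cfg[0].queue_id = 0;
	q_bw->cfg[0].tc = 0;
	q_bw->cfg[0].shaper.peak = 100000;	/* unit is Kbps, i.e. 100 Mbps */

	err = iavf_send_pf_msg(adapter, VIRTCHNL_OP_CONFIG_QUEUE_BW,
			       (u8 *)q_bw, len);
	kfree(q_bw);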
Signed-off-by: Wenjun Wu --- include/linux/avf/virtchnl.h | 119 +++++++++++++++++++++++++++++++++++ 1 file changed, 119 insertions(+) diff --git a/include/linux/avf/virtchnl.h b/include/linux/avf/virtchnl.h index d0807ad43f93..0132c002ca06 100644 --- a/include/linux/avf/virtchnl.h +++ b/include/linux/avf/virtchnl.h @@ -84,6 +84,9 @@ enum virtchnl_rx_hsplit { VIRTCHNL_RX_HSPLIT_SPLIT_SCTP = 8, }; +enum virtchnl_bw_limit_type { + VIRTCHNL_BW_SHAPER = 0, +}; /* END GENERIC DEFINES */ /* Opcodes for VF-PF communication. These are placed in the v_opcode field @@ -145,6 +148,11 @@ enum virtchnl_ops { VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2 = 55, VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2 = 56, VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2 = 57, + /* opcodes 58 through 65 are reserved */ + VIRTCHNL_OP_GET_QOS_CAPS = 66, + /* opcodes 67 through 111 are reserved */ + VIRTCHNL_OP_CONFIG_QUEUE_BW = 112, + VIRTCHNL_OP_CONFIG_QUANTA = 113, VIRTCHNL_OP_MAX, }; @@ -253,6 +261,7 @@ VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vsi_resource); #define VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC BIT(26) #define VIRTCHNL_VF_OFFLOAD_ADV_RSS_PF BIT(27) #define VIRTCHNL_VF_OFFLOAD_FDIR_PF BIT(28) +#define VIRTCHNL_VF_OFFLOAD_QOS BIT(29) #define VF_BASE_MODE_OFFLOADS (VIRTCHNL_VF_OFFLOAD_L2 | \ VIRTCHNL_VF_OFFLOAD_VLAN | \ @@ -1377,6 +1386,85 @@ struct virtchnl_fdir_del { VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_fdir_del); +struct virtchnl_shaper_bw { + /* Unit is Kbps */ + u32 committed; + u32 peak; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_shaper_bw); + +/* VIRTCHNL_OP_GET_QOS_CAPS + * VF sends this message to get its QoS Caps, such as + * TC number, Arbiter and Bandwidth. + */ +struct virtchnl_qos_cap_elem { + u8 tc_num; + u8 tc_prio; +#define VIRTCHNL_ABITER_STRICT 0 +#define VIRTCHNL_ABITER_ETS 2 + u8 arbiter; +#define VIRTCHNL_STRICT_WEIGHT 1 + u8 weight; + enum virtchnl_bw_limit_type type; + union { + struct virtchnl_shaper_bw shaper; + u8 pad2[32]; + }; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_qos_cap_elem); + +struct virtchnl_qos_cap_list { + u16 vsi_id; + u16 num_elem; + struct virtchnl_qos_cap_elem cap[]; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(4, virtchnl_qos_cap_list); +#define virtchnl_qos_cap_list_LEGACY_SIZEOF 44 + +/* VIRTCHNL_OP_CONFIG_QUEUE_BW */ +struct virtchnl_queue_bw { + u16 queue_id; + u8 tc; + u8 pad; + struct virtchnl_shaper_bw shaper; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_bw); + +struct virtchnl_queues_bw_cfg { + u16 vsi_id; + u16 num_queues; + struct virtchnl_queue_bw cfg[]; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(4, virtchnl_queues_bw_cfg); +#define virtchnl_queues_bw_cfg_LEGACY_SIZEOF 16 + +enum virtchnl_queue_type { + VIRTCHNL_QUEUE_TYPE_TX = 0, + VIRTCHNL_QUEUE_TYPE_RX = 1, +}; + +/* structure to specify a chunk of contiguous queues */ +struct virtchnl_queue_chunk { + /* see enum virtchnl_queue_type */ + s32 type; + u16 start_queue_id; + u16 num_queues; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_queue_chunk); + +struct virtchnl_quanta_cfg { + u16 quanta_size; + struct virtchnl_queue_chunk queue_select; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_quanta_cfg); + #define __vss_byone(p, member, count, old) \ (struct_size(p, member, count) + (old - 1 - struct_size(p, member, 0))) @@ -1399,6 +1487,8 @@ VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_fdir_del); __vss(virtchnl_vlan_filter_list_v2, __vss_byelem, p, m, c), \ __vss(virtchnl_tc_info, __vss_byelem, p, m, c), \ __vss(virtchnl_rdma_qvlist_info, __vss_byelem, p, m, c), \ + __vss(virtchnl_qos_cap_list, __vss_byelem, p, m, c), \ + __vss(virtchnl_queues_bw_cfg,
__vss_byelem, p, m, c), \ __vss(virtchnl_rss_key, __vss_byone, p, m, c), \ __vss(virtchnl_rss_lut, __vss_byone, p, m, c)) @@ -1595,6 +1685,35 @@ virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode, case VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2: valid_len = sizeof(struct virtchnl_vlan_setting); break; + case VIRTCHNL_OP_GET_QOS_CAPS: + break; + case VIRTCHNL_OP_CONFIG_QUEUE_BW: + valid_len = virtchnl_queues_bw_cfg_LEGACY_SIZEOF; + if (msglen >= valid_len) { + struct virtchnl_queues_bw_cfg *q_bw = + (struct virtchnl_queues_bw_cfg *)msg; + + valid_len = virtchnl_struct_size(q_bw, cfg, + q_bw->num_queues); + if (q_bw->num_queues == 0) { + err_msg_format = true; + break; + } + } + break; + case VIRTCHNL_OP_CONFIG_QUANTA: + valid_len = sizeof(struct virtchnl_quanta_cfg); + if (msglen >= valid_len) { + struct virtchnl_quanta_cfg *q_quanta = + (struct virtchnl_quanta_cfg *)msg; + + if (q_quanta->quanta_size == 0 || + q_quanta->queue_select.num_queues == 0) { + err_msg_format = true; + break; + } + } + break; /* These are always errors coming from the VF. */ case VIRTCHNL_OP_EVENT: case VIRTCHNL_OP_UNKNOWN:

From patchwork Tue Aug 22 03:40:00 2023
X-Patchwork-Submitter: Wenjun Wu
X-Patchwork-Id: 13360131
X-Patchwork-Delegate: kuba@kernel.org
From: Wenjun Wu
To: intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org
Cc: xuejun.zhang@intel.com, madhu.chittim@intel.com, qi.z.zhang@intel.com, anthony.l.nguyen@intel.com, Wenjun Wu
Subject: [PATCH iwl-next v4 2/5] ice: Support VF queue rate limit and quanta size configuration
Date: Tue, 22 Aug 2023 11:40:00 +0800
Message-Id: <20230822034003.31628-3-wenjun1.wu@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230822034003.31628-1-wenjun1.wu@intel.com>
References:
<20230727021021.961119-1-wenjun1.wu@intel.com> <20230822034003.31628-1-wenjun1.wu@intel.com>

Add support to configure VF queue rate limit and quanta size. For quanta size configuration, the quanta profiles are divided evenly among the PFs. For each port, the first quanta profile is reserved as the default. When a VF asks to set a queue's quanta size, the PF searches for an available profile, updates its fields and assigns the profile to the queue.

Signed-off-by: Wenjun Wu --- drivers/net/ethernet/intel/ice/ice.h | 2 + drivers/net/ethernet/intel/ice/ice_base.c | 2 + drivers/net/ethernet/intel/ice/ice_common.c | 19 ++ .../net/ethernet/intel/ice/ice_hw_autogen.h | 8 + drivers/net/ethernet/intel/ice/ice_txrx.h | 2 + drivers/net/ethernet/intel/ice/ice_type.h | 1 + drivers/net/ethernet/intel/ice/ice_vf_lib.h | 9 + drivers/net/ethernet/intel/ice/ice_virtchnl.c | 310 ++++++++++++++++++ drivers/net/ethernet/intel/ice/ice_virtchnl.h | 11 + .../intel/ice/ice_virtchnl_allowlist.c | 6 + 10 files changed, 370 insertions(+) diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index cf6c961e8d9b..25cdf8623063 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -641,6 +641,8 @@ struct ice_pf { #define ICE_VF_AGG_NODE_ID_START 65 #define ICE_MAX_VF_AGG_NODES 32 struct ice_agg_node vf_agg_node[ICE_MAX_VF_AGG_NODES]; + + u8 num_quanta_prof_used; }; extern struct workqueue_struct *ice_lag_wq; diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c index 7fa43827a3f0..2b9319801dc3 100644 --- a/drivers/net/ethernet/intel/ice/ice_base.c +++ b/drivers/net/ethernet/intel/ice/ice_base.c @@ -377,6 +377,8 @@ ice_setup_tx_ctx(struct ice_tx_ring *ring, struct ice_tlan_ctx *tlan_ctx, u16 pf break; } + tlan_ctx->quanta_prof_idx = ring->quanta_prof_id; + tlan_ctx->tso_ena = ICE_TX_LEGACY; tlan_ctx->tso_qnum = pf_q; diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c index 2a19802847a5..86128ca1b7a5 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.c +++ b/drivers/net/ethernet/intel/ice/ice_common.c @@ -2463,6 +2463,23 @@ ice_parse_func_caps(struct ice_hw *hw, struct ice_hw_func_caps *func_p, ice_recalc_port_limited_caps(hw, &func_p->common_cap); } +/** + * ice_func_id_to_logical_id - map from function id to logical pf id + * @active_function_bitmap: active function bitmap + * @pf_id: function number of device + */ +static int ice_func_id_to_logical_id(u32 active_function_bitmap, u8 pf_id) +{ + u8 logical_id = 0; + u8 i; + + for (i = 0; i < pf_id; i++) + if (active_function_bitmap & BIT(i)) + logical_id++; + + return logical_id; +} + /** * ice_parse_valid_functions_cap - Parse ICE_AQC_CAPS_VALID_FUNCTIONS caps * @hw: pointer to the HW struct @@ -2480,6 +2497,8 @@ ice_parse_valid_functions_cap(struct ice_hw *hw, struct ice_hw_dev_caps *dev_p, dev_p->num_funcs = hweight32(number); ice_debug(hw, ICE_DBG_INIT, "dev caps: num_funcs = %d\n", dev_p->num_funcs); + + hw->logical_pf_id =
ice_func_id_to_logical_id(number, hw->pf_id); } /** diff --git a/drivers/net/ethernet/intel/ice/ice_hw_autogen.h b/drivers/net/ethernet/intel/ice/ice_hw_autogen.h index 6756f3d51d14..9da94e000394 100644 --- a/drivers/net/ethernet/intel/ice/ice_hw_autogen.h +++ b/drivers/net/ethernet/intel/ice/ice_hw_autogen.h @@ -6,6 +6,14 @@ #ifndef _ICE_HW_AUTOGEN_H_ #define _ICE_HW_AUTOGEN_H_ +#define GLCOMM_QUANTA_PROF(_i) (0x002D2D68 + ((_i) * 4)) +#define GLCOMM_QUANTA_PROF_MAX_INDEX 15 +#define GLCOMM_QUANTA_PROF_QUANTA_SIZE_S 0 +#define GLCOMM_QUANTA_PROF_QUANTA_SIZE_M ICE_M(0x3FFF, 0) +#define GLCOMM_QUANTA_PROF_MAX_CMD_S 16 +#define GLCOMM_QUANTA_PROF_MAX_CMD_M ICE_M(0xFF, 16) +#define GLCOMM_QUANTA_PROF_MAX_DESC_S 24 +#define GLCOMM_QUANTA_PROF_MAX_DESC_M ICE_M(0x3F, 24) #define QTX_COMM_DBELL(_DBQM) (0x002C0000 + ((_DBQM) * 4)) #define QTX_COMM_HEAD(_DBQM) (0x000E0000 + ((_DBQM) * 4)) #define QTX_COMM_HEAD_HEAD_S 0 diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h index 166413fc33f4..7e152ab5b727 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx.h +++ b/drivers/net/ethernet/intel/ice/ice_txrx.h @@ -381,6 +381,8 @@ struct ice_tx_ring { u8 flags; u8 dcb_tc; /* Traffic class of ring */ u8 ptp_tx; + + u16 quanta_prof_id; } ____cacheline_internodealigned_in_smp; static inline bool ice_ring_uses_build_skb(struct ice_rx_ring *ring) diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h index a5429eca4350..504b367f1c77 100644 --- a/drivers/net/ethernet/intel/ice/ice_type.h +++ b/drivers/net/ethernet/intel/ice/ice_type.h @@ -850,6 +850,7 @@ struct ice_hw { u8 revision_id; u8 pf_id; /* device profile info */ + u8 logical_pf_id; enum ice_phy_model phy_model; u16 max_burst_size; /* driver sets this value */ diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.h b/drivers/net/ethernet/intel/ice/ice_vf_lib.h index 48fea6fa0362..7fe81208c62c 100644 --- a/drivers/net/ethernet/intel/ice/ice_vf_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.h @@ -52,6 +52,13 @@ struct ice_mdd_vf_events { u16 last_printed; }; +struct ice_vf_qs_bw { + u32 committed; + u32 peak; + u16 queue_id; + u8 tc; +}; + /* VF operations */ struct ice_vf_ops { enum ice_disq_rst_src reset_type; @@ -133,6 +140,8 @@ struct ice_vf { /* devlink port data */ struct devlink_port devlink_port; + + struct ice_vf_qs_bw qs_bw[ICE_MAX_RSS_QS_PER_VF]; }; /* Flags for controlling behavior of ice_reset_vf */ diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c index b03426ac932b..b1b14377559e 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c @@ -495,6 +495,9 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg) if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_USO) vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_USO; + if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_QOS) + vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_QOS; + vfres->num_vsis = 1; /* Tx and Rx queue are equal for VF */ vfres->num_queue_pairs = vsi->num_txq; @@ -985,6 +988,172 @@ static int ice_vc_config_rss_lut(struct ice_vf *vf, u8 *msg) NULL, 0); } +/** + * ice_vc_get_qos_caps - Get current QoS caps from PF + * @vf: pointer to the VF info + * + * Get VF's QoS capabilities, such as TC number, arbiter and + * bandwidth from PF. 
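+ * + * The reply carries one virtchnl_qos_cap_elem per enabled TC: tc_prio is + * a bitmap of the user priorities that DCB maps to that TC, and the + * shaper fields report the committed (CIR) and peak (EIR) bandwidth of + * the VSI node.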
+ */ +static int ice_vc_get_qos_caps(struct ice_vf *vf) +{ + enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; + struct virtchnl_qos_cap_list *cap_list = NULL; + u8 tc_prio[ICE_MAX_TRAFFIC_CLASS] = { 0 }; + struct virtchnl_qos_cap_elem *cfg = NULL; + struct ice_vsi_ctx *vsi_ctx; + struct ice_pf *pf = vf->pf; + struct ice_port_info *pi; + struct ice_vsi *vsi; + u8 numtc, tc; + u16 len = 0; + int ret, i; + + if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto err; + } + + vsi = ice_get_vf_vsi(vf); + if (!vsi) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto err; + } + + pi = pf->hw.port_info; + numtc = vsi->tc_cfg.numtc; + + vsi_ctx = ice_get_vsi_ctx(pi->hw, vf->lan_vsi_idx); + if (!vsi_ctx) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto err; + } + + len = struct_size(cap_list, cap, numtc); + cap_list = kzalloc(len, GFP_KERNEL); + if (!cap_list) { + v_ret = VIRTCHNL_STATUS_ERR_NO_MEMORY; + len = 0; + goto err; + } + + cap_list->vsi_id = vsi->vsi_num; + cap_list->num_elem = numtc; + + /* Store the UP2TC configuration from DCB as a user priority bitmap + * for each TC. Each element of tc_prio represents one TC; its bitmap + * indicates which user priorities belong to this TC. + */ + for (i = 0; i < ICE_MAX_USER_PRIORITY; i++) { + tc = pi->qos_cfg.local_dcbx_cfg.etscfg.prio_table[i]; + tc_prio[tc] |= BIT(i); + } + + for (i = 0; i < numtc; i++) { + cfg = &cap_list->cap[i]; + cfg->tc_num = i; + cfg->tc_prio = tc_prio[i]; + cfg->arbiter = pi->qos_cfg.local_dcbx_cfg.etscfg.tsatable[i]; + cfg->weight = VIRTCHNL_STRICT_WEIGHT; + cfg->type = VIRTCHNL_BW_SHAPER; + cfg->shaper.committed = vsi_ctx->sched.bw_t_info[i].cir_bw.bw; + cfg->shaper.peak = vsi_ctx->sched.bw_t_info[i].eir_bw.bw; + } + +err: + ret = ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_GET_QOS_CAPS, v_ret, + (u8 *)cap_list, len); + kfree(cap_list); + return ret; +} + +/** + * ice_vf_cfg_qs_bw - Configure per queue bandwidth + * @vf: pointer to the VF info + * @num_queues: number of queues to be configured + * + * Configure per queue bandwidth. + */ +static int ice_vf_cfg_qs_bw(struct ice_vf *vf, u16 num_queues) +{ + struct ice_hw *hw = &vf->pf->hw; + struct ice_vsi *vsi; + int ret; + u16 i; + + vsi = ice_get_vf_vsi(vf); + if (!vsi) + return -EINVAL; + + for (i = 0; i < num_queues; i++) { + u32 p_rate; + u8 tc; + + p_rate = vf->qs_bw[i].peak; + tc = vf->qs_bw[i].tc; + if (p_rate) + ret = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx, tc, + vf->qs_bw[i].queue_id, + ICE_MAX_BW, p_rate); + else + ret = ice_cfg_q_bw_dflt_lmt(hw->port_info, vsi->idx, tc, + vf->qs_bw[i].queue_id, + ICE_MAX_BW); + if (ret) + return ret; + } + + return 0; +} + +/** + * ice_vf_cfg_q_quanta_profile - choose and configure a quanta profile + * @vf: pointer to the VF info + * @quanta_size: quanta size to be set + * @quanta_prof_idx: pointer to the quanta profile index + * + * This function chooses an available quanta profile and configures the + * register. The quanta profiles are divided evenly among the device's + * functions, and are then available to that PF and its VFs. The first + * profile of each PF's range is reserved as the default. Only the quanta + * size of the remaining unused profiles can be modified.
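+ * + * For example, with GLCOMM_QUANTA_PROF_MAX_INDEX 15 (16 profiles) and + * four active functions, each PF owns four consecutive profiles starting + * at 4 * hw->logical_pf_id; the first of those is the reserved default, + * leaving three that VF queues may be switched to.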
+ */ +static int ice_vf_cfg_q_quanta_profile(struct ice_vf *vf, u16 quanta_size, + u16 *quanta_prof_idx) +{ + const u16 n_desc = calc_quanta_desc(quanta_size); + struct ice_hw *hw = &vf->pf->hw; + const u16 n_cmd = 2 * n_desc; + struct ice_pf *pf = vf->pf; + u16 per_pf, begin_id; + u8 n_used; + u32 reg; + + begin_id = (GLCOMM_QUANTA_PROF_MAX_INDEX + 1) / hw->dev_caps.num_funcs * + hw->logical_pf_id; + + if (quanta_size == ICE_DFLT_QUANTA) { + *quanta_prof_idx = begin_id; + } else { + per_pf = (GLCOMM_QUANTA_PROF_MAX_INDEX + 1) / + hw->dev_caps.num_funcs; + n_used = pf->num_quanta_prof_used; + if (n_used < per_pf) { + *quanta_prof_idx = begin_id + 1 + n_used; + pf->num_quanta_prof_used++; + } else { + return -EINVAL; + } + } + + reg = FIELD_PREP(GLCOMM_QUANTA_PROF_QUANTA_SIZE_M, quanta_size) | + FIELD_PREP(GLCOMM_QUANTA_PROF_MAX_CMD_M, n_cmd) | + FIELD_PREP(GLCOMM_QUANTA_PROF_MAX_DESC_M, n_desc); + wr32(hw, GLCOMM_QUANTA_PROF(*quanta_prof_idx), reg); + + return 0; +} + /** * ice_vc_cfg_promiscuous_mode_msg * @vf: pointer to the VF info @@ -1587,6 +1756,132 @@ static int ice_vc_cfg_irq_map_msg(struct ice_vf *vf, u8 *msg) NULL, 0); } +/** + * ice_vc_cfg_q_bw - Configure per queue bandwidth + * @vf: pointer to the VF info + * @msg: pointer to the msg buffer which holds the command descriptor + * + * Configure VF queues bandwidth. + */ +static int ice_vc_cfg_q_bw(struct ice_vf *vf, u8 *msg) +{ + enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; + struct virtchnl_queues_bw_cfg *qbw = + (struct virtchnl_queues_bw_cfg *)msg; + struct ice_vsi *vsi; + u16 i; + + if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states) || + !ice_vc_isvalid_vsi_id(vf, qbw->vsi_id)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto err; + } + + vsi = ice_get_vf_vsi(vf); + if (!vsi || vsi->vsi_num != qbw->vsi_id) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto err; + } + + if (qbw->num_queues > ICE_MAX_RSS_QS_PER_VF || + qbw->num_queues > min_t(u16, vsi->alloc_txq, vsi->alloc_rxq)) { + dev_err(ice_pf_to_dev(vf->pf), "VF-%d trying to configure more than allocated number of queues: %d\n", + vf->vf_id, min_t(u16, vsi->alloc_txq, vsi->alloc_rxq)); + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto err; + } + + for (i = 0; i < qbw->num_queues; i++) { + if (qbw->cfg[i].shaper.peak != 0 && vf->max_tx_rate != 0 && + qbw->cfg[i].shaper.peak > vf->max_tx_rate) + dev_warn(ice_pf_to_dev(vf->pf), "The maximum queue %d rate limit configuration may not take effect because the maximum TX rate for VF-%d is %d\n", + qbw->cfg[i].queue_id, vf->vf_id, vf->max_tx_rate); + if (qbw->cfg[i].shaper.committed != 0 && vf->min_tx_rate != 0 && + qbw->cfg[i].shaper.committed < vf->min_tx_rate) + dev_warn(ice_pf_to_dev(vf->pf), "The minimum queue %d rate limit configuration may not take effect because the minimum TX rate for VF-%d is %d\n", + qbw->cfg[i].queue_id, vf->vf_id, vf->min_tx_rate); + } + + for (i = 0; i < qbw->num_queues; i++) { + vf->qs_bw[i].queue_id = qbw->cfg[i].queue_id; + vf->qs_bw[i].peak = qbw->cfg[i].shaper.peak; + vf->qs_bw[i].committed = qbw->cfg[i].shaper.committed; + vf->qs_bw[i].tc = qbw->cfg[i].tc; + } + +err: + /* send the response to the VF */ + return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_QUEUE_BW, + v_ret, NULL, 0); +} + +/** + * ice_vc_cfg_q_quanta - Configure per queue quanta + * @vf: pointer to the VF info + * @msg: pointer to the msg buffer which holds the command descriptor + * + * Configure VF queues quanta.
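+ * + * The requested quanta_size must be a multiple of 64 in the range + * [ICE_MIN_QUANTA_SIZE, ICE_MAX_QUANTA_SIZE]; every Tx queue in the + * selected chunk is then switched to the newly programmed quanta profile.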
+ */ +static int ice_vc_cfg_q_quanta(struct ice_vf *vf, u8 *msg) +{ + u16 quanta_prof_id, quanta_size, start_qid, end_qid, i; + enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; + struct virtchnl_quanta_cfg *qquanta = + (struct virtchnl_quanta_cfg *)msg; + struct ice_vsi *vsi; + int ret; + + if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto err; + } + + vsi = ice_get_vf_vsi(vf); + if (!vsi) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto err; + } + + end_qid = qquanta->queue_select.start_queue_id + + qquanta->queue_select.num_queues; + if (end_qid > ICE_MAX_RSS_QS_PER_VF || + end_qid > min_t(u16, vsi->alloc_txq, vsi->alloc_rxq)) { + dev_err(ice_pf_to_dev(vf->pf), "VF-%d trying to configure more than allocated number of queues: %d\n", + vf->vf_id, min_t(u16, vsi->alloc_txq, vsi->alloc_rxq)); + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto err; + } + + quanta_size = qquanta->quanta_size; + if (quanta_size > ICE_MAX_QUANTA_SIZE || + quanta_size < ICE_MIN_QUANTA_SIZE) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto err; + } + + if (quanta_size % 64) { + dev_err(ice_pf_to_dev(vf->pf), "quanta size must be a multiple of 64\n"); + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto err; + } + + ret = ice_vf_cfg_q_quanta_profile(vf, quanta_size, + &quanta_prof_id); + if (ret) { + v_ret = VIRTCHNL_STATUS_ERR_NOT_SUPPORTED; + goto err; + } + + start_qid = qquanta->queue_select.start_queue_id; + for (i = start_qid; i < end_qid; i++) + vsi->tx_rings[i]->quanta_prof_id = quanta_prof_id; + +err: + /* send the response to the VF */ + return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_QUANTA, + v_ret, NULL, 0); +} + /** * ice_vc_cfg_qs_msg * @vf: pointer to the VF info @@ -1710,6 +2005,9 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg) } } + if (ice_vf_cfg_qs_bw(vf, qci->num_queue_pairs)) + goto error_param; + /* send the response to the VF */ return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_VSI_QUEUES, VIRTCHNL_STATUS_SUCCESS, NULL, 0); @@ -3687,6 +3985,9 @@ static const struct ice_virtchnl_ops ice_virtchnl_dflt_ops = { .dis_vlan_stripping_v2_msg = ice_vc_dis_vlan_stripping_v2_msg, .ena_vlan_insertion_v2_msg = ice_vc_ena_vlan_insertion_v2_msg, .dis_vlan_insertion_v2_msg = ice_vc_dis_vlan_insertion_v2_msg, + .get_qos_caps = ice_vc_get_qos_caps, + .cfg_q_bw = ice_vc_cfg_q_bw, + .cfg_q_quanta = ice_vc_cfg_q_quanta, }; /** @@ -4039,6 +4340,15 @@ void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event, case VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2: err = ops->dis_vlan_insertion_v2_msg(vf, msg); break; + case VIRTCHNL_OP_GET_QOS_CAPS: + err = ops->get_qos_caps(vf); + break; + case VIRTCHNL_OP_CONFIG_QUEUE_BW: + err = ops->cfg_q_bw(vf, msg); + break; + case VIRTCHNL_OP_CONFIG_QUANTA: + err = ops->cfg_q_quanta(vf, msg); + break; case VIRTCHNL_OP_UNKNOWN: default: dev_err(dev, "Unsupported opcode %d from VF %d\n", v_opcode, diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.h b/drivers/net/ethernet/intel/ice/ice_virtchnl.h index cd747718de73..0efb9c0f669a 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl.h +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.h @@ -13,6 +13,13 @@ /* Restrict number of MAC Addr and VLAN that non-trusted VF can programmed */ #define ICE_MAX_VLAN_PER_VF 8 +#define ICE_DFLT_QUANTA 1024 +#define ICE_MAX_QUANTA_SIZE 4096 +#define ICE_MIN_QUANTA_SIZE 256 + +#define calc_quanta_desc(x) \ + max_t(u16, 12, min_t(u16, 63, (((x) + 66) / 132) * 2 + 4)) + /* MAC filters: 1 is reserved for the VF's
default/perm_addr/LAA MAC, 1 for * broadcast, and 16 for additional unicast/multicast filters */ @@ -51,6 +58,10 @@ struct ice_virtchnl_ops { int (*dis_vlan_stripping_v2_msg)(struct ice_vf *vf, u8 *msg); int (*ena_vlan_insertion_v2_msg)(struct ice_vf *vf, u8 *msg); int (*dis_vlan_insertion_v2_msg)(struct ice_vf *vf, u8 *msg); + int (*get_qos_caps)(struct ice_vf *vf); + int (*cfg_q_tc_map)(struct ice_vf *vf, u8 *msg); + int (*cfg_q_bw)(struct ice_vf *vf, u8 *msg); + int (*cfg_q_quanta)(struct ice_vf *vf, u8 *msg); }; #ifdef CONFIG_PCI_IOV diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c index 7d547fa616fa..2e3f63a429cd 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c @@ -85,6 +85,11 @@ static const u32 fdir_pf_allowlist_opcodes[] = { VIRTCHNL_OP_ADD_FDIR_FILTER, VIRTCHNL_OP_DEL_FDIR_FILTER, }; +static const u32 tc_allowlist_opcodes[] = { + VIRTCHNL_OP_GET_QOS_CAPS, VIRTCHNL_OP_CONFIG_QUEUE_BW, + VIRTCHNL_OP_CONFIG_QUANTA, +}; + struct allowlist_opcode_info { const u32 *opcodes; size_t size; @@ -105,6 +110,7 @@ static const struct allowlist_opcode_info allowlist_opcodes[] = { ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_ADV_RSS_PF, adv_rss_pf_allowlist_opcodes), ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_FDIR_PF, fdir_pf_allowlist_opcodes), ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_VLAN_V2, vlan_v2_allowlist_opcodes), + ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_QOS, tc_allowlist_opcodes), }; /**

From patchwork Tue Aug 22 03:40:01 2023
X-Patchwork-Submitter: Wenjun Wu
X-Patchwork-Id: 13360132
X-Patchwork-Delegate: kuba@kernel.org
From: Wenjun Wu
To: intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org
Cc: xuejun.zhang@intel.com,
madhu.chittim@intel.com, qi.z.zhang@intel.com, anthony.l.nguyen@intel.com
Subject: [PATCH iwl-next v4 3/5] iavf: Add devlink and devlink port support
Date: Tue, 22 Aug 2023 11:40:01 +0800
Message-Id: <20230822034003.31628-4-wenjun1.wu@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230822034003.31628-1-wenjun1.wu@intel.com>
References: <20230727021021.961119-1-wenjun1.wu@intel.com> <20230822034003.31628-1-wenjun1.wu@intel.com>

From: Jun Zhang

To allow the user to configure queue bandwidth, add devlink port support to hook into the devlink port rate API. Register/unregister the devlink framework on iavf driver initialization and removal, and create a devlink port of DEVLINK_PORT_FLAVOUR_VIRTUAL associated with the iavf net device.

Signed-off-by: Jun Zhang --- drivers/net/ethernet/intel/Kconfig | 1 + drivers/net/ethernet/intel/iavf/Makefile | 2 +- drivers/net/ethernet/intel/iavf/iavf.h | 5 + .../net/ethernet/intel/iavf/iavf_devlink.c | 94 +++++++++++++++++++ .../net/ethernet/intel/iavf/iavf_devlink.h | 16 ++++ drivers/net/ethernet/intel/iavf/iavf_main.c | 17 ++++ 6 files changed, 134 insertions(+), 1 deletion(-) create mode 100644 drivers/net/ethernet/intel/iavf/iavf_devlink.c create mode 100644 drivers/net/ethernet/intel/iavf/iavf_devlink.h diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig index 9bc0a9519899..f916b8ef6acb 100644 --- a/drivers/net/ethernet/intel/Kconfig +++ b/drivers/net/ethernet/intel/Kconfig @@ -256,6 +256,7 @@ config I40EVF tristate "Intel(R) Ethernet Adaptive Virtual Function support" select IAVF depends on PCI_MSI + select NET_DEVLINK help This driver supports virtual functions for Intel XL710, X710, X722, XXV710, and all devices advertising support for diff --git a/drivers/net/ethernet/intel/iavf/Makefile b/drivers/net/ethernet/intel/iavf/Makefile index 9c3e45c54d01..b5d7db97ab8b 100644 --- a/drivers/net/ethernet/intel/iavf/Makefile +++ b/drivers/net/ethernet/intel/iavf/Makefile @@ -12,5 +12,5 @@ subdir-ccflags-y += -I$(src) obj-$(CONFIG_IAVF) += iavf.o iavf-objs := iavf_main.o iavf_ethtool.o iavf_virtchnl.o iavf_fdir.o \ - iavf_adv_rss.o \ + iavf_adv_rss.o iavf_devlink.o \ iavf_txrx.o iavf_common.o iavf_adminq.o iavf_client.o diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h index 85fba85fbb23..72a68061e396 100644 --- a/drivers/net/ethernet/intel/iavf/iavf.h +++ b/drivers/net/ethernet/intel/iavf/iavf.h @@ -33,9 +33,11 @@ #include #include #include +#include #include "iavf_type.h" #include +#include "iavf_devlink.h" #include "iavf_txrx.h" #include "iavf_fdir.h" #include "iavf_adv_rss.h" @@ -369,6 +371,9 @@ struct iavf_adapter { struct net_device *netdev; struct pci_dev *pdev; + struct devlink *devlink; + struct devlink_port devlink_port; + struct iavf_hw hw; /* defined in iavf_type.h */ enum iavf_state_t state; diff --git a/drivers/net/ethernet/intel/iavf/iavf_devlink.c b/drivers/net/ethernet/intel/iavf/iavf_devlink.c new file mode 100644 index 000000000000..1cace56e3f56 --- /dev/null +++
b/drivers/net/ethernet/intel/iavf/iavf_devlink.c @@ -0,0 +1,94 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2023 Intel Corporation */ + +#include "iavf.h" +#include "iavf_devlink.h" + +static const struct devlink_ops iavf_devlink_ops = {}; + +/** + * iavf_devlink_register - Register devlink interface for this VF adapter + * @adapter: the iavf adapter to register the devlink for. + * + * Allocate a devlink instance for this VF, and register the devlink + * instance associated with this VF adapter + * + * Return: zero on success or an error code on failure. + */ +int iavf_devlink_register(struct iavf_adapter *adapter) +{ + struct device *dev = &adapter->pdev->dev; + struct iavf_devlink *ref; + struct devlink *devlink; + + devlink = devlink_alloc(&iavf_devlink_ops, sizeof(struct iavf_devlink), + dev); + adapter->devlink = devlink; + if (!devlink) + return -ENOMEM; + + ref = devlink_priv(devlink); + ref->devlink_ref = adapter; + + devlink_register(devlink); + + return 0; +} + +/** + * iavf_devlink_unregister - Unregister devlink resources for iavf adapter. + * @adapter: the iavf adapter structure + * + * Releases resources used by devlink and cleans up associated memory. + */ +void iavf_devlink_unregister(struct iavf_adapter *adapter) +{ + if (!adapter->devlink) + return; + + devlink_unregister(adapter->devlink); + devlink_free(adapter->devlink); +} + +/** + * iavf_devlink_port_register - Register devlink port for iavf adapter + * @adapter: the iavf adapter to register the devlink port for. + * + * Register the devlink port instance associated with this iavf adapter + * before the iavf adapter registers with the netdevice + * + * Return: zero on success or an error code on failure. + */ +int iavf_devlink_port_register(struct iavf_adapter *adapter) +{ + struct device *dev = &adapter->pdev->dev; + struct devlink_port_attrs attrs = {}; + int err; + + SET_NETDEV_DEVLINK_PORT(adapter->netdev, &adapter->devlink_port); + attrs.flavour = DEVLINK_PORT_FLAVOUR_VIRTUAL; + memset(&adapter->devlink_port, 0, sizeof(adapter->devlink_port)); + devlink_port_attrs_set(&adapter->devlink_port, &attrs); + + /* Register with driver specific index (device id) */ + err = devlink_port_register(adapter->devlink, &adapter->devlink_port, + adapter->hw.bus.device); + if (err) + dev_err(dev, "devlink port registration failed: %d\n", err); + + return err; +} + +/** + * iavf_devlink_port_unregister - Unregister devlink port for iavf adapter. + * @adapter: the iavf adapter structure + * + * Releases resources used by the devlink port and unregisters it from + * devlink.
+ */ +void iavf_devlink_port_unregister(struct iavf_adapter *adapter) +{ + if (!adapter->devlink_port.registered) + return; + + devlink_port_unregister(&adapter->devlink_port); +} diff --git a/drivers/net/ethernet/intel/iavf/iavf_devlink.h b/drivers/net/ethernet/intel/iavf/iavf_devlink.h new file mode 100644 index 000000000000..65e453bbd1a8 --- /dev/null +++ b/drivers/net/ethernet/intel/iavf/iavf_devlink.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2023 Intel Corporation */ + +#ifndef _IAVF_DEVLINK_H_ +#define _IAVF_DEVLINK_H_ + +struct iavf_devlink { + struct iavf_adapter *devlink_ref; /* ref to iavf adapter */ +}; + +int iavf_devlink_register(struct iavf_adapter *adapter); +void iavf_devlink_unregister(struct iavf_adapter *adapter); +int iavf_devlink_port_register(struct iavf_adapter *adapter); +void iavf_devlink_port_unregister(struct iavf_adapter *adapter); + +#endif /* _IAVF_DEVLINK_H_ */ diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c index b23ca9d80189..3a93d0cac60c 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_main.c +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c @@ -2038,6 +2038,7 @@ static void iavf_finish_config(struct work_struct *work) iavf_free_rss(adapter); iavf_free_misc_irq(adapter); iavf_reset_interrupt_capability(adapter); + iavf_devlink_port_unregister(adapter); iavf_change_state(adapter, __IAVF_INIT_CONFIG_ADAPTER); goto out; @@ -2709,6 +2710,9 @@ static void iavf_init_config_adapter(struct iavf_adapter *adapter) if (err) goto err_sw_init; + if (!adapter->netdev_registered) + iavf_devlink_port_register(adapter); + netif_carrier_off(netdev); adapter->link_up = false; netif_tx_stop_all_queues(netdev); @@ -2750,6 +2754,7 @@ static void iavf_init_config_adapter(struct iavf_adapter *adapter) err_mem: iavf_free_rss(adapter); iavf_free_misc_irq(adapter); + iavf_devlink_port_unregister(adapter); err_sw_init: iavf_reset_interrupt_capability(adapter); err: @@ -4960,6 +4965,13 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) hw->bus.func = PCI_FUNC(pdev->devfn); hw->bus.bus_id = pdev->bus->number; + /* Register iavf adapter with devlink */ + err = iavf_devlink_register(adapter); + if (err) { + dev_err(&pdev->dev, "devlink registration failed: %d\n", err); + goto err_devlink_reg; + } + /* set up the locks for the AQ, do this only once in probe * and destroy them only once in remove */ @@ -4998,6 +5010,8 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) return 0; +err_devlink_reg: + iounmap(hw->hw_addr); err_ioremap: destroy_workqueue(adapter->wq); err_alloc_wq: @@ -5140,6 +5154,9 @@ static void iavf_remove(struct pci_dev *pdev) err); } + iavf_devlink_port_unregister(adapter); + iavf_devlink_unregister(adapter); + mutex_lock(&adapter->crit_lock); dev_info(&adapter->pdev->dev, "Removing device\n"); iavf_change_state(adapter, __IAVF_REMOVE);

From patchwork Tue Aug 22 03:40:02 2023
X-Patchwork-Submitter: Wenjun Wu
X-Patchwork-Id: 13360133
X-Patchwork-Delegate: kuba@kernel.org
From: Wenjun Wu
To: intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org
Cc: xuejun.zhang@intel.com, madhu.chittim@intel.com, qi.z.zhang@intel.com, anthony.l.nguyen@intel.com
Subject: [PATCH iwl-next v4 4/5] iavf: Add devlink port function rate API support
Date: Tue, 22 Aug 2023 11:40:02 +0800
Message-Id: <20230822034003.31628-5-wenjun1.wu@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230822034003.31628-1-wenjun1.wu@intel.com>
References: <20230727021021.961119-1-wenjun1.wu@intel.com> <20230822034003.31628-1-wenjun1.wu@intel.com>

From: Jun Zhang

To allow the user to configure queue-based parameters, add devlink port function rate API callbacks for setting node tx_max and tx_share parameters. An iavf rate tree with a root node and queue nodes is created and registered with devlink rate when the iavf adapter is configured.

Signed-off-by: Jun Zhang --- .../net/ethernet/intel/iavf/iavf_devlink.c | 258 +++++++++++++++++- .../net/ethernet/intel/iavf/iavf_devlink.h | 21 ++ drivers/net/ethernet/intel/iavf/iavf_main.c | 7 +- 3 files changed, 283 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/intel/iavf/iavf_devlink.c b/drivers/net/ethernet/intel/iavf/iavf_devlink.c index 1cace56e3f56..732076c2126f 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_devlink.c +++ b/drivers/net/ethernet/intel/iavf/iavf_devlink.c @@ -4,7 +4,261 @@ #include "iavf.h" #include "iavf_devlink.h" -static const struct devlink_ops iavf_devlink_ops = {}; +/** + * iavf_devlink_rate_init_rate_tree - export rate tree to devlink rate + * @adapter: iavf adapter struct instance + * + * This function builds the rate tree based on the iavf adapter configuration + * and exports its contents to devlink rate.
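+ * + * The resulting tree has a single root node named "iavf_root" whose + * tx_max tracks the adapter link speed, with one "txq_<i>" node per + * active queue chained under it.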
+ */ +void iavf_devlink_rate_init_rate_tree(struct iavf_adapter *adapter) +{ + struct iavf_devlink *dl_priv = devlink_priv(adapter->devlink); + struct iavf_dev_rate_node *iavf_r_node; + struct iavf_dev_rate_node *iavf_q_node; + struct devlink_rate *dl_root_node; + struct devlink_rate *dl_tmp_node; + int q_num; + + if (!adapter->devlink_port.registered) + return; + + iavf_r_node = &dl_priv->root_node; + memset(iavf_r_node, 0, sizeof(*iavf_r_node)); + iavf_r_node->tx_max = adapter->link_speed; + strscpy(iavf_r_node->name, "iavf_root", IAVF_RATE_NODE_NAME); + + devl_lock(adapter->devlink); + dl_root_node = devl_rate_node_create(adapter->devlink, iavf_r_node, + iavf_r_node->name, NULL); + if (!dl_root_node || IS_ERR(dl_root_node)) + goto err_node; + + iavf_r_node->rate_node = dl_root_node; + + /* Allocate queue nodes, and chain them under root */ + q_num = adapter->num_active_queues; + if (q_num > 0) { + int i; + + dl_priv->queue_nodes = + kcalloc(q_num, sizeof(struct iavf_dev_rate_node), + GFP_KERNEL); + if (!dl_priv->queue_nodes) + goto err_node; + + for (i = 0; i < q_num; ++i) { + iavf_q_node = &dl_priv->queue_nodes[i]; + snprintf(iavf_q_node->name, IAVF_RATE_NODE_NAME, + "txq_%d", i); + dl_tmp_node = devl_rate_node_create(adapter->devlink, + iavf_q_node, + iavf_q_node->name, + dl_root_node); + if (!dl_tmp_node || IS_ERR(dl_tmp_node)) { + kfree(dl_priv->queue_nodes); + goto err_node; + } + + iavf_q_node->rate_node = dl_tmp_node; + iavf_q_node->tx_max = IAVF_TX_DEFAULT; + iavf_q_node->tx_share = 0; + } + } + + dl_priv->update_in_progress = false; + dl_priv->iavf_dev_rate_initialized = true; + devl_unlock(adapter->devlink); + return; +err_node: + devl_rate_nodes_destroy(adapter->devlink); + dl_priv->iavf_dev_rate_initialized = false; + devl_unlock(adapter->devlink); +} + +/** + * iavf_devlink_rate_deinit_rate_tree - Unregister rate tree with devlink rate + * @adapter: iavf adapter struct instance + * + * This function unregisters the current iavf rate tree registered with devlink + * rate and frees resources. 
+ */ +void iavf_devlink_rate_deinit_rate_tree(struct iavf_adapter *adapter) +{ + struct iavf_devlink *dl_priv = devlink_priv(adapter->devlink); + + if (!dl_priv->iavf_dev_rate_initialized) + return; + + devl_lock(adapter->devlink); + devl_rate_leaf_destroy(&adapter->devlink_port); + devl_rate_nodes_destroy(adapter->devlink); + kfree(dl_priv->queue_nodes); + devl_unlock(adapter->devlink); +} + +/** + * iavf_check_update_config - check if a queue configuration update is needed + * @adapter: iavf adapter struct instance + * @node: iavf rate node struct instance + * + * This function sets the queue bw & quanta size configuration if all + * queue parameters have been set + */ +static int iavf_check_update_config(struct iavf_adapter *adapter, + struct iavf_dev_rate_node *node) +{ + /* Update queue bw if any one of the queues has been fully updated by + * the user; the other queues either use the default value or the last + * fully updated value + */ + if (node->tx_update_flag != + (IAVF_FLAG_TX_MAX_UPDATED | IAVF_FLAG_TX_SHARE_UPDATED)) + return 0; + + node->tx_max = node->tx_max_temp; + node->tx_share = node->tx_share_temp; + + /* Reconfigure queue bw only when the iavf driver is in the running state */ + if (adapter->state != __IAVF_RUNNING) + return -EBUSY; + + return 0; +} + +/** + * iavf_update_queue_tx_share - sets tx min parameter + * @adapter: iavf adapter struct instance + * @node: iavf rate node struct instance + * @bw: bandwidth in bytes per second + * @extack: extended netdev ack structure + * + * This function sets min BW limit. + */ +static int iavf_update_queue_tx_share(struct iavf_adapter *adapter, + struct iavf_dev_rate_node *node, + u64 bw, struct netlink_ext_ack *extack) +{ + struct iavf_devlink *dl_priv = devlink_priv(adapter->devlink); + u64 tx_share_sum = 0; + + /* Keep in kbps */ + node->tx_share_temp = div_u64(bw, IAVF_RATE_DIV_FACTOR); + + if (ADV_LINK_SUPPORT(adapter)) { + int i; + + for (i = 0; i < adapter->num_active_queues; ++i) { + if (node != &dl_priv->queue_nodes[i]) + tx_share_sum += + dl_priv->queue_nodes[i].tx_share; + else + tx_share_sum += node->tx_share_temp; + } + + if (tx_share_sum / 1000 > adapter->link_speed_mbps) + return -EINVAL; + } + + node->tx_update_flag |= IAVF_FLAG_TX_SHARE_UPDATED; + return iavf_check_update_config(adapter, node); +} + +/** + * iavf_update_queue_tx_max - sets tx max parameter + * @adapter: iavf adapter struct instance + * @node: iavf rate node struct instance + * @bw: bandwidth in bytes per second + * @extack: extended netdev ack structure + * + * This function sets max BW limit.
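+ * + * Devlink passes @bw in bytes per second; dividing by + * IAVF_RATE_DIV_FACTOR (125) converts it to kbps, since 1 kbit/s is + * 125 bytes/s. When ADV_LINK_SUPPORT is available, the result is also + * checked against the negotiated link speed.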
+ */ +static int iavf_update_queue_tx_max(struct iavf_adapter *adapter, + struct iavf_dev_rate_node *node, + u64 bw, struct netlink_ext_ack *extack) +{ + /* Keep in kbps */ + node->tx_max_temp = div_u64(bw, IAVF_RATE_DIV_FACTOR); + if (ADV_LINK_SUPPORT(adapter)) { + if (node->tx_max_temp / 1000 > adapter->link_speed_mbps) + return -EINVAL; + } + + node->tx_update_flag |= IAVF_FLAG_TX_MAX_UPDATED; + + return iavf_check_update_config(adapter, node); +} + +static int iavf_devlink_rate_node_tx_max_set(struct devlink_rate *rate_node, + void *priv, u64 tx_max, + struct netlink_ext_ack *extack) +{ + struct iavf_dev_rate_node *node = priv; + struct iavf_devlink *dl_priv; + struct iavf_adapter *adapter; + + if (!node) + return 0; + + dl_priv = devlink_priv(rate_node->devlink); + adapter = dl_priv->devlink_ref; + + /* Check if last update is in progress */ + if (dl_priv->update_in_progress) + return -EBUSY; + + if (node == &dl_priv->root_node) + return 0; + + return iavf_update_queue_tx_max(adapter, node, tx_max, extack); +} + +static int iavf_devlink_rate_node_tx_share_set(struct devlink_rate *rate_node, + void *priv, u64 tx_share, + struct netlink_ext_ack *extack) +{ + struct iavf_dev_rate_node *node = priv; + struct iavf_devlink *dl_priv; + struct iavf_adapter *adapter; + + if (!node) + return 0; + + dl_priv = devlink_priv(rate_node->devlink); + adapter = dl_priv->devlink_ref; + + /* Check if last update is in progress */ + if (dl_priv->update_in_progress) + return -EBUSY; + + if (node == &dl_priv->root_node) + return 0; + + return iavf_update_queue_tx_share(adapter, node, tx_share, extack); +} + +static int iavf_devlink_rate_node_del(struct devlink_rate *rate_node, + void *priv, + struct netlink_ext_ack *extack) +{ + return -EINVAL; +} + +static int iavf_devlink_set_parent(struct devlink_rate *devlink_rate, + struct devlink_rate *parent, + void *priv, void *parent_priv, + struct netlink_ext_ack *extack) +{ + return -EINVAL; +} + +static const struct devlink_ops iavf_devlink_ops = { + .rate_node_tx_share_set = iavf_devlink_rate_node_tx_share_set, + .rate_node_tx_max_set = iavf_devlink_rate_node_tx_max_set, + .rate_node_del = iavf_devlink_rate_node_del, + .rate_leaf_parent_set = iavf_devlink_set_parent, + .rate_node_parent_set = iavf_devlink_set_parent, +}; /** * iavf_devlink_register - Register devlink interface for this VF adapter @@ -29,7 +283,7 @@ int iavf_devlink_register(struct iavf_adapter *adapter) ref = devlink_priv(devlink); ref->devlink_ref = adapter; - + ref->iavf_dev_rate_initialized = false; devlink_register(devlink); return 0; diff --git a/drivers/net/ethernet/intel/iavf/iavf_devlink.h b/drivers/net/ethernet/intel/iavf/iavf_devlink.h index 65e453bbd1a8..751e9e093ab1 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_devlink.h +++ b/drivers/net/ethernet/intel/iavf/iavf_devlink.h @@ -4,13 +4,34 @@ #ifndef _IAVF_DEVLINK_H_ #define _IAVF_DEVLINK_H_ +#define IAVF_RATE_NODE_NAME 12 +struct iavf_dev_rate_node { + char name[IAVF_RATE_NODE_NAME]; + struct devlink_rate *rate_node; + u8 tx_update_flag; +#define IAVF_FLAG_TX_SHARE_UPDATED BIT(0) +#define IAVF_FLAG_TX_MAX_UPDATED BIT(1) + u64 tx_max; + u64 tx_share; + u64 tx_max_temp; + u64 tx_share_temp; +#define IAVF_RATE_DIV_FACTOR 125 +#define IAVF_TX_DEFAULT 100000 +}; + struct iavf_devlink { struct iavf_adapter *devlink_ref; /* ref to iavf adapter */ + struct iavf_dev_rate_node root_node; + struct iavf_dev_rate_node *queue_nodes; + bool iavf_dev_rate_initialized; + bool update_in_progress; }; int iavf_devlink_register(struct iavf_adapter 
*adapter); void iavf_devlink_unregister(struct iavf_adapter *adapter); int iavf_devlink_port_register(struct iavf_adapter *adapter); void iavf_devlink_port_unregister(struct iavf_adapter *adapter); +void iavf_devlink_rate_init_rate_tree(struct iavf_adapter *adapter); +void iavf_devlink_rate_deinit_rate_tree(struct iavf_adapter *adapter); #endif /* _IAVF_DEVLINK_H_ */ diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c index 3a93d0cac60c..699c6375200a 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_main.c +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c @@ -2038,6 +2038,7 @@ static void iavf_finish_config(struct work_struct *work) iavf_free_rss(adapter); iavf_free_misc_irq(adapter); iavf_reset_interrupt_capability(adapter); + iavf_devlink_rate_deinit_rate_tree(adapter); iavf_devlink_port_unregister(adapter); iavf_change_state(adapter, __IAVF_INIT_CONFIG_ADAPTER); @@ -2710,8 +2711,10 @@ static void iavf_init_config_adapter(struct iavf_adapter *adapter) if (err) goto err_sw_init; - if (!adapter->netdev_registered) + if (!adapter->netdev_registered) { iavf_devlink_port_register(adapter); + iavf_devlink_rate_init_rate_tree(adapter); + } netif_carrier_off(netdev); adapter->link_up = false; @@ -2754,6 +2757,7 @@ static void iavf_init_config_adapter(struct iavf_adapter *adapter) err_mem: iavf_free_rss(adapter); iavf_free_misc_irq(adapter); + iavf_devlink_rate_deinit_rate_tree(adapter); iavf_devlink_port_unregister(adapter); err_sw_init: iavf_reset_interrupt_capability(adapter); @@ -5154,6 +5158,7 @@ static void iavf_remove(struct pci_dev *pdev) err); } + iavf_devlink_rate_deinit_rate_tree(adapter); iavf_devlink_port_unregister(adapter); iavf_devlink_unregister(adapter);

From patchwork Tue Aug 22 03:40:03 2023
X-Patchwork-Submitter: Wenjun Wu
X-Patchwork-Id: 13360134
X-Patchwork-Delegate: kuba@kernel.org
From: Wenjun Wu
To: intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org
Cc: xuejun.zhang@intel.com, madhu.chittim@intel.com, qi.z.zhang@intel.com, anthony.l.nguyen@intel.com
Subject: [PATCH iwl-next v4 5/5] iavf: Add VIRTCHNL Opcodes Support for Queue bw Setting
Date: Tue, 22 Aug 2023 11:40:03 +0800
Message-Id: <20230822034003.31628-6-wenjun1.wu@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230822034003.31628-1-wenjun1.wu@intel.com>
References: <20230727021021.961119-1-wenjun1.wu@intel.com> <20230822034003.31628-1-wenjun1.wu@intel.com>

From: Jun Zhang

The iavf rate tree with root node and queue nodes is created and registered with devlink rate when the iavf adapter is configured. The user can configure the tx_max and tx_share of each queue. Once any queue has been fully updated by the user, i.e. both its tx_max and tx_share have been set, the VIRTCHNL_OP_CONFIG_QUEUE_BW and VIRTCHNL_OP_CONFIG_QUANTA opcodes are sent to the PF to configure the queues allocated to the VF, provided the PF indicated support for VIRTCHNL_VF_OFFLOAD_QOS through the VF resource / capability exchange.

Signed-off-by: Jun Zhang --- drivers/net/ethernet/intel/iavf/iavf.h | 14 ++ .../net/ethernet/intel/iavf/iavf_devlink.c | 29 +++ .../net/ethernet/intel/iavf/iavf_devlink.h | 1 + drivers/net/ethernet/intel/iavf/iavf_main.c | 46 +++- .../net/ethernet/intel/iavf/iavf_virtchnl.c | 231 +++++++++++++++++- 5 files changed, 317 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h index 72a68061e396..c04cd1d45be7 100644 --- a/drivers/net/ethernet/intel/iavf/iavf.h +++ b/drivers/net/ethernet/intel/iavf/iavf.h @@ -252,6 +252,9 @@ struct iavf_cloud_filter { #define IAVF_RESET_WAIT_DETECTED_COUNT 500 #define IAVF_RESET_WAIT_COMPLETE_COUNT 2000 +#define IAVF_MAX_QOS_TC_NUM 8 +#define IAVF_DEFAULT_QUANTA_SIZE 1024 + /* board specific private data structure */ struct iavf_adapter { struct workqueue_struct *wq; @@ -351,6 +354,9 @@ struct iavf_adapter { #define IAVF_FLAG_AQ_DISABLE_CTAG_VLAN_INSERTION BIT_ULL(36) #define IAVF_FLAG_AQ_ENABLE_STAG_VLAN_INSERTION BIT_ULL(37) #define IAVF_FLAG_AQ_DISABLE_STAG_VLAN_INSERTION BIT_ULL(38) +#define IAVF_FLAG_AQ_CONFIGURE_QUEUES_BW BIT_ULL(39) +#define IAVF_FLAG_AQ_CONFIGURE_QUEUES_QUANTA_SIZE BIT_ULL(40) +#define IAVF_FLAG_AQ_GET_QOS_CAPS BIT_ULL(41) /* flags for processing extended capability messages during * __IAVF_INIT_EXTENDED_CAPS. Each capability exchange requires @@ -373,6 +379,7 @@ struct iavf_adapter { struct devlink *devlink; struct devlink_port devlink_port; + bool devlink_update; struct iavf_hw hw; /* defined in iavf_type.h */ @@ -422,6 +429,8 @@ struct iavf_adapter { VIRTCHNL_VF_OFFLOAD_FDIR_PF) #define ADV_RSS_SUPPORT(_a) ((_a)->vf_res->vf_cap_flags & \ VIRTCHNL_VF_OFFLOAD_ADV_RSS_PF) +#define QOS_ALLOWED(_a) ((_a)->vf_res->vf_cap_flags & \ + VIRTCHNL_VF_OFFLOAD_QOS) struct virtchnl_vf_resource *vf_res; /* incl.
@@ -430,6 +439,7 @@ struct iavf_adapter {
 	struct virtchnl_vlan_caps vlan_v2_caps;
 	u16 msg_enable;
 	struct iavf_eth_stats current_stats;
+	struct virtchnl_qos_cap_list *qos_caps;
 	struct iavf_vsi vsi;
 	u32 aq_wait_count;
 	/* RSS stuff */
@@ -576,6 +586,10 @@ void iavf_notify_client_message(struct iavf_vsi *vsi, u8 *msg, u16 len);
 void iavf_notify_client_l2_params(struct iavf_vsi *vsi);
 void iavf_notify_client_open(struct iavf_vsi *vsi);
 void iavf_notify_client_close(struct iavf_vsi *vsi, bool reset);
+void iavf_update_queue_config(struct iavf_adapter *adapter);
+void iavf_configure_queues_bw(struct iavf_adapter *adapter);
+void iavf_configure_queues_quanta_size(struct iavf_adapter *adapter);
+void iavf_get_qos_caps(struct iavf_adapter *adapter);
 void iavf_enable_channels(struct iavf_adapter *adapter);
 void iavf_disable_channels(struct iavf_adapter *adapter);
 void iavf_add_cloud_filter(struct iavf_adapter *adapter);
diff --git a/drivers/net/ethernet/intel/iavf/iavf_devlink.c b/drivers/net/ethernet/intel/iavf/iavf_devlink.c
index 732076c2126f..aefe707aafbc 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_devlink.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_devlink.c
@@ -97,6 +97,28 @@ void iavf_devlink_rate_deinit_rate_tree(struct iavf_adapter *adapter)
 	devl_unlock(adapter->devlink);
 }
 
+/**
+ * iavf_notify_queue_config_complete - notify updating queue completion
+ * @adapter: iavf adapter struct instance
+ *
+ * This function clears the queue configuration update state once all
+ * queue parameters have been sent to the PF.
+ */
+void iavf_notify_queue_config_complete(struct iavf_adapter *adapter)
+{
+	struct iavf_devlink *dl_priv = devlink_priv(adapter->devlink);
+	int q_num = adapter->num_active_queues;
+	int i;
+
+	/* clean up rate tree update flags */
+	for (i = 0; i < q_num; i++)
+		if (dl_priv->queue_nodes[i].tx_update_flag ==
+		    (IAVF_FLAG_TX_MAX_UPDATED | IAVF_FLAG_TX_SHARE_UPDATED))
+			dl_priv->queue_nodes[i].tx_update_flag = 0;
+
+	dl_priv->update_in_progress = false;
+}
+
 /**
  * iavf_check_update_config - check if updating queue parameters needed
  * @adapter: iavf adapter struct instance
@@ -108,6 +132,8 @@ void iavf_devlink_rate_deinit_rate_tree(struct iavf_adapter *adapter)
 static int iavf_check_update_config(struct iavf_adapter *adapter,
 				    struct iavf_dev_rate_node *node)
 {
+	struct iavf_devlink *dl_priv = devlink_priv(adapter->devlink);
+
 	/* Update queue bw if any one of the queues has been fully updated by
 	 * the user; the other queues either use the default value or the
 	 * last fully updated value
@@ -123,6 +149,8 @@ static int iavf_check_update_config(struct iavf_adapter *adapter,
 	if (adapter->state != __IAVF_RUNNING)
 		return -EBUSY;
 
+	dl_priv->update_in_progress = true;
+
 	iavf_update_queue_config(adapter);
 	return 0;
 }
@@ -281,6 +309,7 @@ int iavf_devlink_register(struct iavf_adapter *adapter)
 	if (!devlink)
 		return -ENOMEM;
 
+	adapter->devlink_update = false;
 	ref = devlink_priv(devlink);
 	ref->devlink_ref = adapter;
 	ref->iavf_dev_rate_initialized = false;
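iavf_notify_queue_config_complete() relies on tx_update_flag carrying two independent bits; a queue counts as fully updated only when both are set. A minimal sketch of that check, assuming the flag bits and node type introduced by the earlier devlink rate patch (the u8 flag width is an assumption):

/* Sketch: a queue's shaper is pushed to the PF only once both
 * tx_max and tx_share have been written by the user.
 */
static bool example_queue_fully_updated(const struct iavf_dev_rate_node *node)
{
	const u8 done = IAVF_FLAG_TX_MAX_UPDATED | IAVF_FLAG_TX_SHARE_UPDATED;

	return (node->tx_update_flag & done) == done;
}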
diff --git a/drivers/net/ethernet/intel/iavf/iavf_devlink.h b/drivers/net/ethernet/intel/iavf/iavf_devlink.h
index 751e9e093ab1..4709aa1a0341 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_devlink.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_devlink.h
@@ -33,5 +33,6 @@ int iavf_devlink_port_register(struct iavf_adapter *adapter);
 void iavf_devlink_port_unregister(struct iavf_adapter *adapter);
 void iavf_devlink_rate_init_rate_tree(struct iavf_adapter *adapter);
 void iavf_devlink_rate_deinit_rate_tree(struct iavf_adapter *adapter);
+void iavf_notify_queue_config_complete(struct iavf_adapter *adapter);
 
 #endif /* _IAVF_DEVLINK_H_ */
diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index 699c6375200a..c69c8beab3b5 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -2131,6 +2131,21 @@ static int iavf_process_aq_command(struct iavf_adapter *adapter)
 		return 0;
 	}
 
+	if (adapter->aq_required & IAVF_FLAG_AQ_CONFIGURE_QUEUES_BW) {
+		iavf_configure_queues_bw(adapter);
+		return 0;
+	}
+
+	if (adapter->aq_required & IAVF_FLAG_AQ_GET_QOS_CAPS) {
+		iavf_get_qos_caps(adapter);
+		return 0;
+	}
+
+	if (adapter->aq_required & IAVF_FLAG_AQ_CONFIGURE_QUEUES_QUANTA_SIZE) {
+		iavf_configure_queues_quanta_size(adapter);
+		return 0;
+	}
+
 	if (adapter->aq_required & IAVF_FLAG_AQ_CONFIGURE_QUEUES) {
 		iavf_configure_queues(adapter);
 		return 0;
@@ -2713,7 +2728,9 @@ static void iavf_init_config_adapter(struct iavf_adapter *adapter)
 
 	if (!adapter->netdev_registered) {
 		iavf_devlink_port_register(adapter);
-		iavf_devlink_rate_init_rate_tree(adapter);
+
+		if (QOS_ALLOWED(adapter))
+			iavf_devlink_rate_init_rate_tree(adapter);
 	}
 
 	netif_carrier_off(netdev);
@@ -3136,6 +3153,19 @@ static void iavf_reset_task(struct work_struct *work)
 		err = iavf_reinit_interrupt_scheme(adapter, running);
 		if (err)
 			goto reset_err;
+
+		if (QOS_ALLOWED(adapter)) {
+			iavf_devlink_rate_deinit_rate_tree(adapter);
+			iavf_devlink_rate_init_rate_tree(adapter);
+		}
+	}
+
+	if (adapter->devlink_update) {
+		adapter->aq_required |= IAVF_FLAG_AQ_CONFIGURE_QUEUES_BW;
+		adapter->aq_required |= IAVF_FLAG_AQ_GET_QOS_CAPS;
+		adapter->aq_required |=
+			IAVF_FLAG_AQ_CONFIGURE_QUEUES_QUANTA_SIZE;
+		adapter->devlink_update = false;
 	}
 
 	if (RSS_AQ(adapter)) {
@@ -4901,7 +4931,7 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	struct net_device *netdev;
 	struct iavf_adapter *adapter = NULL;
 	struct iavf_hw *hw = NULL;
-	int err;
+	int err, len;
 
 	err = pci_enable_device(pdev);
 	if (err)
@@ -4969,6 +4999,13 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	hw->bus.func = PCI_FUNC(pdev->devfn);
 	hw->bus.bus_id = pdev->bus->number;
 
+	len = struct_size(adapter->qos_caps, cap, IAVF_MAX_QOS_TC_NUM);
+	adapter->qos_caps = kzalloc(len, GFP_KERNEL);
+	if (!adapter->qos_caps) {
+		err = -ENOMEM;
+		goto err_alloc_qos_cap;
+	}
+
 	/* Register iavf adapter with devlink */
 	err = iavf_devlink_register(adapter);
 	if (err) {
@@ -5014,8 +5051,10 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 
 	return 0;
 
 err_devlink_reg:
+	kfree(adapter->qos_caps);
+err_alloc_qos_cap:
 	iounmap(hw->hw_addr);
 err_ioremap:
 	destroy_workqueue(adapter->wq);
 err_alloc_wq:
@@ -5161,6 +5200,7 @@ static void iavf_remove(struct pci_dev *pdev)
 	iavf_devlink_rate_deinit_rate_tree(adapter);
 	iavf_devlink_port_unregister(adapter);
 	iavf_devlink_unregister(adapter);
+	kfree(adapter->qos_caps);
 
 	mutex_lock(&adapter->crit_lock);
 	dev_info(&adapter->pdev->dev, "Removing device\n");
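The probe-time allocation uses struct_size() so the flexible cap[] array of virtchnl_qos_cap_list is sized without integer overflow. A self-contained sketch of the same idiom (the helper name is illustrative):

/* Sketch: overflow-safe sizing of a flexible-array virtchnl struct. */
static struct virtchnl_qos_cap_list *example_alloc_qos_caps(void)
{
	struct virtchnl_qos_cap_list *caps;
	size_t len;

	/* header plus IAVF_MAX_QOS_TC_NUM elements of cap[] */
	len = struct_size(caps, cap, IAVF_MAX_QOS_TC_NUM);
	caps = kzalloc(len, GFP_KERNEL);	/* zeroed: num_elem starts at 0 */
	return caps;				/* NULL on allocation failure */
}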
diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
index f9727e9c3d63..2eaa93705527 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
@@ -148,7 +148,8 @@ int iavf_send_vf_config_msg(struct iavf_adapter *adapter)
 	       VIRTCHNL_VF_OFFLOAD_USO |
 	       VIRTCHNL_VF_OFFLOAD_FDIR_PF |
 	       VIRTCHNL_VF_OFFLOAD_ADV_RSS_PF |
-	       VIRTCHNL_VF_CAP_ADV_LINK_SPEED;
+	       VIRTCHNL_VF_CAP_ADV_LINK_SPEED |
+	       VIRTCHNL_VF_OFFLOAD_QOS;
 
 	adapter->current_op = VIRTCHNL_OP_GET_VF_RESOURCES;
 	adapter->aq_required &= ~IAVF_FLAG_AQ_GET_CONFIG;
@@ -1465,6 +1466,210 @@ iavf_set_adapter_link_speed_from_vpe(struct iavf_adapter *adapter,
 	adapter->link_speed = vpe->event_data.link_event.link_speed;
 }
 
+/**
+ * iavf_get_qos_caps - get qos caps support
+ * @adapter: iavf adapter struct instance
+ *
+ * This function requests the supported QoS caps from the PF.
+ */
+void iavf_get_qos_caps(struct iavf_adapter *adapter)
+{
+	if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) {
+		/* bail because we already have a command pending */
+		dev_err(&adapter->pdev->dev,
+			"Cannot get qos caps, command %d pending\n",
+			adapter->current_op);
+		return;
+	}
+
+	adapter->current_op = VIRTCHNL_OP_GET_QOS_CAPS;
+	adapter->aq_required &= ~IAVF_FLAG_AQ_GET_QOS_CAPS;
+	iavf_send_pf_msg(adapter, VIRTCHNL_OP_GET_QOS_CAPS, NULL, 0);
+}
+
+/**
+ * iavf_set_quanta_size - set quanta size of queue chunk
+ * @adapter: iavf adapter struct instance
+ * @quanta_size: quanta size in bytes
+ * @queue_index: starting index of queue chunk
+ * @num_queues: number of queues in the queue chunk
+ *
+ * This function requests the PF to set the quanta size of the queue
+ * chunk starting at queue_index.
+ */
+static void
+iavf_set_quanta_size(struct iavf_adapter *adapter, u16 quanta_size,
+		     u16 queue_index, u16 num_queues)
+{
+	struct virtchnl_quanta_cfg quanta_cfg;
+
+	if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) {
+		/* bail because we already have a command pending */
+		dev_err(&adapter->pdev->dev,
+			"Cannot set queue quanta size, command %d pending\n",
+			adapter->current_op);
+		return;
+	}
+
+	adapter->current_op = VIRTCHNL_OP_CONFIG_QUANTA;
+	quanta_cfg.quanta_size = quanta_size;
+	quanta_cfg.queue_select.type = VIRTCHNL_QUEUE_TYPE_TX;
+	quanta_cfg.queue_select.start_queue_id = queue_index;
+	quanta_cfg.queue_select.num_queues = num_queues;
+	adapter->aq_required &= ~IAVF_FLAG_AQ_CONFIGURE_QUEUES_QUANTA_SIZE;
+	iavf_send_pf_msg(adapter, VIRTCHNL_OP_CONFIG_QUANTA,
+			 (u8 *)&quanta_cfg, sizeof(quanta_cfg));
+}
+
+/**
+ * iavf_set_queue_bw - set bw of allocated queues
+ * @adapter: iavf adapter struct instance
+ *
+ * This function requests the PF to set the queue bw of tc0 queues.
+ */
+static void iavf_set_queue_bw(struct iavf_adapter *adapter)
+{
+	struct iavf_devlink *dl_priv = devlink_priv(adapter->devlink);
+	struct virtchnl_queues_bw_cfg *queues_bw_cfg;
+	struct iavf_dev_rate_node *queue_rate;
+	size_t len;
+	int i;
+
+	if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) {
+		/* bail because we already have a command pending */
+		dev_err(&adapter->pdev->dev,
+			"Cannot set queue bw, command %d pending\n",
+			adapter->current_op);
+		return;
+	}
+
+	len = struct_size(queues_bw_cfg, cfg, adapter->num_active_queues);
+	queues_bw_cfg = kzalloc(len, GFP_KERNEL);
+	if (!queues_bw_cfg)
+		return;
+
+	queue_rate = dl_priv->queue_nodes;
+	queues_bw_cfg->vsi_id = adapter->vsi.id;
+	queues_bw_cfg->num_queues = adapter->num_active_queues;
+
+	for (i = 0; i < queues_bw_cfg->num_queues; i++) {
+		queues_bw_cfg->cfg[i].queue_id = i;
+		queues_bw_cfg->cfg[i].shaper.peak = queue_rate[i].tx_max;
+		queues_bw_cfg->cfg[i].shaper.committed =
+			queue_rate[i].tx_share;
+		queues_bw_cfg->cfg[i].tc = 0;
+	}
+
+	adapter->current_op = VIRTCHNL_OP_CONFIG_QUEUE_BW;
+	adapter->aq_required &= ~IAVF_FLAG_AQ_CONFIGURE_QUEUES_BW;
+	iavf_send_pf_msg(adapter, VIRTCHNL_OP_CONFIG_QUEUE_BW,
+			 (u8 *)queues_bw_cfg, len);
+	kfree(queues_bw_cfg);
+}
+
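/* The payload built by iavf_set_queue_bw() above is a fixed header
 * followed by one virtchnl_queue_bw entry per queue. A condensed
 * sketch of that fill, taking the bandwidth values as plain arrays
 * instead of rate-tree nodes; the helper name and parameters are
 * illustrative, not driver code.
 */
static struct virtchnl_queues_bw_cfg *
example_build_bw_cfg(u16 vsi_id, u16 n, const u32 *tx_max, const u32 *tx_share)
{
	struct virtchnl_queues_bw_cfg *cfg;
	int i;

	cfg = kzalloc(struct_size(cfg, cfg, n), GFP_KERNEL);
	if (!cfg)
		return NULL;

	cfg->vsi_id = vsi_id;
	cfg->num_queues = n;
	for (i = 0; i < n; i++) {
		cfg->cfg[i].queue_id = i;
		cfg->cfg[i].tc = 0;			/* single-TC case */
		cfg->cfg[i].shaper.peak = tx_max[i];	/* Kbps */
		cfg->cfg[i].shaper.committed = tx_share[i];
	}
	return cfg;	/* caller sends via iavf_send_pf_msg() and frees */
}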
+/**
+ * iavf_set_tc_queue_bw - set bw of allocated tc/queues
+ * @adapter: iavf adapter struct instance
+ *
+ * This function requests the PF to set the queue bw across multiple TCs.
+ */
+static void iavf_set_tc_queue_bw(struct iavf_adapter *adapter)
+{
+	struct iavf_devlink *dl_priv = devlink_priv(adapter->devlink);
+	struct virtchnl_queues_bw_cfg *queues_bw_cfg;
+	struct iavf_dev_rate_node *queue_rate;
+	u16 queue_to_tc[256];
+	size_t len;
+	u16 tc;
+	int i;
+
+	if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) {
+		/* bail because we already have a command pending */
+		dev_err(&adapter->pdev->dev,
+			"Cannot set tc queue bw, command %d pending\n",
+			adapter->current_op);
+		return;
+	}
+
+	len = struct_size(queues_bw_cfg, cfg, adapter->num_active_queues);
+	queues_bw_cfg = kzalloc(len, GFP_KERNEL);
+	if (!queues_bw_cfg)
+		return;
+
+	queue_rate = dl_priv->queue_nodes;
+	queues_bw_cfg->vsi_id = adapter->vsi.id;
+	queues_bw_cfg->num_queues = adapter->ch_config.total_qps;
+
+	/* build tc[queue] */
+	for (i = 0; i < adapter->num_tc; i++) {
+		int j, q_idx;
+
+		for (j = 0; j < adapter->ch_config.ch_info[i].count; ++j) {
+			q_idx = j + adapter->ch_config.ch_info[i].offset;
+			queue_to_tc[q_idx] = i;
+		}
+	}
+
+	for (i = 0; i < queues_bw_cfg->num_queues; i++) {
+		tc = queue_to_tc[i];
+		queues_bw_cfg->cfg[i].queue_id = i;
+		queues_bw_cfg->cfg[i].shaper.peak = queue_rate[i].tx_max;
+		queues_bw_cfg->cfg[i].shaper.committed =
+			queue_rate[i].tx_share;
+		queues_bw_cfg->cfg[i].tc = tc;
+	}
+
+	adapter->current_op = VIRTCHNL_OP_CONFIG_QUEUE_BW;
+	adapter->aq_required &= ~IAVF_FLAG_AQ_CONFIGURE_QUEUES_BW;
+	iavf_send_pf_msg(adapter, VIRTCHNL_OP_CONFIG_QUEUE_BW,
+			 (u8 *)queues_bw_cfg, len);
+	kfree(queues_bw_cfg);
+}
+
+/**
+ * iavf_configure_queues_bw - configure bw of allocated tc/queues
+ * @adapter: iavf adapter struct instance
+ *
+ * This function requests the PF to configure the queue bw of the
+ * allocated tc/queues.
+ */
+void iavf_configure_queues_bw(struct iavf_adapter *adapter)
+{
+	/* Set Queue bw */
+	if (adapter->ch_config.state == __IAVF_TC_INVALID)
+		iavf_set_queue_bw(adapter);
+	else
+		iavf_set_tc_queue_bw(adapter);
+}
+
+/**
+ * iavf_configure_queues_quanta_size - configure quanta size of queues
+ * @adapter: adapter structure
+ *
+ * Request that the PF configure the quanta size of the allocated queues.
+ */
+void iavf_configure_queues_quanta_size(struct iavf_adapter *adapter)
+{
+	int quanta_size = IAVF_DEFAULT_QUANTA_SIZE;
+
+	/* Set Queue Quanta Size to default */
+	iavf_set_quanta_size(adapter, quanta_size, 0,
+			     adapter->num_active_queues);
+}
+
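/* The tc[queue] table built in iavf_set_tc_queue_bw() above simply
 * inverts the per-TC (offset, count) channel layout. An isolated
 * sketch of that inversion, assuming the ch_config fields used above;
 * the helper name is illustrative and the caller sizes queue_to_tc.
 */
static void example_build_queue_to_tc(const struct iavf_adapter *adapter,
				      u16 *queue_to_tc)
{
	int i, j;

	for (i = 0; i < adapter->num_tc; i++)
		for (j = 0; j < adapter->ch_config.ch_info[i].count; j++)
			queue_to_tc[adapter->ch_config.ch_info[i].offset + j] = i;
}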
+/**
+ * iavf_update_queue_config - request queue configuration update
+ * @adapter: adapter structure
+ *
+ * Request that the PF configure the queue quanta size and queue bw
+ * of the allocated queues.
+ */
+void iavf_update_queue_config(struct iavf_adapter *adapter)
+{
+	adapter->devlink_update = true;
+	iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED);
+}
+
 /**
  * iavf_enable_channels
  * @adapter: adapter structure
@@ -2124,6 +2329,18 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
 			dev_warn(&adapter->pdev->dev, "Failed to add VLAN filter, error %s\n",
 				 iavf_stat_str(&adapter->hw, v_retval));
 			break;
+		case VIRTCHNL_OP_GET_QOS_CAPS:
+			dev_warn(&adapter->pdev->dev, "Failed to get QoS caps, error %s\n",
+				 iavf_stat_str(&adapter->hw, v_retval));
+			break;
+		case VIRTCHNL_OP_CONFIG_QUANTA:
+			dev_warn(&adapter->pdev->dev, "Failed to config quanta, error %s\n",
+				 iavf_stat_str(&adapter->hw, v_retval));
+			break;
+		case VIRTCHNL_OP_CONFIG_QUEUE_BW:
+			dev_warn(&adapter->pdev->dev, "Failed to config queue bw, error %s\n",
+				 iavf_stat_str(&adapter->hw, v_retval));
+			break;
 		default:
 			dev_err(&adapter->pdev->dev, "PF returned error %d (%s) to our request %d\n",
 				v_retval, iavf_stat_str(&adapter->hw, v_retval),
@@ -2456,6 +2673,18 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
 		if (!v_retval)
 			iavf_netdev_features_vlan_strip_set(netdev, false);
 		break;
+	case VIRTCHNL_OP_GET_QOS_CAPS: {
+		u16 len = struct_size(adapter->qos_caps, cap,
+				      IAVF_MAX_QOS_TC_NUM);
+
+		memcpy(adapter->qos_caps, msg, min(msglen, len));
+	}
+		break;
+	case VIRTCHNL_OP_CONFIG_QUANTA:
+		iavf_notify_queue_config_complete(adapter);
+		break;
+	case VIRTCHNL_OP_CONFIG_QUEUE_BW:
+		break;
 	default:
 		if (adapter->current_op && (v_opcode != adapter->current_op))
 			dev_warn(&adapter->pdev->dev, "Expected response %d from PF, received %d\n",