From patchwork Wed Jul 24 20:24:56 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Paolo Abeni
X-Patchwork-Id: 13741340
X-Patchwork-Delegate: kuba@kernel.org
From: Paolo Abeni
To: netdev@vger.kernel.org
Cc: Jakub Kicinski, Jiri Pirko, Madhu Chittim, Sridhar Samudrala,
 Simon Horman, John Fastabend, Sunil Kovvuri Goutham, Jamal Hadi Salim
Subject: [PATCH RFC v2 10/11] iavf: Add net_shaper_ops support
Date: Wed, 24 Jul 2024 22:24:56 +0200
Message-ID: <403db492c2994a749d287e37a7c32f3d0ebfa60c.1721851988.git.pabeni@redhat.com>
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-State: RFC

From: Sudheer Mogilappagari

Implement net_shaper_ops support for IAVF. This enables configuring
rate limiting on a per-queue basis. The intended use case is enforcing
a bandwidth limit on the Tx traffic steered to a queue by configuring
a rate limit on that queue.

To set rate limiting for a queue, the driver updates the shaper object
of the given queue and sends VIRTCHNL_OP_CONFIG_QUEUE_BW to the PF to
update the HW configuration. Deleting the shaper configured on a queue
amounts to configuring the shaper with bw_max 0; the PF restores the
default rate limiting configuration when bw_max is zero.

Signed-off-by: Sudheer Mogilappagari
---
 drivers/net/ethernet/intel/Kconfig          |   1 +
 drivers/net/ethernet/intel/iavf/iavf.h      |   3 +
 drivers/net/ethernet/intel/iavf/iavf_main.c | 171 ++++++++++++++++++
 drivers/net/ethernet/intel/iavf/iavf_txrx.h |   2 +
 .../net/ethernet/intel/iavf/iavf_virtchnl.c |  65 +++++++
 5 files changed, 242 insertions(+)

diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig
index 0375c7448a57..20bc40eec487 100644
--- a/drivers/net/ethernet/intel/Kconfig
+++ b/drivers/net/ethernet/intel/Kconfig
@@ -258,6 +258,7 @@ config I40E_DCB
 config IAVF
 	tristate
 	select LIBIE
+	select NET_SHAPER
 
 config I40EVF
 	tristate "Intel(R) Ethernet Adaptive Virtual Function support"
diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
index 23a6557fc3db..f5d1142ea427 100644
--- a/drivers/net/ethernet/intel/iavf/iavf.h
+++ b/drivers/net/ethernet/intel/iavf/iavf.h
@@ -33,6 +33,7 @@
 #include
 #include
 #include
+#include
 
 #include "iavf_type.h"
 #include
@@ -335,6 +336,7 @@ struct iavf_adapter {
 #define IAVF_FLAG_AQ_DISABLE_CTAG_VLAN_INSERTION	BIT_ULL(36)
 #define IAVF_FLAG_AQ_ENABLE_STAG_VLAN_INSERTION		BIT_ULL(37)
 #define IAVF_FLAG_AQ_DISABLE_STAG_VLAN_INSERTION	BIT_ULL(38)
+#define IAVF_FLAG_AQ_CONFIGURE_QUEUES_BW		BIT_ULL(39)
 
 	/* flags for processing extended capability messages during
	 * __IAVF_INIT_EXTENDED_CAPS. Each capability exchange requires
@@ -551,6 +553,7 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
 int iavf_config_rss(struct iavf_adapter *adapter);
 int iavf_lan_add_device(struct iavf_adapter *adapter);
 int iavf_lan_del_device(struct iavf_adapter *adapter);
+void iavf_cfg_queues_bw(struct iavf_adapter *adapter);
 void iavf_enable_channels(struct iavf_adapter *adapter);
 void iavf_disable_channels(struct iavf_adapter *adapter);
 void iavf_add_cloud_filter(struct iavf_adapter *adapter);
diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index ff11bafb3b4f..3a5ae0cd31c7 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -2085,6 +2085,11 @@ static int iavf_process_aq_command(struct iavf_adapter *adapter)
 		return 0;
 	}
 
+	if (adapter->aq_required & IAVF_FLAG_AQ_CONFIGURE_QUEUES_BW) {
+		iavf_cfg_queues_bw(adapter);
+		return 0;
+	}
+
 	if (adapter->aq_required & IAVF_FLAG_AQ_CONFIGURE_QUEUES) {
 		iavf_configure_queues(adapter);
 		return 0;
@@ -2918,6 +2923,25 @@ static void iavf_disable_vf(struct iavf_adapter *adapter)
 	dev_info(&adapter->pdev->dev, "Reset task did not complete, VF disabled\n");
 }
 
+/**
+ * iavf_reconfig_qs_bw - replay the queue bw configuration after a reset
+ * @adapter: board private structure
+ *
+ * After a reset, the shaper parameters of the queues need to be replayed.
+ * Since the net_shaper_info object inside the Tx rings persists across the
+ * reset, set the update flag for all queues so that the virtchnl message
+ * is triggered for all of them.
+ **/
+static void iavf_reconfig_qs_bw(struct iavf_adapter *adapter)
+{
+	int i;
+
+	for (i = 0; i < adapter->num_active_queues; i++)
+		adapter->tx_rings[i].q_shaper_update = true;
+
+	adapter->aq_required |= IAVF_FLAG_AQ_CONFIGURE_QUEUES_BW;
+}
+
 /**
  * iavf_reset_task - Call-back task to handle hardware reset
  * @work: pointer to work_struct
@@ -3124,6 +3148,8 @@ static void iavf_reset_task(struct work_struct *work)
 		iavf_up_complete(adapter);
 
 		iavf_irq_enable(adapter, true);
+
+		iavf_reconfig_qs_bw(adapter);
 	} else {
 		iavf_change_state(adapter, __IAVF_DOWN);
 		wake_up(&adapter->down_waitqueue);
@@ -4743,6 +4769,150 @@ static netdev_features_t iavf_fix_features(struct net_device *netdev,
 	return iavf_fix_strip_features(adapter, features);
 }
 
+static int iavf_verify_handle(struct net_device *dev, u32 handle,
+			      struct netlink_ext_ack *extack)
+{
+	struct iavf_adapter *adapter = netdev_priv(dev);
+	enum net_shaper_scope scope;
+	int qid;
+
+	scope = net_shaper_handle_scope(handle);
+	qid = net_shaper_handle_id(handle);
+
+	if (scope != NET_SHAPER_SCOPE_QUEUE) {
+		NL_SET_ERR_MSG_FMT(extack, "Invalid shaper handle %x, unsupported scope %d",
+				   handle, scope);
+		return -EOPNOTSUPP;
+	}
+
+	if (qid >= adapter->num_active_queues) {
+		NL_SET_ERR_MSG_FMT(extack, "Invalid shaper handle %x, queue id %d max %d",
+				   handle, qid, adapter->num_active_queues);
+		return -EINVAL;
+	}
+	return 0;
+}
+
+/**
+ * iavf_verify_shaper_info - validate the received shaper info
+ * @dev: pointer to netdev
+ * @shaper: configuration of shaper.
+ * @extack: Netlink extended ACK for reporting errors
+ *
+ * Returns:
+ * * %0 - Success
+ * * %-EOPNOTSUPP - Driver doesn't support this scope.
+ * * %-EINVAL - Invalid queue number in input
+ **/
+static int
+iavf_verify_shaper_info(struct net_device *dev,
+			const struct net_shaper_info *shaper,
+			struct netlink_ext_ack *extack)
+{
+	return iavf_verify_handle(dev, shaper->handle, extack);
+}
+
+/**
+ * iavf_shaper_set - set the shaper for the given queue
+ * @dev: pointer to netdev
+ * @shaper: configuration of shaper.
+ * @extack: Netlink extended ACK for reporting errors
+ *
+ * Returns:
+ * * %0 - Success
+ * * %-EOPNOTSUPP - Driver doesn't support this scope.
+ * * %-EINVAL - Invalid queue number in input
+ **/
+static int
+iavf_shaper_set(struct net_device *dev,
+		const struct net_shaper_info *shaper,
+		struct netlink_ext_ack *extack)
+{
+	struct iavf_adapter *adapter = netdev_priv(dev);
+	bool need_cfg_update = false;
+	enum net_shaper_scope scope;
+	int id, ret = 0;
+
+	ret = iavf_verify_shaper_info(dev, shaper, extack);
+	if (ret)
+		return ret;
+
+	scope = net_shaper_handle_scope(shaper->handle);
+	id = net_shaper_handle_id(shaper->handle);
+
+	if (scope == NET_SHAPER_SCOPE_QUEUE) {
+		struct iavf_ring *tx_ring = &adapter->tx_rings[id];
+
+		tx_ring->q_shaper.bw_min = div_u64(shaper->bw_min, 1000);
+		tx_ring->q_shaper.bw_max = div_u64(shaper->bw_max, 1000);
+		tx_ring->q_shaper_update = true;
+		need_cfg_update = true;
+	}
+
+	if (need_cfg_update)
+		adapter->aq_required |= IAVF_FLAG_AQ_CONFIGURE_QUEUES_BW;
+
+	return 0;
+}
+
+static int iavf_shaper_del(struct net_device *dev,
+			   const u32 handle,
+			   struct netlink_ext_ack *extack)
+{
+	struct iavf_adapter *adapter = netdev_priv(dev);
+	bool need_cfg_update = false;
+	enum net_shaper_scope scope;
+	int qid, ret;
+
+	ret = iavf_verify_handle(dev, handle, extack);
+	if (ret < 0)
+		return ret;
+
+	scope = net_shaper_handle_scope(handle);
+	qid = net_shaper_handle_id(handle);
+
+	if (scope == NET_SHAPER_SCOPE_QUEUE) {
+		struct iavf_ring *tx_ring = &adapter->tx_rings[qid];
+
+		tx_ring->q_shaper.bw_min = 0;
+		tx_ring->q_shaper.bw_max = 0;
+		tx_ring->q_shaper_update = true;
+		need_cfg_update = true;
+	}
+
+	if (need_cfg_update)
+		adapter->aq_required |= IAVF_FLAG_AQ_CONFIGURE_QUEUES_BW;
+
+	return 0;
+}
+
+static int iavf_shaper_group(struct net_device *dev, int nr_inputs,
+			     const struct net_shaper_info *inputs,
+			     const struct net_shaper_info *output,
+			     struct netlink_ext_ack *extack)
+{
+	return -EOPNOTSUPP;
+}
+
+static int iavf_shaper_cap(struct net_device *dev, enum net_shaper_scope scope,
+			   unsigned long *flags)
+{
+	if (scope != NET_SHAPER_SCOPE_QUEUE)
+		return -EOPNOTSUPP;
+
+	*flags = BIT(NET_SHAPER_A_CAPABILITIES_SUPPORT_BW_MIN) |
+		 BIT(NET_SHAPER_A_CAPABILITIES_SUPPORT_BW_MAX) |
+		 BIT(NET_SHAPER_A_CAPABILITIES_SUPPORT_METRIC_BPS);
+	return 0;
+}
+
+static const struct net_shaper_ops iavf_shaper_ops = {
+	.set = iavf_shaper_set,
+	.delete = iavf_shaper_del,
+	.group = iavf_shaper_group,
+	.capabilities = iavf_shaper_cap,
+};
+
 static const struct net_device_ops iavf_netdev_ops = {
 	.ndo_open		= iavf_open,
 	.ndo_stop		= iavf_close,
@@ -4758,6 +4928,7 @@ static const struct net_device_ops iavf_netdev_ops = {
 	.ndo_fix_features	= iavf_fix_features,
 	.ndo_set_features	= iavf_set_features,
 	.ndo_setup_tc		= iavf_setup_tc,
+	.net_shaper_ops		= &iavf_shaper_ops,
 };
 
 /**
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.h b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
index d7b5587aeb8e..dd503ee50b7f 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
@@ -296,6 +296,8 @@ struct iavf_ring {
 	 */
 	u32 rx_buf_len;
 
+	struct net_shaper_info q_shaper;
+	bool q_shaper_update;
 } ____cacheline_internodealigned_in_smp;
 
 #define IAVF_ITR_ADAPTIVE_MIN_INC	0x0002
diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
index 1e543f6a7c30..c0611608d332 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
@@ -1506,6 +1506,60 @@ iavf_set_adapter_link_speed_from_vpe(struct iavf_adapter *adapter,
 	adapter->link_speed = vpe->event_data.link_event.link_speed;
 }
 
+/**
+ * iavf_cfg_queues_bw - configure bandwidth of allocated queues
+ * @adapter: iavf adapter structure instance
+ *
+ * This function requests the PF to configure the queue bandwidth of the
+ * allocated queues.
+ */
+void iavf_cfg_queues_bw(struct iavf_adapter *adapter)
+{
+	struct virtchnl_queues_bw_cfg *qs_bw_cfg;
+	struct net_shaper_info *q_shaper;
+	int qs_to_update = 0;
+	int i, inx = 0;
+	size_t len;
+
+	if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) {
+		/* bail because we already have a command pending */
+		dev_err(&adapter->pdev->dev,
+			"Cannot set tc queue bw, command %d pending\n",
+			adapter->current_op);
+		return;
+	}
+
+	for (i = 0; i < adapter->num_active_queues; i++) {
+		if (adapter->tx_rings[i].q_shaper_update)
+			qs_to_update++;
+	}
+	len = struct_size(qs_bw_cfg, cfg, qs_to_update);
+	qs_bw_cfg = kzalloc(len, GFP_KERNEL);
+	if (!qs_bw_cfg)
+		return;
+
+	qs_bw_cfg->vsi_id = adapter->vsi.id;
+	qs_bw_cfg->num_queues = qs_to_update;
+
+	for (i = 0; i < adapter->num_active_queues; i++) {
+		struct iavf_ring *tx_ring = &adapter->tx_rings[i];
+
+		q_shaper = &tx_ring->q_shaper;
+		if (tx_ring->q_shaper_update) {
+			qs_bw_cfg->cfg[inx].queue_id = i;
+			qs_bw_cfg->cfg[inx].shaper.peak = q_shaper->bw_max;
+			qs_bw_cfg->cfg[inx].shaper.committed = q_shaper->bw_min;
+			qs_bw_cfg->cfg[inx].tc = 0;
+			inx++;
+		}
+	}
+
+	adapter->current_op = VIRTCHNL_OP_CONFIG_QUEUE_BW;
+	adapter->aq_required &= ~IAVF_FLAG_AQ_CONFIGURE_QUEUES_BW;
+	iavf_send_pf_msg(adapter, VIRTCHNL_OP_CONFIG_QUEUE_BW,
+			 (u8 *)qs_bw_cfg, len);
+	kfree(qs_bw_cfg);
+}
+
 /**
  * iavf_enable_channels
  * @adapter: adapter structure
@@ -2226,6 +2280,10 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
 				VIRTCHNL_RSS_ALG_TOEPLITZ_SYMMETRIC;
 			break;
+		case VIRTCHNL_OP_CONFIG_QUEUE_BW:
+			dev_warn(&adapter->pdev->dev, "Failed to configure queue bw, error %s\n",
+				 iavf_stat_str(&adapter->hw, v_retval));
+			break;
 		default:
 			dev_err(&adapter->pdev->dev, "PF returned error %d (%s) to our request %d\n",
 				v_retval, iavf_stat_str(&adapter->hw, v_retval),
@@ -2560,6 +2618,13 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
 		if (!v_retval)
 			iavf_netdev_features_vlan_strip_set(netdev, false);
 		break;
+	case VIRTCHNL_OP_CONFIG_QUEUE_BW: {
+		int i;
+		/* shaper configuration is successful for all queues */
+		for (i = 0; i < adapter->num_active_queues; i++)
+			adapter->tx_rings[i].q_shaper_update = false;
+	}
+		break;
 	default:
 		if (adapter->current_op && (v_opcode != adapter->current_op))
 			dev_warn(&adapter->pdev->dev, "Expected response %d from PF, received %d\n",