From patchwork Tue Oct 22 16:23:52 2024
X-Patchwork-Submitter: Taehee Yoo
X-Patchwork-Id: 13845946
X-Patchwork-Delegate: kuba@kernel.org
From: Taehee Yoo
To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com,
    edumazet@google.com, almasrymina@google.com, donald.hunter@gmail.com,
    corbet@lwn.net, michael.chan@broadcom.com, andrew+netdev@lunn.ch,
    hawk@kernel.org, ilias.apalodimas@linaro.org, ast@kernel.org,
    daniel@iogearbox.net, john.fastabend@gmail.com, dw@davidwei.uk,
    sdf@fomichev.me, asml.silence@gmail.com, brett.creeley@amd.com,
    linux-doc@vger.kernel.org, netdev@vger.kernel.org
Cc: kory.maincent@bootlin.com, maxime.chevallier@bootlin.com,
    danieller@nvidia.com, hengqi@linux.alibaba.com, ecree.xilinx@gmail.com,
    przemyslaw.kitszel@intel.com, hkallweit1@gmail.com, ahmed.zaki@intel.com,
    rrameshbabu@nvidia.com, idosch@nvidia.com, jiri@resnulli.us,
    bigeasy@linutronix.de, lorenzo@kernel.org, jdamato@fastly.com,
    aleksander.lobakin@intel.com, kaiyuanz@google.com, willemb@google.com,
    daniel.zahka@gmail.com, ap420073@gmail.com
Subject: [PATCH net-next v4 1/8] bnxt_en: add support for rx-copybreak ethtool command
Date: Tue, 22 Oct 2024 16:23:52 +0000
Message-Id: <20241022162359.2713094-2-ap420073@gmail.com>
In-Reply-To: <20241022162359.2713094-1-ap420073@gmail.com>
References: <20241022162359.2713094-1-ap420073@gmail.com>

The bnxt_en driver supports rx-copybreak, but it couldn't be set by
userspace. Only the default value (256) has worked.
This patch makes the bnxt_en driver support the following commands:
`ethtool --set-tunable <devname> rx-copybreak <value>` and
`ethtool --get-tunable <devname> rx-copybreak`.

Reviewed-by: Brett Creeley
Tested-by: Stanislav Fomichev
Signed-off-by: Taehee Yoo
---
v4:
 - Remove min rx-copybreak value.
 - Add Review tag from Brett.
 - Add Test tag from Stanislav.
v3:
 - Update copybreak value after closing nic and before opening nic when
   the device is running.
v2:
 - Define max/min rx_copybreak value.

 drivers/net/ethernet/broadcom/bnxt/bnxt.c         | 23 +++++----
 drivers/net/ethernet/broadcom/bnxt/bnxt.h         |  5 +-
 .../net/ethernet/broadcom/bnxt/bnxt_ethtool.c     | 48 ++++++++++++++++++-
 3 files changed, 65 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index bda3742d4e32..0f5fe9ba691d 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -81,7 +81,6 @@ MODULE_DESCRIPTION("Broadcom NetXtreme network driver");

 #define BNXT_RX_OFFSET (NET_SKB_PAD + NET_IP_ALIGN)
 #define BNXT_RX_DMA_OFFSET NET_SKB_PAD
-#define BNXT_RX_COPY_THRESH 256

 #define BNXT_TX_PUSH_THRESH 164

@@ -1330,13 +1329,13 @@ static struct sk_buff *bnxt_copy_data(struct bnxt_napi *bnapi, u8 *data,
 	if (!skb)
 		return NULL;

-	dma_sync_single_for_cpu(&pdev->dev, mapping, bp->rx_copy_thresh,
+	dma_sync_single_for_cpu(&pdev->dev, mapping, bp->rx_copybreak,
 				bp->rx_dir);

 	memcpy(skb->data - NET_IP_ALIGN, data - NET_IP_ALIGN,
 	       len + NET_IP_ALIGN);

-	dma_sync_single_for_device(&pdev->dev, mapping, bp->rx_copy_thresh,
+	dma_sync_single_for_device(&pdev->dev, mapping, bp->rx_copybreak,
 				   bp->rx_dir);
 	skb_put(skb, len);

@@ -1829,7 +1828,7 @@ static inline struct sk_buff *bnxt_tpa_end(struct bnxt *bp,
 		return NULL;
 	}

-	if (len <= bp->rx_copy_thresh) {
+	if (len <= bp->rx_copybreak) {
 		skb = bnxt_copy_skb(bnapi, data_ptr, len, mapping);
 		if (!skb) {
 			bnxt_abort_tpa(cpr, idx, agg_bufs);
@@ -2162,7 +2161,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 		}
 	}

-	if (len <= bp->rx_copy_thresh) {
+	if (len <= bp->rx_copybreak) {
 		if (!xdp_active)
 			skb = bnxt_copy_skb(bnapi, data_ptr, len, dma_addr);
 		else
@@ -4451,6 +4450,11 @@ void bnxt_set_tpa_flags(struct bnxt *bp)
 		bp->flags |= BNXT_FLAG_GRO;
 }

+static void bnxt_init_ring_params(struct bnxt *bp)
+{
+	bp->rx_copybreak = BNXT_DEFAULT_RX_COPYBREAK;
+}
+
 /* bp->rx_ring_size, bp->tx_ring_size, dev->mtu, BNXT_FLAG_{G|L}RO flags must
  * be set on entry.
  */
@@ -4465,7 +4469,6 @@ void bnxt_set_ring_params(struct bnxt *bp)
 	rx_space = rx_size + ALIGN(max(NET_SKB_PAD, XDP_PACKET_HEADROOM), 8) +
 		SKB_DATA_ALIGN(sizeof(struct skb_shared_info));

-	bp->rx_copy_thresh = BNXT_RX_COPY_THRESH;
 	ring_size = bp->rx_ring_size;
 	bp->rx_agg_ring_size = 0;
 	bp->rx_agg_nr_pages = 0;
@@ -4510,7 +4513,8 @@ void bnxt_set_ring_params(struct bnxt *bp)
 			  ALIGN(max(NET_SKB_PAD, XDP_PACKET_HEADROOM), 8) -
 			  SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
 	} else {
-		rx_size = SKB_DATA_ALIGN(BNXT_RX_COPY_THRESH + NET_IP_ALIGN);
+		rx_size = SKB_DATA_ALIGN(bp->rx_copybreak +
+					 NET_IP_ALIGN);
 		rx_space = rx_size + NET_SKB_PAD +
 			SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
 	}
@@ -6424,8 +6428,8 @@ static int bnxt_hwrm_vnic_set_hds(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 					  VNIC_PLCMODES_CFG_REQ_FLAGS_HDS_IPV6);
 		req->enables |=
 			cpu_to_le32(VNIC_PLCMODES_CFG_REQ_ENABLES_HDS_THRESHOLD_VALID);
-		req->jumbo_thresh = cpu_to_le16(bp->rx_copy_thresh);
-		req->hds_threshold = cpu_to_le16(bp->rx_copy_thresh);
+		req->jumbo_thresh = cpu_to_le16(bp->rx_copybreak);
+		req->hds_threshold = cpu_to_le16(bp->rx_copybreak);
 	}
 	req->vnic_id = cpu_to_le32(vnic->fw_vnic_id);
 	return hwrm_req_send(bp, req);
@@ -15865,6 +15869,7 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 	bnxt_init_l2_fltr_tbl(bp);
 	bnxt_set_rx_skb_mode(bp, false);
 	bnxt_set_tpa_flags(bp);
+	bnxt_init_ring_params(bp);
 	bnxt_set_ring_params(bp);
 	bnxt_rdma_aux_device_init(bp);
 	rc = bnxt_set_dflt_rings(bp, true);
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 69231e85140b..1b83a2c8027b 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -34,6 +34,9 @@
 #include
 #endif
+#define BNXT_DEFAULT_RX_COPYBREAK 256
+#define BNXT_MAX_RX_COPYBREAK 1024
+
 extern struct list_head bnxt_block_cb_list;

 struct page_pool;
@@ -2299,7 +2302,7 @@ struct bnxt {
 	enum dma_data_direction	rx_dir;
 	u32			rx_ring_size;
 	u32			rx_agg_ring_size;
-	u32			rx_copy_thresh;
+	u32			rx_copybreak;
 	u32			rx_ring_mask;
 	u32			rx_agg_ring_mask;
 	int			rx_nr_pages;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
index f71cc8188b4e..9af0a3f34750 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
@@ -4319,6 +4319,50 @@ static int bnxt_get_eee(struct net_device *dev, struct ethtool_keee *edata)
 	return 0;
 }

+static int bnxt_set_tunable(struct net_device *dev,
+			    const struct ethtool_tunable *tuna,
+			    const void *data)
+{
+	struct bnxt *bp = netdev_priv(dev);
+	u32 rx_copybreak;
+
+	switch (tuna->id) {
+	case ETHTOOL_RX_COPYBREAK:
+		rx_copybreak = *(u32 *)data;
+		if (rx_copybreak > BNXT_MAX_RX_COPYBREAK)
+			return -ERANGE;
+		if (rx_copybreak != bp->rx_copybreak) {
+			if (netif_running(dev)) {
+				bnxt_close_nic(bp, false, false);
+				bp->rx_copybreak = rx_copybreak;
+				bnxt_set_ring_params(bp);
+				bnxt_open_nic(bp, false, false);
+			} else {
+				bp->rx_copybreak = rx_copybreak;
+			}
+		}
+		return 0;
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static int bnxt_get_tunable(struct net_device *dev,
+			    const struct ethtool_tunable *tuna, void *data)
+{
+	struct bnxt *bp = netdev_priv(dev);
+
+	switch (tuna->id) {
+	case ETHTOOL_RX_COPYBREAK:
+		*(u32 *)data = bp->rx_copybreak;
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	return 0;
+}
+
 static int bnxt_read_sfp_module_eeprom_info(struct bnxt *bp, u16 i2c_addr,
 					    u16 page_number, u8 bank,
 					    u16 start_addr, u16 data_length,
@@ -4769,7 +4813,7 @@ static int bnxt_run_loopback(struct bnxt *bp)
 	cpr = &rxr->bnapi->cp_ring;
 	if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS)
 		cpr = rxr->rx_cpr;
-	pkt_size = min(bp->dev->mtu + ETH_HLEN, bp->rx_copy_thresh);
+	pkt_size = min(bp->dev->mtu + ETH_HLEN, bp->rx_copybreak);
 	skb = netdev_alloc_skb(bp->dev, pkt_size);
 	if (!skb)
 		return -ENOMEM;
@@ -5342,6 +5386,8 @@ const struct ethtool_ops bnxt_ethtool_ops = {
 	.get_link_ext_stats	= bnxt_get_link_ext_stats,
 	.get_eee		= bnxt_get_eee,
 	.set_eee		= bnxt_set_eee,
+	.get_tunable		= bnxt_get_tunable,
+	.set_tunable		= bnxt_set_tunable,
 	.get_module_info	= bnxt_get_module_info,
 	.get_module_eeprom	= bnxt_get_module_eeprom,
 	.get_module_eeprom_by_page = bnxt_get_module_eeprom_by_page,

From patchwork Tue Oct 22 16:23:53 2024
X-Patchwork-Submitter: Taehee Yoo
X-Patchwork-Id: 13845947
X-Patchwork-Delegate: kuba@kernel.org
From: Taehee Yoo
Subject: [PATCH net-next v4 2/8] bnxt_en: add support for tcp-data-split ethtool command
Date: Tue, 22 Oct 2024 16:23:53 +0000
Message-Id: <20241022162359.2713094-3-ap420073@gmail.com>
In-Reply-To: <20241022162359.2713094-1-ap420073@gmail.com>
References: <20241022162359.2713094-1-ap420073@gmail.com>

NICs that use the bnxt_en driver support the tcp-data-split feature under
the name HDS (header-data-split), but there is no way to enable or disable
HDS via ethtool; only getting the current HDS status is implemented. HDS
is just automatically enabled when either LRO, HW-GRO, or JUMBO is
enabled. The hds_threshold follows the rx-copybreak value, and it was
unchangeable.

This implements the `ethtool -G <devname> tcp-data-split <value>` command
option. The value can be <on>, <off>, and <auto>, but <off> will be
automatically changed to <auto>.

The HDS feature relies on the aggregation ring, so if HDS is enabled, the
bnxt_en driver initializes the aggregation ring. This is why
BNXT_FLAG_AGG_RINGS contains the HDS condition.

Tested-by: Stanislav Fomichev
Signed-off-by: Taehee Yoo
---
v4:
 - Do not support disable tcp-data-split.
 - Add Test tag from Stanislav.
v3:
 - No changes.
v2:
 - Do not set hds_threshold to 0.
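The ENABLED/UNKNOWN/DISABLED handling described above can be sketched as a
standalone C model (an illustration only, not the driver code; `FLAG_HDS`
and the enum values stand in for the kernel's BNXT_FLAG_HDS and
ETHTOOL_TCP_DATA_SPLIT_* constants):

```c
#include <errno.h>
#include <stdint.h>

/* Model of the tcp-data-split handling in bnxt_set_ringparam():
 * <on> sets the HDS flag, <auto> (UNKNOWN) clears it so the existing
 * LRO/GRO/JUMBO logic decides, and disabling is rejected (v4 dropped
 * support for turning HDS off). */
enum tcp_data_split {
	TCP_DATA_SPLIT_UNKNOWN,		/* "auto" */
	TCP_DATA_SPLIT_DISABLED,	/* "off"  */
	TCP_DATA_SPLIT_ENABLED,		/* "on"   */
};

#define FLAG_HDS 0x20000000u	/* mirrors BNXT_FLAG_HDS */

static int apply_tcp_data_split(uint32_t *flags, enum tcp_data_split mode)
{
	if (mode == TCP_DATA_SPLIT_DISABLED)
		return -EOPNOTSUPP;	/* disabling HDS is not supported */

	if (mode == TCP_DATA_SPLIT_ENABLED)
		*flags |= FLAG_HDS;
	else
		*flags &= ~FLAG_HDS;	/* auto: firmware/driver decides */
	return 0;
}
```

Because this patch folds BNXT_FLAG_HDS into BNXT_FLAG_AGG_RINGS, setting
the flag also causes the aggregation ring to be initialized on the next
open.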
 drivers/net/ethernet/broadcom/bnxt/bnxt.c         |  8 +++-----
 drivers/net/ethernet/broadcom/bnxt/bnxt.h         |  5 +++--
 drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c | 13 +++++++++++++
 3 files changed, 19 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 0f5fe9ba691d..91ea42ff9b17 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -4473,7 +4473,7 @@ void bnxt_set_ring_params(struct bnxt *bp)
 	bp->rx_agg_ring_size = 0;
 	bp->rx_agg_nr_pages = 0;

-	if (bp->flags & BNXT_FLAG_TPA)
+	if (bp->flags & BNXT_FLAG_TPA || bp->flags & BNXT_FLAG_HDS)
 		agg_factor = min_t(u32, 4, 65536 / BNXT_RX_PAGE_SIZE);

 	bp->flags &= ~BNXT_FLAG_JUMBO;
@@ -6420,15 +6420,13 @@ static int bnxt_hwrm_vnic_set_hds(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	req->flags = cpu_to_le32(VNIC_PLCMODES_CFG_REQ_FLAGS_JUMBO_PLACEMENT);
 	req->enables = cpu_to_le32(VNIC_PLCMODES_CFG_REQ_ENABLES_JUMBO_THRESH_VALID);
+	req->jumbo_thresh = cpu_to_le16(bp->rx_buf_use_size);

-	if (BNXT_RX_PAGE_MODE(bp)) {
-		req->jumbo_thresh = cpu_to_le16(bp->rx_buf_use_size);
-	} else {
+	if (bp->flags & BNXT_FLAG_AGG_RINGS) {
 		req->flags |= cpu_to_le32(VNIC_PLCMODES_CFG_REQ_FLAGS_HDS_IPV4 |
 					  VNIC_PLCMODES_CFG_REQ_FLAGS_HDS_IPV6);
 		req->enables |=
 			cpu_to_le32(VNIC_PLCMODES_CFG_REQ_ENABLES_HDS_THRESHOLD_VALID);
-		req->jumbo_thresh = cpu_to_le16(bp->rx_copybreak);
 		req->hds_threshold = cpu_to_le16(bp->rx_copybreak);
 	}
 	req->vnic_id = cpu_to_le32(vnic->fw_vnic_id);
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 1b83a2c8027b..432bc19b35ea 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -2201,8 +2201,6 @@ struct bnxt {
 	#define BNXT_FLAG_TPA		(BNXT_FLAG_LRO | BNXT_FLAG_GRO)
 	#define BNXT_FLAG_JUMBO		0x10
 	#define BNXT_FLAG_STRIP_VLAN	0x20
-	#define BNXT_FLAG_AGG_RINGS	(BNXT_FLAG_JUMBO | BNXT_FLAG_GRO | \
-					 BNXT_FLAG_LRO)
 	#define BNXT_FLAG_RFS		0x100
 	#define BNXT_FLAG_SHARED_RINGS	0x200
 	#define BNXT_FLAG_PORT_STATS	0x400
@@ -2223,6 +2221,9 @@ struct bnxt {
 	#define BNXT_FLAG_ROCE_MIRROR_CAP	0x4000000
 	#define BNXT_FLAG_TX_COAL_CMPL	0x8000000
 	#define BNXT_FLAG_PORT_STATS_EXT	0x10000000
+	#define BNXT_FLAG_HDS		0x20000000
+	#define BNXT_FLAG_AGG_RINGS	(BNXT_FLAG_JUMBO | BNXT_FLAG_GRO | \
+					 BNXT_FLAG_LRO | BNXT_FLAG_HDS)

 	#define BNXT_FLAG_ALL_CONFIG_FEATS	(BNXT_FLAG_TPA | \
 						 BNXT_FLAG_RFS | \
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
index 9af0a3f34750..5172d0547e0c 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
@@ -854,9 +854,21 @@ static int bnxt_set_ringparam(struct net_device *dev,
 	    (ering->tx_pending < BNXT_MIN_TX_DESC_CNT))
 		return -EINVAL;

+	if (kernel_ering->tcp_data_split == ETHTOOL_TCP_DATA_SPLIT_DISABLED)
+		return -EOPNOTSUPP;
+
 	if (netif_running(dev))
 		bnxt_close_nic(bp, false, false);

+	switch (kernel_ering->tcp_data_split) {
+	case ETHTOOL_TCP_DATA_SPLIT_ENABLED:
+		bp->flags |= BNXT_FLAG_HDS;
+		break;
+	case ETHTOOL_TCP_DATA_SPLIT_UNKNOWN:
+		bp->flags &= ~BNXT_FLAG_HDS;
+		break;
+	}
+
 	bp->rx_ring_size = ering->rx_pending;
 	bp->tx_ring_size = ering->tx_pending;
 	bnxt_set_ring_params(bp);
@@ -5345,6 +5357,7 @@ const struct ethtool_ops bnxt_ethtool_ops = {
 				     ETHTOOL_COALESCE_STATS_BLOCK_USECS |
 				     ETHTOOL_COALESCE_USE_ADAPTIVE_RX |
 				     ETHTOOL_COALESCE_USE_CQE,
+	.supported_ring_params	= ETHTOOL_RING_USE_TCP_DATA_SPLIT,
 	.get_link_ksettings	= bnxt_get_link_ksettings,
 	.set_link_ksettings	= bnxt_set_link_ksettings,
 	.get_fec_stats		= bnxt_get_fec_stats,

From patchwork Tue Oct 22 16:23:54 2024
X-Patchwork-Submitter: Taehee Yoo
X-Patchwork-Id: 13845948
X-Patchwork-Delegate: kuba@kernel.org
From: Taehee Yoo
Subject: [PATCH net-next v4 3/8] net: ethtool: add support for configuring header-data-split-thresh
Date: Tue, 22 Oct 2024 16:23:54 +0000
Message-Id: <20241022162359.2713094-4-ap420073@gmail.com>
In-Reply-To: <20241022162359.2713094-1-ap420073@gmail.com>
References: <20241022162359.2713094-1-ap420073@gmail.com>

The header-data-split-thresh option configures the threshold value for
header-data-split. If a received packet is larger than this threshold,
the packet will be split into header and payload. The header normally
means the TCP and UDP header, but it depends on the driver spec. The
bnxt_en driver supports HDS (Header-Data-Split) configuration at the FW
level, affecting TCP and UDP too.
So, if header-data-split-thresh is set, it affects both UDP and TCP
packets.

Example:

 # ethtool -G <devname> header-data-split-thresh <value>
 # ethtool -G enp14s0f0np0 tcp-data-split on header-data-split-thresh 256
 # ethtool -g enp14s0f0np0
 Ring parameters for enp14s0f0np0:
 Pre-set maximums:
 ...
 Header data split thresh:  256
 Current hardware settings:
 ...
 TCP data split:            on
 Header data split thresh:  256

The default/min/max values are not defined in ethtool, so drivers should
define them themselves. A value of 0 means that the header and payload of
all TCP/UDP packets will be split.

In general, HDS can increase the overhead of host memory and the PCIe bus
because it copies data twice, so users should consider the overhead of
HDS. Suppose the HDS threshold is 0, the copybreak is 256, and a packet's
payload is 8 bytes: two pages are used, one for headers and one for
payloads. Because of the copybreak, only the headers page is copied, so
it can be reused immediately, while the payloads page is still in use.
If the HDS threshold were larger than 8, both header and payload would be
copied and the page could be recycled immediately. So a too-low HDS
threshold generally has larger disadvantages than advantages for
performance. Users should consider the overhead of this feature.

Tested-by: Stanislav Fomichev
Signed-off-by: Taehee Yoo
---
v4:
 - Fix 80 character wrap.
 - Rename from tcp-data-split-thresh to header-data-split-thresh.
 - Add description about overhead of HDS.
 - Add ETHTOOL_RING_USE_HDS_THRS flag.
 - Add dev_xdp_sb_prog_count() helper.
 - Add Test tag from Stanislav.
v3:
 - Fix documentation and ynl.
 - Update error messages.
 - Validate configuration of tcp-data-split and tcp-data-split-thresh.
v2:
 - Patch added.
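The page-accounting argument above can be made concrete with a small
standalone C model (an illustration of the reasoning only, not kernel
code; the split rule and buffer layout are simplified assumptions):

```c
/* Model of one received packet: if its total length exceeds the HDS
 * threshold, the NIC splits it into a header buffer and a payload page;
 * any buffer no larger than rx-copybreak is memcpy()'d into a fresh skb,
 * so its page can be recycled immediately. */
struct rx_pages {
	int used;	/* pages consumed by this packet */
	int recycled;	/* pages reusable immediately after copybreak */
};

static struct rx_pages receive(unsigned int hdr_len, unsigned int payload_len,
			       unsigned int hds_thresh, unsigned int copybreak)
{
	struct rx_pages r = { 0, 0 };
	unsigned int len = hdr_len + payload_len;

	if (len > hds_thresh) {
		/* split reception: header buffer + payload page */
		r.used = 2;
		if (hdr_len <= copybreak)
			r.recycled = 1;	/* header copied; payload page stays pinned */
	} else {
		/* no split: everything lands in one buffer */
		r.used = 1;
		if (len <= copybreak)
			r.recycled = 1;	/* whole packet copied, page recycled */
	}
	return r;
}
```

With hds_thresh = 0, copybreak = 256, a 64-byte header and an 8-byte
payload, the model consumes two pages and only recycles the header page;
raising the threshold above the packet size lets the single page be
recycled at once, matching the trade-off described above.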
 Documentation/netlink/specs/ethtool.yaml     |  8 ++
 Documentation/networking/ethtool-netlink.rst | 79 ++++++++++++--------
 include/linux/ethtool.h                      |  6 ++
 include/linux/netdevice.h                    |  1 +
 include/uapi/linux/ethtool_netlink.h         |  2 +
 net/core/dev.c                               | 13 ++++
 net/ethtool/netlink.h                        |  2 +-
 net/ethtool/rings.c                          | 37 ++++++++-
 8 files changed, 115 insertions(+), 33 deletions(-)

diff --git a/Documentation/netlink/specs/ethtool.yaml b/Documentation/netlink/specs/ethtool.yaml
index 6a050d755b9c..3e1f54324cbc 100644
--- a/Documentation/netlink/specs/ethtool.yaml
+++ b/Documentation/netlink/specs/ethtool.yaml
@@ -215,6 +215,12 @@ attribute-sets:
       -
         name: tx-push-buf-len-max
         type: u32
+      -
+        name: header-data-split-thresh
+        type: u32
+      -
+        name: header-data-split-thresh-max
+        type: u32
   -
     name: mm-stat
@@ -1393,6 +1399,8 @@ operations:
             - rx-push
             - tx-push-buf-len
             - tx-push-buf-len-max
+            - header-data-split-thresh
+            - header-data-split-thresh-max
       dump: *ring-get-op
     -
       name: rings-set
diff --git a/Documentation/networking/ethtool-netlink.rst b/Documentation/networking/ethtool-netlink.rst
index 295563e91082..513eb1517f53 100644
--- a/Documentation/networking/ethtool-netlink.rst
+++ b/Documentation/networking/ethtool-netlink.rst
@@ -875,24 +875,35 @@ Request contents:

 Kernel response contents:

-  ======================================= ====== ===========================
-  ``ETHTOOL_A_RINGS_HEADER``              nested reply header
-  ``ETHTOOL_A_RINGS_RX_MAX``              u32    max size of RX ring
-  ``ETHTOOL_A_RINGS_RX_MINI_MAX``         u32    max size of RX mini ring
-  ``ETHTOOL_A_RINGS_RX_JUMBO_MAX``        u32    max size of RX jumbo ring
-  ``ETHTOOL_A_RINGS_TX_MAX``              u32    max size of TX ring
-  ``ETHTOOL_A_RINGS_RX``                  u32    size of RX ring
-  ``ETHTOOL_A_RINGS_RX_MINI``             u32    size of RX mini ring
-  ``ETHTOOL_A_RINGS_RX_JUMBO``            u32    size of RX jumbo ring
-  ``ETHTOOL_A_RINGS_TX``                  u32    size of TX ring
-  ``ETHTOOL_A_RINGS_RX_BUF_LEN``          u32    size of buffers on the ring
-  ``ETHTOOL_A_RINGS_TCP_DATA_SPLIT``      u8     TCP header / data split
-  ``ETHTOOL_A_RINGS_CQE_SIZE``            u32    Size of TX/RX CQE
-  ``ETHTOOL_A_RINGS_TX_PUSH``             u8     flag of TX Push mode
-  ``ETHTOOL_A_RINGS_RX_PUSH``             u8     flag of RX Push mode
-  ``ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN``     u32    size of TX push buffer
-  ``ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN_MAX`` u32    max size of TX push buffer
-  ======================================= ====== ===========================
+  ================================================ ====== ====================
+  ``ETHTOOL_A_RINGS_HEADER``                       nested reply header
+  ``ETHTOOL_A_RINGS_RX_MAX``                       u32    max size of RX ring
+  ``ETHTOOL_A_RINGS_RX_MINI_MAX``                  u32    max size of RX mini
+                                                          ring
+  ``ETHTOOL_A_RINGS_RX_JUMBO_MAX``                 u32    max size of RX jumbo
+                                                          ring
+  ``ETHTOOL_A_RINGS_TX_MAX``                       u32    max size of TX ring
+  ``ETHTOOL_A_RINGS_RX``                           u32    size of RX ring
+  ``ETHTOOL_A_RINGS_RX_MINI``                      u32    size of RX mini ring
+  ``ETHTOOL_A_RINGS_RX_JUMBO``                     u32    size of RX jumbo
+                                                          ring
+  ``ETHTOOL_A_RINGS_TX``                           u32    size of TX ring
+  ``ETHTOOL_A_RINGS_RX_BUF_LEN``                   u32    size of buffers on
+                                                          the ring
+  ``ETHTOOL_A_RINGS_TCP_DATA_SPLIT``               u8     TCP header / data
+                                                          split
+  ``ETHTOOL_A_RINGS_CQE_SIZE``                     u32    Size of TX/RX CQE
+  ``ETHTOOL_A_RINGS_TX_PUSH``                      u8     flag of TX Push mode
+  ``ETHTOOL_A_RINGS_RX_PUSH``                      u8     flag of RX Push mode
+  ``ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN``              u32    size of TX push
+                                                          buffer
+  ``ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN_MAX``          u32    max size of TX push
+                                                          buffer
+  ``ETHTOOL_A_RINGS_HEADER_DATA_SPLIT_THRESH``     u32    threshold of
+                                                          header / data split
+  ``ETHTOOL_A_RINGS_HEADER_DATA_SPLIT_THRESH_MAX`` u32    max threshold of
+                                                          header / data split
+  ================================================ ====== ====================

 ``ETHTOOL_A_RINGS_TCP_DATA_SPLIT`` indicates whether the device is usable with
 page-flipping TCP zero-copy receive (``getsockopt(TCP_ZEROCOPY_RECEIVE)``).
@@ -927,18 +938,22 @@ Sets ring sizes like ``ETHTOOL_SRINGPARAM`` ioctl request.

 Request contents:

-  ==================================== ====== ===========================
-  ``ETHTOOL_A_RINGS_HEADER``           nested reply header
-  ``ETHTOOL_A_RINGS_RX``               u32    size of RX ring
-  ``ETHTOOL_A_RINGS_RX_MINI``          u32    size of RX mini ring
-  ``ETHTOOL_A_RINGS_RX_JUMBO``         u32    size of RX jumbo ring
-  ``ETHTOOL_A_RINGS_TX``               u32    size of TX ring
-  ``ETHTOOL_A_RINGS_RX_BUF_LEN``       u32    size of buffers on the ring
-  ``ETHTOOL_A_RINGS_CQE_SIZE``         u32    Size of TX/RX CQE
-  ``ETHTOOL_A_RINGS_TX_PUSH``          u8     flag of TX Push mode
-  ``ETHTOOL_A_RINGS_RX_PUSH``          u8     flag of RX Push mode
-  ``ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN``  u32    size of TX push buffer
-  ==================================== ====== ===========================
+  ============================================ ====== =======================
+  ``ETHTOOL_A_RINGS_HEADER``                   nested reply header
+  ``ETHTOOL_A_RINGS_RX``                       u32    size of RX ring
+  ``ETHTOOL_A_RINGS_RX_MINI``                  u32    size of RX mini ring
+  ``ETHTOOL_A_RINGS_RX_JUMBO``                 u32    size of RX jumbo ring
+  ``ETHTOOL_A_RINGS_TX``                       u32    size of TX ring
+  ``ETHTOOL_A_RINGS_RX_BUF_LEN``               u32    size of buffers on the
+                                                      ring
+  ``ETHTOOL_A_RINGS_TCP_DATA_SPLIT``           u8     TCP header / data split
+  ``ETHTOOL_A_RINGS_CQE_SIZE``                 u32    Size of TX/RX CQE
+  ``ETHTOOL_A_RINGS_TX_PUSH``                  u8     flag of TX Push mode
+  ``ETHTOOL_A_RINGS_RX_PUSH``                  u8     flag of RX Push mode
+  ``ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN``          u32    size of TX push buffer
+  ``ETHTOOL_A_RINGS_HEADER_DATA_SPLIT_THRESH`` u32    threshold of
+                                                      header / data split
+  ============================================ ====== =======================

 Kernel checks that requested ring sizes do not exceed limits reported by
 driver. Driver may impose additional constraints and may not support all
@@ -954,6 +969,10 @@ A bigger CQE can have more receive buffer pointers, and in turn the NIC can
 transfer a bigger frame from wire. Based on the NIC hardware, the overall
 completion queue size can be adjusted in the driver if CQE size is modified.
+``ETHTOOL_A_RINGS_HEADER_DATA_SPLIT_THRESH`` specifies the threshold value of +header / data split feature. If a received packet size is larger than this +threshold value, header and data will be split. + CHANNELS_GET ============ diff --git a/include/linux/ethtool.h b/include/linux/ethtool.h index 12f6dc567598..021fd21f3914 100644 --- a/include/linux/ethtool.h +++ b/include/linux/ethtool.h @@ -78,6 +78,8 @@ enum { * @cqe_size: Size of TX/RX completion queue event * @tx_push_buf_len: Size of TX push buffer * @tx_push_buf_max_len: Maximum allowed size of TX push buffer + * @hds_thresh: Threshold value of header-data-split-thresh + * @hds_thresh_max: Maximum allowed threshold of header-data-split-thresh */ struct kernel_ethtool_ringparam { u32 rx_buf_len; @@ -87,6 +89,8 @@ struct kernel_ethtool_ringparam { u32 cqe_size; u32 tx_push_buf_len; u32 tx_push_buf_max_len; + u32 hds_thresh; + u32 hds_thresh_max; }; /** @@ -97,6 +101,7 @@ struct kernel_ethtool_ringparam { * @ETHTOOL_RING_USE_RX_PUSH: capture for setting rx_push * @ETHTOOL_RING_USE_TX_PUSH_BUF_LEN: capture for setting tx_push_buf_len * @ETHTOOL_RING_USE_TCP_DATA_SPLIT: capture for setting tcp_data_split + * @ETHTOOL_RING_USE_HDS_THRS: capture for setting header-data-split-thresh */ enum ethtool_supported_ring_param { ETHTOOL_RING_USE_RX_BUF_LEN = BIT(0), @@ -105,6 +110,7 @@ enum ethtool_supported_ring_param { ETHTOOL_RING_USE_RX_PUSH = BIT(3), ETHTOOL_RING_USE_TX_PUSH_BUF_LEN = BIT(4), ETHTOOL_RING_USE_TCP_DATA_SPLIT = BIT(5), + ETHTOOL_RING_USE_HDS_THRS = BIT(6), }; #define __ETH_RSS_HASH_BIT(bit) ((u32)1 << (bit)) diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 8feaca12655e..e155b767a08d 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -4010,6 +4010,7 @@ struct sk_buff *dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev, int bpf_xdp_link_attach(const union bpf_attr *attr, struct bpf_prog *prog); u8 dev_xdp_prog_count(struct net_device *dev); +u8 
dev_xdp_sb_prog_count(struct net_device *dev); int dev_xdp_propagate(struct net_device *dev, struct netdev_bpf *bpf); u32 dev_xdp_prog_id(struct net_device *dev, enum bpf_xdp_mode mode); diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h index 283305f6b063..7087c5c51017 100644 --- a/include/uapi/linux/ethtool_netlink.h +++ b/include/uapi/linux/ethtool_netlink.h @@ -364,6 +364,8 @@ enum { ETHTOOL_A_RINGS_RX_PUSH, /* u8 */ ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN, /* u32 */ ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN_MAX, /* u32 */ + ETHTOOL_A_RINGS_HEADER_DATA_SPLIT_THRESH, /* u32 */ + ETHTOOL_A_RINGS_HEADER_DATA_SPLIT_THRESH_MAX, /* u32 */ /* add new constants above here */ __ETHTOOL_A_RINGS_CNT, diff --git a/net/core/dev.c b/net/core/dev.c index c682173a7642..890cd2bd0864 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -9431,6 +9431,19 @@ u8 dev_xdp_prog_count(struct net_device *dev) } EXPORT_SYMBOL_GPL(dev_xdp_prog_count); +u8 dev_xdp_sb_prog_count(struct net_device *dev) +{ + u8 count = 0; + int i; + + for (i = 0; i < __MAX_XDP_MODE; i++) + if (dev->xdp_state[i].prog && + !dev->xdp_state[i].prog->aux->xdp_has_frags) + count++; + return count; +} +EXPORT_SYMBOL_GPL(dev_xdp_sb_prog_count); + int dev_xdp_propagate(struct net_device *dev, struct netdev_bpf *bpf) { if (!dev->netdev_ops->ndo_bpf) diff --git a/net/ethtool/netlink.h b/net/ethtool/netlink.h index 203b08eb6c6f..9f51a252ebe0 100644 --- a/net/ethtool/netlink.h +++ b/net/ethtool/netlink.h @@ -455,7 +455,7 @@ extern const struct nla_policy ethnl_features_set_policy[ETHTOOL_A_FEATURES_WANT extern const struct nla_policy ethnl_privflags_get_policy[ETHTOOL_A_PRIVFLAGS_HEADER + 1]; extern const struct nla_policy ethnl_privflags_set_policy[ETHTOOL_A_PRIVFLAGS_FLAGS + 1]; extern const struct nla_policy ethnl_rings_get_policy[ETHTOOL_A_RINGS_HEADER + 1]; -extern const struct nla_policy ethnl_rings_set_policy[ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN_MAX + 1]; +extern const struct nla_policy 
ethnl_rings_set_policy[ETHTOOL_A_RINGS_HEADER_DATA_SPLIT_THRESH_MAX + 1]; extern const struct nla_policy ethnl_channels_get_policy[ETHTOOL_A_CHANNELS_HEADER + 1]; extern const struct nla_policy ethnl_channels_set_policy[ETHTOOL_A_CHANNELS_COMBINED_COUNT + 1]; extern const struct nla_policy ethnl_coalesce_get_policy[ETHTOOL_A_COALESCE_HEADER + 1]; diff --git a/net/ethtool/rings.c b/net/ethtool/rings.c index b7865a14fdf8..e1fd82a91014 100644 --- a/net/ethtool/rings.c +++ b/net/ethtool/rings.c @@ -61,7 +61,11 @@ static int rings_reply_size(const struct ethnl_req_info *req_base, nla_total_size(sizeof(u8)) + /* _RINGS_TX_PUSH */ nla_total_size(sizeof(u8))) + /* _RINGS_RX_PUSH */ nla_total_size(sizeof(u32)) + /* _RINGS_TX_PUSH_BUF_LEN */ - nla_total_size(sizeof(u32)); /* _RINGS_TX_PUSH_BUF_LEN_MAX */ + nla_total_size(sizeof(u32)) + /* _RINGS_TX_PUSH_BUF_LEN_MAX */ + nla_total_size(sizeof(u32)) + + /* _RINGS_HEADER_DATA_SPLIT_THRESH */ + nla_total_size(sizeof(u32)); + /* _RINGS_HEADER_DATA_SPLIT_THRESH_MAX*/ } static int rings_fill_reply(struct sk_buff *skb, @@ -108,7 +112,12 @@ static int rings_fill_reply(struct sk_buff *skb, (nla_put_u32(skb, ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN_MAX, kr->tx_push_buf_max_len) || nla_put_u32(skb, ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN, - kr->tx_push_buf_len)))) + kr->tx_push_buf_len))) || + ((supported_ring_params & ETHTOOL_RING_USE_HDS_THRS) && + (nla_put_u32(skb, ETHTOOL_A_RINGS_HEADER_DATA_SPLIT_THRESH, + kr->hds_thresh) || + nla_put_u32(skb, ETHTOOL_A_RINGS_HEADER_DATA_SPLIT_THRESH_MAX, + kr->hds_thresh_max)))) return -EMSGSIZE; return 0; @@ -130,6 +139,7 @@ const struct nla_policy ethnl_rings_set_policy[] = { [ETHTOOL_A_RINGS_TX_PUSH] = NLA_POLICY_MAX(NLA_U8, 1), [ETHTOOL_A_RINGS_RX_PUSH] = NLA_POLICY_MAX(NLA_U8, 1), [ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN] = { .type = NLA_U32 }, + [ETHTOOL_A_RINGS_HEADER_DATA_SPLIT_THRESH] = { .type = NLA_U32 }, }; static int @@ -155,6 +165,14 @@ ethnl_set_rings_validate(struct ethnl_req_info *req_info, return 
-EOPNOTSUPP; } + if (tb[ETHTOOL_A_RINGS_HEADER_DATA_SPLIT_THRESH] && + !(ops->supported_ring_params & ETHTOOL_RING_USE_HDS_THRS)) { + NL_SET_ERR_MSG_ATTR(info->extack, + tb[ETHTOOL_A_RINGS_HEADER_DATA_SPLIT_THRESH], + "setting header-data-split-thresh is not supported"); + return -EOPNOTSUPP; + } + if (tb[ETHTOOL_A_RINGS_CQE_SIZE] && !(ops->supported_ring_params & ETHTOOL_RING_USE_CQE_SIZE)) { NL_SET_ERR_MSG_ATTR(info->extack, @@ -222,9 +240,24 @@ ethnl_set_rings(struct ethnl_req_info *req_info, struct genl_info *info) tb[ETHTOOL_A_RINGS_RX_PUSH], &mod); ethnl_update_u32(&kernel_ringparam.tx_push_buf_len, tb[ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN], &mod); + ethnl_update_u32(&kernel_ringparam.hds_thresh, + tb[ETHTOOL_A_RINGS_HEADER_DATA_SPLIT_THRESH], &mod); if (!mod) return 0; + if (kernel_ringparam.tcp_data_split == ETHTOOL_TCP_DATA_SPLIT_ENABLED && + dev_xdp_sb_prog_count(dev)) { + NL_SET_ERR_MSG(info->extack, + "tcp-data-split can not be enabled with single buffer XDP"); + return -EINVAL; + } + + if (kernel_ringparam.hds_thresh > kernel_ringparam.hds_thresh_max) { + NL_SET_BAD_ATTR(info->extack, + tb[ETHTOOL_A_RINGS_HEADER_DATA_SPLIT_THRESH_MAX]); + return -ERANGE; + } + /* ensure new ring parameters are within limits */ if (ringparam.rx_pending > ringparam.rx_max_pending) err_attr = tb[ETHTOOL_A_RINGS_RX]; From patchwork Tue Oct 22 16:23:55 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Taehee Yoo X-Patchwork-Id: 13845949 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-pl1-f171.google.com (mail-pl1-f171.google.com [209.85.214.171]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 32FFF1B5328; Tue, 22 Oct 2024 16:24:51 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.171 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; 
From: Taehee Yoo
Subject: [PATCH net-next v4 4/8] bnxt_en: add support for header-data-split-thresh ethtool command
Date: Tue, 22 Oct 2024 16:23:55 +0000
Message-Id: <20241022162359.2713094-5-ap420073@gmail.com>
In-Reply-To: <20241022162359.2713094-1-ap420073@gmail.com>
References: <20241022162359.2713094-1-ap420073@gmail.com>

The bnxt_en driver has configured the hds_threshold value automatically
when TPA is enabled, based on the rx-copybreak default value. Now that the
header-data-split-thresh ethtool command has been added, this patch
implements the header-data-split-thresh option for bnxt_en. Configuring
header-data-split-thresh is allowed only when header-data-split is
enabled.
The default value of header-data-split-thresh is 256, which is the default value of rx-copybreak, which used to be the hds_thresh value. # Example: # ethtool -G enp14s0f0np0 tcp-data-split on header-data-split-thresh 256 # ethtool -g enp14s0f0np0 Ring parameters for enp14s0f0np0: Pre-set maximums: ... Header data split thresh: 256 Current hardware settings: ... TCP data split: on Header data split thresh: 256 Tested-by: Stanislav Fomichev Signed-off-by: Taehee Yoo --- v4: - Reduce hole in struct bnxt. - Add ETHTOOL_RING_USE_HDS_THRS to indicate bnxt_en driver support header-data-split-thresh option. - Add Test tag from Stanislav. v3: - Drop validation logic tcp-data-split and tcp-data-split-thresh. v2: - Patch added. drivers/net/ethernet/broadcom/bnxt/bnxt.c | 3 ++- drivers/net/ethernet/broadcom/bnxt/bnxt.h | 2 ++ drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c | 7 ++++++- 3 files changed, 10 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index 91ea42ff9b17..7d9da483b867 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -4453,6 +4453,7 @@ void bnxt_set_tpa_flags(struct bnxt *bp) static void bnxt_init_ring_params(struct bnxt *bp) { bp->rx_copybreak = BNXT_DEFAULT_RX_COPYBREAK; + bp->hds_threshold = BNXT_DEFAULT_RX_COPYBREAK; } /* bp->rx_ring_size, bp->tx_ring_size, dev->mtu, BNXT_FLAG_{G|L}RO flags must @@ -6427,7 +6428,7 @@ static int bnxt_hwrm_vnic_set_hds(struct bnxt *bp, struct bnxt_vnic_info *vnic) VNIC_PLCMODES_CFG_REQ_FLAGS_HDS_IPV6); req->enables |= cpu_to_le32(VNIC_PLCMODES_CFG_REQ_ENABLES_HDS_THRESHOLD_VALID); - req->hds_threshold = cpu_to_le16(bp->rx_copybreak); + req->hds_threshold = cpu_to_le16(bp->hds_threshold); } req->vnic_id = cpu_to_le32(vnic->fw_vnic_id); return hwrm_req_send(bp, req); diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h index 
432bc19b35ea..e467341f1e5b 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h @@ -2361,6 +2361,8 @@ struct bnxt { u8 q_ids[BNXT_MAX_QUEUE]; u8 max_q; u8 num_tc; +#define BNXT_HDS_THRESHOLD_MAX 256 + u16 hds_threshold; unsigned int current_interval; #define BNXT_TIMER_INTERVAL HZ diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c index 5172d0547e0c..73e821a23f56 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c @@ -840,6 +840,9 @@ static void bnxt_get_ringparam(struct net_device *dev, ering->rx_pending = bp->rx_ring_size; ering->rx_jumbo_pending = bp->rx_agg_ring_size; ering->tx_pending = bp->tx_ring_size; + + kernel_ering->hds_thresh = bp->hds_threshold; + kernel_ering->hds_thresh_max = BNXT_HDS_THRESHOLD_MAX; } static int bnxt_set_ringparam(struct net_device *dev, @@ -869,6 +872,7 @@ static int bnxt_set_ringparam(struct net_device *dev, break; } + bp->hds_threshold = (u16)kernel_ering->hds_thresh; bp->rx_ring_size = ering->rx_pending; bp->tx_ring_size = ering->tx_pending; bnxt_set_ring_params(bp); @@ -5357,7 +5361,8 @@ const struct ethtool_ops bnxt_ethtool_ops = { ETHTOOL_COALESCE_STATS_BLOCK_USECS | ETHTOOL_COALESCE_USE_ADAPTIVE_RX | ETHTOOL_COALESCE_USE_CQE, - .supported_ring_params = ETHTOOL_RING_USE_TCP_DATA_SPLIT, + .supported_ring_params = ETHTOOL_RING_USE_TCP_DATA_SPLIT | + ETHTOOL_RING_USE_HDS_THRS, .get_link_ksettings = bnxt_get_link_ksettings, .set_link_ksettings = bnxt_set_link_ksettings, .get_fec_stats = bnxt_get_fec_stats, From patchwork Tue Oct 22 16:23:56 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Taehee Yoo X-Patchwork-Id: 13845950 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-pl1-f178.google.com (mail-pl1-f178.google.com [209.85.214.178]) (using TLSv1.2 with cipher 
From: Taehee Yoo
Subject: [PATCH net-next v4 5/8] net: devmem: add ring parameter filtering
Date: Tue, 22 Oct 2024 16:23:56 +0000
Message-Id: <20241022162359.2713094-6-ap420073@gmail.com>
In-Reply-To: <20241022162359.2713094-1-ap420073@gmail.com>
References: <20241022162359.2713094-1-ap420073@gmail.com>

If the driver doesn't support the ring parameters, or the tcp-data-split
configuration is not sufficient, devmem should not be set up. Before
setting up devmem, tcp-data-split should be ON and the
header-data-split-thresh value should be 0.

Tested-by: Stanislav Fomichev
Signed-off-by: Taehee Yoo
---
v4:
 - Check condition before __netif_get_rx_queue().
 - Separate condition check.
 - Add Tested-by tag from Stanislav.
v3: - Patch added. net/core/devmem.c | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/net/core/devmem.c b/net/core/devmem.c index 11b91c12ee11..3425e872e87a 100644 --- a/net/core/devmem.c +++ b/net/core/devmem.c @@ -8,6 +8,8 @@ */ #include +#include +#include #include #include #include @@ -131,6 +133,8 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx, struct net_devmem_dmabuf_binding *binding, struct netlink_ext_ack *extack) { + struct kernel_ethtool_ringparam kernel_ringparam = {}; + struct ethtool_ringparam ringparam = {}; struct netdev_rx_queue *rxq; u32 xa_idx; int err; @@ -140,6 +144,20 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx, return -ERANGE; } + if (!dev->ethtool_ops->get_ringparam) + return -EOPNOTSUPP; + + dev->ethtool_ops->get_ringparam(dev, &ringparam, &kernel_ringparam, + extack); + if (kernel_ringparam.tcp_data_split != ETHTOOL_TCP_DATA_SPLIT_ENABLED) { + NL_SET_ERR_MSG(extack, "tcp-data-split is disabled"); + return -EINVAL; + } + if (kernel_ringparam.hds_thresh) { + NL_SET_ERR_MSG(extack, "header-data-split-thresh is not zero"); + return -EINVAL; + } + rxq = __netif_get_rx_queue(dev, rxq_idx); if (rxq->mp_params.mp_priv) { NL_SET_ERR_MSG(extack, "designated queue already memory provider bound"); From patchwork Tue Oct 22 16:23:57 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Taehee Yoo X-Patchwork-Id: 13845951 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-pl1-f180.google.com (mail-pl1-f180.google.com [209.85.214.180]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6E4601A4F2B; Tue, 22 Oct 2024 16:25:09 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.180 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; 
([182.213.254.91]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-20e7eee6602sm44755205ad.1.2024.10.22.09.25.00 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 22 Oct 2024 09:25:07 -0700 (PDT) From: Taehee Yoo To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com, edumazet@google.com, almasrymina@google.com, donald.hunter@gmail.com, corbet@lwn.net, michael.chan@broadcom.com, andrew+netdev@lunn.ch, hawk@kernel.org, ilias.apalodimas@linaro.org, ast@kernel.org, daniel@iogearbox.net, john.fastabend@gmail.com, dw@davidwei.uk, sdf@fomichev.me, asml.silence@gmail.com, brett.creeley@amd.com, linux-doc@vger.kernel.org, netdev@vger.kernel.org Cc: kory.maincent@bootlin.com, maxime.chevallier@bootlin.com, danieller@nvidia.com, hengqi@linux.alibaba.com, ecree.xilinx@gmail.com, przemyslaw.kitszel@intel.com, hkallweit1@gmail.com, ahmed.zaki@intel.com, rrameshbabu@nvidia.com, idosch@nvidia.com, jiri@resnulli.us, bigeasy@linutronix.de, lorenzo@kernel.org, jdamato@fastly.com, aleksander.lobakin@intel.com, kaiyuanz@google.com, willemb@google.com, daniel.zahka@gmail.com, ap420073@gmail.com Subject: [PATCH net-next v4 6/8] net: ethtool: add ring parameter filtering Date: Tue, 22 Oct 2024 16:23:57 +0000 Message-Id: <20241022162359.2713094-7-ap420073@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241022162359.2713094-1-ap420073@gmail.com> References: <20241022162359.2713094-1-ap420073@gmail.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org While the devmem is running, the tcp-data-split and header-data-split-thresh configuration should not be changed. If user tries to change tcp-data-split and threshold value while the devmem is running, it fails and shows extack message. Tested-by: Stanislav Fomichev Signed-off-by: Taehee Yoo --- v4: - Add netdev_devmem_enabled() helper. - Add Test tag from Stanislav. 
v3:
 - Patch added.

 include/net/netdev_rx_queue.h | 14 ++++++++++++++
 net/ethtool/common.h          |  1 +
 net/ethtool/rings.c           | 13 +++++++++++++
 3 files changed, 28 insertions(+)

diff --git a/include/net/netdev_rx_queue.h b/include/net/netdev_rx_queue.h
index 596836abf7bf..7fbb64ce8d89 100644
--- a/include/net/netdev_rx_queue.h
+++ b/include/net/netdev_rx_queue.h
@@ -55,6 +55,20 @@ get_netdev_rx_queue_index(struct netdev_rx_queue *queue)
 	return index;
 }
 
+static inline bool netdev_devmem_enabled(struct net_device *dev)
+{
+	struct netdev_rx_queue *queue;
+	int i;
+
+	for (i = 0; i < dev->real_num_rx_queues; i++) {
+		queue = __netif_get_rx_queue(dev, i);
+		if (queue->mp_params.mp_priv)
+			return true;
+	}
+
+	return false;
+}
+
 int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq);
 
 #endif
diff --git a/net/ethtool/common.h b/net/ethtool/common.h
index 4a2de3ce7354..5b8e5847ba3c 100644
--- a/net/ethtool/common.h
+++ b/net/ethtool/common.h
@@ -5,6 +5,7 @@
 
 #include
 #include
+#include
 
 #define ETHTOOL_DEV_FEATURE_WORDS	DIV_ROUND_UP(NETDEV_FEATURE_COUNT, 32)
diff --git a/net/ethtool/rings.c b/net/ethtool/rings.c
index e1fd82a91014..ca313c301081 100644
--- a/net/ethtool/rings.c
+++ b/net/ethtool/rings.c
@@ -258,6 +258,19 @@ ethnl_set_rings(struct ethnl_req_info *req_info, struct genl_info *info)
 		return -ERANGE;
 	}
 
+	if (netdev_devmem_enabled(dev)) {
+		if (kernel_ringparam.tcp_data_split !=
+		    ETHTOOL_TCP_DATA_SPLIT_ENABLED) {
+			NL_SET_ERR_MSG(info->extack,
+				       "tcp-data-split should be enabled while devmem is running");
+			return -EINVAL;
+		} else if (kernel_ringparam.hds_thresh) {
+			NL_SET_ERR_MSG(info->extack,
+				       "header-data-split-thresh should be zero while devmem is running");
+			return -EINVAL;
+		}
+	}
+
 	/* ensure new ring parameters are within limits */
 	if (ringparam.rx_pending > ringparam.rx_max_pending)
 		err_attr = tb[ETHTOOL_A_RINGS_RX];

From patchwork Tue Oct 22 16:23:58 2024
X-Patchwork-Submitter: Taehee Yoo
X-Patchwork-Id: 13845952
From: Taehee Yoo
Subject: [PATCH net-next v4 7/8] net: netmem: add netmem_is_pfmemalloc() helper function
Date: Tue, 22 Oct 2024 16:23:58 +0000
Message-Id: <20241022162359.2713094-8-ap420073@gmail.com>
In-Reply-To: <20241022162359.2713094-1-ap420073@gmail.com>
References: <20241022162359.2713094-1-ap420073@gmail.com>

netmem_is_pfmemalloc() is a netmem version of page_is_pfmemalloc().

Tested-by: Stanislav Fomichev
Suggested-by: Mina Almasry
Signed-off-by: Taehee Yoo
---
v4:
 - Patch added.
 include/net/netmem.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/include/net/netmem.h b/include/net/netmem.h
index 8a6e20be4b9d..49ae2bf05362 100644
--- a/include/net/netmem.h
+++ b/include/net/netmem.h
@@ -171,4 +171,12 @@ static inline unsigned long netmem_get_dma_addr(netmem_ref netmem)
 	return __netmem_clear_lsb(netmem)->dma_addr;
 }
 
+static inline bool netmem_is_pfmemalloc(netmem_ref netmem)
+{
+	if (netmem_is_net_iov(netmem))
+		return false;
+
+	return page_is_pfmemalloc(netmem_to_page(netmem));
+}
+
 #endif /* _NET_NETMEM_H */

From patchwork Tue Oct 22 16:23:59 2024
X-Patchwork-Submitter: Taehee Yoo
X-Patchwork-Id: 13845953
From: Taehee Yoo
Subject: [PATCH net-next v4 8/8] bnxt_en: add support for device memory tcp
Date: Tue, 22 Oct 2024 16:23:59 +0000
Message-Id: <20241022162359.2713094-9-ap420073@gmail.com>
In-Reply-To: <20241022162359.2713094-1-ap420073@gmail.com>
References: <20241022162359.2713094-1-ap420073@gmail.com>
Currently, the bnxt_en driver satisfies the requirement of device memory
TCP, namely tcp-data-split, so this patch implements device memory TCP
support in the bnxt_en driver.

From now on, the aggregation ring handles netmem_ref instead of struct
page, regardless of whether netmem is on or off, so memory for the
aggregation ring is handled with the netmem page_pool API instead of the
generic page_pool API. If devmem is enabled, the netmem_ref is used
as-is; if devmem is not enabled, the netmem_ref is converted to a page
before use.

The driver recognizes whether devmem is set based on whether
mp_params.mp_priv is non-NULL. Only if devmem is set does it pass
PP_FLAG_ALLOW_UNREADABLE_NETMEM.

Tested-by: Stanislav Fomichev
Signed-off-by: Taehee Yoo
---
v4:
 - Do not select NET_DEVMEM in Kconfig.
 - Pass PP_FLAG_ALLOW_UNREADABLE_NETMEM flag unconditionally.
 - Add __bnxt_rx_agg_pages_xdp().
 - Use gfp flag in __bnxt_alloc_rx_netmem().
 - Do not add *offset in the __bnxt_alloc_rx_netmem().
 - Do not pass queue_idx to bnxt_alloc_rx_page_pool().
 - Add Test tag from Stanislav.
 - Add page_pool_recycle_direct_netmem() helper.
v3:
 - Patch added.
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 182 ++++++++++++++++------
 drivers/net/ethernet/broadcom/bnxt/bnxt.h |   2 +-
 include/net/page_pool/helpers.h           |   6 +
 3 files changed, 142 insertions(+), 48 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 7d9da483b867..7924b1da0413 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -55,6 +55,7 @@
 #include
 #include
 #include
+#include
 
 #include "bnxt_hsi.h"
 #include "bnxt.h"
@@ -863,6 +864,22 @@ static void bnxt_tx_int(struct bnxt *bp, struct bnxt_napi *bnapi, int budget)
 	bnapi->events &= ~BNXT_TX_CMP_EVENT;
 }
 
+static netmem_ref __bnxt_alloc_rx_netmem(struct bnxt *bp, dma_addr_t *mapping,
+					 struct bnxt_rx_ring_info *rxr,
+					 unsigned int *offset,
+					 gfp_t gfp)
+{
+	netmem_ref netmem;
+
+	netmem = page_pool_alloc_netmem(rxr->page_pool, gfp);
+	if (!netmem)
+		return 0;
+	*offset = 0;
+
+	*mapping = page_pool_get_dma_addr_netmem(netmem);
+	return netmem;
+}
+
 static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
 					 struct bnxt_rx_ring_info *rxr,
 					 unsigned int *offset,
@@ -972,21 +989,21 @@ static inline u16 bnxt_find_next_agg_idx(struct bnxt_rx_ring_info *rxr, u16 idx)
 	return next;
 }
 
-static inline int bnxt_alloc_rx_page(struct bnxt *bp,
-				     struct bnxt_rx_ring_info *rxr,
-				     u16 prod, gfp_t gfp)
+static inline int bnxt_alloc_rx_netmem(struct bnxt *bp,
+				       struct bnxt_rx_ring_info *rxr,
+				       u16 prod, gfp_t gfp)
 {
 	struct rx_bd *rxbd =
 		&rxr->rx_agg_desc_ring[RX_AGG_RING(bp, prod)][RX_IDX(prod)];
 	struct bnxt_sw_rx_agg_bd *rx_agg_buf;
-	struct page *page;
-	dma_addr_t mapping;
 	u16 sw_prod = rxr->rx_sw_agg_prod;
 	unsigned int offset = 0;
+	dma_addr_t mapping;
+	netmem_ref netmem;
 
-	page = __bnxt_alloc_rx_page(bp, &mapping, rxr, &offset, gfp);
+	netmem = __bnxt_alloc_rx_netmem(bp, &mapping, rxr, &offset, gfp);
 
-	if (!page)
+	if (!netmem)
 		return -ENOMEM;
 
 	if (unlikely(test_bit(sw_prod, rxr->rx_agg_bmap)))
@@ -996,7 +1013,7 @@ static inline int bnxt_alloc_rx_page(struct bnxt *bp,
 	rx_agg_buf = &rxr->rx_agg_ring[sw_prod];
 	rxr->rx_sw_agg_prod = RING_RX_AGG(bp, NEXT_RX_AGG(sw_prod));
 
-	rx_agg_buf->page = page;
+	rx_agg_buf->netmem = netmem;
 	rx_agg_buf->offset = offset;
 	rx_agg_buf->mapping = mapping;
 	rxbd->rx_bd_haddr = cpu_to_le64(mapping);
@@ -1044,7 +1061,7 @@ static void bnxt_reuse_rx_agg_bufs(struct bnxt_cp_ring_info *cpr, u16 idx,
 		struct rx_agg_cmp *agg;
 		struct bnxt_sw_rx_agg_bd *cons_rx_buf, *prod_rx_buf;
 		struct rx_bd *prod_bd;
-		struct page *page;
+		netmem_ref netmem;
 
 		if (p5_tpa)
 			agg = bnxt_get_tpa_agg_p5(bp, rxr, idx, start + i);
@@ -1061,11 +1078,11 @@ static void bnxt_reuse_rx_agg_bufs(struct bnxt_cp_ring_info *cpr, u16 idx,
 		cons_rx_buf = &rxr->rx_agg_ring[cons];
 
 		/* It is possible for sw_prod to be equal to cons, so
-		 * set cons_rx_buf->page to NULL first.
+		 * set cons_rx_buf->netmem to 0 first.
 		 */
-		page = cons_rx_buf->page;
-		cons_rx_buf->page = NULL;
-		prod_rx_buf->page = page;
+		netmem = cons_rx_buf->netmem;
+		cons_rx_buf->netmem = 0;
+		prod_rx_buf->netmem = netmem;
 		prod_rx_buf->offset = cons_rx_buf->offset;
 
 		prod_rx_buf->mapping = cons_rx_buf->mapping;
@@ -1190,29 +1207,104 @@ static struct sk_buff *bnxt_rx_skb(struct bnxt *bp,
 	return skb;
 }
 
-static u32 __bnxt_rx_agg_pages(struct bnxt *bp,
-			       struct bnxt_cp_ring_info *cpr,
-			       struct skb_shared_info *shinfo,
-			       u16 idx, u32 agg_bufs, bool tpa,
-			       struct xdp_buff *xdp)
+static bool __bnxt_rx_agg_pages_skb(struct bnxt *bp,
+				    struct bnxt_cp_ring_info *cpr,
+				    struct sk_buff *skb,
+				    u16 idx, u32 agg_bufs, bool tpa)
 {
 	struct bnxt_napi *bnapi = cpr->bnapi;
 	struct pci_dev *pdev = bp->pdev;
-	struct bnxt_rx_ring_info *rxr = bnapi->rx_ring;
-	u16 prod = rxr->rx_agg_prod;
+	struct bnxt_rx_ring_info *rxr;
 	u32 i, total_frag_len = 0;
 	bool p5_tpa = false;
+	u16 prod;
+
+	rxr = bnapi->rx_ring;
+	prod = rxr->rx_agg_prod;
 
 	if ((bp->flags & BNXT_FLAG_CHIP_P5_PLUS) && tpa)
 		p5_tpa = true;
 
 	for (i = 0; i < agg_bufs; i++) {
-		skb_frag_t *frag = &shinfo->frags[i];
-		u16 cons, frag_len;
+		struct bnxt_sw_rx_agg_bd *cons_rx_buf;
 		struct rx_agg_cmp *agg;
+		u16 cons, frag_len;
+		dma_addr_t mapping;
+		netmem_ref netmem;
+
+		if (p5_tpa)
+			agg = bnxt_get_tpa_agg_p5(bp, rxr, idx, i);
+		else
+			agg = bnxt_get_agg(bp, cpr, idx, i);
+		cons = agg->rx_agg_cmp_opaque;
+		frag_len = (le32_to_cpu(agg->rx_agg_cmp_len_flags_type) &
+			    RX_AGG_CMP_LEN) >> RX_AGG_CMP_LEN_SHIFT;
+
+		cons_rx_buf = &rxr->rx_agg_ring[cons];
+		skb_add_rx_frag_netmem(skb, i, cons_rx_buf->netmem,
+				       cons_rx_buf->offset, frag_len,
+				       BNXT_RX_PAGE_SIZE);
+		__clear_bit(cons, rxr->rx_agg_bmap);
+
+		/* It is possible for bnxt_alloc_rx_netmem() to allocate
+		 * a sw_prod index that equals the cons index, so we
+		 * need to clear the cons entry now.
+		 */
+		mapping = cons_rx_buf->mapping;
+		netmem = cons_rx_buf->netmem;
+		cons_rx_buf->netmem = 0;
+
+		if (bnxt_alloc_rx_netmem(bp, rxr, prod, GFP_ATOMIC) != 0) {
+			skb->len -= frag_len;
+			skb->data_len -= frag_len;
+			skb->truesize -= BNXT_RX_PAGE_SIZE;
+			--skb_shinfo(skb)->nr_frags;
+			cons_rx_buf->netmem = netmem;
+
+			/* Update prod since possibly some pages have been
+			 * allocated already.
+			 */
+			rxr->rx_agg_prod = prod;
+			bnxt_reuse_rx_agg_bufs(cpr, idx, i, agg_bufs - i, tpa);
+			return 0;
+		}
+
+		dma_sync_single_for_cpu(&pdev->dev, mapping, BNXT_RX_PAGE_SIZE,
+					bp->rx_dir);
+
+		total_frag_len += frag_len;
+		prod = NEXT_RX_AGG(prod);
+	}
+	rxr->rx_agg_prod = prod;
+	return total_frag_len;
+}
+
+static u32 __bnxt_rx_agg_pages_xdp(struct bnxt *bp,
+				   struct bnxt_cp_ring_info *cpr,
+				   struct skb_shared_info *shinfo,
+				   u16 idx, u32 agg_bufs, bool tpa,
+				   struct xdp_buff *xdp)
+{
+	struct bnxt_napi *bnapi = cpr->bnapi;
+	struct pci_dev *pdev = bp->pdev;
+	struct bnxt_rx_ring_info *rxr;
+	u32 i, total_frag_len = 0;
+	bool p5_tpa = false;
+	u16 prod;
+
+	rxr = bnapi->rx_ring;
+	prod = rxr->rx_agg_prod;
+
+	if ((bp->flags & BNXT_FLAG_CHIP_P5_PLUS) && tpa)
+		p5_tpa = true;
+
+	for (i = 0; i < agg_bufs; i++) {
 		struct bnxt_sw_rx_agg_bd *cons_rx_buf;
-		struct page *page;
+		skb_frag_t *frag = &shinfo->frags[i];
+		struct rx_agg_cmp *agg;
+		u16 cons, frag_len;
 		dma_addr_t mapping;
+		netmem_ref netmem;
 
 		if (p5_tpa)
 			agg = bnxt_get_tpa_agg_p5(bp, rxr, idx, i);
@@ -1223,9 +1315,10 @@ static u32 __bnxt_rx_agg_pages(struct bnxt *bp,
 			    RX_AGG_CMP_LEN) >> RX_AGG_CMP_LEN_SHIFT;
 
 		cons_rx_buf = &rxr->rx_agg_ring[cons];
-		skb_frag_fill_page_desc(frag, cons_rx_buf->page,
-					cons_rx_buf->offset, frag_len);
+		skb_frag_fill_netmem_desc(frag, cons_rx_buf->netmem,
+					  cons_rx_buf->offset, frag_len);
 		shinfo->nr_frags = i + 1;
+
 		__clear_bit(cons, rxr->rx_agg_bmap);
 
 		/* It is possible for bnxt_alloc_rx_page() to allocate
@@ -1233,15 +1326,15 @@ static u32 __bnxt_rx_agg_pages(struct bnxt *bp,
 		 * need to clear the cons entry now.
 		 */
 		mapping = cons_rx_buf->mapping;
-		page = cons_rx_buf->page;
-		cons_rx_buf->page = NULL;
+		netmem = cons_rx_buf->netmem;
+		cons_rx_buf->netmem = 0;
 
-		if (xdp && page_is_pfmemalloc(page))
+		if (netmem_is_pfmemalloc(netmem))
 			xdp_buff_set_frag_pfmemalloc(xdp);
 
-		if (bnxt_alloc_rx_page(bp, rxr, prod, GFP_ATOMIC) != 0) {
+		if (bnxt_alloc_rx_netmem(bp, rxr, prod, GFP_ATOMIC) != 0) {
 			--shinfo->nr_frags;
-			cons_rx_buf->page = page;
+			cons_rx_buf->netmem = netmem;
 
 			/* Update prod since possibly some pages have been
 			 * allocated already.
@@ -1266,20 +1359,12 @@ static struct sk_buff *bnxt_rx_agg_pages_skb(struct bnxt *bp,
 					     struct sk_buff *skb, u16 idx,
 					     u32 agg_bufs, bool tpa)
 {
-	struct skb_shared_info *shinfo = skb_shinfo(skb);
-	u32 total_frag_len = 0;
-
-	total_frag_len = __bnxt_rx_agg_pages(bp, cpr, shinfo, idx,
-					     agg_bufs, tpa, NULL);
-	if (!total_frag_len) {
+	if (!__bnxt_rx_agg_pages_skb(bp, cpr, skb, idx, agg_bufs, tpa)) {
 		skb_mark_for_recycle(skb);
 		dev_kfree_skb(skb);
 		return NULL;
 	}
 
-	skb->data_len += total_frag_len;
-	skb->len += total_frag_len;
-	skb->truesize += BNXT_RX_PAGE_SIZE * agg_bufs;
 	return skb;
 }
 
@@ -1294,8 +1379,8 @@ static u32 bnxt_rx_agg_pages_xdp(struct bnxt *bp,
 	if (!xdp_buff_has_frags(xdp))
 		shinfo->nr_frags = 0;
 
-	total_frag_len = __bnxt_rx_agg_pages(bp, cpr, shinfo,
-					     idx, agg_bufs, tpa, xdp);
+	total_frag_len = __bnxt_rx_agg_pages_xdp(bp, cpr, shinfo,
+						 idx, agg_bufs, tpa, xdp);
 
 	if (total_frag_len) {
 		xdp_buff_set_frags_flag(xdp);
 		shinfo->nr_frags = agg_bufs;
@@ -3341,15 +3426,15 @@ static void bnxt_free_one_rx_agg_ring(struct bnxt *bp, struct bnxt_rx_ring_info
 
 	for (i = 0; i < max_idx; i++) {
 		struct bnxt_sw_rx_agg_bd *rx_agg_buf = &rxr->rx_agg_ring[i];
-		struct page *page = rx_agg_buf->page;
+		netmem_ref netmem = rx_agg_buf->netmem;
 
-		if (!page)
+		if (!netmem)
 			continue;
 
-		rx_agg_buf->page = NULL;
+		rx_agg_buf->netmem = 0;
 		__clear_bit(i, rxr->rx_agg_bmap);
 
-		page_pool_recycle_direct(rxr->page_pool, page);
+		page_pool_recycle_direct_netmem(rxr->page_pool, netmem);
 	}
 }
 
@@ -3620,7 +3705,10 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
 	pp.dev = &bp->pdev->dev;
 	pp.dma_dir = bp->rx_dir;
 	pp.max_len = PAGE_SIZE;
-	pp.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
+	pp.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV |
+		   PP_FLAG_ALLOW_UNREADABLE_NETMEM;
+	pp.queue_idx = rxr->bnapi->index;
+	pp.order = 0;
 
 	rxr->page_pool = page_pool_create(&pp);
 	if (IS_ERR(rxr->page_pool)) {
@@ -4153,7 +4241,7 @@ static void bnxt_alloc_one_rx_ring_page(struct bnxt *bp,
 
 	prod = rxr->rx_agg_prod;
 	for (i = 0; i < bp->rx_agg_ring_size; i++) {
-		if (bnxt_alloc_rx_page(bp, rxr, prod, GFP_KERNEL)) {
+		if (bnxt_alloc_rx_netmem(bp, rxr, prod, GFP_KERNEL)) {
 			netdev_warn(bp->dev, "init'ed rx ring %d with %d/%d pages only\n",
 				    ring_nr, i, bp->rx_ring_size);
 			break;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index e467341f1e5b..c38b0a8836e2 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -894,7 +894,7 @@ struct bnxt_sw_rx_bd {
 };
 
 struct bnxt_sw_rx_agg_bd {
-	struct page *page;
+	netmem_ref netmem;
 	unsigned int offset;
 	dma_addr_t mapping;
 };
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index 793e6fd78bc5..0149f6f6208f 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -382,6 +382,12 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
 	page_pool_put_full_page(pool, page, true);
 }
 
+static inline void page_pool_recycle_direct_netmem(struct page_pool *pool,
+						   netmem_ref netmem)
+{
+	page_pool_put_full_netmem(pool, netmem, true);
+}
+
 #define PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA	\
 	(sizeof(dma_addr_t) > sizeof(unsigned long))
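The rejection rule patch 6/8 adds to ethnl_set_rings() can be sketched as a small userspace model: a device counts as "devmem-enabled" when any RX queue carries a memory-provider private pointer, and while that holds, tcp-data-split must stay enabled and hds_thresh must stay zero. All `model_`-prefixed names and the `MAX_RXQ` bound are illustrative stand-ins, not the kernel's types; only the decision logic mirrors the patch.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

#define MAX_RXQ 8
#define TCP_DATA_SPLIT_ENABLED 1

/* Stand-in for struct netdev_rx_queue: only the mp_priv field matters here. */
struct model_rxq { void *mp_priv; };

struct model_dev {
        int real_num_rx_queues;
        struct model_rxq rxq[MAX_RXQ];
};

/* Mirrors netdev_devmem_enabled(): any queue with mp_priv set means devmem. */
static int model_devmem_enabled(const struct model_dev *dev)
{
        for (int i = 0; i < dev->real_num_rx_queues; i++)
                if (dev->rxq[i].mp_priv)
                        return 1;
        return 0;
}

/* Mirrors the check added to ethnl_set_rings(): while devmem is active,
 * tcp-data-split must remain enabled and the HDS threshold must be zero.
 */
static int model_set_rings(const struct model_dev *dev,
                           int tcp_data_split, unsigned int hds_thresh)
{
        if (model_devmem_enabled(dev)) {
                if (tcp_data_split != TCP_DATA_SPLIT_ENABLED)
                        return -EINVAL;
                if (hds_thresh)
                        return -EINVAL;
        }
        return 0;
}
```

With no queue bound to a memory provider, any combination passes; once one queue is bound, only "split enabled, threshold zero" is accepted, matching the two extack messages in the patch.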
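The netmem_is_pfmemalloc() helper from patch 7/8 relies on netmem_ref being a word that is either a page pointer or a net_iov pointer, told apart by its low bit. The userspace sketch below models that shape; `model_netmem_ref`, `NET_IOV_BIT`, and the `model_` helpers are illustrative assumptions, not the kernel's actual encoding or API. An `int` flag keeps the struct aligned so real pointers never have the tag bit set.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Model of netmem_ref: a tagged word. Low bit set -> net_iov, clear -> page. */
typedef uintptr_t model_netmem_ref;

#define NET_IOV_BIT 0x1UL

/* Stand-in for struct page; int gives >= 4-byte alignment, so the low
 * pointer bit is free to use as the tag.
 */
struct model_page { int pfmemalloc; };

static bool model_netmem_is_net_iov(model_netmem_ref netmem)
{
        return netmem & NET_IOV_BIT;
}

static struct model_page *model_netmem_to_page(model_netmem_ref netmem)
{
        return (struct model_page *)netmem;
}

/* Same shape as the patch's helper: net_iovs are never pfmemalloc;
 * for pages, defer to the page's own flag.
 */
static bool model_netmem_is_pfmemalloc(model_netmem_ref netmem)
{
        if (model_netmem_is_net_iov(netmem))
                return false;
        return model_netmem_to_page(netmem)->pfmemalloc;
}
```

The early return for net_iovs is what lets the bnxt XDP path in patch 8/8 call the helper unconditionally instead of guarding with `xdp && page_is_pfmemalloc(page)`.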
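The comment patch 8/8 updates in bnxt_reuse_rx_agg_bufs() ("it is possible for sw_prod to be equal to cons, so set cons_rx_buf->netmem to 0 first") describes an aliasing-safe buffer move, with 0 now playing the role NULL played for page pointers. A userspace model of just that move (all names illustrative, not driver code):

```c
#include <assert.h>
#include <stdint.h>

typedef uintptr_t model_netmem_ref;       /* 0 == empty slot sentinel */

/* Stand-in for struct bnxt_sw_rx_agg_bd after the netmem conversion. */
struct model_agg_bd {
        model_netmem_ref netmem;
        unsigned int offset;
        uint64_t mapping;
};

/* Move a buffer from the consumer slot to the producer slot. Because
 * prod may alias cons, save and clear cons->netmem before writing
 * prod->netmem; the value then survives even when prod == cons.
 */
static void model_move_agg_buf(struct model_agg_bd *cons,
                               struct model_agg_bd *prod)
{
        model_netmem_ref netmem = cons->netmem;

        cons->netmem = 0;
        prod->netmem = netmem;
        prod->offset = cons->offset;
        prod->mapping = cons->mapping;
}
```

Writing `prod->netmem` before clearing `cons->netmem` would lose nothing in the distinct-slot case, but reversing the order is what keeps the aliased case (prod == cons) from ending with an empty slot.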