From patchwork Tue Jun 18 05:52:00 2024
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13701815
X-Patchwork-Delegate: kuba@kernel.org
From: David Wei
To: Michael Chan, Andy Gospodarek, Adrian Alvarado, Somnath Kotur,
 netdev@vger.kernel.org
Cc: Pavel Begunkov, Jakub Kicinski, David Ahern, "David S. Miller",
 Eric Dumazet, Paolo Abeni
Subject: [PATCH net-next v2 1/3] bnxt_en: Add support to call FW to update a VNIC
Date: Mon, 17 Jun 2024 22:52:00 -0700
Message-ID: <20240618055202.2530064-2-dw@davidwei.uk>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240618055202.2530064-1-dw@davidwei.uk>
References: <20240618055202.2530064-1-dw@davidwei.uk>

From: Michael Chan

Add the HWRM_VNIC_UPDATE message structures and the function to send
the message to firmware. This message can be used when disabling and
enabling a receive ring within a VNIC. The mru, which is the maximum
receive size of packets received by the VNIC, can also be updated.

Signed-off-by: Michael Chan
Signed-off-by: David Wei
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c     | 23 +++++++++++-
 drivers/net/ethernet/broadcom/bnxt/bnxt.h     |  3 ++
 drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h | 37 +++++++++++++++++++
 3 files changed, 62 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 7dc00c0d8992..ebcf393f06dc 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -6452,7 +6452,8 @@ int bnxt_hwrm_vnic_cfg(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 		req->dflt_ring_grp = cpu_to_le16(bp->grp_info[grp_idx].fw_grp_id);
 		req->lb_rule = cpu_to_le16(0xffff);
 vnic_mru:
-	req->mru = cpu_to_le16(bp->dev->mtu + ETH_HLEN + VLAN_HLEN);
+	vnic->mru = bp->dev->mtu + ETH_HLEN + VLAN_HLEN;
+	req->mru = cpu_to_le16(vnic->mru);
 	req->vnic_id = cpu_to_le16(vnic->fw_vnic_id);
 
 #ifdef CONFIG_BNXT_SRIOV
@@ -9912,6 +9913,26 @@ static int __bnxt_setup_vnic(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	return rc;
 }
 
+int bnxt_hwrm_vnic_update(struct bnxt *bp, struct bnxt_vnic_info *vnic,
+			  u8 valid)
+{
+	struct hwrm_vnic_update_input *req;
+	int rc;
+
+	rc = hwrm_req_init(bp, req, HWRM_VNIC_UPDATE);
+	if (rc)
+		return rc;
+
+	req->vnic_id = cpu_to_le32(vnic->fw_vnic_id);
+
+	if (valid & VNIC_UPDATE_REQ_ENABLES_MRU_VALID)
+		req->mru = cpu_to_le16(vnic->mru);
+
+	req->enables = cpu_to_le32(valid);
+
+	return hwrm_req_send(bp, req);
+}
+
 int bnxt_hwrm_vnic_rss_cfg_p5(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 {
 	int rc;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index bbc7edccd5a4..af4229026ed6 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -1221,6 +1221,7 @@ struct bnxt_vnic_info {
 #define BNXT_MAX_CTX_PER_VNIC	8
 	u16		fw_rss_cos_lb_ctx[BNXT_MAX_CTX_PER_VNIC];
 	u16		fw_l2_ctx_id;
+	u16		mru;
 #define BNXT_MAX_UC_ADDRS	4
 	struct bnxt_l2_filter *l2_filters[BNXT_MAX_UC_ADDRS];
 	/* index 0 always dev_addr */
@@ -2804,6 +2805,8 @@ int bnxt_hwrm_free_wol_fltr(struct bnxt *bp);
 int bnxt_hwrm_func_resc_qcaps(struct bnxt *bp, bool all);
 int bnxt_hwrm_func_qcaps(struct bnxt *bp);
 int bnxt_hwrm_fw_set_time(struct bnxt *);
+int bnxt_hwrm_vnic_update(struct bnxt *bp, struct bnxt_vnic_info *vnic,
+			  u8 valid);
 int bnxt_hwrm_vnic_rss_cfg_p5(struct bnxt *bp, struct bnxt_vnic_info *vnic);
 int __bnxt_setup_vnic_p5(struct bnxt *bp, struct bnxt_vnic_info *vnic);
 void bnxt_del_one_rss_ctx(struct bnxt *bp, struct bnxt_rss_ctx *rss_ctx,
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h
index 06ea86c80be1..abb1463b056c 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h
@@ -6469,6 +6469,43 @@ struct hwrm_vnic_alloc_output {
 	u8	valid;
 };
 
+/* hwrm_vnic_update_input (size:256b/32B) */
+struct hwrm_vnic_update_input {
+	__le16	req_type;
+	__le16	cmpl_ring;
+	__le16	seq_id;
+	__le16	target_id;
+	__le64	resp_addr;
+	__le32	vnic_id;
+	__le32	enables;
+	#define VNIC_UPDATE_REQ_ENABLES_VNIC_STATE_VALID           0x1UL
+	#define VNIC_UPDATE_REQ_ENABLES_MRU_VALID                  0x2UL
+	#define VNIC_UPDATE_REQ_ENABLES_METADATA_FORMAT_TYPE_VALID 0x4UL
+	u8	vnic_state;
+	#define VNIC_UPDATE_REQ_VNIC_STATE_NORMAL 0x0UL
+	#define VNIC_UPDATE_REQ_VNIC_STATE_DROP   0x1UL
+	#define VNIC_UPDATE_REQ_VNIC_STATE_LAST  VNIC_UPDATE_REQ_VNIC_STATE_DROP
+	u8	metadata_format_type;
+	#define VNIC_UPDATE_REQ_METADATA_FORMAT_TYPE_0 0x0UL
+	#define VNIC_UPDATE_REQ_METADATA_FORMAT_TYPE_1 0x1UL
+	#define VNIC_UPDATE_REQ_METADATA_FORMAT_TYPE_2 0x2UL
+	#define VNIC_UPDATE_REQ_METADATA_FORMAT_TYPE_3 0x3UL
+	#define VNIC_UPDATE_REQ_METADATA_FORMAT_TYPE_4 0x4UL
+	#define VNIC_UPDATE_REQ_METADATA_FORMAT_TYPE_LAST VNIC_UPDATE_REQ_METADATA_FORMAT_TYPE_4
+	__le16	mru;
+	u8	unused_1[4];
+};
+
+/* hwrm_vnic_update_output (size:128b/16B) */
+struct hwrm_vnic_update_output {
+	__le16	error_code;
+	__le16	req_type;
+	__le16	seq_id;
+	__le16	resp_len;
+	u8	unused_0[7];
+	u8	valid;
+};
+
 /* hwrm_vnic_free_input (size:192b/24B) */
 struct hwrm_vnic_free_input {
 	__le16	req_type;
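A quick usage sketch, not part of the patch itself: assuming bp and vnic are
already configured, the new helper can quiesce and then re-enable a VNIC's
receive path by updating the mru. The example_* wrapper name is hypothetical;
the same pattern appears in patch 3's bnxt_queue_stop()/bnxt_queue_start():

/* Illustrative sketch only: pause and resume RX on a VNIC via
 * HWRM_VNIC_UPDATE. An mru of 0 stops the VNIC from placing packets.
 */
static int example_vnic_rx_pause_resume(struct bnxt *bp,
					struct bnxt_vnic_info *vnic)
{
	int rc;

	vnic->mru = 0;
	rc = bnxt_hwrm_vnic_update(bp, vnic,
				   VNIC_UPDATE_REQ_ENABLES_MRU_VALID);
	if (rc)
		return rc;

	/* ... safely tear down and rebuild the rx ring here ... */

	vnic->mru = bp->dev->mtu + ETH_HLEN + VLAN_HLEN;
	return bnxt_hwrm_vnic_update(bp, vnic,
				     VNIC_UPDATE_REQ_ENABLES_MRU_VALID);
}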
From patchwork Tue Jun 18 05:52:01 2024
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13701816
X-Patchwork-Delegate: kuba@kernel.org
From: David Wei
To: Michael Chan, Andy Gospodarek, Adrian Alvarado, Somnath Kotur,
 netdev@vger.kernel.org
Cc: Pavel Begunkov, Jakub Kicinski, David Ahern, "David S. Miller",
 Eric Dumazet, Paolo Abeni
Subject: [PATCH net-next v2 2/3] bnxt_en: split rx ring helpers out from ring helpers
Date: Mon, 17 Jun 2024 22:52:01 -0700
Message-ID: <20240618055202.2530064-3-dw@davidwei.uk>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240618055202.2530064-1-dw@davidwei.uk>
References: <20240618055202.2530064-1-dw@davidwei.uk>

To prepare for the queue API implementation, split the rx ring functions
out from the combined ring helpers. These new helpers will be called
from the queue API implementation.

Signed-off-by: David Wei
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 300 ++++++++++++++--------
 1 file changed, 193 insertions(+), 107 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index ebcf393f06dc..bbe37ea8e1ef 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -3317,37 +3317,12 @@ static void bnxt_free_tx_skbs(struct bnxt *bp)
 	}
 }
 
-static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp, int ring_nr)
+static void bnxt_free_one_rx_ring(struct bnxt *bp, struct bnxt_rx_ring_info *rxr)
 {
-	struct bnxt_rx_ring_info *rxr = &bp->rx_ring[ring_nr];
 	struct pci_dev *pdev = bp->pdev;
-	struct bnxt_tpa_idx_map *map;
-	int i, max_idx, max_agg_idx;
+	int i, max_idx;
 
 	max_idx = bp->rx_nr_pages * RX_DESC_CNT;
-	max_agg_idx = bp->rx_agg_nr_pages * RX_DESC_CNT;
-	if (!rxr->rx_tpa)
-		goto skip_rx_tpa_free;
-
-	for (i = 0; i < bp->max_tpa; i++) {
-		struct bnxt_tpa_info *tpa_info = &rxr->rx_tpa[i];
-		u8 *data = tpa_info->data;
-
-		if (!data)
-			continue;
-
-		dma_unmap_single_attrs(&pdev->dev, tpa_info->mapping,
-				       bp->rx_buf_use_size, bp->rx_dir,
-				       DMA_ATTR_WEAK_ORDERING);
-
-		tpa_info->data = NULL;
-
-		skb_free_frag(data);
-	}
-
-skip_rx_tpa_free:
-	if (!rxr->rx_buf_ring)
-		goto skip_rx_buf_free;
 
 	for (i = 0; i < max_idx; i++) {
 		struct bnxt_sw_rx_bd *rx_buf = &rxr->rx_buf_ring[i];
@@ -3367,12 +3342,15 @@ static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp, int ring_nr)
 			skb_free_frag(data);
 		}
 	}
+}
 
-skip_rx_buf_free:
-	if (!rxr->rx_agg_ring)
-		goto skip_rx_agg_free;
+static void bnxt_free_one_rx_agg_ring(struct bnxt *bp, struct bnxt_rx_ring_info *rxr)
+{
+	int i, max_idx;
 
-	for (i = 0; i < max_agg_idx; i++) {
+	max_idx = bp->rx_agg_nr_pages * RX_DESC_CNT;
+
+	for (i = 0; i < max_idx; i++) {
 		struct bnxt_sw_rx_agg_bd *rx_agg_buf = &rxr->rx_agg_ring[i];
 		struct page *page = rx_agg_buf->page;
 
@@ -3384,6 +3362,45 @@ static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp, int ring_nr)
 
 		page_pool_recycle_direct(rxr->page_pool, page);
 	}
+}
+
+static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp, int ring_nr)
+{
+	struct bnxt_rx_ring_info *rxr = &bp->rx_ring[ring_nr];
+	struct pci_dev *pdev = bp->pdev;
+	struct bnxt_tpa_idx_map *map;
+	int i;
+
+	if (!rxr->rx_tpa)
+		goto skip_rx_tpa_free;
+
+	for (i = 0; i < bp->max_tpa; i++) {
+		struct bnxt_tpa_info *tpa_info = &rxr->rx_tpa[i];
+		u8 *data = tpa_info->data;
+
+		if (!data)
+			continue;
+
+		dma_unmap_single_attrs(&pdev->dev, tpa_info->mapping,
+				       bp->rx_buf_use_size, bp->rx_dir,
+				       DMA_ATTR_WEAK_ORDERING);
+
+		tpa_info->data = NULL;
+
+		skb_free_frag(data);
+	}
+
+skip_rx_tpa_free:
+	if (!rxr->rx_buf_ring)
+		goto skip_rx_buf_free;
+
+	bnxt_free_one_rx_ring(bp, rxr);
+
+skip_rx_buf_free:
+	if (!rxr->rx_agg_ring)
+		goto skip_rx_agg_free;
+
+	bnxt_free_one_rx_agg_ring(bp, rxr);
 
 skip_rx_agg_free:
 	map = rxr->rx_tpa_idx_map;
@@ -4062,37 +4079,55 @@ static void bnxt_init_rxbd_pages(struct bnxt_ring_struct *ring, u32 type)
 	}
 }
 
-static int bnxt_alloc_one_rx_ring(struct bnxt *bp, int ring_nr)
+static void bnxt_alloc_one_rx_ring_skb(struct bnxt *bp,
+				       struct bnxt_rx_ring_info *rxr,
+				       int ring_nr)
 {
-	struct bnxt_rx_ring_info *rxr = &bp->rx_ring[ring_nr];
-	struct net_device *dev = bp->dev;
 	u32 prod;
 	int i;
 
 	prod = rxr->rx_prod;
 	for (i = 0; i < bp->rx_ring_size; i++) {
 		if (bnxt_alloc_rx_data(bp, rxr, prod, GFP_KERNEL)) {
-			netdev_warn(dev, "init'ed rx ring %d with %d/%d skbs only\n",
+			netdev_warn(bp->dev, "init'ed rx ring %d with %d/%d skbs only\n",
 				    ring_nr, i, bp->rx_ring_size);
 			break;
 		}
 		prod = NEXT_RX(prod);
 	}
 	rxr->rx_prod = prod;
+}
 
-	if (!(bp->flags & BNXT_FLAG_AGG_RINGS))
-		return 0;
+static void bnxt_alloc_one_rx_ring_page(struct bnxt *bp,
+					struct bnxt_rx_ring_info *rxr,
+					int ring_nr)
+{
+	u32 prod;
+	int i;
 
 	prod = rxr->rx_agg_prod;
 	for (i = 0; i < bp->rx_agg_ring_size; i++) {
 		if (bnxt_alloc_rx_page(bp, rxr, prod, GFP_KERNEL)) {
-			netdev_warn(dev, "init'ed rx ring %d with %d/%d pages only\n",
+			netdev_warn(bp->dev, "init'ed rx ring %d with %d/%d pages only\n",
 				    ring_nr, i, bp->rx_ring_size);
 			break;
 		}
 		prod = NEXT_RX_AGG(prod);
 	}
 	rxr->rx_agg_prod = prod;
+}
+
+static int bnxt_alloc_one_rx_ring(struct bnxt *bp, int ring_nr)
+{
+	struct bnxt_rx_ring_info *rxr = &bp->rx_ring[ring_nr];
+	int i;
+
+	bnxt_alloc_one_rx_ring_skb(bp, rxr, ring_nr);
+
+	if (!(bp->flags & BNXT_FLAG_AGG_RINGS))
+		return 0;
+
+	bnxt_alloc_one_rx_ring_page(bp, rxr, ring_nr);
 
 	if (rxr->rx_tpa) {
 		dma_addr_t mapping;
@@ -4111,9 +4146,9 @@ static int bnxt_alloc_one_rx_ring(struct bnxt *bp, int ring_nr)
 	return 0;
 }
 
-static int bnxt_init_one_rx_ring(struct bnxt *bp, int ring_nr)
+static void bnxt_init_one_rx_ring_rxbd(struct bnxt *bp,
+				       struct bnxt_rx_ring_info *rxr)
 {
-	struct bnxt_rx_ring_info *rxr;
 	struct bnxt_ring_struct *ring;
 	u32 type;
 
@@ -4123,28 +4158,43 @@ static int bnxt_init_one_rx_ring(struct bnxt *bp, int ring_nr)
 	if (NET_IP_ALIGN == 2)
 		type |= RX_BD_FLAGS_SOP;
 
-	rxr = &bp->rx_ring[ring_nr];
 	ring = &rxr->rx_ring_struct;
 	bnxt_init_rxbd_pages(ring, type);
-
-	netif_queue_set_napi(bp->dev, ring_nr, NETDEV_QUEUE_TYPE_RX,
-			     &rxr->bnapi->napi);
-
-	if (BNXT_RX_PAGE_MODE(bp) && bp->xdp_prog) {
-		bpf_prog_add(bp->xdp_prog, 1);
-		rxr->xdp_prog = bp->xdp_prog;
-	}
 	ring->fw_ring_id = INVALID_HW_RING_ID;
+}
+
+static void bnxt_init_one_rx_agg_ring_rxbd(struct bnxt *bp,
+					   struct bnxt_rx_ring_info *rxr)
+{
+	struct bnxt_ring_struct *ring;
+	u32 type;
 
 	ring = &rxr->rx_agg_ring_struct;
 	ring->fw_ring_id = INVALID_HW_RING_ID;
-
 	if ((bp->flags & BNXT_FLAG_AGG_RINGS)) {
 		type = ((u32)BNXT_RX_PAGE_SIZE << RX_BD_LEN_SHIFT) |
 			RX_BD_TYPE_RX_AGG_BD | RX_BD_FLAGS_SOP;
 
 		bnxt_init_rxbd_pages(ring, type);
 	}
+}
+
+static int bnxt_init_one_rx_ring(struct bnxt *bp, int ring_nr)
+{
+	struct bnxt_rx_ring_info *rxr;
+
+	rxr = &bp->rx_ring[ring_nr];
+	bnxt_init_one_rx_ring_rxbd(bp, rxr);
+
+	netif_queue_set_napi(bp->dev, ring_nr, NETDEV_QUEUE_TYPE_RX,
+			     &rxr->bnapi->napi);
+
+	if (BNXT_RX_PAGE_MODE(bp) && bp->xdp_prog) {
+		bpf_prog_add(bp->xdp_prog, 1);
+		rxr->xdp_prog = bp->xdp_prog;
+	}
+
+	bnxt_init_one_rx_agg_ring_rxbd(bp, rxr);
 
 	return bnxt_alloc_one_rx_ring(bp, ring_nr);
 }
@@ -6870,6 +6920,48 @@ static void bnxt_set_db(struct bnxt *bp, struct bnxt_db_info *db, u32 ring_type,
 		bnxt_set_db_mask(bp, db, ring_type);
 }
 
+static int bnxt_hwrm_rx_ring_alloc(struct bnxt *bp,
+				   struct bnxt_rx_ring_info *rxr)
+{
+	struct bnxt_ring_struct *ring = &rxr->rx_ring_struct;
+	struct bnxt_napi *bnapi = rxr->bnapi;
+	u32 type = HWRM_RING_ALLOC_RX;
+	u32 map_idx = bnapi->index;
+	int rc;
+
+	rc = hwrm_ring_alloc_send_msg(bp, ring, type, map_idx);
+	if (rc)
+		return rc;
+
+	bnxt_set_db(bp, &rxr->rx_db, type, map_idx, ring->fw_ring_id);
+	bp->grp_info[map_idx].rx_fw_ring_id = ring->fw_ring_id;
+
+	return 0;
+}
+
+static int bnxt_hwrm_rx_agg_ring_alloc(struct bnxt *bp,
+				       struct bnxt_rx_ring_info *rxr)
+{
+	struct bnxt_ring_struct *ring = &rxr->rx_agg_ring_struct;
+	u32 type = HWRM_RING_ALLOC_AGG;
+	u32 grp_idx = ring->grp_idx;
+	u32 map_idx;
+	int rc;
+
+	map_idx = grp_idx + bp->rx_nr_rings;
+	rc = hwrm_ring_alloc_send_msg(bp, ring, type, map_idx);
+	if (rc)
+		return rc;
+
+	bnxt_set_db(bp, &rxr->rx_agg_db, type, map_idx,
+		    ring->fw_ring_id);
+	bnxt_db_write(bp, &rxr->rx_agg_db, rxr->rx_agg_prod);
+	bnxt_db_write(bp, &rxr->rx_db, rxr->rx_prod);
+	bp->grp_info[grp_idx].agg_fw_ring_id = ring->fw_ring_id;
+
+	return 0;
+}
+
 static int bnxt_hwrm_ring_alloc(struct bnxt *bp)
 {
 	bool agg_rings = !!(bp->flags & BNXT_FLAG_AGG_RINGS);
@@ -6935,24 +7027,21 @@ static int bnxt_hwrm_ring_alloc(struct bnxt *bp)
 		bnxt_set_db(bp, &txr->tx_db, type, map_idx, ring->fw_ring_id);
 	}
 
-	type = HWRM_RING_ALLOC_RX;
 	for (i = 0; i < bp->rx_nr_rings; i++) {
 		struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
-		struct bnxt_ring_struct *ring = &rxr->rx_ring_struct;
-		struct bnxt_napi *bnapi = rxr->bnapi;
-		u32 map_idx = bnapi->index;
 
-		rc = hwrm_ring_alloc_send_msg(bp, ring, type, map_idx);
+		rc = bnxt_hwrm_rx_ring_alloc(bp, rxr);
 		if (rc)
 			goto err_out;
-		bnxt_set_db(bp, &rxr->rx_db, type, map_idx, ring->fw_ring_id);
 		/* If we have agg rings, post agg buffers first. */
 		if (!agg_rings)
 			bnxt_db_write(bp, &rxr->rx_db, rxr->rx_prod);
-		bp->grp_info[map_idx].rx_fw_ring_id = ring->fw_ring_id;
 		if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) {
 			struct bnxt_cp_ring_info *cpr2 = rxr->rx_cpr;
+			struct bnxt_napi *bnapi = rxr->bnapi;
 			u32 type2 = HWRM_RING_ALLOC_CMPL;
+			struct bnxt_ring_struct *ring;
+			u32 map_idx = bnapi->index;
 
 			ring = &cpr2->cp_ring_struct;
 			ring->handle = BNXT_SET_NQ_HDL(cpr2);
@@ -6966,23 +7055,10 @@ static int bnxt_hwrm_ring_alloc(struct bnxt *bp)
 	}
 
 	if (agg_rings) {
-		type = HWRM_RING_ALLOC_AGG;
 		for (i = 0; i < bp->rx_nr_rings; i++) {
-			struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
-			struct bnxt_ring_struct *ring =
-						&rxr->rx_agg_ring_struct;
-			u32 grp_idx = ring->grp_idx;
-			u32 map_idx = grp_idx + bp->rx_nr_rings;
-
-			rc = hwrm_ring_alloc_send_msg(bp, ring, type, map_idx);
+			rc = bnxt_hwrm_rx_agg_ring_alloc(bp, &bp->rx_ring[i]);
 			if (rc)
 				goto err_out;
-
-			bnxt_set_db(bp, &rxr->rx_agg_db, type, map_idx,
-				    ring->fw_ring_id);
-			bnxt_db_write(bp, &rxr->rx_agg_db, rxr->rx_agg_prod);
-			bnxt_db_write(bp, &rxr->rx_db, rxr->rx_prod);
-			bp->grp_info[grp_idx].agg_fw_ring_id = ring->fw_ring_id;
 		}
 	}
 err_out:
@@ -7022,6 +7098,50 @@ static int hwrm_ring_free_send_msg(struct bnxt *bp,
 	return 0;
 }
 
+static void bnxt_hwrm_rx_ring_free(struct bnxt *bp,
+				   struct bnxt_rx_ring_info *rxr,
+				   bool close_path)
+{
+	struct bnxt_ring_struct *ring = &rxr->rx_ring_struct;
+	u32 grp_idx = rxr->bnapi->index;
+	u32 cmpl_ring_id;
+
+	if (ring->fw_ring_id == INVALID_HW_RING_ID)
+		return;
+
+	cmpl_ring_id = bnxt_cp_ring_for_rx(bp, rxr);
+	hwrm_ring_free_send_msg(bp, ring,
+				RING_FREE_REQ_RING_TYPE_RX,
+				close_path ? cmpl_ring_id :
+				INVALID_HW_RING_ID);
+	ring->fw_ring_id = INVALID_HW_RING_ID;
+	bp->grp_info[grp_idx].rx_fw_ring_id = INVALID_HW_RING_ID;
+}
+
+static void bnxt_hwrm_rx_agg_ring_free(struct bnxt *bp,
+				       struct bnxt_rx_ring_info *rxr,
+				       bool close_path)
+{
+	struct bnxt_ring_struct *ring = &rxr->rx_agg_ring_struct;
+	u32 grp_idx = rxr->bnapi->index;
+	u32 type, cmpl_ring_id;
+
+	if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS)
+		type = RING_FREE_REQ_RING_TYPE_RX_AGG;
+	else
+		type = RING_FREE_REQ_RING_TYPE_RX;
+
+	if (ring->fw_ring_id == INVALID_HW_RING_ID)
+		return;
+
+	cmpl_ring_id = bnxt_cp_ring_for_rx(bp, rxr);
+	hwrm_ring_free_send_msg(bp, ring, type,
+				close_path ? cmpl_ring_id :
+				INVALID_HW_RING_ID);
+	ring->fw_ring_id = INVALID_HW_RING_ID;
+	bp->grp_info[grp_idx].agg_fw_ring_id = INVALID_HW_RING_ID;
+}
+
 static void bnxt_hwrm_ring_free(struct bnxt *bp, bool close_path)
 {
 	u32 type;
@@ -7046,42 +7166,8 @@ static void bnxt_hwrm_ring_free(struct bnxt *bp, bool close_path)
 	}
 
 	for (i = 0; i < bp->rx_nr_rings; i++) {
-		struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
-		struct bnxt_ring_struct *ring = &rxr->rx_ring_struct;
-		u32 grp_idx = rxr->bnapi->index;
-
-		if (ring->fw_ring_id != INVALID_HW_RING_ID) {
-			u32 cmpl_ring_id = bnxt_cp_ring_for_rx(bp, rxr);
-
-			hwrm_ring_free_send_msg(bp, ring,
-						RING_FREE_REQ_RING_TYPE_RX,
-						close_path ? cmpl_ring_id :
-						INVALID_HW_RING_ID);
-			ring->fw_ring_id = INVALID_HW_RING_ID;
-			bp->grp_info[grp_idx].rx_fw_ring_id =
-				INVALID_HW_RING_ID;
-		}
-	}
-
-	if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS)
-		type = RING_FREE_REQ_RING_TYPE_RX_AGG;
-	else
-		type = RING_FREE_REQ_RING_TYPE_RX;
-	for (i = 0; i < bp->rx_nr_rings; i++) {
-		struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
-		struct bnxt_ring_struct *ring = &rxr->rx_agg_ring_struct;
-		u32 grp_idx = rxr->bnapi->index;
-
-		if (ring->fw_ring_id != INVALID_HW_RING_ID) {
-			u32 cmpl_ring_id = bnxt_cp_ring_for_rx(bp, rxr);
-
-			hwrm_ring_free_send_msg(bp, ring, type,
-						close_path ? cmpl_ring_id :
-						INVALID_HW_RING_ID);
-			ring->fw_ring_id = INVALID_HW_RING_ID;
-			bp->grp_info[grp_idx].agg_fw_ring_id =
-				INVALID_HW_RING_ID;
-		}
+		bnxt_hwrm_rx_ring_free(bp, &bp->rx_ring[i], close_path);
+		bnxt_hwrm_rx_agg_ring_free(bp, &bp->rx_ring[i], close_path);
 	}
 
 	/* The completion rings are about to be freed. After that the
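To see how the split helpers are meant to compose, here is a rough
single-ring restart sketch, not from the series: it frees and re-registers
one rx ring using only the per-ring helpers above. The example_* name is
hypothetical, and NAPI quiescing, producer-index handling and VNIC state
are deliberately glossed over (patch 3 does all of this properly):

/* Illustrative only: recycle one rx ring with the per-ring helpers. */
static int example_restart_rx_ring(struct bnxt *bp, int ring_nr)
{
	struct bnxt_rx_ring_info *rxr = &bp->rx_ring[ring_nr];
	int rc;

	/* release the FW rings, then the driver-side buffers */
	bnxt_hwrm_rx_ring_free(bp, rxr, false);
	bnxt_hwrm_rx_agg_ring_free(bp, rxr, false);
	bnxt_free_one_rx_ring(bp, rxr);
	bnxt_free_one_rx_agg_ring(bp, rxr);

	/* refill the buffers, then re-register the rings with FW */
	bnxt_alloc_one_rx_ring_skb(bp, rxr, ring_nr);
	if (bp->flags & BNXT_FLAG_AGG_RINGS)
		bnxt_alloc_one_rx_ring_page(bp, rxr, ring_nr);

	rc = bnxt_hwrm_rx_ring_alloc(bp, rxr);
	if (rc)
		return rc;
	return bnxt_hwrm_rx_agg_ring_alloc(bp, rxr);
}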
From patchwork Tue Jun 18 05:52:02 2024
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13701817
X-Patchwork-Delegate: kuba@kernel.org
From: David Wei
To: Michael Chan, Andy Gospodarek, Adrian Alvarado, Somnath Kotur,
 netdev@vger.kernel.org
Cc: Pavel Begunkov, Jakub Kicinski, David Ahern, "David S. Miller",
 Eric Dumazet, Paolo Abeni
Subject: [PATCH net-next v2 3/3] bnxt_en: implement netdev_queue_mgmt_ops
Date: Mon, 17 Jun 2024 22:52:02 -0700
Message-ID: <20240618055202.2530064-4-dw@davidwei.uk>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240618055202.2530064-1-dw@davidwei.uk>
References: <20240618055202.2530064-1-dw@davidwei.uk>

Implement the netdev_queue_mgmt_ops interface, added in [1], for bnxt.

Two bnxt_rx_ring_info structs are allocated to hold the new/old queue
memory. Queue memory is copied from/to the main bp->rx_ring[idx]
bnxt_rx_ring_info.

Queue memory is pre-allocated in bnxt_queue_mem_alloc() into a clone,
and then copied into bp->rx_ring[idx] in bnxt_queue_start(). Similarly,
when bp->rx_ring[idx] is stopped, its queue memory is copied into a
clone and then freed later in bnxt_queue_mem_free().

I tested this patchset with netdev_rx_queue_restart(), including
inducing errors in all places that return an error code. In all cases,
the queue is left in a good working state.

Rx queues are stopped/started using bnxt_hwrm_vnic_update(), which only
affects queues that are not in the default RSS context. This differs
from GVE, which also implemented the queue API recently and can stop
arbitrary Rx queues. Due to this limitation, all ndos return
-EOPNOTSUPP if the queue is in the default RSS context.

Thanks to Somnath for helping me use bnxt_hwrm_vnic_update() to
stop/start an Rx queue. With their permission I've added their
Acked-by.

[1]: https://lore.kernel.org/netdev/20240501232549.1327174-2-shailend@google.com/

Acked-by: Somnath Kotur
Signed-off-by: David Wei
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 304 ++++++++++++++++++++++
 1 file changed, 304 insertions(+)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index bbe37ea8e1ef..9bed899e0575 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -3997,6 +3997,62 @@ static int bnxt_alloc_cp_rings(struct bnxt *bp)
 	return 0;
 }
 
+static void bnxt_init_rx_ring_struct(struct bnxt *bp,
+				     struct bnxt_rx_ring_info *rxr)
+{
+	struct bnxt_ring_mem_info *rmem;
+	struct bnxt_ring_struct *ring;
+
+	ring = &rxr->rx_ring_struct;
+	rmem = &ring->ring_mem;
+	rmem->nr_pages = bp->rx_nr_pages;
+	rmem->page_size = HW_RXBD_RING_SIZE;
+	rmem->pg_arr = (void **)rxr->rx_desc_ring;
+	rmem->dma_arr = rxr->rx_desc_mapping;
+	rmem->vmem_size = SW_RXBD_RING_SIZE * bp->rx_nr_pages;
+	rmem->vmem = (void **)&rxr->rx_buf_ring;
+
+	ring = &rxr->rx_agg_ring_struct;
+	rmem = &ring->ring_mem;
+	rmem->nr_pages = bp->rx_agg_nr_pages;
+	rmem->page_size = HW_RXBD_RING_SIZE;
+	rmem->pg_arr = (void **)rxr->rx_agg_desc_ring;
+	rmem->dma_arr = rxr->rx_agg_desc_mapping;
+	rmem->vmem_size = SW_RXBD_AGG_RING_SIZE * bp->rx_agg_nr_pages;
+	rmem->vmem = (void **)&rxr->rx_agg_ring;
+}
+
+static void bnxt_reset_rx_ring_struct(struct bnxt *bp,
+				      struct bnxt_rx_ring_info *rxr)
+{
+	struct bnxt_ring_mem_info *rmem;
+	struct bnxt_ring_struct *ring;
+	int i;
+
+	rxr->page_pool->p.napi = NULL;
+	rxr->page_pool = NULL;
+
+	ring = &rxr->rx_ring_struct;
+	rmem = &ring->ring_mem;
+	rmem->pg_tbl = NULL;
+	rmem->pg_tbl_map = 0;
+	for (i = 0; i < rmem->nr_pages; i++) {
+		rmem->pg_arr[i] = NULL;
+		rmem->dma_arr[i] = 0;
+	}
+	*rmem->vmem = NULL;
+
+	ring = &rxr->rx_agg_ring_struct;
+	rmem = &ring->ring_mem;
+	rmem->pg_tbl = NULL;
+	rmem->pg_tbl_map = 0;
+	for (i = 0; i < rmem->nr_pages; i++) {
+		rmem->pg_arr[i] = NULL;
+		rmem->dma_arr[i] = 0;
+	}
+	*rmem->vmem = NULL;
+}
+
 static void bnxt_init_ring_struct(struct bnxt *bp)
 {
 	int i, j;
@@ -14935,6 +14991,253 @@ static const struct netdev_stat_ops bnxt_stat_ops = {
 	.get_base_stats = bnxt_get_base_stats,
 };
 
+static int bnxt_alloc_rx_agg_bmap(struct bnxt *bp, struct bnxt_rx_ring_info *rxr)
+{
+	u16 mem_size;
+
+	rxr->rx_agg_bmap_size = bp->rx_agg_ring_mask + 1;
+	mem_size = rxr->rx_agg_bmap_size / 8;
+	rxr->rx_agg_bmap = kzalloc(mem_size, GFP_KERNEL);
+	if (!rxr->rx_agg_bmap)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static int bnxt_queue_mem_alloc(struct net_device *dev, void *qmem, int idx)
+{
+	struct bnxt_rx_ring_info *rxr, *clone;
+	struct bnxt *bp = netdev_priv(dev);
+	struct bnxt_ring_struct *ring;
+	int rc;
+
+	if (bnxt_get_max_rss_ring(bp) >= idx)
+		return -EOPNOTSUPP;
+
+	rxr = &bp->rx_ring[idx];
+	clone = qmem;
+	memcpy(clone, rxr, sizeof(*rxr));
+	bnxt_init_rx_ring_struct(bp, clone);
+	bnxt_reset_rx_ring_struct(bp, clone);
+
+	clone->rx_prod = 0;
+	clone->rx_agg_prod = 0;
+	clone->rx_sw_agg_prod = 0;
+	clone->rx_next_cons = 0;
+
+	rc = bnxt_alloc_rx_page_pool(bp, clone, rxr->page_pool->p.nid);
+	if (rc)
+		return rc;
+
+	ring = &clone->rx_ring_struct;
+	rc = bnxt_alloc_ring(bp, &ring->ring_mem);
+	if (rc)
+		goto err_free_rx_ring;
+
+	if (bp->flags & BNXT_FLAG_AGG_RINGS) {
+		ring = &clone->rx_agg_ring_struct;
+		rc = bnxt_alloc_ring(bp, &ring->ring_mem);
+		if (rc)
+			goto err_free_rx_agg_ring;
+
+		rc = bnxt_alloc_rx_agg_bmap(bp, clone);
+		if (rc)
+			goto err_free_rx_agg_ring;
+	}
+
+	bnxt_init_one_rx_ring_rxbd(bp, clone);
+	bnxt_init_one_rx_agg_ring_rxbd(bp, clone);
+
+	bnxt_alloc_one_rx_ring_skb(bp, clone, idx);
+	if (bp->flags & BNXT_FLAG_AGG_RINGS)
+		bnxt_alloc_one_rx_ring_page(bp, clone, idx);
+
+	return 0;
+
+err_free_rx_agg_ring:
+	bnxt_free_ring(bp, &clone->rx_agg_ring_struct.ring_mem);
+err_free_rx_ring:
+	bnxt_free_ring(bp, &clone->rx_ring_struct.ring_mem);
+	clone->page_pool->p.napi = NULL;
+	page_pool_destroy(clone->page_pool);
+	clone->page_pool = NULL;
+	return rc;
+}
+
+static void bnxt_queue_mem_free(struct net_device *dev, void *qmem)
+{
+	struct bnxt_rx_ring_info *rxr = qmem;
+	struct bnxt *bp = netdev_priv(dev);
+	struct bnxt_ring_struct *ring;
+
+	bnxt_free_one_rx_ring(bp, rxr);
+	bnxt_free_one_rx_agg_ring(bp, rxr);
+
+	/* At this point, this NAPI instance has another page pool associated
+	 * with it. Disconnect here before freeing the old page pool to avoid
+	 * warnings.
+	 */
+	rxr->page_pool->p.napi = NULL;
+	page_pool_destroy(rxr->page_pool);
+	rxr->page_pool = NULL;
+
+	ring = &rxr->rx_ring_struct;
+	bnxt_free_ring(bp, &ring->ring_mem);
+
+	ring = &rxr->rx_agg_ring_struct;
+	bnxt_free_ring(bp, &ring->ring_mem);
+
+	kfree(rxr->rx_agg_bmap);
+	rxr->rx_agg_bmap = NULL;
+}
+
+static void bnxt_copy_rx_ring(struct bnxt *bp,
+			      struct bnxt_rx_ring_info *dst,
+			      struct bnxt_rx_ring_info *src)
+{
+	struct bnxt_ring_mem_info *dst_rmem, *src_rmem;
+	struct bnxt_ring_struct *dst_ring, *src_ring;
+	int i;
+
+	dst_ring = &dst->rx_ring_struct;
+	dst_rmem = &dst_ring->ring_mem;
+	src_ring = &src->rx_ring_struct;
+	src_rmem = &src_ring->ring_mem;
+
+	WARN_ON(dst_rmem->nr_pages != src_rmem->nr_pages);
+	WARN_ON(dst_rmem->page_size != src_rmem->page_size);
+	WARN_ON(dst_rmem->flags != src_rmem->flags);
+	WARN_ON(dst_rmem->depth != src_rmem->depth);
+	WARN_ON(dst_rmem->vmem_size != src_rmem->vmem_size);
+	WARN_ON(dst_rmem->ctx_mem != src_rmem->ctx_mem);
+
+	dst_rmem->pg_tbl = src_rmem->pg_tbl;
+	dst_rmem->pg_tbl_map = src_rmem->pg_tbl_map;
+	*dst_rmem->vmem = *src_rmem->vmem;
+	for (i = 0; i < dst_rmem->nr_pages; i++) {
+		dst_rmem->pg_arr[i] = src_rmem->pg_arr[i];
+		dst_rmem->dma_arr[i] = src_rmem->dma_arr[i];
+	}
+
+	if (!(bp->flags & BNXT_FLAG_AGG_RINGS))
+		return;
+
+	dst_ring = &dst->rx_agg_ring_struct;
+	dst_rmem = &dst_ring->ring_mem;
+	src_ring = &src->rx_agg_ring_struct;
+	src_rmem = &src_ring->ring_mem;
+
+	WARN_ON(dst_rmem->nr_pages != src_rmem->nr_pages);
+	WARN_ON(dst_rmem->page_size != src_rmem->page_size);
+	WARN_ON(dst_rmem->flags != src_rmem->flags);
+	WARN_ON(dst_rmem->depth != src_rmem->depth);
+	WARN_ON(dst_rmem->vmem_size != src_rmem->vmem_size);
+	WARN_ON(dst_rmem->ctx_mem != src_rmem->ctx_mem);
+	WARN_ON(dst->rx_agg_bmap_size != src->rx_agg_bmap_size);
+
+	dst_rmem->pg_tbl = src_rmem->pg_tbl;
+	dst_rmem->pg_tbl_map = src_rmem->pg_tbl_map;
+	*dst_rmem->vmem = *src_rmem->vmem;
+	for (i = 0; i < dst_rmem->nr_pages; i++) {
+		dst_rmem->pg_arr[i] = src_rmem->pg_arr[i];
+		dst_rmem->dma_arr[i] = src_rmem->dma_arr[i];
+	}
+
+	dst->rx_agg_bmap = src->rx_agg_bmap;
+}
+
+static int bnxt_queue_start(struct net_device *dev, void *qmem, int idx)
+{
+	struct bnxt *bp = netdev_priv(dev);
+	struct bnxt_rx_ring_info *rxr, *clone;
+	struct bnxt_cp_ring_info *cpr;
+	struct bnxt_vnic_info *vnic;
+	int rc;
+
+	if (bnxt_get_max_rss_ring(bp) >= idx)
+		return -EOPNOTSUPP;
+
+	rxr = &bp->rx_ring[idx];
+	clone = qmem;
+
+	rxr->rx_prod = clone->rx_prod;
+	rxr->rx_agg_prod = clone->rx_agg_prod;
+	rxr->rx_sw_agg_prod = clone->rx_sw_agg_prod;
+	rxr->rx_next_cons = clone->rx_next_cons;
+	rxr->page_pool = clone->page_pool;
+
+	bnxt_copy_rx_ring(bp, rxr, clone);
+
+	rc = bnxt_hwrm_rx_ring_alloc(bp, rxr);
+	if (rc)
+		return rc;
+	rc = bnxt_hwrm_rx_agg_ring_alloc(bp, rxr);
+	if (rc)
+		goto err_free_hwrm_rx_ring;
+
+	bnxt_db_write(bp, &rxr->rx_db, rxr->rx_prod);
+	if (bp->flags & BNXT_FLAG_AGG_RINGS)
+		bnxt_db_write(bp, &rxr->rx_agg_db, rxr->rx_agg_prod);
+
+	napi_enable(&rxr->bnapi->napi);
+
+	vnic = &bp->vnic_info[BNXT_VNIC_NTUPLE];
+	vnic->mru = bp->dev->mtu + ETH_HLEN + VLAN_HLEN;
+	rc = bnxt_hwrm_vnic_update(bp, vnic,
+				   VNIC_UPDATE_REQ_ENABLES_MRU_VALID);
+	if (rc)
+		goto err_free_hwrm_rx_agg_ring;
+
+	cpr = &rxr->bnapi->cp_ring;
+	cpr->sw_stats->rx.rx_resets++;
+
+	return 0;
+
+err_free_hwrm_rx_agg_ring:
+	napi_disable(&rxr->bnapi->napi);
+	bnxt_hwrm_rx_agg_ring_free(bp, rxr, false);
+err_free_hwrm_rx_ring:
+	bnxt_hwrm_rx_ring_free(bp, rxr, false);
+	return rc;
+}
+
+static int bnxt_queue_stop(struct net_device *dev, void *qmem, int idx)
+{
+	struct bnxt *bp = netdev_priv(dev);
+	struct bnxt_rx_ring_info *rxr;
+	struct bnxt_vnic_info *vnic;
+	int rc;
+
+	if (bnxt_get_max_rss_ring(bp) >= idx)
+		return -EOPNOTSUPP;
+
+	vnic = &bp->vnic_info[BNXT_VNIC_NTUPLE];
+	vnic->mru = 0;
+	rc = bnxt_hwrm_vnic_update(bp, vnic,
+				   VNIC_UPDATE_REQ_ENABLES_MRU_VALID);
+	if (rc)
+		return rc;
+
+	rxr = &bp->rx_ring[idx];
+	napi_disable(&rxr->bnapi->napi);
+	bnxt_hwrm_rx_ring_free(bp, rxr, false);
+	bnxt_hwrm_rx_agg_ring_free(bp, rxr, false);
+	rxr->rx_next_cons = 0;
+
+	memcpy(qmem, rxr, sizeof(*rxr));
+	bnxt_init_rx_ring_struct(bp, qmem);
+
+	return 0;
+}
+
+static const struct netdev_queue_mgmt_ops bnxt_queue_mgmt_ops = {
+	.ndo_queue_mem_size	= sizeof(struct bnxt_rx_ring_info),
+	.ndo_queue_mem_alloc	= bnxt_queue_mem_alloc,
+	.ndo_queue_mem_free	= bnxt_queue_mem_free,
+	.ndo_queue_start	= bnxt_queue_start,
+	.ndo_queue_stop		= bnxt_queue_stop,
+};
+
 static void bnxt_remove_one(struct pci_dev *pdev)
 {
 	struct net_device *dev = pci_get_drvdata(pdev);
@@ -15400,6 +15703,7 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 	dev->stat_ops = &bnxt_stat_ops;
 	dev->watchdog_timeo = BNXT_TX_TIMEOUT;
 	dev->ethtool_ops = &bnxt_ethtool_ops;
+	dev->queue_mgmt_ops = &bnxt_queue_mgmt_ops;
 
 	pci_set_drvdata(pdev, dev);
 
 	rc = bnxt_alloc_hwrm_resources(bp);
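For reference, the restart sequence that netdev_rx_queue_restart() drives
through these ops looks roughly like the sketch below. This is an
approximation of the core flow from [1], not part of this patch; the
example_* name is hypothetical and the real implementation handles
locking and error unwinding more carefully:

/* Approximate core flow: swap freshly allocated queue memory into a
 * running rx queue, keeping the old memory around until the new queue
 * has started successfully.
 */
static int example_rx_queue_restart(struct net_device *dev, int idx)
{
	const struct netdev_queue_mgmt_ops *qops = dev->queue_mgmt_ops;
	void *new_mem, *old_mem;
	int rc;

	new_mem = kzalloc(qops->ndo_queue_mem_size, GFP_KERNEL);
	old_mem = kzalloc(qops->ndo_queue_mem_size, GFP_KERNEL);
	if (!new_mem || !old_mem) {
		rc = -ENOMEM;
		goto out;
	}

	rc = qops->ndo_queue_mem_alloc(dev, new_mem, idx);	/* build clone */
	if (rc)
		goto out;

	rc = qops->ndo_queue_stop(dev, old_mem, idx);	/* save old state */
	if (rc)
		goto out_free_new;

	rc = qops->ndo_queue_start(dev, new_mem, idx);	/* adopt clone */
	if (rc) {
		/* try to restore the old queue memory on failure */
		qops->ndo_queue_start(dev, old_mem, idx);
		goto out_free_new;
	}

	qops->ndo_queue_mem_free(dev, old_mem);	/* old buffers go away */
	goto out;

out_free_new:
	qops->ndo_queue_mem_free(dev, new_mem);
out:
	kfree(new_mem);
	kfree(old_mem);
	return rc;
}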