From patchwork Wed Jun 8 03:23:46 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ronak Doshi X-Patchwork-Id: 12872870 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A92F3C433EF for ; Wed, 8 Jun 2022 05:43:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233879AbiFHFm5 (ORCPT ); Wed, 8 Jun 2022 01:42:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44442 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233632AbiFHFlH (ORCPT ); Wed, 8 Jun 2022 01:41:07 -0400 Received: from EX-PRD-EDGE02.vmware.com (ex-prd-edge02.vmware.com [208.91.3.34]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 61AB240968B; Tue, 7 Jun 2022 20:24:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; s=s1024; d=vmware.com; h=from:to:cc:subject:date:message-id:in-reply-to:mime-version: content-type; bh=hR6auVRymSfv8wZvT+7WbS5RtZHhVgJnjQJxvMEJ4bw=; b=LlivIzg3L0e9OcZYWilnLcYv9W12WSMVLWOb0RMm9SSmF0QcgXdUm+EYBrQ43J 6uHgYnPr4YWDkNpoS2uImRI22/bKfgwJCu6ZGTOUSS5GyrfpMxUpDRslrJNGwh 950pkVW9EOGMJ3F76uYRjyscKshDPZ55DKXOF0KZvigjodY= Received: from sc9-mailhost2.vmware.com (10.113.161.72) by EX-PRD-EDGE02.vmware.com (10.188.245.7) with Microsoft SMTP Server id 15.1.2308.14; Tue, 7 Jun 2022 20:23:55 -0700 Received: from htb-1n-eng-dhcp122.eng.vmware.com (unknown [10.20.114.216]) by sc9-mailhost2.vmware.com (Postfix) with ESMTP id 914C720328; Tue, 7 Jun 2022 20:24:01 -0700 (PDT) Received: by htb-1n-eng-dhcp122.eng.vmware.com (Postfix, from userid 0) id 8AD2AAA454; Tue, 7 Jun 2022 20:24:01 -0700 (PDT) From: Ronak Doshi To: CC: Ronak Doshi , VMware PV-Drivers Reviewers , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , open list Subject: [PATCH v3 net-next 1/8] vmxnet3: prepare for version 7 changes Date: Tue, 7 Jun 2022 20:23:46 -0700 Message-ID: <20220608032353.964-2-doshir@vmware.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20220608032353.964-1-doshir@vmware.com> References: <20220608032353.964-1-doshir@vmware.com> MIME-Version: 1.0 Received-SPF: None (EX-PRD-EDGE02.vmware.com: doshir@vmware.com does not designate permitted sender hosts) Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org vmxnet3 is currently at version 6 and this patch initiates the preparation to accommodate changes for up to version 7. It introduces utility macros for vmxnet3 version 7 comparison and updates the Copyright information. Signed-off-by: Ronak Doshi Acked-by: Guolin Yang --- drivers/net/vmxnet3/Makefile | 2 +- drivers/net/vmxnet3/upt1_defs.h | 2 +- drivers/net/vmxnet3/vmxnet3_defs.h | 2 +- drivers/net/vmxnet3/vmxnet3_drv.c | 2 +- drivers/net/vmxnet3/vmxnet3_ethtool.c | 2 +- drivers/net/vmxnet3/vmxnet3_int.h | 5 ++++- 6 files changed, 9 insertions(+), 6 deletions(-) diff --git a/drivers/net/vmxnet3/Makefile b/drivers/net/vmxnet3/Makefile index 7a38925f4165..a666a88ac1ff 100644 --- a/drivers/net/vmxnet3/Makefile +++ b/drivers/net/vmxnet3/Makefile @@ -2,7 +2,7 @@ # # Linux driver for VMware's vmxnet3 ethernet NIC. # -# Copyright (C) 2007-2021, VMware, Inc. All Rights Reserved. +# Copyright (C) 2007-2022, VMware, Inc. All Rights Reserved. 
# # This program is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by the diff --git a/drivers/net/vmxnet3/upt1_defs.h b/drivers/net/vmxnet3/upt1_defs.h index f9f3a23d1698..41c0660a0c54 100644 --- a/drivers/net/vmxnet3/upt1_defs.h +++ b/drivers/net/vmxnet3/upt1_defs.h @@ -1,7 +1,7 @@ /* * Linux driver for VMware's vmxnet3 ethernet NIC. * - * Copyright (C) 2008-2021, VMware, Inc. All Rights Reserved. + * Copyright (C) 2008-2022, VMware, Inc. All Rights Reserved. * * This program is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License as published by the diff --git a/drivers/net/vmxnet3/vmxnet3_defs.h b/drivers/net/vmxnet3/vmxnet3_defs.h index 74d4e8bc4abc..9f91ebb10137 100644 --- a/drivers/net/vmxnet3/vmxnet3_defs.h +++ b/drivers/net/vmxnet3/vmxnet3_defs.h @@ -1,7 +1,7 @@ /* * Linux driver for VMware's vmxnet3 ethernet NIC. * - * Copyright (C) 2008-2021, VMware, Inc. All Rights Reserved. + * Copyright (C) 2008-2022, VMware, Inc. All Rights Reserved. * * This program is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License as published by the diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c index 93e8d119d45f..6fc6a2a26161 100644 --- a/drivers/net/vmxnet3/vmxnet3_drv.c +++ b/drivers/net/vmxnet3/vmxnet3_drv.c @@ -1,7 +1,7 @@ /* * Linux driver for VMware's vmxnet3 ethernet NIC. * - * Copyright (C) 2008-2021, VMware, Inc. All Rights Reserved. + * Copyright (C) 2008-2022, VMware, Inc. All Rights Reserved. * * This program is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License as published by the diff --git a/drivers/net/vmxnet3/vmxnet3_ethtool.c b/drivers/net/vmxnet3/vmxnet3_ethtool.c index 3172d46c0335..e41e76757c5b 100644 --- a/drivers/net/vmxnet3/vmxnet3_ethtool.c +++ b/drivers/net/vmxnet3/vmxnet3_ethtool.c @@ -1,7 +1,7 @@ /* * Linux driver for VMware's vmxnet3 ethernet NIC. * - * Copyright (C) 2008-2021, VMware, Inc. All Rights Reserved. + * Copyright (C) 2008-2022, VMware, Inc. All Rights Reserved. * * This program is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License as published by the diff --git a/drivers/net/vmxnet3/vmxnet3_int.h b/drivers/net/vmxnet3/vmxnet3_int.h index 7027ff483fa5..5251c3439d6a 100644 --- a/drivers/net/vmxnet3/vmxnet3_int.h +++ b/drivers/net/vmxnet3/vmxnet3_int.h @@ -1,7 +1,7 @@ /* * Linux driver for VMware's vmxnet3 ethernet NIC. * - * Copyright (C) 2008-2021, VMware, Inc. All Rights Reserved. + * Copyright (C) 2008-2022, VMware, Inc. All Rights Reserved. * * This program is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License as published by the @@ -81,6 +81,7 @@ #define VMXNET3_RSS #endif +#define VMXNET3_REV_7 6 /* Vmxnet3 Rev. 7 */ #define VMXNET3_REV_6 5 /* Vmxnet3 Rev. 6 */ #define VMXNET3_REV_5 4 /* Vmxnet3 Rev. 5 */ #define VMXNET3_REV_4 3 /* Vmxnet3 Rev. 
4 */ @@ -431,6 +432,8 @@ struct vmxnet3_adapter { (adapter->version >= VMXNET3_REV_5 + 1) #define VMXNET3_VERSION_GE_6(adapter) \ (adapter->version >= VMXNET3_REV_6 + 1) +#define VMXNET3_VERSION_GE_7(adapter) \ + (adapter->version >= VMXNET3_REV_7 + 1) /* must be a multiple of VMXNET3_RING_SIZE_ALIGN */ #define VMXNET3_DEF_TX_RING_SIZE 512 From patchwork Wed Jun 8 03:23:47 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ronak Doshi X-Patchwork-Id: 12872886 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BD4A9CCA47B for ; Wed, 8 Jun 2022 06:02:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234954AbiFHGBz (ORCPT ); Wed, 8 Jun 2022 02:01:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54872 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234677AbiFHFtG (ORCPT ); Wed, 8 Jun 2022 01:49:06 -0400 Received: from EX-PRD-EDGE01.vmware.com (ex-prd-edge01.vmware.com [208.91.3.33]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 21D8B30723C; Tue, 7 Jun 2022 20:24:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; s=s1024; d=vmware.com; h=from:to:cc:subject:date:message-id:in-reply-to:mime-version: content-type; bh=ye2AXpXM8HdfpBAnmf0gC/E5GIezfSWTFYYONI/trtQ=; b=s6c8QtWHu8YDTfzvQNlpgLw1dNm6zUXiTtzAPZc8laU7ytZ7ov7SGfQZbjgOL4 B3oHpURlq0AM/DMLtRUN2XxSjVWpXs2UY0N+aCS58qx4qZaMeYLN4SbBl5+35Y jWFk3CCZniEVg/EtV9J8pjFvK6e+2zmuXpNwB2XYcaQ4X90= Received: from sc9-mailhost3.vmware.com (10.113.161.73) by EX-PRD-EDGE01.vmware.com (10.188.245.6) with Microsoft SMTP Server id 15.1.2308.20; Tue, 7 Jun 2022 20:23:47 -0700 Received: from htb-1n-eng-dhcp122.eng.vmware.com (unknown [10.20.114.216]) by sc9-mailhost3.vmware.com (Postfix) with ESMTP id 01998202DC; Tue, 7 Jun 2022 20:24:03 -0700 (PDT) Received: by htb-1n-eng-dhcp122.eng.vmware.com (Postfix, from userid 0) id EFA6DAA454; Tue, 7 Jun 2022 20:24:02 -0700 (PDT) From: Ronak Doshi To: CC: Ronak Doshi , VMware PV-Drivers Reviewers , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , open list Subject: [PATCH v3 net-next 2/8] vmxnet3: add support for capability registers Date: Tue, 7 Jun 2022 20:23:47 -0700 Message-ID: <20220608032353.964-3-doshir@vmware.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20220608032353.964-1-doshir@vmware.com> References: <20220608032353.964-1-doshir@vmware.com> MIME-Version: 1.0 Received-SPF: None (EX-PRD-EDGE01.vmware.com: doshir@vmware.com does not designate permitted sender hosts) Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org This patch enhances vmxnet3 to support capability registers, which allow it to enable features selectively. The DCR register tracks the capabilities the vmxnet3 device supports. The PTCR register states the capabilities that the passthrough device supports. With the help of these registers, vmxnet3 can enable only those features which the passthrough device supports. This allows a smooth transition to Uniform-Passthrough (UPT) mode if the virtual NIC requests it. If the PTCR register returns nothing or an error, it means UPT is not being requested and the vNIC will continue in emulation mode. 
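For reference, the negotiation pattern this patch repeats at several call sites reduces to the condensed sketch below (distilled from the hunks that follow, not a separate API): a capability bit is kept in dev_caps[] only if the passthrough side either did not answer (error bit set, so the vNIC stays emulated) or advertises the bit, and the driver then proposes the set via DCR and reads back what the device actually activated.

	if (vmxnet3_check_ptcapability(adapter->ptcap_supported[0],
				       VMXNET3_CAP_UDP_RSS))
		adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_UDP_RSS;
	else
		adapter->dev_caps[0] &= ~(1UL << VMXNET3_CAP_UDP_RSS);

	/* propose the capability set, then read back what was accepted */
	VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_DCR, adapter->dev_caps[0]);
	VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, VMXNET3_CMD_GET_DCR0_REG);
	adapter->dev_caps[0] = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_CMD);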
Signed-off-by: Ronak Doshi Acked-by: Guolin Yang --- drivers/net/vmxnet3/vmxnet3_defs.h | 37 ++++++++++++- drivers/net/vmxnet3/vmxnet3_drv.c | 97 ++++++++++++++++++++++++++++++++ drivers/net/vmxnet3/vmxnet3_ethtool.c | 101 ++++++++++++++++++++++++++++++++-- drivers/net/vmxnet3/vmxnet3_int.h | 4 ++ 4 files changed, 233 insertions(+), 6 deletions(-) diff --git a/drivers/net/vmxnet3/vmxnet3_defs.h b/drivers/net/vmxnet3/vmxnet3_defs.h index 9f91ebb10137..0157155ff677 100644 --- a/drivers/net/vmxnet3/vmxnet3_defs.h +++ b/drivers/net/vmxnet3/vmxnet3_defs.h @@ -40,7 +40,13 @@ enum { VMXNET3_REG_MACL = 0x28, /* MAC Address Low */ VMXNET3_REG_MACH = 0x30, /* MAC Address High */ VMXNET3_REG_ICR = 0x38, /* Interrupt Cause Register */ - VMXNET3_REG_ECR = 0x40 /* Event Cause Register */ + VMXNET3_REG_ECR = 0x40, /* Event Cause Register */ + VMXNET3_REG_DCR = 0x48, /* Device capability register, + * from 0x48 to 0x80 + */ + VMXNET3_REG_PTCR = 0x88, /* Passthru capbility register + * from 0x88 to 0xb0 + */ }; /* BAR 0 */ @@ -101,6 +107,9 @@ enum { VMXNET3_CMD_GET_RESERVED2, VMXNET3_CMD_GET_RESERVED3, VMXNET3_CMD_GET_MAX_QUEUES_CONF, + VMXNET3_CMD_GET_RESERVED4, + VMXNET3_CMD_GET_MAX_CAPABILITIES, + VMXNET3_CMD_GET_DCR0_REG, }; /* @@ -801,4 +810,30 @@ struct Vmxnet3_DriverShared { #define VMXNET3_LINK_UP (10000 << 16 | 1) /* 10 Gbps, up */ #define VMXNET3_LINK_DOWN 0 +#define VMXNET3_DCR_ERROR 31 /* error when bit 31 of DCR is set */ +#define VMXNET3_CAP_UDP_RSS 0 /* bit 0 of DCR 0 */ +#define VMXNET3_CAP_ESP_RSS_IPV4 1 /* bit 1 of DCR 0 */ +#define VMXNET3_CAP_GENEVE_CHECKSUM_OFFLOAD 2 /* bit 2 of DCR 0 */ +#define VMXNET3_CAP_GENEVE_TSO 3 /* bit 3 of DCR 0 */ +#define VMXNET3_CAP_VXLAN_CHECKSUM_OFFLOAD 4 /* bit 4 of DCR 0 */ +#define VMXNET3_CAP_VXLAN_TSO 5 /* bit 5 of DCR 0 */ +#define VMXNET3_CAP_GENEVE_OUTER_CHECKSUM_OFFLOAD 6 /* bit 6 of DCR 0 */ +#define VMXNET3_CAP_VXLAN_OUTER_CHECKSUM_OFFLOAD 7 /* bit 7 of DCR 0 */ +#define VMXNET3_CAP_PKT_STEERING_IPV4 8 /* bit 8 of DCR 0 */ +#define VMXNET3_CAP_VERSION_4_MAX VMXNET3_CAP_PKT_STEERING_IPV4 +#define VMXNET3_CAP_ESP_RSS_IPV6 9 /* bit 9 of DCR 0 */ +#define VMXNET3_CAP_VERSION_5_MAX VMXNET3_CAP_ESP_RSS_IPV6 +#define VMXNET3_CAP_ESP_OVER_UDP_RSS 10 /* bit 10 of DCR 0 */ +#define VMXNET3_CAP_INNER_RSS 11 /* bit 11 of DCR 0 */ +#define VMXNET3_CAP_INNER_ESP_RSS 12 /* bit 12 of DCR 0 */ +#define VMXNET3_CAP_CRC32_HASH_FUNC 13 /* bit 13 of DCR 0 */ +#define VMXNET3_CAP_VERSION_6_MAX VMXNET3_CAP_CRC32_HASH_FUNC +#define VMXNET3_CAP_OAM_FILTER 14 /* bit 14 of DCR 0 */ +#define VMXNET3_CAP_ESP_QS 15 /* bit 15 of DCR 0 */ +#define VMXNET3_CAP_LARGE_BAR 16 /* bit 16 of DCR 0 */ +#define VMXNET3_CAP_OOORX_COMP 17 /* bit 17 of DCR 0 */ +#define VMXNET3_CAP_VERSION_7_MAX 18 +/* when new capability is introduced, update VMXNET3_CAP_MAX */ +#define VMXNET3_CAP_MAX VMXNET3_CAP_VERSION_7_MAX + #endif /* _VMXNET3_DEFS_H_ */ diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c index 6fc6a2a26161..edc4f23d4965 100644 --- a/drivers/net/vmxnet3/vmxnet3_drv.c +++ b/drivers/net/vmxnet3/vmxnet3_drv.c @@ -130,6 +130,20 @@ vmxnet3_tq_stop(struct vmxnet3_tx_queue *tq, struct vmxnet3_adapter *adapter) netif_stop_subqueue(adapter->netdev, (tq - adapter->tx_queue)); } +/* Check if capability is supported by UPT device or + * UPT is even requested + */ +bool +vmxnet3_check_ptcapability(u32 cap_supported, u32 cap) +{ + if (cap_supported & (1UL << VMXNET3_DCR_ERROR) || + cap_supported & (1UL << cap)) { + return true; + } + + return false; +} + /* * 
Check the link state. This may start or stop the tx queue. @@ -2671,6 +2685,36 @@ vmxnet3_init_rssfields(struct vmxnet3_adapter *adapter) adapter->rss_fields = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_CMD); } else { + if (VMXNET3_VERSION_GE_7(adapter)) { + if ((adapter->rss_fields & VMXNET3_RSS_FIELDS_UDPIP4 || + adapter->rss_fields & VMXNET3_RSS_FIELDS_UDPIP6) && + vmxnet3_check_ptcapability(adapter->ptcap_supported[0], + VMXNET3_CAP_UDP_RSS)) { + adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_UDP_RSS; + } else { + adapter->dev_caps[0] &= ~(1UL << VMXNET3_CAP_UDP_RSS); + } + + if ((adapter->rss_fields & VMXNET3_RSS_FIELDS_ESPIP4) && + vmxnet3_check_ptcapability(adapter->ptcap_supported[0], + VMXNET3_CAP_ESP_RSS_IPV4)) { + adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_ESP_RSS_IPV4; + } else { + adapter->dev_caps[0] &= ~(1UL << VMXNET3_CAP_ESP_RSS_IPV4); + } + + if ((adapter->rss_fields & VMXNET3_RSS_FIELDS_ESPIP6) && + vmxnet3_check_ptcapability(adapter->ptcap_supported[0], + VMXNET3_CAP_ESP_RSS_IPV6)) { + adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_ESP_RSS_IPV6; + } else { + adapter->dev_caps[0] &= ~(1UL << VMXNET3_CAP_ESP_RSS_IPV6); + } + + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_DCR, adapter->dev_caps[0]); + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, VMXNET3_CMD_GET_DCR0_REG); + adapter->dev_caps[0] = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_CMD); + } cmdInfo->setRssFields = adapter->rss_fields; VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, VMXNET3_CMD_SET_RSS_FIELDS); @@ -3185,6 +3229,47 @@ vmxnet3_declare_features(struct vmxnet3_adapter *adapter) NETIF_F_GSO_UDP_TUNNEL_CSUM; } + if (VMXNET3_VERSION_GE_7(adapter)) { + unsigned long flags; + + if (vmxnet3_check_ptcapability(adapter->ptcap_supported[0], + VMXNET3_CAP_GENEVE_CHECKSUM_OFFLOAD)) { + adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_GENEVE_CHECKSUM_OFFLOAD; + } + if (vmxnet3_check_ptcapability(adapter->ptcap_supported[0], + VMXNET3_CAP_VXLAN_CHECKSUM_OFFLOAD)) { + adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_VXLAN_CHECKSUM_OFFLOAD; + } + if (vmxnet3_check_ptcapability(adapter->ptcap_supported[0], + VMXNET3_CAP_GENEVE_TSO)) { + adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_GENEVE_TSO; + } + if (vmxnet3_check_ptcapability(adapter->ptcap_supported[0], + VMXNET3_CAP_VXLAN_TSO)) { + adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_VXLAN_TSO; + } + if (vmxnet3_check_ptcapability(adapter->ptcap_supported[0], + VMXNET3_CAP_GENEVE_OUTER_CHECKSUM_OFFLOAD)) { + adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_GENEVE_OUTER_CHECKSUM_OFFLOAD; + } + if (vmxnet3_check_ptcapability(adapter->ptcap_supported[0], + VMXNET3_CAP_VXLAN_OUTER_CHECKSUM_OFFLOAD)) { + adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_VXLAN_OUTER_CHECKSUM_OFFLOAD; + } + + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_DCR, adapter->dev_caps[0]); + spin_lock_irqsave(&adapter->cmd_lock, flags); + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, VMXNET3_CMD_GET_DCR0_REG); + adapter->dev_caps[0] = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_CMD); + spin_unlock_irqrestore(&adapter->cmd_lock, flags); + + if (!(adapter->dev_caps[0] & (1UL << VMXNET3_CAP_GENEVE_OUTER_CHECKSUM_OFFLOAD)) && + !(adapter->dev_caps[0] & (1UL << VMXNET3_CAP_VXLAN_OUTER_CHECKSUM_OFFLOAD))) { + netdev->hw_enc_features &= ~NETIF_F_GSO_UDP_TUNNEL_CSUM; + netdev->features &= ~NETIF_F_GSO_UDP_TUNNEL_CSUM; + } + } + netdev->vlan_features = netdev->hw_features & ~(NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX); @@ -3520,6 +3605,18 @@ vmxnet3_probe_device(struct pci_dev *pdev, goto err_ver; } + if (VMXNET3_VERSION_GE_7(adapter)) 
{ + adapter->devcap_supported[0] = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_DCR); + adapter->ptcap_supported[0] = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_PTCR); + if (adapter->dev_caps[0]) + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_DCR, adapter->dev_caps[0]); + + spin_lock_irqsave(&adapter->cmd_lock, flags); + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, VMXNET3_CMD_GET_DCR0_REG); + adapter->dev_caps[0] = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_CMD); + spin_unlock_irqrestore(&adapter->cmd_lock, flags); + } + if (VMXNET3_VERSION_GE_6(adapter)) { spin_lock_irqsave(&adapter->cmd_lock, flags); VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, diff --git a/drivers/net/vmxnet3/vmxnet3_ethtool.c b/drivers/net/vmxnet3/vmxnet3_ethtool.c index e41e76757c5b..458f2da1ebab 100644 --- a/drivers/net/vmxnet3/vmxnet3_ethtool.c +++ b/drivers/net/vmxnet3/vmxnet3_ethtool.c @@ -298,7 +298,7 @@ netdev_features_t vmxnet3_features_check(struct sk_buff *skb, return features; } -static void vmxnet3_enable_encap_offloads(struct net_device *netdev) +static void vmxnet3_enable_encap_offloads(struct net_device *netdev, netdev_features_t features) { struct vmxnet3_adapter *adapter = netdev_priv(netdev); @@ -306,8 +306,50 @@ static void vmxnet3_enable_encap_offloads(struct net_device *netdev) netdev->hw_enc_features |= NETIF_F_SG | NETIF_F_RXCSUM | NETIF_F_HW_CSUM | NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_TSO | NETIF_F_TSO6 | - NETIF_F_LRO | NETIF_F_GSO_UDP_TUNNEL | - NETIF_F_GSO_UDP_TUNNEL_CSUM; + NETIF_F_LRO; + if (features & NETIF_F_GSO_UDP_TUNNEL) + netdev->hw_enc_features |= NETIF_F_GSO_UDP_TUNNEL; + if (features & NETIF_F_GSO_UDP_TUNNEL_CSUM) + netdev->hw_enc_features |= NETIF_F_GSO_UDP_TUNNEL_CSUM; + } + if (VMXNET3_VERSION_GE_7(adapter)) { + unsigned long flags; + + if (vmxnet3_check_ptcapability(adapter->ptcap_supported[0], + VMXNET3_CAP_GENEVE_CHECKSUM_OFFLOAD)) { + adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_GENEVE_CHECKSUM_OFFLOAD; + } + if (vmxnet3_check_ptcapability(adapter->ptcap_supported[0], + VMXNET3_CAP_VXLAN_CHECKSUM_OFFLOAD)) { + adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_VXLAN_CHECKSUM_OFFLOAD; + } + if (vmxnet3_check_ptcapability(adapter->ptcap_supported[0], + VMXNET3_CAP_GENEVE_TSO)) { + adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_GENEVE_TSO; + } + if (vmxnet3_check_ptcapability(adapter->ptcap_supported[0], + VMXNET3_CAP_VXLAN_TSO)) { + adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_VXLAN_TSO; + } + if (vmxnet3_check_ptcapability(adapter->ptcap_supported[0], + VMXNET3_CAP_GENEVE_OUTER_CHECKSUM_OFFLOAD)) { + adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_GENEVE_OUTER_CHECKSUM_OFFLOAD; + } + if (vmxnet3_check_ptcapability(adapter->ptcap_supported[0], + VMXNET3_CAP_VXLAN_OUTER_CHECKSUM_OFFLOAD)) { + adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_VXLAN_OUTER_CHECKSUM_OFFLOAD; + } + + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_DCR, adapter->dev_caps[0]); + spin_lock_irqsave(&adapter->cmd_lock, flags); + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, VMXNET3_CMD_GET_DCR0_REG); + adapter->dev_caps[0] = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_CMD); + spin_unlock_irqrestore(&adapter->cmd_lock, flags); + + if (!(adapter->dev_caps[0] & (1UL << VMXNET3_CAP_GENEVE_OUTER_CHECKSUM_OFFLOAD)) && + !(adapter->dev_caps[0] & (1UL << VMXNET3_CAP_VXLAN_OUTER_CHECKSUM_OFFLOAD))) { + netdev->hw_enc_features &= ~NETIF_F_GSO_UDP_TUNNEL_CSUM; + } } } @@ -322,6 +364,22 @@ static void vmxnet3_disable_encap_offloads(struct net_device *netdev) NETIF_F_LRO | NETIF_F_GSO_UDP_TUNNEL | 
NETIF_F_GSO_UDP_TUNNEL_CSUM); } + if (VMXNET3_VERSION_GE_7(adapter)) { + unsigned long flags; + + adapter->dev_caps[0] &= ~(1UL << VMXNET3_CAP_GENEVE_CHECKSUM_OFFLOAD | + 1UL << VMXNET3_CAP_VXLAN_CHECKSUM_OFFLOAD | + 1UL << VMXNET3_CAP_GENEVE_TSO | + 1UL << VMXNET3_CAP_VXLAN_TSO | + 1UL << VMXNET3_CAP_GENEVE_OUTER_CHECKSUM_OFFLOAD | + 1UL << VMXNET3_CAP_VXLAN_OUTER_CHECKSUM_OFFLOAD); + + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_DCR, adapter->dev_caps[0]); + spin_lock_irqsave(&adapter->cmd_lock, flags); + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, VMXNET3_CMD_GET_DCR0_REG); + adapter->dev_caps[0] = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_CMD); + spin_unlock_irqrestore(&adapter->cmd_lock, flags); + } } int vmxnet3_set_features(struct net_device *netdev, netdev_features_t features) @@ -357,8 +415,8 @@ int vmxnet3_set_features(struct net_device *netdev, netdev_features_t features) adapter->shared->devRead.misc.uptFeatures &= ~UPT1_F_RXVLAN; - if ((features & tun_offload_mask) != 0 && !udp_tun_enabled) { - vmxnet3_enable_encap_offloads(netdev); + if ((features & tun_offload_mask) != 0) { + vmxnet3_enable_encap_offloads(netdev, features); adapter->shared->devRead.misc.uptFeatures |= UPT1_F_RXINNEROFLD; } else if ((features & tun_offload_mask) == 0 && @@ -913,6 +971,39 @@ vmxnet3_set_rss_hash_opt(struct net_device *netdev, union Vmxnet3_CmdInfo *cmdInfo = &shared->cu.cmdInfo; unsigned long flags; + if (VMXNET3_VERSION_GE_7(adapter)) { + if ((rss_fields & VMXNET3_RSS_FIELDS_UDPIP4 || + rss_fields & VMXNET3_RSS_FIELDS_UDPIP6) && + vmxnet3_check_ptcapability(adapter->ptcap_supported[0], + VMXNET3_CAP_UDP_RSS)) { + adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_UDP_RSS; + } else { + adapter->dev_caps[0] &= ~(1UL << VMXNET3_CAP_UDP_RSS); + } + if ((rss_fields & VMXNET3_RSS_FIELDS_ESPIP4) && + vmxnet3_check_ptcapability(adapter->ptcap_supported[0], + VMXNET3_CAP_ESP_RSS_IPV4)) { + adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_ESP_RSS_IPV4; + } else { + adapter->dev_caps[0] &= ~(1UL << VMXNET3_CAP_ESP_RSS_IPV4); + } + if ((rss_fields & VMXNET3_RSS_FIELDS_ESPIP6) && + vmxnet3_check_ptcapability(adapter->ptcap_supported[0], + VMXNET3_CAP_ESP_RSS_IPV6)) { + adapter->dev_caps[0] |= 1UL << VMXNET3_CAP_ESP_RSS_IPV6; + } else { + adapter->dev_caps[0] &= ~(1UL << VMXNET3_CAP_ESP_RSS_IPV6); + } + + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_DCR, + adapter->dev_caps[0]); + spin_lock_irqsave(&adapter->cmd_lock, flags); + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, + VMXNET3_CMD_GET_DCR0_REG); + adapter->dev_caps[0] = VMXNET3_READ_BAR1_REG(adapter, + VMXNET3_REG_CMD); + spin_unlock_irqrestore(&adapter->cmd_lock, flags); + } spin_lock_irqsave(&adapter->cmd_lock, flags); cmdInfo->setRssFields = rss_fields; VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, diff --git a/drivers/net/vmxnet3/vmxnet3_int.h b/drivers/net/vmxnet3/vmxnet3_int.h index 5251c3439d6a..a7c8f80702c2 100644 --- a/drivers/net/vmxnet3/vmxnet3_int.h +++ b/drivers/net/vmxnet3/vmxnet3_int.h @@ -403,6 +403,9 @@ struct vmxnet3_adapter { dma_addr_t pm_conf_pa; dma_addr_t rss_conf_pa; bool queuesExtEnabled; + u32 devcap_supported[8]; + u32 ptcap_supported[8]; + u32 dev_caps[8]; }; #define VMXNET3_WRITE_BAR0_REG(adapter, reg, val) \ @@ -497,6 +500,7 @@ void vmxnet3_set_ethtool_ops(struct net_device *netdev); void vmxnet3_get_stats64(struct net_device *dev, struct rtnl_link_stats64 *stats); +bool vmxnet3_check_ptcapability(u32 cap_supported, u32 cap); extern char vmxnet3_driver_name[]; #endif From patchwork Wed Jun 8 03:23:48 2022 Content-Type: 
text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ronak Doshi X-Patchwork-Id: 12872879 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 809ABCCA481 for ; Wed, 8 Jun 2022 06:01:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234062AbiFHGA6 (ORCPT ); Wed, 8 Jun 2022 02:00:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33898 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234672AbiFHFtG (ORCPT ); Wed, 8 Jun 2022 01:49:06 -0400 Received: from EX-PRD-EDGE02.vmware.com (ex-prd-edge02.vmware.com [208.91.3.34]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B5EF122BAD2; Tue, 7 Jun 2022 20:24:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; s=s1024; d=vmware.com; h=from:to:cc:subject:date:message-id:in-reply-to:mime-version: content-type; bh=Iek9hW54QhGw2V/2phmsOs4cdU2tFOpvP/eViuA1UH4=; b=QSAvjUitIkzuYEWf/jNEmVRPSVBQEn21x89YXAnLZ52l7UYJllvfhwO42pSVY/ 8TWryEEFrsPDCCdxnOxsgZbrGN7yw4KdEkaXEB9SkDqrx+slvBKIeo2A5jWhz7 HCpdgwR5/62vm/uHf+B5lvDjF7R5NG/wHXHj2gX+rfV4FQc= Received: from sc9-mailhost2.vmware.com (10.113.161.72) by EX-PRD-EDGE02.vmware.com (10.188.245.7) with Microsoft SMTP Server id 15.1.2308.14; Tue, 7 Jun 2022 20:23:58 -0700 Received: from htb-1n-eng-dhcp122.eng.vmware.com (unknown [10.20.114.216]) by sc9-mailhost2.vmware.com (Postfix) with ESMTP id 1DB782032A; Tue, 7 Jun 2022 20:24:04 -0700 (PDT) Received: by htb-1n-eng-dhcp122.eng.vmware.com (Postfix, from userid 0) id 1593DAA454; Tue, 7 Jun 2022 20:24:04 -0700 (PDT) From: Ronak Doshi To: CC: Ronak Doshi , VMware PV-Drivers Reviewers , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , open list Subject: [PATCH v3 net-next 3/8] vmxnet3: add support for large passthrough BAR register Date: Tue, 7 Jun 2022 20:23:48 -0700 Message-ID: <20220608032353.964-4-doshir@vmware.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20220608032353.964-1-doshir@vmware.com> References: <20220608032353.964-1-doshir@vmware.com> MIME-Version: 1.0 Received-SPF: None (EX-PRD-EDGE02.vmware.com: doshir@vmware.com does not designate permitted sender hosts) Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org For vmxnet3 to work in UPT mode, the BAR sizes have been increased. The PT page has been extended to 2 pages and also includes OOB pages as a part of the PT BAR. This patch enhances vmxnet3 to use the appropriate BAR offsets based on the registered capability. To use the new offsets, VMXNET3_CAP_LARGE_BAR needs to be set by the device. If it is not set, the device will use the legacy PT page layout. 
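In practice the probe path below negotiates VMXNET3_CAP_LARGE_BAR through the DCR/PTCR registers introduced in the previous patch and then simply caches the doorbell offsets to use; a condensed sketch of that selection (taken from the probe-time hunk that follows):

	if (VMXNET3_VERSION_GE_7(adapter) &&
	    (adapter->dev_caps[0] & (1UL << VMXNET3_CAP_LARGE_BAR))) {
		adapter->tx_prod_offset  = VMXNET3_REG_LB_TXPROD;
		adapter->rx_prod_offset  = VMXNET3_REG_LB_RXPROD;
		adapter->rx_prod2_offset = VMXNET3_REG_LB_RXPROD2;
	} else {
		adapter->tx_prod_offset  = VMXNET3_REG_TXPROD;
		adapter->rx_prod_offset  = VMXNET3_REG_RXPROD;
		adapter->rx_prod2_offset = VMXNET3_REG_RXPROD2;
	}

All later producer-index writes go through these cached offsets instead of the fixed register names, so the same code path works for both BAR layouts.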
Signed-off-by: Ronak Doshi Acked-by: Guolin Yang --- drivers/net/vmxnet3/vmxnet3_defs.h | 14 ++++++++++++-- drivers/net/vmxnet3/vmxnet3_drv.c | 25 ++++++++++++++++++++----- drivers/net/vmxnet3/vmxnet3_ethtool.c | 6 +++--- drivers/net/vmxnet3/vmxnet3_int.h | 3 +++ 4 files changed, 38 insertions(+), 10 deletions(-) diff --git a/drivers/net/vmxnet3/vmxnet3_defs.h b/drivers/net/vmxnet3/vmxnet3_defs.h index 0157155ff677..8d626611ab2d 100644 --- a/drivers/net/vmxnet3/vmxnet3_defs.h +++ b/drivers/net/vmxnet3/vmxnet3_defs.h @@ -57,8 +57,18 @@ enum { VMXNET3_REG_RXPROD2 = 0xA00 /* Rx Producer Index for ring 2 */ }; -#define VMXNET3_PT_REG_SIZE 4096 /* BAR 0 */ -#define VMXNET3_VD_REG_SIZE 4096 /* BAR 1 */ +/* For Large PT BAR, the following offset to DB register */ +enum { + VMXNET3_REG_LB_TXPROD = 0x1000, /* Tx Producer Index */ + VMXNET3_REG_LB_RXPROD = 0x1400, /* Rx Producer Index for ring 1 */ + VMXNET3_REG_LB_RXPROD2 = 0x1800, /* Rx Producer Index for ring 2 */ +}; + +#define VMXNET3_PT_REG_SIZE 4096 /* BAR 0 */ +#define VMXNET3_LARGE_PT_REG_SIZE 8192 /* large PT pages */ +#define VMXNET3_VD_REG_SIZE 4096 /* BAR 1 */ +#define VMXNET3_LARGE_BAR0_REG_SIZE (4096 * 4096) /* LARGE BAR 0 */ +#define VMXNET3_OOB_REG_SIZE (4094 * 4096) /* OOB pages */ #define VMXNET3_REG_ALIGN 8 /* All registers are 8-byte aligned. */ #define VMXNET3_REG_ALIGN_MASK 0x7 diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c index edc4f23d4965..93f237db463d 100644 --- a/drivers/net/vmxnet3/vmxnet3_drv.c +++ b/drivers/net/vmxnet3/vmxnet3_drv.c @@ -1207,7 +1207,7 @@ vmxnet3_tq_xmit(struct sk_buff *skb, struct vmxnet3_tx_queue *tq, if (tx_num_deferred >= le32_to_cpu(tq->shared->txThreshold)) { tq->shared->txNumDeferred = 0; VMXNET3_WRITE_BAR0_REG(adapter, - VMXNET3_REG_TXPROD + tq->qid * 8, + adapter->tx_prod_offset + tq->qid * 8, tq->tx_ring.next2fill); } @@ -1359,8 +1359,8 @@ static int vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq, struct vmxnet3_adapter *adapter, int quota) { - static const u32 rxprod_reg[2] = { - VMXNET3_REG_RXPROD, VMXNET3_REG_RXPROD2 + u32 rxprod_reg[2] = { + adapter->rx_prod_offset, adapter->rx_prod2_offset }; u32 num_pkts = 0; bool skip_page_frags = false; @@ -2783,9 +2783,9 @@ vmxnet3_activate_dev(struct vmxnet3_adapter *adapter) for (i = 0; i < adapter->num_rx_queues; i++) { VMXNET3_WRITE_BAR0_REG(adapter, - VMXNET3_REG_RXPROD + i * VMXNET3_REG_ALIGN, + adapter->rx_prod_offset + i * VMXNET3_REG_ALIGN, adapter->rx_queue[i].rx_ring[0].next2fill); - VMXNET3_WRITE_BAR0_REG(adapter, (VMXNET3_REG_RXPROD2 + + VMXNET3_WRITE_BAR0_REG(adapter, (adapter->rx_prod2_offset + (i * VMXNET3_REG_ALIGN)), adapter->rx_queue[i].rx_ring[1].next2fill); } @@ -3608,6 +3608,10 @@ vmxnet3_probe_device(struct pci_dev *pdev, if (VMXNET3_VERSION_GE_7(adapter)) { adapter->devcap_supported[0] = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_DCR); adapter->ptcap_supported[0] = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_PTCR); + if (adapter->devcap_supported[0] & (1UL << VMXNET3_CAP_LARGE_BAR)) { + adapter->dev_caps[0] = adapter->devcap_supported[0] & + (1UL << VMXNET3_CAP_LARGE_BAR); + } if (adapter->dev_caps[0]) VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_DCR, adapter->dev_caps[0]); @@ -3617,6 +3621,17 @@ vmxnet3_probe_device(struct pci_dev *pdev, spin_unlock_irqrestore(&adapter->cmd_lock, flags); } + if (VMXNET3_VERSION_GE_7(adapter) && + adapter->dev_caps[0] & (1UL << VMXNET3_CAP_LARGE_BAR)) { + adapter->tx_prod_offset = VMXNET3_REG_LB_TXPROD; + adapter->rx_prod_offset = VMXNET3_REG_LB_RXPROD; + 
adapter->rx_prod2_offset = VMXNET3_REG_LB_RXPROD2; + } else { + adapter->tx_prod_offset = VMXNET3_REG_TXPROD; + adapter->rx_prod_offset = VMXNET3_REG_RXPROD; + adapter->rx_prod2_offset = VMXNET3_REG_RXPROD2; + } + if (VMXNET3_VERSION_GE_6(adapter)) { spin_lock_irqsave(&adapter->cmd_lock, flags); VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, diff --git a/drivers/net/vmxnet3/vmxnet3_ethtool.c b/drivers/net/vmxnet3/vmxnet3_ethtool.c index 458f2da1ebab..397b268f7bc5 100644 --- a/drivers/net/vmxnet3/vmxnet3_ethtool.c +++ b/drivers/net/vmxnet3/vmxnet3_ethtool.c @@ -520,7 +520,7 @@ vmxnet3_get_regs(struct net_device *netdev, struct ethtool_regs *regs, void *p) for (i = 0; i < adapter->num_tx_queues; i++) { struct vmxnet3_tx_queue *tq = &adapter->tx_queue[i]; - buf[j++] = VMXNET3_READ_BAR0_REG(adapter, VMXNET3_REG_TXPROD + + buf[j++] = VMXNET3_READ_BAR0_REG(adapter, adapter->tx_prod_offset + i * VMXNET3_REG_ALIGN); buf[j++] = VMXNET3_GET_ADDR_LO(tq->tx_ring.basePA); @@ -548,9 +548,9 @@ vmxnet3_get_regs(struct net_device *netdev, struct ethtool_regs *regs, void *p) for (i = 0; i < adapter->num_rx_queues; i++) { struct vmxnet3_rx_queue *rq = &adapter->rx_queue[i]; - buf[j++] = VMXNET3_READ_BAR0_REG(adapter, VMXNET3_REG_RXPROD + + buf[j++] = VMXNET3_READ_BAR0_REG(adapter, adapter->rx_prod_offset + i * VMXNET3_REG_ALIGN); - buf[j++] = VMXNET3_READ_BAR0_REG(adapter, VMXNET3_REG_RXPROD2 + + buf[j++] = VMXNET3_READ_BAR0_REG(adapter, adapter->rx_prod2_offset + i * VMXNET3_REG_ALIGN); buf[j++] = VMXNET3_GET_ADDR_LO(rq->rx_ring[0].basePA); diff --git a/drivers/net/vmxnet3/vmxnet3_int.h b/drivers/net/vmxnet3/vmxnet3_int.h index a7c8f80702c2..a4f832f0ad5b 100644 --- a/drivers/net/vmxnet3/vmxnet3_int.h +++ b/drivers/net/vmxnet3/vmxnet3_int.h @@ -406,6 +406,9 @@ struct vmxnet3_adapter { u32 devcap_supported[8]; u32 ptcap_supported[8]; u32 dev_caps[8]; + u16 tx_prod_offset; + u16 rx_prod_offset; + u16 rx_prod2_offset; }; #define VMXNET3_WRITE_BAR0_REG(adapter, reg, val) \ From patchwork Wed Jun 8 03:23:49 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ronak Doshi X-Patchwork-Id: 12872881 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 090CCCCA481 for ; Wed, 8 Jun 2022 06:01:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234187AbiFHGBE (ORCPT ); Wed, 8 Jun 2022 02:01:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55012 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234679AbiFHFtG (ORCPT ); Wed, 8 Jun 2022 01:49:06 -0400 Received: from EX-PRD-EDGE01.vmware.com (ex-prd-edge01.vmware.com [208.91.3.33]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 58F2922C49F; Tue, 7 Jun 2022 20:24:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; s=s1024; d=vmware.com; h=from:to:cc:subject:date:message-id:in-reply-to:mime-version: content-type; bh=NNMVPO7g7f6Jkv3OjF+afguZ7MsAQZAz3wtqXiB2wJg=; b=U+xhvdnF9QD699yCp6bcLULezh9RzuC+XZNjC4FBABRTuKq8nk9fNtEMYHpx+S 4YBcZXfGOcE65mGUHPtIM+qLB65q+X8SPcvf3ZWbbindsi8jlYdj5JDaRl4yVr rdgRyya2mEYqUOtkOhLSbnkKR0qqlTsaWzRUuRsUX5TvTMI= Received: from sc9-mailhost3.vmware.com (10.113.161.73) by EX-PRD-EDGE01.vmware.com (10.188.245.6) with Microsoft SMTP Server id 15.1.2308.20; 
Tue, 7 Jun 2022 20:23:50 -0700 Received: from htb-1n-eng-dhcp122.eng.vmware.com (unknown [10.20.114.216]) by sc9-mailhost3.vmware.com (Postfix) with ESMTP id 5386120298; Tue, 7 Jun 2022 20:24:05 -0700 (PDT) Received: by htb-1n-eng-dhcp122.eng.vmware.com (Postfix, from userid 0) id 4DA58AA454; Tue, 7 Jun 2022 20:24:05 -0700 (PDT) From: Ronak Doshi To: CC: Ronak Doshi , VMware PV-Drivers Reviewers , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , open list Subject: [PATCH v3 net-next 4/8] vmxnet3: add support for out of order rx completion Date: Tue, 7 Jun 2022 20:23:49 -0700 Message-ID: <20220608032353.964-5-doshir@vmware.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20220608032353.964-1-doshir@vmware.com> References: <20220608032353.964-1-doshir@vmware.com> MIME-Version: 1.0 Received-SPF: None (EX-PRD-EDGE01.vmware.com: doshir@vmware.com does not designate permitted sender hosts) Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Currently, vmxnet3 processes rx completions in order, i.e. no out of order completion descriptor is expected. With UPT, if hardware supports LRO, then hardware can report out of order rx completions. This patch enhances vmxnet3 to add this support. This support takes effect only when the corresponding feature bit is set. Also, minor enhancements are done for performance. Signed-off-by: Ronak Doshi Acked-by: Guolin Yang --- drivers/net/vmxnet3/vmxnet3_drv.c | 70 ++++++++++++++++++++++++++++++++------- drivers/net/vmxnet3/vmxnet3_int.h | 5 +++ 2 files changed, 63 insertions(+), 12 deletions(-) diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c index 93f237db463d..94ca3bc1d540 100644 --- a/drivers/net/vmxnet3/vmxnet3_drv.c +++ b/drivers/net/vmxnet3/vmxnet3_drv.c @@ -585,6 +585,7 @@ vmxnet3_rq_alloc_rx_buf(struct vmxnet3_rx_queue *rq, u32 ring_idx, rbi = rbi_base + ring->next2fill; gd = ring->base + ring->next2fill; + rbi->comp_state = VMXNET3_RXD_COMP_PENDING; if (rbi->buf_type == VMXNET3_RX_BUF_SKB) { if (rbi->skb == NULL) { @@ -644,8 +645,10 @@ vmxnet3_rq_alloc_rx_buf(struct vmxnet3_rx_queue *rq, u32 ring_idx, /* Fill the last buffer but dont mark it ready, or else the * device will think that the queue is full */ - if (num_allocated == num_to_alloc) + if (num_allocated == num_to_alloc) { + rbi->comp_state = VMXNET3_RXD_COMP_DONE; break; + } gd->dword[2] |= cpu_to_le32(ring->gen << VMXNET3_RXD_GEN_SHIFT); num_allocated++; @@ -1367,6 +1370,7 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq, struct Vmxnet3_RxCompDesc *rcd; struct vmxnet3_rx_ctx *ctx = &rq->rx_ctx; u16 segCnt = 0, mss = 0; + int comp_offset, fill_offset; #ifdef __BIG_ENDIAN_BITFIELD struct Vmxnet3_RxDesc rxCmdDesc; struct Vmxnet3_RxCompDesc rxComp; @@ -1639,9 +1643,15 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq, rcd_done: /* device may have skipped some rx descs */ - ring->next2comp = idx; - num_to_alloc = vmxnet3_cmd_ring_desc_avail(ring); ring = rq->rx_ring + ring_idx; + rbi->comp_state = VMXNET3_RXD_COMP_DONE; + + comp_offset = vmxnet3_cmd_ring_desc_avail(ring); + fill_offset = (idx > ring->next2fill ? 0 : ring->size) + + idx - ring->next2fill - 1; + if (!ring->isOutOfOrder || fill_offset >= comp_offset) + ring->next2comp = idx; + num_to_alloc = vmxnet3_cmd_ring_desc_avail(ring); /* Ensure that the writes to rxd->gen bits will be observed * after all other writes to rxd objects. 
@@ -1649,18 +1659,38 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq, dma_wmb(); while (num_to_alloc) { - vmxnet3_getRxDesc(rxd, &ring->base[ring->next2fill].rxd, - &rxCmdDesc); - BUG_ON(!rxd->addr); - - /* Recv desc is ready to be used by the device */ - rxd->gen = ring->gen; - vmxnet3_cmd_ring_adv_next2fill(ring); - num_to_alloc--; + rbi = rq->buf_info[ring_idx] + ring->next2fill; + if (!(adapter->dev_caps[0] & (1UL << VMXNET3_CAP_OOORX_COMP))) + goto refill_buf; + if (ring_idx == 0) { + /* ring0 Type1 buffers can get skipped; re-fill them */ + if (rbi->buf_type != VMXNET3_RX_BUF_SKB) + goto refill_buf; + } + if (rbi->comp_state == VMXNET3_RXD_COMP_DONE) { +refill_buf: + vmxnet3_getRxDesc(rxd, &ring->base[ring->next2fill].rxd, + &rxCmdDesc); + WARN_ON(!rxd->addr); + + /* Recv desc is ready to be used by the device */ + rxd->gen = ring->gen; + vmxnet3_cmd_ring_adv_next2fill(ring); + rbi->comp_state = VMXNET3_RXD_COMP_PENDING; + num_to_alloc--; + } else { + /* rx completion hasn't occurred */ + ring->isOutOfOrder = 1; + break; + } + } + + if (num_to_alloc == 0) { + ring->isOutOfOrder = 0; } /* if needed, update the register */ - if (unlikely(rq->shared->updateRxProd)) { + if (unlikely(rq->shared->updateRxProd) && (ring->next2fill & 0xf) == 0) { VMXNET3_WRITE_BAR0_REG(adapter, rxprod_reg[ring_idx] + rq->qid * 8, ring->next2fill); @@ -1824,6 +1854,7 @@ vmxnet3_rq_init(struct vmxnet3_rx_queue *rq, memset(rq->rx_ring[i].base, 0, rq->rx_ring[i].size * sizeof(struct Vmxnet3_RxDesc)); rq->rx_ring[i].gen = VMXNET3_INIT_GEN; + rq->rx_ring[i].isOutOfOrder = 0; } if (vmxnet3_rq_alloc_rx_buf(rq, 0, rq->rx_ring[0].size - 1, adapter) == 0) { @@ -2014,8 +2045,17 @@ vmxnet3_poll_rx_only(struct napi_struct *napi, int budget) rxd_done = vmxnet3_rq_rx_complete(rq, adapter, budget); if (rxd_done < budget) { + struct Vmxnet3_RxCompDesc *rcd; +#ifdef __BIG_ENDIAN_BITFIELD + struct Vmxnet3_RxCompDesc rxComp; +#endif napi_complete_done(napi, rxd_done); vmxnet3_enable_intr(adapter, rq->comp_ring.intr_idx); + /* after unmasking the interrupt, check if any descriptors were completed */ + vmxnet3_getRxComp(rcd, &rq->comp_ring.base[rq->comp_ring.next2proc].rcd, + &rxComp); + if (rcd->gen == rq->comp_ring.gen && napi_reschedule(napi)) + vmxnet3_disable_intr(adapter, rq->comp_ring.intr_idx); } return rxd_done; } @@ -3612,6 +3652,12 @@ vmxnet3_probe_device(struct pci_dev *pdev, adapter->dev_caps[0] = adapter->devcap_supported[0] & (1UL << VMXNET3_CAP_LARGE_BAR); } + if (!(adapter->ptcap_supported[0] & (1UL << VMXNET3_DCR_ERROR)) && + adapter->ptcap_supported[0] & (1UL << VMXNET3_CAP_OOORX_COMP) && + adapter->devcap_supported[0] & (1UL << VMXNET3_CAP_OOORX_COMP)) { + adapter->dev_caps[0] |= adapter->devcap_supported[0] & + (1UL << VMXNET3_CAP_OOORX_COMP); + } if (adapter->dev_caps[0]) VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_DCR, adapter->dev_caps[0]); diff --git a/drivers/net/vmxnet3/vmxnet3_int.h b/drivers/net/vmxnet3/vmxnet3_int.h index a4f832f0ad5b..5b495ef253e8 100644 --- a/drivers/net/vmxnet3/vmxnet3_int.h +++ b/drivers/net/vmxnet3/vmxnet3_int.h @@ -136,6 +136,7 @@ struct vmxnet3_cmd_ring { u32 next2fill; u32 next2comp; u8 gen; + u8 isOutOfOrder; dma_addr_t basePA; }; @@ -260,9 +261,13 @@ enum vmxnet3_rx_buf_type { VMXNET3_RX_BUF_PAGE = 2 }; +#define VMXNET3_RXD_COMP_PENDING 0 +#define VMXNET3_RXD_COMP_DONE 1 + struct vmxnet3_rx_buf_info { enum vmxnet3_rx_buf_type buf_type; u16 len; + u8 comp_state; union { struct sk_buff *skb; struct page *page; From patchwork Wed Jun 8 03:23:50 2022 Content-Type: 
text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ronak Doshi X-Patchwork-Id: 12872883 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6705BCCA481 for ; Wed, 8 Jun 2022 06:01:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234598AbiFHGBZ (ORCPT ); Wed, 8 Jun 2022 02:01:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54804 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234683AbiFHFtG (ORCPT ); Wed, 8 Jun 2022 01:49:06 -0400 Received: from EX-PRD-EDGE02.vmware.com (ex-prd-edge02.vmware.com [208.91.3.34]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 65EC022AE54; Tue, 7 Jun 2022 20:24:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; s=s1024; d=vmware.com; h=from:to:cc:subject:date:message-id:in-reply-to:mime-version: content-type; bh=E5xm37JS8Q8Lp68/e8nr1z+KpVfKRYzxF6R13LBH1+Q=; b=M4tgQrNic+r6hfGg1a3kJ8rZQhi3pLAIa7lyFKNVTvG83OM7IuYjfKPKcwP7Wa WBjJmulXBuCnVx3dPHiD5cUtHxG+D9pphb4n/jnb5SHDlR2z9RPvFgRT/jB1Km YlDUZq2vymt1JScuev/ebdD9L0lGvfedrFyQCBoafapyssM= Received: from sc9-mailhost2.vmware.com (10.113.161.72) by EX-PRD-EDGE02.vmware.com (10.188.245.7) with Microsoft SMTP Server id 15.1.2308.14; Tue, 7 Jun 2022 20:24:00 -0700 Received: from htb-1n-eng-dhcp122.eng.vmware.com (unknown [10.20.114.216]) by sc9-mailhost2.vmware.com (Postfix) with ESMTP id 8992520328; Tue, 7 Jun 2022 20:24:06 -0700 (PDT) Received: by htb-1n-eng-dhcp122.eng.vmware.com (Postfix, from userid 0) id 82531AA454; Tue, 7 Jun 2022 20:24:06 -0700 (PDT) From: Ronak Doshi To: CC: Ronak Doshi , VMware PV-Drivers Reviewers , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , open list Subject: [PATCH v3 net-next 5/8] vmxnet3: add command to set ring buffer sizes Date: Tue, 7 Jun 2022 20:23:50 -0700 Message-ID: <20220608032353.964-6-doshir@vmware.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20220608032353.964-1-doshir@vmware.com> References: <20220608032353.964-1-doshir@vmware.com> MIME-Version: 1.0 Received-SPF: None (EX-PRD-EDGE02.vmware.com: doshir@vmware.com does not designate permitted sender hosts) Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org This patch adds a new command to set ring buffer sizes. This is required to pass the buffer size information to passthrough devices. For performance reasons, with version 7 and later, ring1 will contain only MTU-size buffers (bounded to 3K). Packets > 3K will use both ring1 and ring2. Also, ring sizes are rounded down to a power of 2 and the ring2 default size is increased to 512. 
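Concretely, the version-7 sizing path below caps the ring1 (type-0/skb) buffer at VMXNET3_MAX_SKB_BUF_SIZE (~3K), places page-size buffers only on ring2, and reports the chosen sizes to the device with the new command; a condensed sketch of the hunks that follow (the actual command write is issued under the adapter's cmd_lock in vmxnet3_init_bufsize()):

	adapter->skb_buf_size = min((int)adapter->netdev->mtu + VMXNET3_MAX_ETH_HDR_SIZE,
				    VMXNET3_MAX_SKB_BUF_SIZE);
	adapter->ringBufSize.ring1BufSizeType0 = cpu_to_le16(adapter->skb_buf_size);
	adapter->ringBufSize.ring1BufSizeType1 = 0;
	adapter->ringBufSize.ring2BufSizeType1 = cpu_to_le16(PAGE_SIZE);

	/* hand the negotiated sizes to the device */
	cmdInfo->ringBufSize = adapter->ringBufSize;
	VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
			       VMXNET3_CMD_SET_RING_BUFFER_SIZE);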
Signed-off-by: Ronak Doshi Acked-by: Guolin Yang --- drivers/net/vmxnet3/vmxnet3_defs.h | 11 +++++++ drivers/net/vmxnet3/vmxnet3_drv.c | 57 +++++++++++++++++++++++++++-------- drivers/net/vmxnet3/vmxnet3_ethtool.c | 7 +++++ drivers/net/vmxnet3/vmxnet3_int.h | 3 +- 4 files changed, 65 insertions(+), 13 deletions(-) diff --git a/drivers/net/vmxnet3/vmxnet3_defs.h b/drivers/net/vmxnet3/vmxnet3_defs.h index 8d626611ab2d..415e4c9993ef 100644 --- a/drivers/net/vmxnet3/vmxnet3_defs.h +++ b/drivers/net/vmxnet3/vmxnet3_defs.h @@ -99,6 +99,9 @@ enum { VMXNET3_CMD_SET_COALESCE, VMXNET3_CMD_REGISTER_MEMREGS, VMXNET3_CMD_SET_RSS_FIELDS, + VMXNET3_CMD_RESERVED4, + VMXNET3_CMD_RESERVED5, + VMXNET3_CMD_SET_RING_BUFFER_SIZE, VMXNET3_CMD_FIRST_GET = 0xF00D0000, VMXNET3_CMD_GET_QUEUE_STATUS = VMXNET3_CMD_FIRST_GET, @@ -743,6 +746,13 @@ enum Vmxnet3_RSSField { VMXNET3_RSS_FIELDS_ESPIP6 = 0x0020, }; +struct Vmxnet3_RingBufferSize { + __le16 ring1BufSizeType0; + __le16 ring1BufSizeType1; + __le16 ring2BufSizeType1; + __le16 pad; +}; + /* If the command data <= 16 bytes, use the shared memory directly. * otherwise, use variable length configuration descriptor. */ @@ -750,6 +760,7 @@ union Vmxnet3_CmdInfo { struct Vmxnet3_VariableLenConfDesc varConf; struct Vmxnet3_SetPolling setPolling; enum Vmxnet3_RSSField setRssFields; + struct Vmxnet3_RingBufferSize ringBufSize; __le64 data[2]; }; diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c index 94ca3bc1d540..c2f99c01a130 100644 --- a/drivers/net/vmxnet3/vmxnet3_drv.c +++ b/drivers/net/vmxnet3/vmxnet3_drv.c @@ -2681,6 +2681,23 @@ vmxnet3_setup_driver_shared(struct vmxnet3_adapter *adapter) } static void +vmxnet3_init_bufsize(struct vmxnet3_adapter *adapter) +{ + struct Vmxnet3_DriverShared *shared = adapter->shared; + union Vmxnet3_CmdInfo *cmdInfo = &shared->cu.cmdInfo; + unsigned long flags; + + if (!VMXNET3_VERSION_GE_7(adapter)) + return; + + cmdInfo->ringBufSize = adapter->ringBufSize; + spin_lock_irqsave(&adapter->cmd_lock, flags); + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, + VMXNET3_CMD_SET_RING_BUFFER_SIZE); + spin_unlock_irqrestore(&adapter->cmd_lock, flags); +} + +static void vmxnet3_init_coalesce(struct vmxnet3_adapter *adapter) { struct Vmxnet3_DriverShared *shared = adapter->shared; @@ -2818,6 +2835,7 @@ vmxnet3_activate_dev(struct vmxnet3_adapter *adapter) goto activate_err; } + vmxnet3_init_bufsize(adapter); vmxnet3_init_coalesce(adapter); vmxnet3_init_rssfields(adapter); @@ -2991,19 +3009,29 @@ static void vmxnet3_adjust_rx_ring_size(struct vmxnet3_adapter *adapter) { size_t sz, i, ring0_size, ring1_size, comp_size; - if (adapter->netdev->mtu <= VMXNET3_MAX_SKB_BUF_SIZE - - VMXNET3_MAX_ETH_HDR_SIZE) { - adapter->skb_buf_size = adapter->netdev->mtu + - VMXNET3_MAX_ETH_HDR_SIZE; - if (adapter->skb_buf_size < VMXNET3_MIN_T0_BUF_SIZE) - adapter->skb_buf_size = VMXNET3_MIN_T0_BUF_SIZE; - - adapter->rx_buf_per_pkt = 1; + /* With version7 ring1 will have only T0 buffers */ + if (!VMXNET3_VERSION_GE_7(adapter)) { + if (adapter->netdev->mtu <= VMXNET3_MAX_SKB_BUF_SIZE - + VMXNET3_MAX_ETH_HDR_SIZE) { + adapter->skb_buf_size = adapter->netdev->mtu + + VMXNET3_MAX_ETH_HDR_SIZE; + if (adapter->skb_buf_size < VMXNET3_MIN_T0_BUF_SIZE) + adapter->skb_buf_size = VMXNET3_MIN_T0_BUF_SIZE; + + adapter->rx_buf_per_pkt = 1; + } else { + adapter->skb_buf_size = VMXNET3_MAX_SKB_BUF_SIZE; + sz = adapter->netdev->mtu - VMXNET3_MAX_SKB_BUF_SIZE + + VMXNET3_MAX_ETH_HDR_SIZE; + adapter->rx_buf_per_pkt = 1 + (sz + PAGE_SIZE - 1) / PAGE_SIZE; 
+ } } else { - adapter->skb_buf_size = VMXNET3_MAX_SKB_BUF_SIZE; - sz = adapter->netdev->mtu - VMXNET3_MAX_SKB_BUF_SIZE + - VMXNET3_MAX_ETH_HDR_SIZE; - adapter->rx_buf_per_pkt = 1 + (sz + PAGE_SIZE - 1) / PAGE_SIZE; + adapter->skb_buf_size = min((int)adapter->netdev->mtu + VMXNET3_MAX_ETH_HDR_SIZE, + VMXNET3_MAX_SKB_BUF_SIZE); + adapter->rx_buf_per_pkt = 1; + adapter->ringBufSize.ring1BufSizeType0 = cpu_to_le16(adapter->skb_buf_size); + adapter->ringBufSize.ring1BufSizeType1 = 0; + adapter->ringBufSize.ring2BufSizeType1 = cpu_to_le16(PAGE_SIZE); } /* @@ -3019,6 +3047,11 @@ vmxnet3_adjust_rx_ring_size(struct vmxnet3_adapter *adapter) ring1_size = (ring1_size + sz - 1) / sz * sz; ring1_size = min_t(u32, ring1_size, VMXNET3_RX_RING2_MAX_SIZE / sz * sz); + /* For v7 and later, keep ring size power of 2 for UPT */ + if (VMXNET3_VERSION_GE_7(adapter)) { + ring0_size = rounddown_pow_of_two(ring0_size); + ring1_size = rounddown_pow_of_two(ring1_size); + } comp_size = ring0_size + ring1_size; for (i = 0; i < adapter->num_rx_queues; i++) { diff --git a/drivers/net/vmxnet3/vmxnet3_ethtool.c b/drivers/net/vmxnet3/vmxnet3_ethtool.c index 397b268f7bc5..ce3993282c0f 100644 --- a/drivers/net/vmxnet3/vmxnet3_ethtool.c +++ b/drivers/net/vmxnet3/vmxnet3_ethtool.c @@ -718,6 +718,13 @@ vmxnet3_set_ringparam(struct net_device *netdev, new_rx_ring2_size = min_t(u32, new_rx_ring2_size, VMXNET3_RX_RING2_MAX_SIZE); + /* For v7 and later, keep ring size power of 2 for UPT */ + if (VMXNET3_VERSION_GE_7(adapter)) { + new_tx_ring_size = rounddown_pow_of_two(new_tx_ring_size); + new_rx_ring_size = rounddown_pow_of_two(new_rx_ring_size); + new_rx_ring2_size = rounddown_pow_of_two(new_rx_ring2_size); + } + /* rx data ring buffer size has to be a multiple of * VMXNET3_RXDATA_DESC_SIZE_ALIGN */ diff --git a/drivers/net/vmxnet3/vmxnet3_int.h b/drivers/net/vmxnet3/vmxnet3_int.h index 5b495ef253e8..cb87731f5f1c 100644 --- a/drivers/net/vmxnet3/vmxnet3_int.h +++ b/drivers/net/vmxnet3/vmxnet3_int.h @@ -408,6 +408,7 @@ struct vmxnet3_adapter { dma_addr_t pm_conf_pa; dma_addr_t rss_conf_pa; bool queuesExtEnabled; + struct Vmxnet3_RingBufferSize ringBufSize; u32 devcap_supported[8]; u32 ptcap_supported[8]; u32 dev_caps[8]; @@ -449,7 +450,7 @@ struct vmxnet3_adapter { /* must be a multiple of VMXNET3_RING_SIZE_ALIGN */ #define VMXNET3_DEF_TX_RING_SIZE 512 #define VMXNET3_DEF_RX_RING_SIZE 1024 -#define VMXNET3_DEF_RX_RING2_SIZE 256 +#define VMXNET3_DEF_RX_RING2_SIZE 512 #define VMXNET3_DEF_RXDATA_DESC_SIZE 128 From patchwork Wed Jun 8 03:23:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ronak Doshi X-Patchwork-Id: 12872884 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DCBAAC433EF for ; Wed, 8 Jun 2022 06:01:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234706AbiFHGBi (ORCPT ); Wed, 8 Jun 2022 02:01:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54858 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234700AbiFHFtH (ORCPT ); Wed, 8 Jun 2022 01:49:07 -0400 Received: from EX-PRD-EDGE02.vmware.com (ex-prd-edge02.vmware.com [208.91.3.34]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BD80F40C127; Tue, 7 Jun 2022 20:24:23 -0700 (PDT) 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; s=s1024; d=vmware.com; h=from:to:cc:subject:date:message-id:in-reply-to:mime-version: content-type; bh=j8wwcqtmDC2RnTjeTTDk0Jan/92OOGtSGZvFDg9KNRY=; b=gVT2fifgYoab+wS0x60o5zpBjKCQIUOCsx7xMEZcX1H/ZkzBa8TZGFzicI+nzm ujeRUI2qijGE58QkoHkae881o3JuZHI2DVW8ew/y4Z9csLq31e3L38kd8fg8wb VH/TU9QHYDt7jJjRUDUTDV1OOqxu+CQ8LZoK6tFee1+Lwi8= Received: from sc9-mailhost1.vmware.com (10.113.161.71) by EX-PRD-EDGE02.vmware.com (10.188.245.7) with Microsoft SMTP Server id 15.1.2308.14; Tue, 7 Jun 2022 20:24:01 -0700 Received: from htb-1n-eng-dhcp122.eng.vmware.com (unknown [10.20.114.216]) by sc9-mailhost1.vmware.com (Postfix) with ESMTP id 7DE2623447; Tue, 7 Jun 2022 20:24:07 -0700 (PDT) Received: by htb-1n-eng-dhcp122.eng.vmware.com (Postfix, from userid 0) id 7851DAA454; Tue, 7 Jun 2022 20:24:07 -0700 (PDT) From: Ronak Doshi To: CC: Ronak Doshi , VMware PV-Drivers Reviewers , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , open list Subject: [PATCH v3 net-next 6/8] vmxnet3: limit number of TXDs used for TSO packet Date: Tue, 7 Jun 2022 20:23:51 -0700 Message-ID: <20220608032353.964-7-doshir@vmware.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20220608032353.964-1-doshir@vmware.com> References: <20220608032353.964-1-doshir@vmware.com> MIME-Version: 1.0 Received-SPF: None (EX-PRD-EDGE02.vmware.com: doshir@vmware.com does not designate permitted sender hosts) Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Currently, vmxnet3 does not have a limit on number of descriptors used for a TSO packet. However, with UPT, for hardware performance reasons, this patch limits the number of transmit descriptors to 24 for a TSO packet. Signed-off-by: Ronak Doshi Acked-by: Guolin Yang --- drivers/net/vmxnet3/vmxnet3_defs.h | 2 ++ drivers/net/vmxnet3/vmxnet3_drv.c | 17 +++++++++++++++++ 2 files changed, 19 insertions(+) diff --git a/drivers/net/vmxnet3/vmxnet3_defs.h b/drivers/net/vmxnet3/vmxnet3_defs.h index 415e4c9993ef..cb9dc72f2b3d 100644 --- a/drivers/net/vmxnet3/vmxnet3_defs.h +++ b/drivers/net/vmxnet3/vmxnet3_defs.h @@ -400,6 +400,8 @@ union Vmxnet3_GenericDesc { /* max # of tx descs for a non-tso pkt */ #define VMXNET3_MAX_TXD_PER_PKT 16 +/* max # of tx descs for a tso pkt */ +#define VMXNET3_MAX_TSO_TXD_PER_PKT 24 /* Max size of a single rx buffer */ #define VMXNET3_MAX_RX_BUF_SIZE ((1 << 14) - 1) diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c index c2f99c01a130..2ee3a2e39f10 100644 --- a/drivers/net/vmxnet3/vmxnet3_drv.c +++ b/drivers/net/vmxnet3/vmxnet3_drv.c @@ -1061,6 +1061,23 @@ vmxnet3_tq_xmit(struct sk_buff *skb, struct vmxnet3_tx_queue *tq, } tq->stats.copy_skb_header++; } + if (unlikely(count > VMXNET3_MAX_TSO_TXD_PER_PKT)) { + /* tso pkts must not use more than + * VMXNET3_MAX_TSO_TXD_PER_PKT entries + */ + if (skb_linearize(skb) != 0) { + tq->stats.drop_too_many_frags++; + goto drop_pkt; + } + tq->stats.linearized++; + + /* recalculate the # of descriptors to use */ + count = VMXNET3_TXD_NEEDED(skb_headlen(skb)) + 1; + if (unlikely(count > VMXNET3_MAX_TSO_TXD_PER_PKT)) { + tq->stats.drop_too_many_frags++; + goto drop_pkt; + } + } if (skb->encapsulation) { vmxnet3_prepare_inner_tso(skb, &ctx); } else { From patchwork Wed Jun 8 03:23:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ronak Doshi X-Patchwork-Id: 12872860 X-Patchwork-Delegate: kuba@kernel.org Return-Path: 
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 43DD1C433EF for ; Wed, 8 Jun 2022 05:41:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233642AbiFHFlH (ORCPT ); Wed, 8 Jun 2022 01:41:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44470 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236085AbiFHFky (ORCPT ); Wed, 8 Jun 2022 01:40:54 -0400 Received: from EX-PRD-EDGE01.vmware.com (ex-prd-edge01.vmware.com [208.91.3.33]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 22C3940BA5B; Tue, 7 Jun 2022 20:24:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; s=s1024; d=vmware.com; h=from:to:cc:subject:date:message-id:in-reply-to:mime-version: content-type; bh=q1Tg/wqCzr6O6IDRP6nkjppRtmGVU7CRSaKK5KQoZrA=; b=gM+7zrFYzjuT6XB249YrR14hDSE0pNgadBb5EYVQBxpkux77eDEerMCkNigeDr VX6ec/RkYldNjjdjNkC9dSCkhTJsL+27eDaJz+Wm+4N+T8AtGr7m9bVIq2C8d/ dhn6J1AQpqzipRJ7BPGBgUKzSKT2Ysbu1tdVppMyHslNiKc= Received: from sc9-mailhost1.vmware.com (10.113.161.71) by EX-PRD-EDGE01.vmware.com (10.188.245.6) with Microsoft SMTP Server id 15.1.2308.20; Tue, 7 Jun 2022 20:23:53 -0700 Received: from htb-1n-eng-dhcp122.eng.vmware.com (unknown [10.20.114.216]) by sc9-mailhost1.vmware.com (Postfix) with ESMTP id 9C6692346F; Tue, 7 Jun 2022 20:24:08 -0700 (PDT) Received: by htb-1n-eng-dhcp122.eng.vmware.com (Postfix, from userid 0) id 96D69AA454; Tue, 7 Jun 2022 20:24:08 -0700 (PDT) From: Ronak Doshi To: CC: Ronak Doshi , VMware PV-Drivers Reviewers , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , open list Subject: [PATCH v3 net-next 7/8] vmxnet3: use ext1 field to indicate encapsulated packet Date: Tue, 7 Jun 2022 20:23:52 -0700 Message-ID: <20220608032353.964-8-doshir@vmware.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20220608032353.964-1-doshir@vmware.com> References: <20220608032353.964-1-doshir@vmware.com> MIME-Version: 1.0 Received-SPF: None (EX-PRD-EDGE01.vmware.com: doshir@vmware.com does not designate permitted sender hosts) Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Up to vmxnet3 version 6, the om field of the transmit descriptor was used to indicate an encapsulated offload packet, and msscof was used to indirectly indicate TSO/CSO. From version 7 onward, the ext1 field will be used to indicate whether the packet is encapsulated or not, and the om field will continue to indicate whether the packet is TSO or CSO. 
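The resulting descriptor fill for an encapsulated TSO packet reduces to the following (condensed from the vmxnet3_tq_xmit() hunk below); the checksum-offload case is analogous, with VMXNET3_OM_CSUM and msscof carrying the inner checksum offset instead of the MSS:

	gdesc->txd.hlen = ctx.l4_offset + ctx.l4_hdr_size;
	if (VMXNET3_VERSION_GE_7(adapter)) {
		gdesc->txd.om = VMXNET3_OM_TSO;		/* om keeps its plain TSO meaning */
		gdesc->txd.ext1 = 1;			/* inner (encapsulated) offload */
	} else {
		gdesc->txd.om = VMXNET3_OM_ENCAP;	/* pre-v7 encoding */
	}
	gdesc->txd.msscof = ctx.mss;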
Signed-off-by: Ronak Doshi
Acked-by: Guolin Yang
---
 drivers/net/vmxnet3/vmxnet3_defs.h | 14 ++++++++------
 drivers/net/vmxnet3/vmxnet3_drv.c  | 18 +++++++++++++++---
 2 files changed, 23 insertions(+), 9 deletions(-)

diff --git a/drivers/net/vmxnet3/vmxnet3_defs.h b/drivers/net/vmxnet3/vmxnet3_defs.h
index cb9dc72f2b3d..41d6767283a6 100644
--- a/drivers/net/vmxnet3/vmxnet3_defs.h
+++ b/drivers/net/vmxnet3/vmxnet3_defs.h
@@ -148,17 +148,17 @@ struct Vmxnet3_TxDesc {
 #ifdef __BIG_ENDIAN_BITFIELD
 	u32 msscof:14;  /* MSS, checksum offset, flags */
-	u32 ext1:1;
+	u32 ext1:1;     /* set to 1 to indicate inner csum/tso, vmxnet3 v7 */
 	u32 dtype:1;    /* descriptor type */
-	u32 oco:1;
+	u32 oco:1;      /* Outer csum offload */
 	u32 gen:1;      /* generation bit */
 	u32 len:14;
 #else
 	u32 len:14;
 	u32 gen:1;      /* generation bit */
-	u32 oco:1;
+	u32 oco:1;      /* Outer csum offload */
 	u32 dtype:1;    /* descriptor type */
-	u32 ext1:1;
+	u32 ext1:1;     /* set to 1 to indicate inner csum/tso, vmxnet3 v7 */
 	u32 msscof:14;  /* MSS, checksum offset, flags */
 #endif  /* __BIG_ENDIAN_BITFIELD */
@@ -262,11 +262,13 @@ struct Vmxnet3_RxCompDesc {
 	u32 rqID:10;      /* rx queue/ring ID */
 	u32 sop:1;        /* Start of Packet */
 	u32 eop:1;        /* End of Packet */
-	u32 ext1:2;
+	u32 ext1:2;       /* bit 0: indicating v4/v6/.. is for inner header */
+			  /* bit 1: indicating rssType is based on inner header */
 	u32 rxdIdx:12;    /* Index of the RxDesc */
 #else
 	u32 rxdIdx:12;    /* Index of the RxDesc */
-	u32 ext1:2;
+	u32 ext1:2;       /* bit 0: indicating v4/v6/.. is for inner header */
+			  /* bit 1: indicating rssType is based on inner header */
 	u32 eop:1;        /* End of Packet */
 	u32 sop:1;        /* Start of Packet */
 	u32 rqID:10;      /* rx queue/ring ID */
diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c
index 2ee3a2e39f10..5c42b4a008e8 100644
--- a/drivers/net/vmxnet3/vmxnet3_drv.c
+++ b/drivers/net/vmxnet3/vmxnet3_drv.c
@@ -1161,7 +1161,12 @@ vmxnet3_tq_xmit(struct sk_buff *skb, struct vmxnet3_tx_queue *tq,
 	if (ctx.mss) {
 		if (VMXNET3_VERSION_GE_4(adapter) && skb->encapsulation) {
 			gdesc->txd.hlen = ctx.l4_offset + ctx.l4_hdr_size;
-			gdesc->txd.om = VMXNET3_OM_ENCAP;
+			if (VMXNET3_VERSION_GE_7(adapter)) {
+				gdesc->txd.om = VMXNET3_OM_TSO;
+				gdesc->txd.ext1 = 1;
+			} else {
+				gdesc->txd.om = VMXNET3_OM_ENCAP;
+			}
 			gdesc->txd.msscof = ctx.mss;
 
 			if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL_CSUM)
@@ -1178,8 +1183,15 @@ vmxnet3_tq_xmit(struct sk_buff *skb, struct vmxnet3_tx_queue *tq,
 		    skb->encapsulation) {
 			gdesc->txd.hlen = ctx.l4_offset +
 					  ctx.l4_hdr_size;
-			gdesc->txd.om = VMXNET3_OM_ENCAP;
-			gdesc->txd.msscof = 0;		/* Reserved */
+			if (VMXNET3_VERSION_GE_7(adapter)) {
+				gdesc->txd.om = VMXNET3_OM_CSUM;
+				gdesc->txd.msscof = ctx.l4_offset +
+						    skb->csum_offset;
+				gdesc->txd.ext1 = 1;
+			} else {
+				gdesc->txd.om = VMXNET3_OM_ENCAP;
+				gdesc->txd.msscof = 0;	/* Reserved */
+			}
 		} else {
 			gdesc->txd.hlen = ctx.l4_offset;
 			gdesc->txd.om = VMXNET3_OM_CSUM;
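As a quick illustration of the new descriptor encoding, the following
standalone C sketch shows how om, ext1 and msscof could be chosen for
an encapsulated packet before and after version 7. The txd struct, the
OM_* constants and fill_encap_offload() are illustrative stand-ins (the
real driver fills a Vmxnet3_TxDesc inside vmxnet3_tq_xmit()); only the
pre-v7 versus v7 split reflects this patch.

#include <stdbool.h>
#include <stdio.h>

enum { OM_NONE, OM_CSUM, OM_TSO, OM_ENCAP };    /* illustrative offload modes */

struct txd {
	unsigned int om:2;       /* offload mode */
	unsigned int ext1:1;     /* v7: offload applies to the inner headers */
	unsigned int msscof:14;  /* MSS (TSO) or checksum offset (CSO) */
};

static void fill_encap_offload(struct txd *d, bool v7, bool tso,
			       unsigned int mss, unsigned int inner_csum_off)
{
	if (!v7) {
		/* up to v6: om itself carries the "encapsulated" hint */
		d->om = OM_ENCAP;
		d->msscof = tso ? mss : 0;      /* 0 is reserved for CSO */
		return;
	}
	/* v7 and later: om says TSO vs. CSO, ext1 says "inner headers" */
	d->om = tso ? OM_TSO : OM_CSUM;
	d->ext1 = 1;
	d->msscof = tso ? mss : inner_csum_off;
}

int main(void)
{
	struct txd d = { 0 };

	fill_encap_offload(&d, true, false, 0, 104);
	printf("v7 encapsulated CSO: om=%u ext1=%u msscof=%u\n",
	       d.om, d.ext1, d.msscof);
	return 0;
}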
From patchwork Wed Jun 8 03:23:53 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ronak Doshi
X-Patchwork-Id: 12872882
X-Patchwork-Delegate: kuba@kernel.org
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on
 aws-us-west-2-korg-lkml-1.web.codeaurora.org
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by
 smtp.lore.kernel.org (Postfix) with ESMTP id 0D741CCA485 for ;
 Wed, 8 Jun 2022 06:01:28 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id
 S234449AbiFHGBO (ORCPT ); Wed, 8 Jun 2022 02:01:14 -0400
Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34010 "EHLO
 lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with
 ESMTP id S234711AbiFHFtH (ORCPT ); Wed, 8 Jun 2022 01:49:07 -0400
Received: from EX-PRD-EDGE02.vmware.com (ex-prd-edge02.vmware.com
 [208.91.3.34]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id
 3A6842AE24; Tue, 7 Jun 2022 20:24:26 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; s=s1024; d=vmware.com;
 h=from:to:cc:subject:date:message-id:in-reply-to:mime-version:content-type;
 bh=6GTDcHZ+adrQnA/VnTSgw7BGahrkpZagZo7aeFM/ljc=;
 b=NNAKRooLP9E2TXgb1dE2/lqKXjYH5UCsNhbCouGNthDKzSr/FxHLSGi8S+eooI
 3ZwJPZ6+1LqxK4aTNlRbTL3+w1LV5mzyJbpodUU/SVP8YxMWBQcAF4xUG3QHQ4
 qROp13/2c+bsIX0junZqeaNS0FQ5RKXPWe50rfNk7ikP73M=
Received: from sc9-mailhost2.vmware.com (10.113.161.72) by
 EX-PRD-EDGE02.vmware.com (10.188.245.7) with Microsoft SMTP Server id
 15.1.2308.14; Tue, 7 Jun 2022 20:24:04 -0700
Received: from htb-1n-eng-dhcp122.eng.vmware.com (unknown [10.20.114.216]) by
 sc9-mailhost2.vmware.com (Postfix) with ESMTP id 019F620327;
 Tue, 7 Jun 2022 20:24:10 -0700 (PDT)
Received: by htb-1n-eng-dhcp122.eng.vmware.com (Postfix, from userid 0) id
 F02E5AA454; Tue, 7 Jun 2022 20:24:09 -0700 (PDT)
From: Ronak Doshi
To:
CC: Ronak Doshi , VMware PV-Drivers Reviewers , "David S. Miller" ,
 Eric Dumazet , Jakub Kicinski , Paolo Abeni , open list
Subject: [PATCH v3 net-next 8/8] vmxnet3: update to version 7
Date: Tue, 7 Jun 2022 20:23:53 -0700
Message-ID: <20220608032353.964-9-doshir@vmware.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20220608032353.964-1-doshir@vmware.com>
References: <20220608032353.964-1-doshir@vmware.com>
MIME-Version: 1.0
Received-SPF: None (EX-PRD-EDGE02.vmware.com: doshir@vmware.com does not
 designate permitted sender hosts)
Precedence: bulk
List-ID:
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-Delegate: kuba@kernel.org

With all vmxnet3 version 7 changes incorporated into the driver, the
driver can now configure the emulation to run at vmxnet3 version 7,
provided the emulation advertises support for version 7.

Signed-off-by: Ronak Doshi
Acked-by: Guolin Yang
---
 drivers/net/vmxnet3/vmxnet3_drv.c | 7 ++++++-
 drivers/net/vmxnet3/vmxnet3_int.h | 4 ++--
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c
index 5c42b4a008e8..1565e1808a19 100644
--- a/drivers/net/vmxnet3/vmxnet3_drv.c
+++ b/drivers/net/vmxnet3/vmxnet3_drv.c
@@ -3659,7 +3659,12 @@ vmxnet3_probe_device(struct pci_dev *pdev,
 		goto err_alloc_pci;
 
 	ver = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_VRRS);
-	if (ver & (1 << VMXNET3_REV_6)) {
+	if (ver & (1 << VMXNET3_REV_7)) {
+		VMXNET3_WRITE_BAR1_REG(adapter,
+				       VMXNET3_REG_VRRS,
+				       1 << VMXNET3_REV_7);
+		adapter->version = VMXNET3_REV_7 + 1;
+	} else if (ver & (1 << VMXNET3_REV_6)) {
 		VMXNET3_WRITE_BAR1_REG(adapter,
 				       VMXNET3_REG_VRRS,
 				       1 << VMXNET3_REV_6);
diff --git a/drivers/net/vmxnet3/vmxnet3_int.h b/drivers/net/vmxnet3/vmxnet3_int.h
index cb87731f5f1c..3367db23aa13 100644
--- a/drivers/net/vmxnet3/vmxnet3_int.h
+++ b/drivers/net/vmxnet3/vmxnet3_int.h
@@ -69,12 +69,12 @@
 /*
  * Version numbers
  */
-#define VMXNET3_DRIVER_VERSION_STRING "1.6.0.0-k"
+#define VMXNET3_DRIVER_VERSION_STRING "1.7.0.0-k"
 
 /* Each byte of this 32-bit integer encodes a version number in
  * VMXNET3_DRIVER_VERSION_STRING.
  */
-#define VMXNET3_DRIVER_VERSION_NUM 0x01060000
+#define VMXNET3_DRIVER_VERSION_NUM 0x01070000
 
 #if defined(CONFIG_PCI_MSI)
 	/* RSS only makes sense if MSI-X is supported. */
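The probe-time change above amounts to picking the highest revision
both the driver and the device support. The standalone C sketch below
captures that idea using the revision constants from this series;
struct fake_adapter and the read_vrrs()/write_vrrs() helpers are
assumptions standing in for struct vmxnet3_adapter and the BAR1
register accessors, and the loop condenses the driver's explicit
if/else chain.

#include <stdint.h>
#include <stdio.h>

#define VMXNET3_REV_7 6
#define VMXNET3_REV_6 5

struct fake_adapter {
	uint32_t vrrs;      /* revision bits the device advertises */
	uint32_t version;   /* 1-based revision the driver selects */
};

static uint32_t read_vrrs(const struct fake_adapter *a)     { return a->vrrs; }
static void write_vrrs(struct fake_adapter *a, uint32_t v)  { a->vrrs = v; }

/* pick the highest revision both sides support, newest first */
static int negotiate_version(struct fake_adapter *a)
{
	uint32_t ver = read_vrrs(a);
	int rev;

	for (rev = VMXNET3_REV_7; rev >= 0; rev--) {
		if (ver & (1u << rev)) {
			write_vrrs(a, 1u << rev);   /* ack the chosen revision */
			a->version = (uint32_t)rev + 1;
			return 0;
		}
	}
	return -1;                                  /* no common revision */
}

int main(void)
{
	struct fake_adapter a = {
		.vrrs = (1u << VMXNET3_REV_7) | (1u << VMXNET3_REV_6),
	};

	if (negotiate_version(&a) == 0)
		printf("running at vmxnet3 version %u\n", a.version);
	return 0;
}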