From patchwork Tue Nov 5 22:23:35 2024
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 13863630
X-Patchwork-Delegate: kuba@kernel.org
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com, edumazet@google.com, andrew+netdev@lunn.ch, netdev@vger.kernel.org
Cc: Paul Greenwalt, anthony.l.nguyen@intel.com, Alice Michael, Eric Joyner, Alexander Lobakin, Pucha Himasekhar Reddy
Subject: [PATCH net-next 01/15] ice: Add E830 checksum offload support
Date: Tue, 5 Nov 2024 14:23:35 -0800
Message-ID: <20241105222351.3320587-2-anthony.l.nguyen@intel.com>
In-Reply-To: <20241105222351.3320587-1-anthony.l.nguyen@intel.com>
References: <20241105222351.3320587-1-anthony.l.nguyen@intel.com>

From: Paul Greenwalt

E830 supports raw receive and generic transmit checksum offloads.

Raw receive checksum support is provided by the hardware calculating the
checksum over the whole packet, regardless of type. The calculated checksum
is provided to the driver in the Rx flex descriptor. The driver then assigns
the checksum to skb->csum and sets skb->ip_summed to CHECKSUM_COMPLETE.

Generic transmit checksum support is provided by the hardware calculating
the checksum given two offsets: the start offset at which to begin the
checksum calculation, and the offset at which to insert the calculated
checksum into the packet. Support is advertised to the stack using the
NETIF_F_HW_CSUM feature.

E830 has the following limitations when both generic transmit checksum
offload and TCP Segmentation Offload (TSO) are enabled:

1. Inner packet header modification is not supported. This restriction
   includes the inability to alter TCP flags, such as the push flag. As a
   result, this limitation can impact the receiver's ability to coalesce
   packets, potentially degrading network throughput.

2. The Maximum Segment Size (MSS) is limited to 1023 bytes, which prevents
   support of a Maximum Transmission Unit (MTU) greater than 1063 bytes
   (a 1023-byte MSS plus 40 bytes of IPv4 and TCP headers).

Therefore the NETIF_F_HW_CSUM and NETIF_F_ALL_TSO features are mutually
exclusive. NETIF_F_HW_CSUM hardware feature support is indicated but is not
enabled by default; instead, IP checksums and NETIF_F_ALL_TSO are the
defaults. Enforcement of the mutual exclusivity of NETIF_F_HW_CSUM and
NETIF_F_ALL_TSO is done in ice_fix_features_gcs(). Mutual exclusivity of IP
checksums and NETIF_F_HW_CSUM is handled by netdev_fix_features().

When NETIF_F_HW_CSUM is requested, the provided skb->csum_start and
skb->csum_offset are passed to the hardware in the Tx context descriptor
generic checksum (GCS) parameters. Hardware calculates the 1's complement
checksum from skb->csum_start to the end of the packet, and inserts the
result into the packet at skb->csum_offset.
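For illustration, the Tx side of the offload reduces to expressing the two
skb offsets in 2-byte words (the start offset taken relative to the MAC
header) and packing them into the 16-bit GCS word of the Tx context
descriptor. A minimal sketch, mirroring the hunk added to ice_tx_csum()
below:

	/* Sketch of the GCS packing done in ice_tx_csum(). Both offsets are
	 * in 2-byte words; the start offset is relative to the MAC header.
	 */
	u16 csum_start = (skb->csum_start - skb->mac_header) / 2;
	u16 csum_offset = skb->csum_offset / 2;
	u16 gcs_params = FIELD_PREP(ICE_TX_GCS_DESC_START_M, csum_start) |
			 FIELD_PREP(ICE_TX_GCS_DESC_OFFSET_M, csum_offset) |
			 FIELD_PREP(ICE_TX_GCS_DESC_CSUM_PSH, 1);

	/* The result is later written to the Tx context descriptor:
	 * cdesc->gcs = cpu_to_le16(offload.cd_gcs_params);
	 */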
Co-developed-by: Alice Michael Signed-off-by: Alice Michael Co-developed-by: Eric Joyner Signed-off-by: Eric Joyner Signed-off-by: Paul Greenwalt Reviewed-by: Alexander Lobakin Tested-by: Pucha Himasekhar Reddy (A Contingent worker at Intel) Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice.h | 1 + .../net/ethernet/intel/ice/ice_lan_tx_rx.h | 9 +++- drivers/net/ethernet/intel/ice/ice_lib.c | 12 +++++- drivers/net/ethernet/intel/ice/ice_main.c | 43 +++++++++++++++++++ drivers/net/ethernet/intel/ice/ice_txrx.c | 26 ++++++++++- drivers/net/ethernet/intel/ice/ice_txrx.h | 3 ++ drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 26 +++++++++++ 7 files changed, 116 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index 680a81961ba1..1f30274624b2 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -205,6 +205,7 @@ enum ice_feature { ICE_F_SMA_CTRL, ICE_F_CGU, ICE_F_GNSS, + ICE_F_GCS, ICE_F_ROCE_LAG, ICE_F_SRIOV_LAG, ICE_F_MBX_LIMIT, diff --git a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h index 611577ebc29d..b419933da63c 100644 --- a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h +++ b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h @@ -229,7 +229,7 @@ struct ice_32b_rx_flex_desc_nic { __le16 status_error1; u8 flexi_flags2; u8 ts_low; - __le16 l2tag2_1st; + __le16 raw_csum; __le16 l2tag2_2nd; /* Qword 3 */ @@ -500,10 +500,15 @@ enum ice_tx_desc_len_fields { struct ice_tx_ctx_desc { __le32 tunneling_params; __le16 l2tag2; - __le16 rsvd; + __le16 gcs; __le64 qw1; }; +#define ICE_TX_GCS_DESC_START_M GENMASK(7, 0) +#define ICE_TX_GCS_DESC_OFFSET_M GENMASK(11, 8) +#define ICE_TX_GCS_DESC_TYPE_M GENMASK(14, 12) +#define ICE_TX_GCS_DESC_CSUM_PSH BIT(12) + #define ICE_TXD_CTX_QW1_CMD_S 4 #define ICE_TXD_CTX_QW1_CMD_M (0x7FUL << ICE_TXD_CTX_QW1_CMD_S) diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index d4e74f96a8ad..78f2d124601c 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -1401,6 +1401,10 @@ static int ice_vsi_alloc_rings(struct ice_vsi *vsi) ring->flags |= ICE_TX_FLAGS_RING_VLAN_L2TAG2; else ring->flags |= ICE_TX_FLAGS_RING_VLAN_L2TAG1; + + if (ice_is_feature_supported(pf, ICE_F_GCS)) + ring->flags |= ICE_TX_FLAGS_RING_GCS; + WRITE_ONCE(vsi->tx_rings[i], ring); } @@ -1420,6 +1424,10 @@ static int ice_vsi_alloc_rings(struct ice_vsi *vsi) ring->dev = dev; ring->count = vsi->num_rx_desc; ring->cached_phctime = pf->ptp.cached_phc_time; + + if (ice_is_feature_supported(pf, ICE_F_GCS)) + ring->flags |= ICE_RX_FLAGS_RING_GCS; + WRITE_ONCE(vsi->rx_rings[i], ring); } @@ -3881,8 +3889,10 @@ void ice_init_feature_support(struct ice_pf *pf) break; } - if (pf->hw.mac_type == ICE_MAC_E830) + if (pf->hw.mac_type == ICE_MAC_E830) { ice_set_feature_support(pf, ICE_F_MBX_LIMIT); + ice_set_feature_support(pf, ICE_F_GCS); + } } /** diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index a6f586f9bfd1..c96feb292f84 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -3666,6 +3666,12 @@ void ice_set_netdev_features(struct net_device *netdev) */ netdev->hw_features |= NETIF_F_RXFCS; + /* Mutual exclusivity for TSO and GCS is enforced by the fix features + * ndo callback. 
+ */ + if (ice_is_feature_supported(pf, ICE_F_GCS)) + netdev->hw_features |= NETIF_F_HW_CSUM; + netif_set_tso_max_size(netdev, ICE_MAX_TSO_SIZE); } @@ -6188,6 +6194,38 @@ ice_fdb_del(struct ndmsg *ndm, __always_unused struct nlattr *tb[], return err; } +/** + * ice_fix_features_gcs - enforce Generic Checksum (GCS) feature restrictions + * @netdev: ptr to the netdev that flags are being fixed on + * @features: features that need to be checked and possibly fixed + * + * Due to E830 hardware limitations on TSO (NETIF_F_ALL_TSO) with GCS + * (NETIF_F_HW_CSUM), inner packet header modification is not supported and + * maximum segment size is limited to 1023 bytes, make TSO and GCS mutually + * exclusive. If both TSO and GCS are requested, then choose TSO and drop + * GCS, else preserve existing settings. + * + * Note: IP checksums enforcement is handled by netdev_fix_features(). + * + * Return: updated features based on device GCS limitations. + */ +static netdev_features_t +ice_fix_features_gcs(struct net_device *netdev, netdev_features_t features) +{ + if (!((features & NETIF_F_HW_CSUM) && (features & NETIF_F_ALL_TSO))) + return features; + + if (netdev->features & NETIF_F_HW_CSUM) { + netdev_warn(netdev, "Dropping TSO. TSO and HW checksum are mutually exclusive.\n"); + features &= ~NETIF_F_ALL_TSO; + } else { + netdev_warn(netdev, "Dropping HW checksum. TSO and HW checksum are mutually exclusive.\n"); + features &= ~NETIF_F_HW_CSUM; + } + + return features; +} + #define NETIF_VLAN_OFFLOAD_FEATURES (NETIF_F_HW_VLAN_CTAG_RX | \ NETIF_F_HW_VLAN_CTAG_TX | \ NETIF_F_HW_VLAN_STAG_RX | \ @@ -6235,6 +6273,8 @@ ice_fdb_del(struct ndmsg *ndm, __always_unused struct nlattr *tb[], * These are mutually exclusive as there is currently no way to * enable/disable VLAN filtering based on VLAN ethertype when using VLAN * prune rules. + * + * Return: updated features list. */ static netdev_features_t ice_fix_features(struct net_device *netdev, netdev_features_t features) @@ -6290,6 +6330,9 @@ ice_fix_features(struct net_device *netdev, netdev_features_t features) features &= ~NETIF_VLAN_STRIPPING_FEATURES; } + if (ice_is_feature_supported(np->vsi->back, ICE_F_GCS)) + features = ice_fix_features_gcs(netdev, features); + return features; } diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c index 5d2d7736fd5f..0e274eca574d 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx.c +++ b/drivers/net/ethernet/intel/ice/ice_txrx.c @@ -1753,6 +1753,7 @@ ice_tx_map(struct ice_tx_ring *tx_ring, struct ice_tx_buf *first, static int ice_tx_csum(struct ice_tx_buf *first, struct ice_tx_offload_params *off) { + const struct ice_tx_ring *tx_ring = off->tx_ring; u32 l4_len = 0, l3_len = 0, l2_len = 0; struct sk_buff *skb = first->skb; union { @@ -1902,6 +1903,29 @@ int ice_tx_csum(struct ice_tx_buf *first, struct ice_tx_offload_params *off) l3_len = l4.hdr - ip.hdr; offset |= (l3_len / 4) << ICE_TX_DESC_LEN_IPLEN_S; + if ((tx_ring->netdev->features & NETIF_F_HW_CSUM) && + !(first->tx_flags & ICE_TX_FLAGS_TSO) && + !skb_csum_is_sctp(skb)) { + /* Set GCS */ + u16 csum_start = (skb->csum_start - skb->mac_header) / 2; + u16 csum_offset = skb->csum_offset / 2; + u16 gcs_params; + + gcs_params = FIELD_PREP(ICE_TX_GCS_DESC_START_M, csum_start) | + FIELD_PREP(ICE_TX_GCS_DESC_OFFSET_M, csum_offset) | + FIELD_PREP(ICE_TX_GCS_DESC_CSUM_PSH, 1); + + /* Unlike legacy HW checksums, GCS requires a context + * descriptor. 
+ */ + off->cd_qw1 |= ICE_TX_DESC_DTYPE_CTX; + off->cd_gcs_params = gcs_params; + /* Fill out CSO info in data descriptors */ + off->td_offset |= offset; + off->td_cmd |= cmd; + return 1; + } + /* Enable L4 checksum offloads */ switch (l4_proto) { case IPPROTO_TCP: @@ -2383,7 +2407,7 @@ ice_xmit_frame_ring(struct sk_buff *skb, struct ice_tx_ring *tx_ring) /* setup context descriptor */ cdesc->tunneling_params = cpu_to_le32(offload.cd_tunnel_params); cdesc->l2tag2 = cpu_to_le16(offload.cd_l2tag2); - cdesc->rsvd = cpu_to_le16(0); + cdesc->gcs = cpu_to_le16(offload.cd_gcs_params); cdesc->qw1 = cpu_to_le64(offload.cd_qw1); } diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h index cb347c852ba9..1fc341b30d9c 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx.h +++ b/drivers/net/ethernet/intel/ice/ice_txrx.h @@ -193,6 +193,7 @@ struct ice_tx_offload_params { u32 td_l2tag1; u32 cd_tunnel_params; u16 cd_l2tag2; + u16 cd_gcs_params; u8 header_len; }; @@ -367,6 +368,7 @@ struct ice_rx_ring { #define ICE_RX_FLAGS_RING_BUILD_SKB BIT(1) #define ICE_RX_FLAGS_CRC_STRIP_DIS BIT(2) #define ICE_RX_FLAGS_MULTIDEV BIT(3) +#define ICE_RX_FLAGS_RING_GCS BIT(4) u8 flags; /* CL5 - 5th cacheline starts here */ struct xdp_rxq_info xdp_rxq; @@ -405,6 +407,7 @@ struct ice_tx_ring { #define ICE_TX_FLAGS_RING_XDP BIT(0) #define ICE_TX_FLAGS_RING_VLAN_L2TAG1 BIT(1) #define ICE_TX_FLAGS_RING_VLAN_L2TAG2 BIT(2) +#define ICE_TX_FLAGS_RING_GCS BIT(3) u8 flags; u8 dcb_tc; /* Traffic class of ring */ u16 quanta_prof_id; diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c index 2719f0e20933..45cfaabc41cb 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c @@ -80,6 +80,23 @@ ice_rx_hash_to_skb(const struct ice_rx_ring *rx_ring, libeth_rx_pt_set_hash(skb, hash, decoded); } +/** + * ice_rx_gcs - Set generic checksum in skb + * @skb: skb currently being received and modified + * @rx_desc: receive descriptor + */ +static void ice_rx_gcs(struct sk_buff *skb, + const union ice_32b_rx_flex_desc *rx_desc) +{ + const struct ice_32b_rx_flex_desc_nic *desc; + u16 csum; + + desc = (struct ice_32b_rx_flex_desc_nic *)rx_desc; + skb->ip_summed = CHECKSUM_COMPLETE; + csum = (__force u16)desc->raw_csum; + skb->csum = csum_unfold((__force __sum16)swab16(csum)); +} + /** * ice_rx_csum - Indicate in skb if checksum is good * @ring: the ring we care about @@ -107,6 +124,15 @@ ice_rx_csum(struct ice_rx_ring *ring, struct sk_buff *skb, rx_status0 = le16_to_cpu(rx_desc->wb.status_error0); rx_status1 = le16_to_cpu(rx_desc->wb.status_error1); + if ((ring->flags & ICE_RX_FLAGS_RING_GCS) && + rx_desc->wb.rxdid == ICE_RXDID_FLEX_NIC && + (decoded.inner_prot == LIBETH_RX_PT_INNER_TCP || + decoded.inner_prot == LIBETH_RX_PT_INNER_UDP || + decoded.inner_prot == LIBETH_RX_PT_INNER_ICMP)) { + ice_rx_gcs(skb, rx_desc); + return; + } + /* check if HW has decoded the packet and checksum */ if (!(rx_status0 & BIT(ICE_RX_FLEX_DESC_STATUS0_L3L4P_S))) return; From patchwork Tue Nov 5 22:23:36 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 13863628 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 
73F4B215F50 for ; Tue, 5 Nov 2024 22:24:00 +0000 (UTC)
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com, edumazet@google.com, andrew+netdev@lunn.ch, netdev@vger.kernel.org
Cc: Mateusz Polchlopek, anthony.l.nguyen@intel.com, Przemek Kitszel, Pucha Himasekhar Reddy, Simon Horman
Subject: [PATCH net-next 02/15] ice: rework of dump serdes equalizer values feature
Date: Tue, 5 Nov 2024 14:23:36 -0800
Message-ID: <20241105222351.3320587-3-anthony.l.nguyen@intel.com>
In-Reply-To: <20241105222351.3320587-1-anthony.l.nguyen@intel.com>
References: <20241105222351.3320587-1-anthony.l.nguyen@intel.com>

From: Mateusz Polchlopek

Refactor function
ice_get_tx_rx_equa() to iterate over new table of params instead of multiple calls to ice_aq_get_phy_equalization(). Subsequent commit will extend that function by add more serdes equalizer values to dump. Shorten the fields of struct ice_serdes_equalization_to_ethtool for readability purposes. Reviewed-by: Przemek Kitszel Signed-off-by: Mateusz Polchlopek Tested-by: Pucha Himasekhar Reddy (A Contingent worker at Intel) Reviewed-by: Simon Horman Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_ethtool.c | 93 ++++++-------------- drivers/net/ethernet/intel/ice/ice_ethtool.h | 22 ++--- 2 files changed, 38 insertions(+), 77 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c index 2924ac61300d..19e7a9d93928 100644 --- a/drivers/net/ethernet/intel/ice/ice_ethtool.c +++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c @@ -693,75 +693,36 @@ static int ice_get_port_topology(struct ice_hw *hw, u8 lport, static int ice_get_tx_rx_equa(struct ice_hw *hw, u8 serdes_num, struct ice_serdes_equalization_to_ethtool *ptr) { + static const int tx = ICE_AQC_OP_CODE_TX_EQU; + static const int rx = ICE_AQC_OP_CODE_RX_EQU; + struct { + int data_in; + int opcode; + int *out; + } aq_params[] = { + { ICE_AQC_TX_EQU_PRE1, tx, &ptr->tx_equ_pre1 }, + { ICE_AQC_TX_EQU_PRE3, tx, &ptr->tx_equ_pre3 }, + { ICE_AQC_TX_EQU_ATTEN, tx, &ptr->tx_equ_atten }, + { ICE_AQC_TX_EQU_POST1, tx, &ptr->tx_equ_post1 }, + { ICE_AQC_TX_EQU_PRE2, tx, &ptr->tx_equ_pre2 }, + { ICE_AQC_RX_EQU_PRE2, rx, &ptr->rx_equ_pre2 }, + { ICE_AQC_RX_EQU_PRE1, rx, &ptr->rx_equ_pre1 }, + { ICE_AQC_RX_EQU_POST1, rx, &ptr->rx_equ_post1 }, + { ICE_AQC_RX_EQU_BFLF, rx, &ptr->rx_equ_bflf }, + { ICE_AQC_RX_EQU_BFHF, rx, &ptr->rx_equ_bfhf }, + { ICE_AQC_RX_EQU_DRATE, rx, &ptr->rx_equ_drate }, + }; int err; - err = ice_aq_get_phy_equalization(hw, ICE_AQC_TX_EQU_PRE1, - ICE_AQC_OP_CODE_TX_EQU, serdes_num, - &ptr->tx_equalization_pre1); - if (err) - return err; - - err = ice_aq_get_phy_equalization(hw, ICE_AQC_TX_EQU_PRE3, - ICE_AQC_OP_CODE_TX_EQU, serdes_num, - &ptr->tx_equalization_pre3); - if (err) - return err; - - err = ice_aq_get_phy_equalization(hw, ICE_AQC_TX_EQU_ATTEN, - ICE_AQC_OP_CODE_TX_EQU, serdes_num, - &ptr->tx_equalization_atten); - if (err) - return err; - - err = ice_aq_get_phy_equalization(hw, ICE_AQC_TX_EQU_POST1, - ICE_AQC_OP_CODE_TX_EQU, serdes_num, - &ptr->tx_equalization_post1); - if (err) - return err; - - err = ice_aq_get_phy_equalization(hw, ICE_AQC_TX_EQU_PRE2, - ICE_AQC_OP_CODE_TX_EQU, serdes_num, - &ptr->tx_equalization_pre2); - if (err) - return err; - - err = ice_aq_get_phy_equalization(hw, ICE_AQC_RX_EQU_PRE2, - ICE_AQC_OP_CODE_RX_EQU, serdes_num, - &ptr->rx_equalization_pre2); - if (err) - return err; - - err = ice_aq_get_phy_equalization(hw, ICE_AQC_RX_EQU_PRE1, - ICE_AQC_OP_CODE_RX_EQU, serdes_num, - &ptr->rx_equalization_pre1); - if (err) - return err; - - err = ice_aq_get_phy_equalization(hw, ICE_AQC_RX_EQU_POST1, - ICE_AQC_OP_CODE_RX_EQU, serdes_num, - &ptr->rx_equalization_post1); - if (err) - return err; - - err = ice_aq_get_phy_equalization(hw, ICE_AQC_RX_EQU_BFLF, - ICE_AQC_OP_CODE_RX_EQU, serdes_num, - &ptr->rx_equalization_bflf); - if (err) - return err; - - err = ice_aq_get_phy_equalization(hw, ICE_AQC_RX_EQU_BFHF, - ICE_AQC_OP_CODE_RX_EQU, serdes_num, - &ptr->rx_equalization_bfhf); - if (err) - return err; - - err = ice_aq_get_phy_equalization(hw, ICE_AQC_RX_EQU_DRATE, - ICE_AQC_OP_CODE_RX_EQU, serdes_num, - &ptr->rx_equalization_drate); - 
if (err) - return err; + for (int i = 0; i < ARRAY_SIZE(aq_params); i++) { + err = ice_aq_get_phy_equalization(hw, aq_params[i].data_in, + aq_params[i].opcode, + serdes_num, aq_params[i].out); + if (err) + break; + } - return 0; + return err; } /** diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.h b/drivers/net/ethernet/intel/ice/ice_ethtool.h index 9acccae38625..98eb9c51d687 100644 --- a/drivers/net/ethernet/intel/ice/ice_ethtool.h +++ b/drivers/net/ethernet/intel/ice/ice_ethtool.h @@ -10,17 +10,17 @@ struct ice_phy_type_to_ethtool { }; struct ice_serdes_equalization_to_ethtool { - int rx_equalization_pre2; - int rx_equalization_pre1; - int rx_equalization_post1; - int rx_equalization_bflf; - int rx_equalization_bfhf; - int rx_equalization_drate; - int tx_equalization_pre1; - int tx_equalization_pre3; - int tx_equalization_atten; - int tx_equalization_post1; - int tx_equalization_pre2; + int rx_equ_pre2; + int rx_equ_pre1; + int rx_equ_post1; + int rx_equ_bflf; + int rx_equ_bfhf; + int rx_equ_drate; + int tx_equ_pre1; + int tx_equ_pre3; + int tx_equ_atten; + int tx_equ_post1; + int tx_equ_pre2; }; struct ice_regdump_to_ethtool { From patchwork Tue Nov 5 22:23:37 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 13863629 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B371321642E for ; Tue, 5 Nov 2024 22:24:00 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845442; cv=none; b=RzJJQclMlm05287heFAfjA5DjRBbjNxp7dj40U7ww2sMrU/s8olgdMTTtNa6D0UWW3liquW1Wb8SvIrEgp8wcxXwDmhvRYqqnXWaqGEBKRNPYvmahb41+iyMbxHRW/vO+x9zn0/sYXtwyVNbT3AkHmGt9eeewKnDrYJ8WZzKShA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845442; c=relaxed/simple; bh=MlrPDZG/cfHmqD51yMuz9krG6ygJmuwjZprSgruHJV4=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=P0WLsjhozAnX3lrc/siEtvhqRYdox+KHMj1ZpO5rKWIGg9wBb+BFUEfceLPdxwCphk6/R0wUIpVz9ZzfPKynB8a9dC99k28lL7fqxXwi2/9H0zaqSrOQ+H1ZM/wRuhDJtPZlHlOvtqegYWTYIpC8E6pFJxWqcnSCZQRqnX8EOgY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=C8eJJVEK; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="C8eJJVEK" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1730845441; x=1762381441; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=MlrPDZG/cfHmqD51yMuz9krG6ygJmuwjZprSgruHJV4=; b=C8eJJVEKzPn/qU6ULzfJGxJbVm4qwsKWFKSw4M4o57O9hnAvqiWP7rC2 FbqEIEJmdzIQIfc6ZGk+xD6bAI9g4nGkexzb4VOBDaAg8YMrDlSEKhITG 3EHxcpSEyOMdev9mPc5/yOGhxWbnA17Oj7zLd92yiMIF3BoP6NOG33+PV 8v4nyIGV3KrLcpzE4eV1yL4QJjhnOcEbCkyAeOTUKA/EJrdtcqufPj+q/ 
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com, edumazet@google.com, andrew+netdev@lunn.ch, netdev@vger.kernel.org
Cc: Mateusz Polchlopek, anthony.l.nguyen@intel.com, Przemek Kitszel, Pucha Himasekhar Reddy, Simon Horman
Subject: [PATCH net-next 03/15] ice: extend dump serdes equalizer values feature
Date: Tue, 5 Nov 2024 14:23:37 -0800
Message-ID: <20241105222351.3320587-4-anthony.l.nguyen@intel.com>
In-Reply-To: <20241105222351.3320587-1-anthony.l.nguyen@intel.com>
References: <20241105222351.3320587-1-anthony.l.nguyen@intel.com>

From: Mateusz Polchlopek

Extend the work done in commit 70838938e89c ("ice: Implement driver
functionality to dump serdes equalizer values") by adding a new set of Rx
registers that can be read with the command:

$ ethtool -d interface_name

Rx equalization parameters are E810 PHY registers used by the end user to
gather information about configuration and status, to debug link and
connection issues in the field.
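For reference, each new value exposed by this patch touches three places; a
minimal sketch using the CTLE high-frequency gain entry, mirroring the diff
below:

	/* 1) register selector (ice_adminq_cmd.h) */
	#define ICE_AQC_RX_EQU_CTLE_GAINHF	(0x20 << ICE_AQC_RX_EQU_SHIFT)

	/* 2) output field in struct ice_serdes_equalization_to_ethtool */
	int rx_equ_ctle_gainhf;

	/* 3) table entry consumed by the readout loop in ice_get_tx_rx_equa() */
	{ ICE_AQC_RX_EQU_CTLE_GAINHF, rx, &ptr->rx_equ_ctle_gainhf },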
Reviewed-by: Przemek Kitszel Signed-off-by: Mateusz Polchlopek Tested-by: Pucha Himasekhar Reddy (A Contingent worker at Intel) Reviewed-by: Simon Horman Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_adminq_cmd.h | 17 +++++++++++++++++ drivers/net/ethernet/intel/ice/ice_ethtool.c | 17 +++++++++++++++++ drivers/net/ethernet/intel/ice/ice_ethtool.h | 17 +++++++++++++++++ 3 files changed, 51 insertions(+) diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h index 1f01f3501d6b..1489a8ceec51 100644 --- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h +++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h @@ -1492,6 +1492,23 @@ struct ice_aqc_dnl_equa_param { #define ICE_AQC_RX_EQU_BFLF (0x13 << ICE_AQC_RX_EQU_SHIFT) #define ICE_AQC_RX_EQU_BFHF (0x14 << ICE_AQC_RX_EQU_SHIFT) #define ICE_AQC_RX_EQU_DRATE (0x15 << ICE_AQC_RX_EQU_SHIFT) +#define ICE_AQC_RX_EQU_CTLE_GAINHF (0x20 << ICE_AQC_RX_EQU_SHIFT) +#define ICE_AQC_RX_EQU_CTLE_GAINLF (0x21 << ICE_AQC_RX_EQU_SHIFT) +#define ICE_AQC_RX_EQU_CTLE_GAINDC (0x22 << ICE_AQC_RX_EQU_SHIFT) +#define ICE_AQC_RX_EQU_CTLE_BW (0x23 << ICE_AQC_RX_EQU_SHIFT) +#define ICE_AQC_RX_EQU_DFE_GAIN (0x30 << ICE_AQC_RX_EQU_SHIFT) +#define ICE_AQC_RX_EQU_DFE_GAIN2 (0x31 << ICE_AQC_RX_EQU_SHIFT) +#define ICE_AQC_RX_EQU_DFE_2 (0x32 << ICE_AQC_RX_EQU_SHIFT) +#define ICE_AQC_RX_EQU_DFE_3 (0x33 << ICE_AQC_RX_EQU_SHIFT) +#define ICE_AQC_RX_EQU_DFE_4 (0x34 << ICE_AQC_RX_EQU_SHIFT) +#define ICE_AQC_RX_EQU_DFE_5 (0x35 << ICE_AQC_RX_EQU_SHIFT) +#define ICE_AQC_RX_EQU_DFE_6 (0x36 << ICE_AQC_RX_EQU_SHIFT) +#define ICE_AQC_RX_EQU_DFE_7 (0x37 << ICE_AQC_RX_EQU_SHIFT) +#define ICE_AQC_RX_EQU_DFE_8 (0x38 << ICE_AQC_RX_EQU_SHIFT) +#define ICE_AQC_RX_EQU_DFE_9 (0x39 << ICE_AQC_RX_EQU_SHIFT) +#define ICE_AQC_RX_EQU_DFE_10 (0x3A << ICE_AQC_RX_EQU_SHIFT) +#define ICE_AQC_RX_EQU_DFE_11 (0x3B << ICE_AQC_RX_EQU_SHIFT) +#define ICE_AQC_RX_EQU_DFE_12 (0x3C << ICE_AQC_RX_EQU_SHIFT) #define ICE_AQC_TX_EQU_PRE1 0x0 #define ICE_AQC_TX_EQU_PRE3 0x3 #define ICE_AQC_TX_EQU_ATTEN 0x4 diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c index 19e7a9d93928..3072634bf049 100644 --- a/drivers/net/ethernet/intel/ice/ice_ethtool.c +++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c @@ -711,6 +711,23 @@ static int ice_get_tx_rx_equa(struct ice_hw *hw, u8 serdes_num, { ICE_AQC_RX_EQU_BFLF, rx, &ptr->rx_equ_bflf }, { ICE_AQC_RX_EQU_BFHF, rx, &ptr->rx_equ_bfhf }, { ICE_AQC_RX_EQU_DRATE, rx, &ptr->rx_equ_drate }, + { ICE_AQC_RX_EQU_CTLE_GAINHF, rx, &ptr->rx_equ_ctle_gainhf }, + { ICE_AQC_RX_EQU_CTLE_GAINLF, rx, &ptr->rx_equ_ctle_gainlf }, + { ICE_AQC_RX_EQU_CTLE_GAINDC, rx, &ptr->rx_equ_ctle_gaindc }, + { ICE_AQC_RX_EQU_CTLE_BW, rx, &ptr->rx_equ_ctle_bw }, + { ICE_AQC_RX_EQU_DFE_GAIN, rx, &ptr->rx_equ_dfe_gain }, + { ICE_AQC_RX_EQU_DFE_GAIN2, rx, &ptr->rx_equ_dfe_gain_2 }, + { ICE_AQC_RX_EQU_DFE_2, rx, &ptr->rx_equ_dfe_2 }, + { ICE_AQC_RX_EQU_DFE_3, rx, &ptr->rx_equ_dfe_3 }, + { ICE_AQC_RX_EQU_DFE_4, rx, &ptr->rx_equ_dfe_4 }, + { ICE_AQC_RX_EQU_DFE_5, rx, &ptr->rx_equ_dfe_5 }, + { ICE_AQC_RX_EQU_DFE_6, rx, &ptr->rx_equ_dfe_6 }, + { ICE_AQC_RX_EQU_DFE_7, rx, &ptr->rx_equ_dfe_7 }, + { ICE_AQC_RX_EQU_DFE_8, rx, &ptr->rx_equ_dfe_8 }, + { ICE_AQC_RX_EQU_DFE_9, rx, &ptr->rx_equ_dfe_9 }, + { ICE_AQC_RX_EQU_DFE_10, rx, &ptr->rx_equ_dfe_10 }, + { ICE_AQC_RX_EQU_DFE_11, rx, &ptr->rx_equ_dfe_11 }, + { ICE_AQC_RX_EQU_DFE_12, rx, &ptr->rx_equ_dfe_12 }, }; int err; diff --git 
a/drivers/net/ethernet/intel/ice/ice_ethtool.h b/drivers/net/ethernet/intel/ice/ice_ethtool.h index 98eb9c51d687..8f2ad1c172c0 100644 --- a/drivers/net/ethernet/intel/ice/ice_ethtool.h +++ b/drivers/net/ethernet/intel/ice/ice_ethtool.h @@ -16,6 +16,23 @@ struct ice_serdes_equalization_to_ethtool { int rx_equ_bflf; int rx_equ_bfhf; int rx_equ_drate; + int rx_equ_ctle_gainhf; + int rx_equ_ctle_gainlf; + int rx_equ_ctle_gaindc; + int rx_equ_ctle_bw; + int rx_equ_dfe_gain; + int rx_equ_dfe_gain_2; + int rx_equ_dfe_2; + int rx_equ_dfe_3; + int rx_equ_dfe_4; + int rx_equ_dfe_5; + int rx_equ_dfe_6; + int rx_equ_dfe_7; + int rx_equ_dfe_8; + int rx_equ_dfe_9; + int rx_equ_dfe_10; + int rx_equ_dfe_11; + int rx_equ_dfe_12; int tx_equ_pre1; int tx_equ_pre3; int tx_equ_atten; From patchwork Tue Nov 5 22:23:38 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 13863631 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4B647216440 for ; Tue, 5 Nov 2024 22:24:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845444; cv=none; b=UIF5d8DLz5L5f7ALFaqVIrORqdGqDxtqu0oN9hCznFzWNilpHhLE73Xs8PwURuzEOADv+N2QtqD8JbxmsBHrAUH+f+WO2Sgv/ILq2emGyVjKI/M4ftaOUXzyqFeP8tCihzMzzXe83PtOxqOxKoNv4wG+l3pbjm7LJVll6iCzkcM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845444; c=relaxed/simple; bh=UB1BKs1NFcAJOY2+bDKUzdHQt3Fu6qA38RZCd3GAWI0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=YMUNfToxEixfc7CSGT9nolPFqECtewTr0x2cPiFl6IpiV30JMPHfZEm8Se2e4OQp/G0lXtpqIKXTwRq65BEV+22b+OqfxEfjwcg+oICTv35LtGnPUrnT3ajGPb0Muikxba2cTMasSUIvdKkSeTD58PwCCvO7sPoeCvbWL8vj2ZI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=DYTQIJEa; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="DYTQIJEa" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1730845442; x=1762381442; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=UB1BKs1NFcAJOY2+bDKUzdHQt3Fu6qA38RZCd3GAWI0=; b=DYTQIJEaXSnfw8ideNf/yXotMOPLSUuBkfDArJ/R9C4ML9JpL6Pbuu7X nKMcltCPcew41jpku/T9FtFov1YTModLaeYxDWRtjEC6gg6emyRTs+CR8 SGRATUnL0uIEmNvnoTqDLMtlEAVFTc/HzfBBcMGBmEhZD62jWYZmqf1MA wzGkZ/J9whYZBeWjyHs7p6j4AcnhuYpO7r192Hj+Myylb/bY8Ba6F51R4 nF2DvMwoKn0dLxUdL668OtMBDkcOKMaFLrGySW/tByci+I9ICcWPDQDNu 2MaiB9xySOU4h2VTJjsIec9knviG7mgRnaWVTip781r8KGffFSscOsB6y Q==; X-CSE-ConnectionGUID: GB06rV0FQt6FoJSFAAZM4A== X-CSE-MsgGUID: PkIxZwKsSOuSaEPSjUUMcw== X-IronPort-AV: E=McAfee;i="6700,10204,11222"; a="34314267" X-IronPort-AV: E=Sophos;i="6.11,199,1725346800"; d="scan'208";a="34314267" Received: from fmviesa009.fm.intel.com ([10.60.135.149]) by 
orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Nov 2024 14:24:00 -0800
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com, edumazet@google.com, andrew+netdev@lunn.ch, netdev@vger.kernel.org
Cc: Przemek Kitszel, anthony.l.nguyen@intel.com, Paul Greenwalt, Dan Nowlin, Ahmed Zaki, Simon Horman, Michal Swiatkowski, Pucha Himasekhar Reddy
Subject: [PATCH net-next 04/15] ice: refactor "last" segment of DDP pkg
Date: Tue, 5 Nov 2024 14:23:38 -0800
Message-ID: <20241105222351.3320587-5-anthony.l.nguyen@intel.com>
In-Reply-To: <20241105222351.3320587-1-anthony.l.nguyen@intel.com>
References: <20241105222351.3320587-1-anthony.l.nguyen@intel.com>

From: Przemek Kitszel

Add ice_ddp_send_hunk() that buffers "sent FW hunk" calls to the AQ in order
to mark the "last" one in a more elegant way. The next commit will add an
even more complicated "sent FW" flow, so it is better to untangle things a
bit beforehand.

Note that metadata buffers were not skipped for NOT-@indicate_last segments;
this is fixed now.

Minor:
+ use ice_is_buffer_metadata() instead of open coding it in
  ice_dwnld_cfg_bufs();
+ ice_dwnld_cfg_bufs_no_lock() and its dependencies were moved up a bit to
  give a better git diff, as this function was rewritten (in terms of
  git blame).

CC: Paul Greenwalt CC: Dan Nowlin CC: Ahmed Zaki CC: Simon Horman Reviewed-by: Michal Swiatkowski Tested-by: Pucha Himasekhar Reddy (A Contingent worker at Intel) Signed-off-by: Przemek Kitszel Reviewed-by: Simon Horman Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_ddp.c | 288 ++++++++++++----------- 1 file changed, 151 insertions(+), 137 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_ddp.c b/drivers/net/ethernet/intel/ice/ice_ddp.c index 272fd823a825..3e1173ef4b5c 100644 --- a/drivers/net/ethernet/intel/ice/ice_ddp.c +++ b/drivers/net/ethernet/intel/ice/ice_ddp.c @@ -1210,6 +1210,131 @@ ice_aq_download_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, return status; } +/** + * ice_is_buffer_metadata - determine if package buffer is a metadata buffer + * @buf: pointer to buffer header + * Return: whether given @buf is a metadata one. + */ +static bool ice_is_buffer_metadata(struct ice_buf_hdr *buf) +{ + return le32_to_cpu(buf->section_entry[0].type) & ICE_METADATA_BUF; +} + +/** + * struct ice_ddp_send_ctx - sending context of current DDP segment + * @hw: pointer to the hardware struct + * + * Keeps current sending state (header, error) for the purpose of proper "last" + * bit setting in ice_aq_download_pkg(). Use via calls to ice_ddp_send_hunk(). + */ +struct ice_ddp_send_ctx { + struct ice_hw *hw; +/* private: only for ice_ddp_send_hunk() */ + struct ice_buf_hdr *hdr; + int err; +}; + +static void ice_ddp_send_ctx_set_err(struct ice_ddp_send_ctx *ctx, int err) +{ + ctx->err = err; +} + +/** + * ice_ddp_send_hunk - send one hunk of data to FW + * @ctx: current segment sending context + * @hunk: next hunk to send, size is always ICE_PKG_BUF_SIZE + * + * Send the next hunk of data to FW, retrying if needed.
+ * + * Notice: must be called once more with a NULL @hunk to finish up; such call + * will set up the "last" bit of an AQ request. After such call @ctx.hdr is + * cleared, @hw is still valid. + * + * Return: %ICE_DDP_PKG_SUCCESS if there were no problems; a sticky @err + * otherwise. + */ +static enum ice_ddp_state ice_ddp_send_hunk(struct ice_ddp_send_ctx *ctx, + struct ice_buf_hdr *hunk) +{ + struct ice_buf_hdr *prev_hunk = ctx->hdr; + struct ice_hw *hw = ctx->hw; + bool prev_was_last = !hunk; + enum ice_aq_err aq_err; + u32 offset, info; + int attempt, err; + + if (ctx->err) + return ctx->err; + + ctx->hdr = hunk; + if (!prev_hunk) + return ICE_DDP_PKG_SUCCESS; /* no problem so far */ + + for (attempt = 0; attempt < 5; attempt++) { + if (attempt) + msleep(20); + + err = ice_aq_download_pkg(hw, prev_hunk, ICE_PKG_BUF_SIZE, + prev_was_last, &offset, &info, NULL); + + aq_err = hw->adminq.sq_last_status; + if (aq_err != ICE_AQ_RC_ENOSEC && aq_err != ICE_AQ_RC_EBADSIG) + break; + } + + if (err) { + ice_debug(hw, ICE_DBG_PKG, "Pkg download failed: err %d off %d inf %d\n", + err, offset, info); + ctx->err = ice_map_aq_err_to_ddp_state(aq_err); + } else if (attempt) { + dev_dbg(ice_hw_to_dev(hw), + "ice_aq_download_pkg number of retries: %d\n", attempt); + } + + return ctx->err; +} + +/** + * ice_dwnld_cfg_bufs_no_lock + * @ctx: context of the current buffers section to send + * @bufs: pointer to an array of buffers + * @start: buffer index of first buffer to download + * @count: the number of buffers to download + * + * Downloads package configuration buffers to the firmware. Metadata buffers + * are skipped, and the first metadata buffer found indicates that the rest + * of the buffers are all metadata buffers. + */ +static enum ice_ddp_state +ice_dwnld_cfg_bufs_no_lock(struct ice_ddp_send_ctx *ctx, struct ice_buf *bufs, + u32 start, u32 count) +{ + struct ice_buf_hdr *bh; + enum ice_ddp_state err; + + if (!bufs || !count) { + ice_ddp_send_ctx_set_err(ctx, ICE_DDP_PKG_ERR); + return ICE_DDP_PKG_ERR; + } + + bufs += start; + + for (int i = 0; i < count; i++, bufs++) { + bh = (struct ice_buf_hdr *)bufs; + /* Metadata buffers should not be sent to FW, + * their presence means "we are done here". 
+ */ + if (ice_is_buffer_metadata(bh)) + break; + + err = ice_ddp_send_hunk(ctx, bh); + if (err) + return err; + } + + return 0; +} + /** * ice_get_pkg_seg_by_idx * @pkg_hdr: pointer to the package header to be searched @@ -1269,137 +1394,21 @@ ice_is_signing_seg_type_at_idx(struct ice_pkg_hdr *pkg_hdr, u32 idx, return false; } -/** - * ice_is_buffer_metadata - determine if package buffer is a metadata buffer - * @buf: pointer to buffer header - */ -static bool ice_is_buffer_metadata(struct ice_buf_hdr *buf) -{ - if (le32_to_cpu(buf->section_entry[0].type) & ICE_METADATA_BUF) - return true; - - return false; -} - -/** - * ice_is_last_download_buffer - * @buf: pointer to current buffer header - * @idx: index of the buffer in the current sequence - * @count: the buffer count in the current sequence - * - * Note: this routine should only be called if the buffer is not the last buffer - */ -static bool -ice_is_last_download_buffer(struct ice_buf_hdr *buf, u32 idx, u32 count) -{ - struct ice_buf *next_buf; - - if ((idx + 1) == count) - return true; - - /* A set metadata flag in the next buffer will signal that the current - * buffer will be the last buffer downloaded - */ - next_buf = ((struct ice_buf *)buf) + 1; - - return ice_is_buffer_metadata((struct ice_buf_hdr *)next_buf); -} - -/** - * ice_dwnld_cfg_bufs_no_lock - * @hw: pointer to the hardware structure - * @bufs: pointer to an array of buffers - * @start: buffer index of first buffer to download - * @count: the number of buffers to download - * @indicate_last: if true, then set last buffer flag on last buffer download - * - * Downloads package configuration buffers to the firmware. Metadata buffers - * are skipped, and the first metadata buffer found indicates that the rest - * of the buffers are all metadata buffers. - */ -static enum ice_ddp_state -ice_dwnld_cfg_bufs_no_lock(struct ice_hw *hw, struct ice_buf *bufs, u32 start, - u32 count, bool indicate_last) -{ - enum ice_ddp_state state = ICE_DDP_PKG_SUCCESS; - struct ice_buf_hdr *bh; - enum ice_aq_err err; - u32 offset, info, i; - - if (!bufs || !count) - return ICE_DDP_PKG_ERR; - - /* If the first buffer's first section has its metadata bit set - * then there are no buffers to be downloaded, and the operation is - * considered a success. 
- */ - bh = (struct ice_buf_hdr *)(bufs + start); - if (le32_to_cpu(bh->section_entry[0].type) & ICE_METADATA_BUF) - return ICE_DDP_PKG_SUCCESS; - - for (i = 0; i < count; i++) { - bool last = false; - int try_cnt = 0; - int status; - - bh = (struct ice_buf_hdr *)(bufs + start + i); - - if (indicate_last) - last = ice_is_last_download_buffer(bh, i, count); - - while (1) { - status = ice_aq_download_pkg(hw, bh, ICE_PKG_BUF_SIZE, - last, &offset, &info, - NULL); - if (hw->adminq.sq_last_status != ICE_AQ_RC_ENOSEC && - hw->adminq.sq_last_status != ICE_AQ_RC_EBADSIG) - break; - - try_cnt++; - - if (try_cnt == 5) - break; - - msleep(20); - } - - if (try_cnt) - dev_dbg(ice_hw_to_dev(hw), - "ice_aq_download_pkg number of retries: %d\n", - try_cnt); - - /* Save AQ status from download package */ - if (status) { - ice_debug(hw, ICE_DBG_PKG, "Pkg download failed: err %d off %d inf %d\n", - status, offset, info); - err = hw->adminq.sq_last_status; - state = ice_map_aq_err_to_ddp_state(err); - break; - } - - if (last) - break; - } - - return state; -} - /** * ice_download_pkg_sig_seg - download a signature segment - * @hw: pointer to the hardware structure + * @ctx: context of the current buffers section to send * @seg: pointer to signature segment */ static enum ice_ddp_state -ice_download_pkg_sig_seg(struct ice_hw *hw, struct ice_sign_seg *seg) +ice_download_pkg_sig_seg(struct ice_ddp_send_ctx *ctx, struct ice_sign_seg *seg) { - return ice_dwnld_cfg_bufs_no_lock(hw, seg->buf_tbl.buf_array, 0, - le32_to_cpu(seg->buf_tbl.buf_count), - false); + return ice_dwnld_cfg_bufs_no_lock(ctx, seg->buf_tbl.buf_array, 0, + le32_to_cpu(seg->buf_tbl.buf_count)); } /** * ice_download_pkg_config_seg - download a config segment - * @hw: pointer to the hardware structure + * @ctx: context of the current buffers section to send * @pkg_hdr: pointer to package header * @idx: segment index * @start: starting buffer @@ -1408,8 +1417,9 @@ ice_download_pkg_sig_seg(struct ice_hw *hw, struct ice_sign_seg *seg) * Note: idx must reference a ICE segment */ static enum ice_ddp_state -ice_download_pkg_config_seg(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr, - u32 idx, u32 start, u32 count) +ice_download_pkg_config_seg(struct ice_ddp_send_ctx *ctx, + struct ice_pkg_hdr *pkg_hdr, u32 idx, u32 start, + u32 count) { struct ice_buf_table *bufs; struct ice_seg *seg; @@ -1425,21 +1435,20 @@ ice_download_pkg_config_seg(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr, if (start >= buf_count || start + count > buf_count) return ICE_DDP_PKG_ERR; - return ice_dwnld_cfg_bufs_no_lock(hw, bufs->buf_array, start, count, - true); + return ice_dwnld_cfg_bufs_no_lock(ctx, bufs->buf_array, start, count); } /** * ice_dwnld_sign_and_cfg_segs - download a signing segment and config segment - * @hw: pointer to the hardware structure + * @ctx: context of the current buffers section to send * @pkg_hdr: pointer to package header * @idx: segment index (must be a signature segment) * * Note: idx must reference a signature segment */ static enum ice_ddp_state -ice_dwnld_sign_and_cfg_segs(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr, - u32 idx) +ice_dwnld_sign_and_cfg_segs(struct ice_ddp_send_ctx *ctx, + struct ice_pkg_hdr *pkg_hdr, u32 idx) { enum ice_ddp_state state; struct ice_sign_seg *seg; @@ -1450,21 +1459,20 @@ ice_dwnld_sign_and_cfg_segs(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr, seg = (struct ice_sign_seg *)ice_get_pkg_seg_by_idx(pkg_hdr, idx); if (!seg) { state = ICE_DDP_PKG_ERR; - goto exit; + ice_ddp_send_ctx_set_err(ctx, state); + return 
state; } count = le32_to_cpu(seg->signed_buf_count); - state = ice_download_pkg_sig_seg(hw, seg); + state = ice_download_pkg_sig_seg(ctx, seg); if (state || !count) - goto exit; + return state; conf_idx = le32_to_cpu(seg->signed_seg_idx); start = le32_to_cpu(seg->signed_buf_start); - state = ice_download_pkg_config_seg(hw, pkg_hdr, conf_idx, start, + state = ice_download_pkg_config_seg(ctx, pkg_hdr, conf_idx, start, count); - -exit: return state; } @@ -1519,6 +1527,7 @@ ice_download_pkg_with_sig_seg(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr) { enum ice_aq_err aq_err = hw->adminq.sq_last_status; enum ice_ddp_state state = ICE_DDP_PKG_ERR; + struct ice_ddp_send_ctx ctx = { .hw = hw }; int status; u32 i; @@ -1539,7 +1548,9 @@ ice_download_pkg_with_sig_seg(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr) hw->pkg_sign_type)) continue; - state = ice_dwnld_sign_and_cfg_segs(hw, pkg_hdr, i); + ice_dwnld_sign_and_cfg_segs(&ctx, pkg_hdr, i); + /* finish up by sending last hunk with "last" flag set */ + state = ice_ddp_send_hunk(&ctx, NULL); if (state) break; } @@ -1564,6 +1575,7 @@ ice_download_pkg_with_sig_seg(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr) static enum ice_ddp_state ice_dwnld_cfg_bufs(struct ice_hw *hw, struct ice_buf *bufs, u32 count) { + struct ice_ddp_send_ctx ctx = { .hw = hw }; enum ice_ddp_state state; struct ice_buf_hdr *bh; int status; @@ -1576,7 +1588,7 @@ ice_dwnld_cfg_bufs(struct ice_hw *hw, struct ice_buf *bufs, u32 count) * considered a success. */ bh = (struct ice_buf_hdr *)bufs; - if (le32_to_cpu(bh->section_entry[0].type) & ICE_METADATA_BUF) + if (ice_is_buffer_metadata(bh)) return ICE_DDP_PKG_SUCCESS; status = ice_acquire_global_cfg_lock(hw, ICE_RES_WRITE); @@ -1586,7 +1598,9 @@ ice_dwnld_cfg_bufs(struct ice_hw *hw, struct ice_buf *bufs, u32 count) return ice_map_aq_err_to_ddp_state(hw->adminq.sq_last_status); } - state = ice_dwnld_cfg_bufs_no_lock(hw, bufs, 0, count, true); + ice_dwnld_cfg_bufs_no_lock(&ctx, bufs, 0, count); + /* finish up by sending last hunk with "last" flag set */ + state = ice_ddp_send_hunk(&ctx, NULL); if (!state) state = ice_post_dwnld_pkg_actions(hw); From patchwork Tue Nov 5 22:23:39 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 13863632 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AE31021644E for ; Tue, 5 Nov 2024 22:24:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845444; cv=none; b=PCn+28qNcXaRa65ndXcsVKmCOKLyrJUebf4+xNS+olGnpQPbUTgFsjzPNd7KhuWbmY5cKaqLOTl+3zlv0UuWlzMAEZ4JEwnGHnXv6s04qfnnLF0YGzOW0gn8H0+PpFIiwDCevK7OX+c1aXI7Ssv0El8HErto+gGfUvDYwF4Eafw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845444; c=relaxed/simple; bh=NgdVcrljFqMwr/5rI/mHmHlEbUgCPMK/aLx08vBDNxk=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=dg+Z0j67szs44N43TicMZgtEHHD9lvoIS/bLLDJ0VdR0lxs7JpGVWm2Y7XSbVrcrZfIeVAKPiZCS/ZcsRyP+JE1GzsdgTQAQ1e5E6GIGm2piawrAw82GZM+VpSHiR28YKQWjJGwk0dE5LNrr1O+30IXub6k3jq8QA26ReSHBvCw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass 
smtp.mailfrom=intel.com
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com, edumazet@google.com, andrew+netdev@lunn.ch, netdev@vger.kernel.org
Cc: Przemek Kitszel, anthony.l.nguyen@intel.com, Paul Greenwalt, Ahmed Zaki, Dan Nowlin, Michal Swiatkowski, Pucha Himasekhar Reddy, Simon Horman
Subject: [PATCH net-next 05/15] ice: support optional flags in signature segment header
Date: Tue, 5 Nov 2024 14:23:39 -0800
Message-ID: <20241105222351.3320587-6-anthony.l.nguyen@intel.com>
In-Reply-To: <20241105222351.3320587-1-anthony.l.nguyen@intel.com>
References: <20241105222351.3320587-1-anthony.l.nguyen@intel.com>

From: Przemek Kitszel

An optional flag field has been added to the signature segment header. The
field contains two flags: a "valid" bit, and a "last segment" bit that
indicates whether the segment is the last segment that will be sent to
firmware.

If the flag field's valid bit is NOT set, then, as was done before, assume
that this is the last segment being downloaded. However, if the flag field's
valid bit IS set, then use the last segment flag to determine if this
segment is the last segment to download.
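In code terms, the rule described above reduces to the small helper this
patch adds (reproduced here as a sketch for readability):

	#define ICE_SIGN_SEG_FLAGS_VALID	0x80000000
	#define ICE_SIGN_SEG_FLAGS_LAST		0x00000001

	/* Treat the segment as "last" when the flags word is not valid
	 * (pre-flags behavior) or when the DDP explicitly marks it last.
	 */
	static bool ice_is_last_sign_seg(u32 flags)
	{
		return !(flags & ICE_SIGN_SEG_FLAGS_VALID) ||
		       (flags & ICE_SIGN_SEG_FLAGS_LAST);
	}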
Signed-off-by: Paul Greenwalt Signed-off-by: Ahmed Zaki Co-developed-by: Dan Nowlin Signed-off-by: Dan Nowlin Reviewed-by: Michal Swiatkowski Tested-by: Pucha Himasekhar Reddy (A Contingent worker at Intel) Signed-off-by: Przemek Kitszel Reviewed-by: Simon Horman Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_ddp.c | 22 ++++++++++++++++------ drivers/net/ethernet/intel/ice/ice_ddp.h | 5 ++++- 2 files changed, 20 insertions(+), 7 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_ddp.c b/drivers/net/ethernet/intel/ice/ice_ddp.c index 3e1173ef4b5c..94d78ef382ce 100644 --- a/drivers/net/ethernet/intel/ice/ice_ddp.c +++ b/drivers/net/ethernet/intel/ice/ice_ddp.c @@ -1438,6 +1438,12 @@ ice_download_pkg_config_seg(struct ice_ddp_send_ctx *ctx, return ice_dwnld_cfg_bufs_no_lock(ctx, bufs->buf_array, start, count); } +static bool ice_is_last_sign_seg(u32 flags) +{ + return !(flags & ICE_SIGN_SEG_FLAGS_VALID) /* behavior prior to valid */ + || (flags & ICE_SIGN_SEG_FLAGS_LAST); +} + /** * ice_dwnld_sign_and_cfg_segs - download a signing segment and config segment * @ctx: context of the current buffers section to send @@ -1450,11 +1456,9 @@ static enum ice_ddp_state ice_dwnld_sign_and_cfg_segs(struct ice_ddp_send_ctx *ctx, struct ice_pkg_hdr *pkg_hdr, u32 idx) { + u32 conf_idx, start, count, flags; enum ice_ddp_state state; struct ice_sign_seg *seg; - u32 conf_idx; - u32 start; - u32 count; seg = (struct ice_sign_seg *)ice_get_pkg_seg_by_idx(pkg_hdr, idx); if (!seg) { @@ -1473,6 +1477,14 @@ ice_dwnld_sign_and_cfg_segs(struct ice_ddp_send_ctx *ctx, state = ice_download_pkg_config_seg(ctx, pkg_hdr, conf_idx, start, count); + + /* finish up by sending last hunk with "last" flag set if requested by + * DDP content + */ + flags = le32_to_cpu(seg->flags); + if (ice_is_last_sign_seg(flags)) + state = ice_ddp_send_hunk(ctx, NULL); + return state; } @@ -1548,9 +1560,7 @@ ice_download_pkg_with_sig_seg(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr) hw->pkg_sign_type)) continue; - ice_dwnld_sign_and_cfg_segs(&ctx, pkg_hdr, i); - /* finish up by sending last hunk with "last" flag set */ - state = ice_ddp_send_hunk(&ctx, NULL); + state = ice_dwnld_sign_and_cfg_segs(&ctx, pkg_hdr, i); if (state) break; } diff --git a/drivers/net/ethernet/intel/ice/ice_ddp.h b/drivers/net/ethernet/intel/ice/ice_ddp.h index 79551da2a4b0..8a2d57fc5dae 100644 --- a/drivers/net/ethernet/intel/ice/ice_ddp.h +++ b/drivers/net/ethernet/intel/ice/ice_ddp.h @@ -181,7 +181,10 @@ struct ice_sign_seg { __le32 signed_seg_idx; __le32 signed_buf_start; __le32 signed_buf_count; -#define ICE_SIGN_SEG_RESERVED_COUNT 44 +#define ICE_SIGN_SEG_FLAGS_VALID 0x80000000 +#define ICE_SIGN_SEG_FLAGS_LAST 0x00000001 + __le32 flags; +#define ICE_SIGN_SEG_RESERVED_COUNT 40 u8 reserved[ICE_SIGN_SEG_RESERVED_COUNT]; struct ice_buf_table buf_tbl; }; From patchwork Tue Nov 5 22:23:40 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 13863633 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0431A216A01 for ; Tue, 5 Nov 2024 22:24:03 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845445; cv=none; 
b=LwQAp5183BqxZihMrW/zlQ+xpbLYHLog1Eh4WVZe2Tooc1yB3MgxiP9lDUS3FtiDaLkH5zxRh0vfitTvYors9ze6KNw/7QRvbi30yG+jutoVViOU2UyZUZznVGWNHMkfrKlOFeILoF+tSjVd87Ih3TiIo4IdpBJZVe9zUs3oSlE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845445; c=relaxed/simple; bh=3ObS2I38sOxLiM8rzu5QGPeegsjxVbthWHQW7AcyVu4=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=i8W9yBq+/Jkv7I0hNpauRib8gJdpcgiXCzwhCMwmW4fkzuP6ju7Yo5rNhfbBr+Dj+pk0if55uH4/zI8EjN3O0u3fTt5bIvIp892dfWvE2krM8sZ7I4nKziJ1zrjqt8Yf6dta6b4rh97f0ZCGUZcnH+9UHXlEbF8DRHkLlWMhct8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=hG/eOp+w; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="hG/eOp+w" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1730845444; x=1762381444; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=3ObS2I38sOxLiM8rzu5QGPeegsjxVbthWHQW7AcyVu4=; b=hG/eOp+wK9mPEFd1wHltFFRp2sMsTYpTcfjlXWEKoPBsa3Mj283D4l6n NolMjMXe9ePBnYAqEzkZ8RQ96jySmzwojb2YwRRYsoq1jTVEg2+OamsWx cxzLriveBzAEINSEAoyNPaLoPJgPazwuZ7IaJwtrC2i+1xryoylkS9y8Q y0HZLuZHQJXlO/Pb8gs7DCfUBHIhi9VYb6dnGZrCgH9p//fUWWMyHYT1L ixMHYY0K4t/qI3OLE6WX+UgBx5gsVUjyHeQsHvm/6OJGMrkViJnBgz1lc LRkahiQpTL9qTtPCAGFN8AXH7lLokmGJOh/sJyPy3mRHo+Ecay9FaSnRn Q==; X-CSE-ConnectionGUID: ieDAE5tQRpO/cdOe0kkVKA== X-CSE-MsgGUID: hyppjYfATlWtWXxfufkJaw== X-IronPort-AV: E=McAfee;i="6700,10204,11222"; a="34314279" X-IronPort-AV: E=Sophos;i="6.11,199,1725346800"; d="scan'208";a="34314279" Received: from fmviesa009.fm.intel.com ([10.60.135.149]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Nov 2024 14:24:00 -0800 X-CSE-ConnectionGUID: wrQaZHwJRmm509UcZU4kKQ== X-CSE-MsgGUID: FcTnYH7uS1CHIFN3QcqceA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.11,261,1725346800"; d="scan'208";a="84322449" Received: from anguy11-upstream.jf.intel.com ([10.166.9.133]) by fmviesa009.fm.intel.com with ESMTP; 05 Nov 2024 14:23:59 -0800 From: Tony Nguyen To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com, edumazet@google.com, andrew+netdev@lunn.ch, netdev@vger.kernel.org Cc: Joe Damato , anthony.l.nguyen@intel.com, przemyslaw.kitszel@intel.com, Simon Horman , Pucha Himasekhar Reddy Subject: [PATCH net-next 06/15] ice: Add support for persistent NAPI config Date: Tue, 5 Nov 2024 14:23:40 -0800 Message-ID: <20241105222351.3320587-7-anthony.l.nguyen@intel.com> X-Mailer: git-send-email 2.46.0.522.gc50d79eeffbf In-Reply-To: <20241105222351.3320587-1-anthony.l.nguyen@intel.com> References: <20241105222351.3320587-1-anthony.l.nguyen@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Joe Damato Use netif_napi_add_config to assign persistent per-NAPI config when initializing NAPIs. This preserves NAPI config settings when queue counts are adjusted. Tested with an E810-2CQDA2 NIC. 
Begin by setting the queue count to 4: $ sudo ethtool -L eth4 combined 4 Check the queue settings: $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \ --dump napi-get --json='{"ifindex": 4}' [{'defer-hard-irqs': 0, 'gro-flush-timeout': 0, 'id': 8452, 'ifindex': 4, 'irq': 2782}, {'defer-hard-irqs': 0, 'gro-flush-timeout': 0, 'id': 8451, 'ifindex': 4, 'irq': 2781}, {'defer-hard-irqs': 0, 'gro-flush-timeout': 0, 'id': 8450, 'ifindex': 4, 'irq': 2780}, {'defer-hard-irqs': 0, 'gro-flush-timeout': 0, 'id': 8449, 'ifindex': 4, 'irq': 2779}] Now, set the queue with NAPI ID 8451 to have a gro-flush-timeout of 1111: $ sudo ./tools/net/ynl/cli.py \ --spec Documentation/netlink/specs/netdev.yaml \ --do napi-set --json='{"id": 8451, "gro-flush-timeout": 1111}' None Check that worked: $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \ --dump napi-get --json='{"ifindex": 4}' [{'defer-hard-irqs': 0, 'gro-flush-timeout': 0, 'id': 8452, 'ifindex': 4, 'irq': 2782}, {'defer-hard-irqs': 0, 'gro-flush-timeout': 1111, 'id': 8451, 'ifindex': 4, 'irq': 2781}, {'defer-hard-irqs': 0, 'gro-flush-timeout': 0, 'id': 8450, 'ifindex': 4, 'irq': 2780}, {'defer-hard-irqs': 0, 'gro-flush-timeout': 0, 'id': 8449, 'ifindex': 4, 'irq': 2779}] Now reduce the queue count to 2, which would destroy the queue with NAPI ID 8451: $ sudo ethtool -L eth4 combined 2 Check the queue settings, noting that NAPI ID 8451 is gone: $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \ --dump napi-get --json='{"ifindex": 4}' [{'defer-hard-irqs': 0, 'gro-flush-timeout': 0, 'id': 8450, 'ifindex': 4, 'irq': 2780}, {'defer-hard-irqs': 0, 'gro-flush-timeout': 0, 'id': 8449, 'ifindex': 4, 'irq': 2779}] Now, increase the number of queues back to 4: $ sudo ethtool -L eth4 combined 4 Dump the settings, expecting to see the same NAPI IDs as above and for NAPI ID 8451 to have its gro-flush-timeout set to 1111: $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \ --dump napi-get --json='{"ifindex": 4}' [{'defer-hard-irqs': 0, 'gro-flush-timeout': 0, 'id': 8452, 'ifindex': 4, 'irq': 2782}, {'defer-hard-irqs': 0, 'gro-flush-timeout': 1111, 'id': 8451, 'ifindex': 4, 'irq': 2781}, {'defer-hard-irqs': 0, 'gro-flush-timeout': 0, 'id': 8450, 'ifindex': 4, 'irq': 2780}, {'defer-hard-irqs': 0, 'gro-flush-timeout': 0, 'id': 8449, 'ifindex': 4, 'irq': 2779}] Signed-off-by: Joe Damato Reviewed-by: Simon Horman Tested-by: Pucha Himasekhar Reddy (A Contingent worker at Intel) Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_base.c | 3 ++- drivers/net/ethernet/intel/ice/ice_lib.c | 6 ++++-- 2 files changed, 6 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c index 3a8e156d7d86..82a9cd4ec7ae 100644 --- a/drivers/net/ethernet/intel/ice/ice_base.c +++ b/drivers/net/ethernet/intel/ice/ice_base.c @@ -156,7 +156,8 @@ static int ice_vsi_alloc_q_vector(struct ice_vsi *vsi, u16 v_idx) * handler here (i.e. resume, reset/rebuild, etc.) 
*/ if (vsi->netdev) - netif_napi_add(vsi->netdev, &q_vector->napi, ice_napi_poll); + netif_napi_add_config(vsi->netdev, &q_vector->napi, + ice_napi_poll, v_idx); out: /* tie q_vector and VSI together */ diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 78f2d124601c..2093ca01195a 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -2785,8 +2785,10 @@ void ice_napi_add(struct ice_vsi *vsi) return; ice_for_each_q_vector(vsi, v_idx) - netif_napi_add(vsi->netdev, &vsi->q_vectors[v_idx]->napi, - ice_napi_poll); + netif_napi_add_config(vsi->netdev, + &vsi->q_vectors[v_idx]->napi, + ice_napi_poll, + v_idx); } /** From patchwork Tue Nov 5 22:23:41 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 13863635 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 608EC216A06 for ; Tue, 5 Nov 2024 22:24:04 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845446; cv=none; b=JDFbhraHJxfklYfrd3uVwcTm2U7cpAE84NMJke10IR84df7LNSLvpXTQ4HyiU6HJLooUNugFFuNRmu0cEtOSeFmWdkxjIrQJqNGMiFZW3bfBlDMp1PbGs27+MCrUCQ4v3SOEHF7KHFce9Iq1FpSwa5axgym/DmUhBbEt5DworEo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845446; c=relaxed/simple; bh=ThtlHQN1luKHq6EgjWnKTRenP6Rh2gMCL+wwfWuQu5c=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=PGxL/GSX3ckonnTa089y4IFq0FOAixHg/kDDOVObkF1L5iDYQrHYDATZaTaQvfxOywqGSwNr4qlkLbIWRmBVJSUeq1wW03Kmf4X+gew626zuoCnNy2Gm4D9augUqxucPrN2sLUFhixglEQS6Mh84wNBBfQGe8E0jpeZ51xUuE6M= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=MCFvVoBS; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="MCFvVoBS" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1730845444; x=1762381444; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=ThtlHQN1luKHq6EgjWnKTRenP6Rh2gMCL+wwfWuQu5c=; b=MCFvVoBShozpFJ08CaCcMEmBOwnif2GyGwnZdgz3NG7fgRHAH5rC2CpB YbHZ0xim+N6N1TWOOZeF2CAaeSv5IIpAb6cGixRGPB/tSdfH2SjNNYSFK u+ay6Cwetp1LxP6KPNrGWU4UcYTEO43pnx4RlWmJReuCuAlD7JFwyXlzl PQXjrL/ZfmObdyK5q5in8Uf5+Phg1A5Dk88eEphtyS/89OgnkAo5x+dBr 5UYnPBoIsz5PaOWEHorDEdSAb2qHe0azpiILf9eOO4RE1pGyIDYEJvwJx DUkisrd4gMEOPfMWOT33ElZw51/KYwH4INEJT3sLTW8ILNcblbDZ/O8DX Q==; X-CSE-ConnectionGUID: 5U3u1twfQx+K/oSqZyZjkA== X-CSE-MsgGUID: JcHP8aCOTX+DareELGPJkA== X-IronPort-AV: E=McAfee;i="6700,10204,11222"; a="34314292" X-IronPort-AV: E=Sophos;i="6.11,199,1725346800"; d="scan'208";a="34314292" Received: from fmviesa009.fm.intel.com ([10.60.135.149]) by orvoesa107.jf.intel.com with 
ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Nov 2024 14:24:01 -0800 X-CSE-ConnectionGUID: gnl157D5QSO0qbcLdk2y/g== X-CSE-MsgGUID: t9T3V0gLTxK33bchyQT5Kg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.11,261,1725346800"; d="scan'208";a="84322452" Received: from anguy11-upstream.jf.intel.com ([10.166.9.133]) by fmviesa009.fm.intel.com with ESMTP; 05 Nov 2024 14:24:00 -0800 From: Tony Nguyen To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com, edumazet@google.com, andrew+netdev@lunn.ch, netdev@vger.kernel.org Cc: Brett Creeley , anthony.l.nguyen@intel.com, Mateusz Polchlopek , Michal Swiatkowski , Simon Horman , Rafal Romanowski Subject: [PATCH net-next 07/15] ice: only allow Tx promiscuous for multicast Date: Tue, 5 Nov 2024 14:23:41 -0800 Message-ID: <20241105222351.3320587-8-anthony.l.nguyen@intel.com> X-Mailer: git-send-email 2.46.0.522.gc50d79eeffbf In-Reply-To: <20241105222351.3320587-1-anthony.l.nguyen@intel.com> References: <20241105222351.3320587-1-anthony.l.nguyen@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Brett Creeley Currently when any VF is trusted and true promiscuous mode is enabled on the PF, the VF will receive all unicast traffic directed to the device's internal switch. This includes traffic external to the NIC and also from other VSI (i.e. VFs). This does not match the expected behavior as unicast traffic should only be visible from external sources in this case. Disable the Tx promiscuous mode bits for unicast promiscuous mode. Reviewed-by: Mateusz Polchlopek Signed-off-by: Brett Creeley Signed-off-by: Michal Swiatkowski Reviewed-by: Simon Horman Tested-by: Rafal Romanowski Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice.h | 6 ++--- drivers/net/ethernet/intel/ice/ice_virtchnl.c | 23 ++++++++++++++----- 2 files changed, 19 insertions(+), 10 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index 1f30274624b2..0d2c80578633 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -181,11 +181,9 @@ #define ice_for_each_chnl_tc(i) \ for ((i) = ICE_CHNL_START_TC; (i) < ICE_CHNL_MAX_TC; (i)++) -#define ICE_UCAST_PROMISC_BITS (ICE_PROMISC_UCAST_TX | ICE_PROMISC_UCAST_RX) +#define ICE_UCAST_PROMISC_BITS ICE_PROMISC_UCAST_RX -#define ICE_UCAST_VLAN_PROMISC_BITS (ICE_PROMISC_UCAST_TX | \ - ICE_PROMISC_UCAST_RX | \ - ICE_PROMISC_VLAN_TX | \ +#define ICE_UCAST_VLAN_PROMISC_BITS (ICE_PROMISC_UCAST_RX | \ ICE_PROMISC_VLAN_RX) #define ICE_MCAST_PROMISC_BITS (ICE_PROMISC_MCAST_TX | ICE_PROMISC_MCAST_RX) diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c index aa2080747714..cc070c467fdd 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c @@ -2554,17 +2554,27 @@ static bool ice_is_vlan_promisc_allowed(struct ice_vf *vf) /** * ice_vf_ena_vlan_promisc - Enable Tx/Rx VLAN promiscuous for the VLAN + * @vf: VF to enable VLAN promisc on * @vsi: VF's VSI used to enable VLAN promiscuous mode * @vlan: VLAN used to enable VLAN promiscuous * * This function should only be called if VLAN promiscuous mode is allowed, * which can be determined via ice_is_vlan_promisc_allowed(). 
*/ -static int ice_vf_ena_vlan_promisc(struct ice_vsi *vsi, struct ice_vlan *vlan) +static int ice_vf_ena_vlan_promisc(struct ice_vf *vf, struct ice_vsi *vsi, + struct ice_vlan *vlan) { - u8 promisc_m = ICE_PROMISC_VLAN_TX | ICE_PROMISC_VLAN_RX; + u8 promisc_m = 0; int status; + if (test_bit(ICE_VF_STATE_UC_PROMISC, vf->vf_states)) + promisc_m |= ICE_UCAST_VLAN_PROMISC_BITS; + if (test_bit(ICE_VF_STATE_MC_PROMISC, vf->vf_states)) + promisc_m |= ICE_MCAST_VLAN_PROMISC_BITS; + + if (!promisc_m) + return 0; + status = ice_fltr_set_vsi_promisc(&vsi->back->hw, vsi->idx, promisc_m, vlan->vid); if (status && status != -EEXIST) @@ -2583,7 +2593,7 @@ static int ice_vf_ena_vlan_promisc(struct ice_vsi *vsi, struct ice_vlan *vlan) */ static int ice_vf_dis_vlan_promisc(struct ice_vsi *vsi, struct ice_vlan *vlan) { - u8 promisc_m = ICE_PROMISC_VLAN_TX | ICE_PROMISC_VLAN_RX; + u8 promisc_m = ICE_UCAST_VLAN_PROMISC_BITS | ICE_MCAST_VLAN_PROMISC_BITS; int status; status = ice_fltr_clear_vsi_promisc(&vsi->back->hw, vsi->idx, promisc_m, @@ -2738,7 +2748,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) goto error_param; } } else if (vlan_promisc) { - status = ice_vf_ena_vlan_promisc(vsi, &vlan); + status = ice_vf_ena_vlan_promisc(vf, vsi, &vlan); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; dev_err(dev, "Enable Unicast/multicast promiscuous mode on VLAN ID:%d failed error-%d\n", @@ -3575,7 +3585,7 @@ ice_vc_add_vlans(struct ice_vf *vf, struct ice_vsi *vsi, return err; if (vlan_promisc) { - err = ice_vf_ena_vlan_promisc(vsi, &vlan); + err = ice_vf_ena_vlan_promisc(vf, vsi, &vlan); if (err) return err; } @@ -3603,7 +3613,8 @@ ice_vc_add_vlans(struct ice_vf *vf, struct ice_vsi *vsi, */ if (!ice_is_dvm_ena(&vsi->back->hw)) { if (vlan_promisc) { - err = ice_vf_ena_vlan_promisc(vsi, &vlan); + err = ice_vf_ena_vlan_promisc(vf, vsi, + &vlan); if (err) return err; } From patchwork Tue Nov 5 22:23:42 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 13863637 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 15A38216A31 for ; Tue, 5 Nov 2024 22:24:06 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845447; cv=none; b=EMhBqwu/nShoO6LoKb21kbWDOEoiJoHMMi0vKUnU5yWpB7H46qB/rg01xCdsTNN1P5ktcSf+Zk7ya/rBoPF1Igdgg8z7Sth5SyiUTLOC7c1nWEeH4LH1G/+i3YEon92Aw/C9MwKGMYaJo38VonhMRhXhaBp/N5K09GnD0ddHAas= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845447; c=relaxed/simple; bh=lBj1cJu2ZEkFnLs2ep/9MrSyHrHCfVLTIlE0NVdEsYM=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=g3zeZ19yF7hGY9b8OVfd9Z1xjOWznkOUQlzfp6amxPbLGlojJE90014+kvWygWEEc4zCJn8WzdAIxG3+9T7clDFwYkbFqDCyanXnnaXh4BGFJWAw3oql4IMjfQXlGaixXdEfJElHqVxySzX1z9WovHByMQv6K+0pTsdaW22wdw0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=MADADX2l; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com 
Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="MADADX2l" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1730845446; x=1762381446; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=lBj1cJu2ZEkFnLs2ep/9MrSyHrHCfVLTIlE0NVdEsYM=; b=MADADX2l32PHfy7I3d3QZBjpNayMYrAWireVc0KXn5T8UWSZ97Tm9G/V bGl1OyftDrgiAOYU2Cb27yEqG61bxmcEhUbxRRPhqyWcC9+kVPH/1k0U4 TZNzGLHvl0o0uXQEFkeyksUjE8lqqfmdxqB7fzoJ5i1/ZVb9dd7vXlANR WF7qEDDEvbC09OCXiilSma+QfdA92Ikaq2mmNTBpaFAsqCN8HG3Ja83p/ mLRDVzw8zRNNTaeWcf+uLfUDdldSlvOGxop+XNUbryvzx0qEv5mufUidQ 4h5f3uuIR3Pql74IfA6AMm5c/kCyw6t4Cwu/odUk/54tJUUgK6R7eXuXh w==; X-CSE-ConnectionGUID: oABL8PK9QISLRjhJuiQoUA== X-CSE-MsgGUID: +fd4kz2aRL2FAV17jFEGKQ== X-IronPort-AV: E=McAfee;i="6700,10204,11222"; a="34314298" X-IronPort-AV: E=Sophos;i="6.11,199,1725346800"; d="scan'208";a="34314298" Received: from fmviesa009.fm.intel.com ([10.60.135.149]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Nov 2024 14:24:01 -0800 X-CSE-ConnectionGUID: qrzW0Rl8SoudeIodxfK0eQ== X-CSE-MsgGUID: 1t9o1JcARZ6Xb878KK4f1Q== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.11,261,1725346800"; d="scan'208";a="84322457" Received: from anguy11-upstream.jf.intel.com ([10.166.9.133]) by fmviesa009.fm.intel.com with ESMTP; 05 Nov 2024 14:24:00 -0800 From: Tony Nguyen To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com, edumazet@google.com, andrew+netdev@lunn.ch, netdev@vger.kernel.org Cc: Jacob Keller , anthony.l.nguyen@intel.com, Przemek Kitszel Subject: [PATCH net-next 08/15] ice: initialize pf->supported_rxdids immediately after loading DDP Date: Tue, 5 Nov 2024 14:23:42 -0800 Message-ID: <20241105222351.3320587-9-anthony.l.nguyen@intel.com> X-Mailer: git-send-email 2.46.0.522.gc50d79eeffbf In-Reply-To: <20241105222351.3320587-1-anthony.l.nguyen@intel.com> References: <20241105222351.3320587-1-anthony.l.nguyen@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Jacob Keller The pf->supported_rxdids field is used to populate the list of valid RXDIDs that a VF may use when negotiating VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC. The set of supported RXDIDs is dependent on the DDP, and can be read from the GLXFLXP_RXDID_FLAGS register. The PF needs to send this list to the VF upon receiving the VIRTCHNL_OP_GET_SUPPORTED_RXDIDs. It also needs to use this list to validate the requested descriptor ID from the VF when programming the Rx queues. A future update to support VF live migration will also want to validate that the target VF can support the same descriptor ID when migrating. Currently, pf->supported_rxdids is initialized inside the ice_vc_query_rxdid() function. This means that it is only ever initialized if at least one VF actually tries to negotiate VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC. It is also unnecessarily re-initialized every time the VF loads and requests the descriptor list. This worked before because the PF only checks pf->suppported_rxdids when programming the Rx queue if the VF actually negotiates the VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC feature. This will be problematic for VF live migration. We need the list of supported Rx descriptor IDs when migrating. 
It is possible that no VF on the target PF has ever actually issued a VIRTCHNL_OP_GET_SUPPORTED_RXDIDs. Refactor the driver to initialize pf->supported_rxdids during driver initialization after the DDP is loaded. This is simpler, avoids unnecessary duplicate work, and avoids issues with the live migration process. Signed-off-by: Jacob Keller Reviewed-by: Przemek Kitszel Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_main.c | 31 +++++++++++++++++++ drivers/net/ethernet/intel/ice/ice_virtchnl.c | 20 ++---------- 2 files changed, 33 insertions(+), 18 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index c96feb292f84..db463793d870 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -4560,6 +4560,34 @@ ice_init_tx_topology(struct ice_hw *hw, const struct firmware *firmware) return 0; } +/** + * ice_init_supported_rxdids - Initialize supported Rx descriptor IDs + * @hw: pointer to the hardware structure + * @pf: pointer to pf structure + * + * The pf->supported_rxdids bitmap is used to indicate to VFs which descriptor + * formats the PF hardware supports. The exact list of supported RXDIDs + * depends on the loaded DDP package. The IDs can be determined by reading the + * GLFLXP_RXDID_FLAGS register after the DDP package is loaded. + * + * Note that the legacy 32-byte RXDID 0 is always supported but is not listed + * in the DDP package. The 16-byte legacy descriptor is never supported by + * VFs. + */ +static void ice_init_supported_rxdids(struct ice_hw *hw, struct ice_pf *pf) +{ + pf->supported_rxdids = BIT(ICE_RXDID_LEGACY_1); + + for (int i = ICE_RXDID_FLEX_NIC; i < ICE_FLEX_DESC_RXDID_MAX_NUM; i++) { + u32 regval; + + regval = rd32(hw, GLFLXP_RXDID_FLAGS(i, 0)); + if ((regval >> GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_S) + & GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_M) + pf->supported_rxdids |= BIT(i); + } +} + /** * ice_init_ddp_config - DDP related configuration * @hw: pointer to the hardware structure @@ -4594,6 +4622,9 @@ static int ice_init_ddp_config(struct ice_hw *hw, struct ice_pf *pf) ice_load_pkg(firmware, pf); release_firmware(firmware); + /* Initialize the supported Rx descriptor IDs after loading DDP */ + ice_init_supported_rxdids(hw, pf); + return 0; } diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c index cc070c467fdd..bc9fadaccad0 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c @@ -3032,11 +3032,9 @@ static int ice_vc_query_rxdid(struct ice_vf *vf) { enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; struct virtchnl_supported_rxdids *rxdid = NULL; - struct ice_hw *hw = &vf->pf->hw; struct ice_pf *pf = vf->pf; int len = 0; - int ret, i; - u32 regval; + int ret; if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; @@ -3056,21 +3054,7 @@ static int ice_vc_query_rxdid(struct ice_vf *vf) goto err; } - /* RXDIDs supported by DDP package can be read from the register - * to get the supported RXDID bitmap. But the legacy 32byte RXDID - * is not listed in DDP package, add it in the bitmap manually. - * Legacy 16byte descriptor is not supported. 
- */ - rxdid->supported_rxdids |= BIT(ICE_RXDID_LEGACY_1); - - for (i = ICE_RXDID_FLEX_NIC; i < ICE_FLEX_DESC_RXDID_MAX_NUM; i++) { - regval = rd32(hw, GLFLXP_RXDID_FLAGS(i, 0)); - if ((regval >> GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_S) - & GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_M) - rxdid->supported_rxdids |= BIT(i); - } - - pf->supported_rxdids = rxdid->supported_rxdids; + rxdid->supported_rxdids = pf->supported_rxdids; err: ret = ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_GET_SUPPORTED_RXDIDS, From patchwork Tue Nov 5 22:23:43 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 13863634 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6094A216A07 for ; Tue, 5 Nov 2024 22:24:04 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845445; cv=none; b=LdHkIB9dFQtH64g2+VnKR06cpQhyb6MqIJ9zzrv+CMYBiX/lOTmP+XBH+pkaSFj1Wss87uUyWi2DFKDBzV/OISsF31RqDyZztjfKh2emZ/YtkgUo3rOgIJnhO1xJUxJiDKess2IgmiFZllqINlPi3YOc5tr5QEVEuoRT6HTMF+4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845445; c=relaxed/simple; bh=L9EaD1mXSy6ejHghcmSLw+ooOGvXJHcTlnffJFTLCWk=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=XMnqaC66ZEVE7/J05UaGvbTYAbZe9nf4S/NrAw0LTCqJca9v9G3AWldeeG/kCTYJEXLBWTg3q9IaYiB+5v6Cj+IfqRJzAcDPp0BeRIgRXSMyRXAEIegf3SUN+s3piST9+8Bg74me8uujkxAhJdnTPoYY69NdAIDAppKpg5e3QTg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=HGMNTEMo; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="HGMNTEMo" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1730845444; x=1762381444; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=L9EaD1mXSy6ejHghcmSLw+ooOGvXJHcTlnffJFTLCWk=; b=HGMNTEMohuJcA09d5V2ZOIsH9KIuTUu1L3yVuUY2B9dvdcETcL+A5lxS vzDDqf01MMQ22ejRZYcnMQ8PWuHVm2ASs5MBcT5mnVDOkUBlYlfBR9qNt uQ7W3TqJwDA2vRK47xyILu3CnLxCKrgXz3zWz+ZTPBhjbU2kIcCFO8f/1 HMWuQpvum0NGd7GlPMzP9THaLEJTLfOpMS+rAVmNfd01MEbcEnpbUh4N5 8GvH+ZrliVhQXKwby6UsO4QjXojYB/W7+IUjsx1jekdECusV2WMYrhwuM kTA73fLYErWfIs9Erpa2bC/Le8vG9g6T0t+uYIiQTa6Q50qiKwCkkh6v8 Q==; X-CSE-ConnectionGUID: JMMRuKA8RP6TBxak2EeQ3w== X-CSE-MsgGUID: RH2RUi5nR7KkqW6Z9nxDYA== X-IronPort-AV: E=McAfee;i="6700,10204,11222"; a="34314302" X-IronPort-AV: E=Sophos;i="6.11,199,1725346800"; d="scan'208";a="34314302" Received: from fmviesa009.fm.intel.com ([10.60.135.149]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Nov 2024 14:24:01 -0800 X-CSE-ConnectionGUID: lixNti0HQTOEUwKiqHxvig== X-CSE-MsgGUID: 1ASa+u/AQg2ngtpac0lzeA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.11,261,1725346800"; d="scan'208";a="84322460" Received: from 
anguy11-upstream.jf.intel.com ([10.166.9.133]) by fmviesa009.fm.intel.com with ESMTP; 05 Nov 2024 14:24:00 -0800 From: Tony Nguyen To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com, edumazet@google.com, andrew+netdev@lunn.ch, netdev@vger.kernel.org Cc: Jacob Keller , anthony.l.nguyen@intel.com, Przemek Kitszel Subject: [PATCH net-next 09/15] ice: use stack variable for virtchnl_supported_rxdids Date: Tue, 5 Nov 2024 14:23:43 -0800 Message-ID: <20241105222351.3320587-10-anthony.l.nguyen@intel.com> X-Mailer: git-send-email 2.46.0.522.gc50d79eeffbf In-Reply-To: <20241105222351.3320587-1-anthony.l.nguyen@intel.com> References: <20241105222351.3320587-1-anthony.l.nguyen@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Jacob Keller The ice_vc_query_rxdid() function allocates memory to store the virtchnl_supported_rxdids structure used to communicate the bitmap of supported RXDIDs to a VF. This structure is only 8 bytes in size. The function must hold the allocated length on the stack as well as the pointer to the structure which itself is 8 bytes. Allocating this storage on the heap adds unnecessary overhead including a potential error path that must be handled in case kzalloc fails. Because this structure is so small, we're not saving stack space. Additionally, because we must ensure that we free the allocated memory, the return value from ice_vc_send_msg_to_vf() must also be saved in the stack ret variable. Depending on compiler optimization, this means allocating the 8-byte structure is requiring up to 16-bytes of stack memory! Simplify this function to keep the rxdid variable on the stack, saving memory and removing a potential failure exit path from this function. 
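For readers unfamiliar with the pattern, here is a simplified, driver-independent sketch of the change: a tiny fixed-size reply kept on the stack instead of being kzalloc'd, which removes the allocation-failure path entirely. The names below are illustrative only and are not the real virtchnl API:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct tiny_reply {
	uint64_t supported_rxdids;	/* 8 bytes, same size as the real struct */
};

/* stand-in for the driver's send helper (ice_vc_send_msg_to_vf) */
static int send_reply(const void *buf, size_t len)
{
	printf("sending %zu-byte reply\n", len);
	return 0;
}

static int query_rxdid_example(uint64_t supported)
{
	struct tiny_reply reply = { 0 };	/* stack storage: no kzalloc, no kfree, no error path */

	reply.supported_rxdids = supported;
	return send_reply(&reply, sizeof(reply));
}

int main(void)
{
	return query_rxdid_example(0x3full);
}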
Signed-off-by: Jacob Keller Reviewed-by: Przemek Kitszel Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_virtchnl.c | 20 ++++--------------- 1 file changed, 4 insertions(+), 16 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c index bc9fadaccad0..f445e33b2028 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c @@ -3031,10 +3031,8 @@ static int ice_vc_set_rss_hena(struct ice_vf *vf, u8 *msg) static int ice_vc_query_rxdid(struct ice_vf *vf) { enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; - struct virtchnl_supported_rxdids *rxdid = NULL; + struct virtchnl_supported_rxdids rxdid = {}; struct ice_pf *pf = vf->pf; - int len = 0; - int ret; if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; @@ -3046,21 +3044,11 @@ static int ice_vc_query_rxdid(struct ice_vf *vf) goto err; } - len = sizeof(struct virtchnl_supported_rxdids); - rxdid = kzalloc(len, GFP_KERNEL); - if (!rxdid) { - v_ret = VIRTCHNL_STATUS_ERR_NO_MEMORY; - len = 0; - goto err; - } - - rxdid->supported_rxdids = pf->supported_rxdids; + rxdid.supported_rxdids = pf->supported_rxdids; err: - ret = ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_GET_SUPPORTED_RXDIDS, - v_ret, (u8 *)rxdid, len); - kfree(rxdid); - return ret; + return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_GET_SUPPORTED_RXDIDS, + v_ret, (u8 *)&rxdid, sizeof(rxdid)); } /** From patchwork Tue Nov 5 22:23:44 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 13863636 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CEE14216A2D for ; Tue, 5 Nov 2024 22:24:05 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845447; cv=none; b=kSzu+Cwu8ZunTabgM4VcfVd0lFEqdfPi2ERG3A+ZsmEsLgh0L2tbbPpLODC3VAB3K3XKiKzZkegzGzIRaWnjMuKp+V+MqQ27uMlGLDXVk85312a5Yxd4DJRdiLtpL7i6EJ4PYiqEyl9xxh4OF1pUfYfsJ9GtKpcIdaFlT4zIkRY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845447; c=relaxed/simple; bh=l3wyZGN8w190j59De4pzjx7iuqm3VdnmhJgTb9hg9vM=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=H6kApY0HqPurnJAIAM5Hobfgaw73V4W4wBlyIbX6npLvwgtsu/eQpgWQSr59aaTv5rSOyn2zoAZlsBmgBawL0bOKJ7cbqWCJTBAnb+ql+ZJbzjKNqSMisoOFfM+ytUfiq6vbrwvvaVF8DiSK1CT6K8O06eB6Gm+46Ng3mmSEcog= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=I97rHVTi; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="I97rHVTi" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1730845446; x=1762381446; h=from:to:cc:subject:date:message-id:in-reply-to: 
references:mime-version:content-transfer-encoding; bh=l3wyZGN8w190j59De4pzjx7iuqm3VdnmhJgTb9hg9vM=; b=I97rHVTikygj30WfPbNgRJFsObS8XaqLSRYo3nZnPJm3qf5aOl5b/cvw bofLQ6AbWE6iwO0e3J4ds0GDHZRBmywrJslxlkxbFzzMi67Y7VCFhjoh/ +TVhM1h6qD7mqNiAGu24CiFySwmC+J3IjQfA4MKo+90uZlrHiYH6GiY5r clKrnEQBMGvYEGyZLnA2a+Fe0H70NzbSTKlwo7FbQKkwhYRT83peF8KAC l/oQwgHdkYLAcC+WvVV8cEdhWvS2N/+zpGP2X5MTF8mlkdHsVyMV6geW8 NZo02i6NZOXboHCTW4GMi8YxDsfN1psUiNt0UdfnheMntiaAaL9S2niup A==; X-CSE-ConnectionGUID: 50tPS16ET1SHsLU4Fzij4w== X-CSE-MsgGUID: 7VFakvweTtyUKyXXYNqXwg== X-IronPort-AV: E=McAfee;i="6700,10204,11222"; a="34314308" X-IronPort-AV: E=Sophos;i="6.11,199,1725346800"; d="scan'208";a="34314308" Received: from fmviesa009.fm.intel.com ([10.60.135.149]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Nov 2024 14:24:01 -0800 X-CSE-ConnectionGUID: I9/VhCT+Tpawv9DOBw1arA== X-CSE-MsgGUID: CHA+Z2pdQR2WNLBMRdk/xQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.11,261,1725346800"; d="scan'208";a="84322463" Received: from anguy11-upstream.jf.intel.com ([10.166.9.133]) by fmviesa009.fm.intel.com with ESMTP; 05 Nov 2024 14:24:01 -0800 From: Tony Nguyen To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com, edumazet@google.com, andrew+netdev@lunn.ch, netdev@vger.kernel.org Cc: Frederic Weisbecker , anthony.l.nguyen@intel.com, przemyslaw.kitszel@intel.com, larysa.zaremba@intel.com, Pucha Himasekhar Reddy Subject: [PATCH net-next 10/15] ice: Unbind the workqueue Date: Tue, 5 Nov 2024 14:23:44 -0800 Message-ID: <20241105222351.3320587-11-anthony.l.nguyen@intel.com> X-Mailer: git-send-email 2.46.0.522.gc50d79eeffbf In-Reply-To: <20241105222351.3320587-1-anthony.l.nguyen@intel.com> References: <20241105222351.3320587-1-anthony.l.nguyen@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Frederic Weisbecker The ice workqueue doesn't seem to rely on any CPU locality and should therefore be able to run on any CPU. In practice this is already happening through the unbound ice_service_timer that may fire anywhere and queue the workqueue accordingly to any CPU. Make this official so that the ice workqueue is only ever queued to housekeeping CPUs on nohz_full. 
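For context, a minimal kernel-module-style sketch of an unbound workqueue (not the ice driver itself; the module name and work handler are made up for illustration):

#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/smp.h>

static struct workqueue_struct *example_wq;

static void example_work_fn(struct work_struct *work)
{
	/* with WQ_UNBOUND this may run on any allowed (housekeeping) CPU */
	pr_info("example work ran on CPU %d\n", raw_smp_processor_id());
}
static DECLARE_WORK(example_work, example_work_fn);

static int __init example_init(void)
{
	example_wq = alloc_workqueue("%s", WQ_UNBOUND, 0, "example_wq");
	if (!example_wq)
		return -ENOMEM;
	queue_work(example_wq, &example_work);
	return 0;
}

static void __exit example_exit(void)
{
	destroy_workqueue(example_wq);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Unbound workqueue illustration");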
Signed-off-by: Frederic Weisbecker Tested-by: Pucha Himasekhar Reddy (A Contingent worker at Intel) Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_main.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index db463793d870..e3c3ab5ae4b9 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -5937,7 +5937,7 @@ static int __init ice_module_init(void) ice_adv_lnk_speed_maps_init(); - ice_wq = alloc_workqueue("%s", 0, 0, KBUILD_MODNAME); + ice_wq = alloc_workqueue("%s", WQ_UNBOUND, 0, KBUILD_MODNAME); if (!ice_wq) { pr_err("Failed to create workqueue\n"); return status; From patchwork Tue Nov 5 22:23:45 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 13863638 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 63006216A39 for ; Tue, 5 Nov 2024 22:24:06 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845448; cv=none; b=TolalbNXPax3VfFX34zGUJQF7i3mfBuMPoF4LCub4XwwbuMuVC6RCKgn0qX2eZNGSuwRO4ADNg/4Poo27ibpLnOSHSoxEYs18SQVyCYgtDap7hZT6ugXhObTalqUXJaSZh3XVH8vbqZxBisXCj55qB7dcCmi0g5VH3+0PmqLDpw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845448; c=relaxed/simple; bh=zQ0iotFVz+YV9+SneoYSpAEeQ2v22cN1qtJoss8Sduc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=DJfYhSE0+xHKKQ85UhbmqoV7m9RI8kV9Vamf99rvmEq5AOfYp+/1Pfi7TsDjSg+HoflspLFodEJ28nt/2Yxpo+bANMwknhhvmy/EToW5+JfOW6iqMsY0mVN+nKoyh7fw34g5Zffm3YzCq/mzQ9bB+9AceuW3rGqGlkoco6f/6Hc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=bhgbIALP; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="bhgbIALP" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1730845446; x=1762381446; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=zQ0iotFVz+YV9+SneoYSpAEeQ2v22cN1qtJoss8Sduc=; b=bhgbIALPEoyKEefZJ+pjb+2jZE4UaegbmwRhPMA6G7rDFpK8DthCQvAB 6AX1Nv1LSzsa+n8rJx0sG6nfFzZjEFTGPts0GNWMfZreDq1YDjZCKv+fv V5d735Tvl2LD28wVr1RjFJtCOs7zrsqEri1SfQqI6dAKyUTgStZytciRj RDBV0kww+N2RGg4tHpQ0AGp0x88vlgcTjD8l3Na6DWIaUjSv1/wZaorsE ahnAT+OrSevOo8Jtd+Gt6BGOOp97VHuP17S7OuTwwpZ25Ctvw3L3T3HO2 vu3tHmSXy7CBltVR2+JJNI9fMBpsLSHrMfWL8MC66LQUcqdokEuIJL6d9 g==; X-CSE-ConnectionGUID: /3qOeh3rRmCb8oyOCQ1cGw== X-CSE-MsgGUID: pIsRKeb1TuCgaCJedoirgg== X-IronPort-AV: E=McAfee;i="6700,10204,11222"; a="34314314" X-IronPort-AV: E=Sophos;i="6.11,199,1725346800"; d="scan'208";a="34314314" Received: from fmviesa009.fm.intel.com ([10.60.135.149]) by orvoesa107.jf.intel.com with 
ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Nov 2024 14:24:02 -0800 X-CSE-ConnectionGUID: gLtdISxtRNejfcwAOCI0WA== X-CSE-MsgGUID: WkKdw/5BRtaaebUhxwVPWQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.11,261,1725346800"; d="scan'208";a="84322467" Received: from anguy11-upstream.jf.intel.com ([10.166.9.133]) by fmviesa009.fm.intel.com with ESMTP; 05 Nov 2024 14:24:01 -0800 From: Tony Nguyen To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com, edumazet@google.com, andrew+netdev@lunn.ch, netdev@vger.kernel.org Cc: Diomidis Spinellis , anthony.l.nguyen@intel.com, Jacob Keller , Rafal Romanowski Subject: [PATCH net-next 11/15] ixgbe: Break include dependency cycle Date: Tue, 5 Nov 2024 14:23:45 -0800 Message-ID: <20241105222351.3320587-12-anthony.l.nguyen@intel.com> X-Mailer: git-send-email 2.46.0.522.gc50d79eeffbf In-Reply-To: <20241105222351.3320587-1-anthony.l.nguyen@intel.com> References: <20241105222351.3320587-1-anthony.l.nguyen@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Diomidis Spinellis Header ixgbe_type.h includes ixgbe_mbx.h. Also, header ixgbe_mbx.h included ixgbe_type.h, thus introducing a circular dependency. - Remove ixgbe_mbx.h inclusion from ixgbe_type.h. - ixgbe_mbx.h requires the definition of struct ixgbe_mbx_operations so move its definition there. While at it, add missing argument identifier names. - Add required forward structure declarations. - Include ixgbe_mbx.h in the .c files that need it, for the following reasons: ixgbe_sriov.c uses ixgbe_check_for_msg ixgbe_main.c uses ixgbe_init_mbx_params_pf ixgbe_82599.c uses mbx_ops_generic ixgbe_x540.c uses mbx_ops_generic ixgbe_x550.c uses mbx_ops_generic Signed-off-by: Diomidis Spinellis Reviewed-by: Jacob Keller Tested-by: Rafal Romanowski Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c | 1 + drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 1 + drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.h | 16 +++++++++++++++- drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c | 1 + drivers/net/ethernet/intel/ixgbe/ixgbe_type.h | 15 ++------------- drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c | 1 + drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c | 1 + 7 files changed, 22 insertions(+), 14 deletions(-) diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c index 283a23150a4d..4aaaea3b5f8f 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c @@ -6,6 +6,7 @@ #include #include "ixgbe.h" +#include "ixgbe_mbx.h" #include "ixgbe_phy.h" #define IXGBE_82598_MAX_TX_QUEUES 32 diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c index 8b8404d8c946..c229a26fbbb7 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c @@ -43,6 +43,7 @@ #include "ixgbe.h" #include "ixgbe_common.h" #include "ixgbe_dcb_82599.h" +#include "ixgbe_mbx.h" #include "ixgbe_phy.h" #include "ixgbe_sriov.h" #include "ixgbe_model.h" diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.h index bd205306934b..bf65e82b4c61 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.h +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.h @@ -4,7 +4,7 @@ #ifndef _IXGBE_MBX_H_ #define _IXGBE_MBX_H_ -#include "ixgbe_type.h" +#include #define IXGBE_VFMAILBOX_SIZE 16 /* 16 32 
bit words - 64 bytes */ @@ -96,6 +96,8 @@ enum ixgbe_pfvf_api_rev { #define IXGBE_VF_MBX_INIT_TIMEOUT 2000 /* number of retries on mailbox */ #define IXGBE_VF_MBX_INIT_DELAY 500 /* microseconds between retries */ +struct ixgbe_hw; + int ixgbe_read_mbx(struct ixgbe_hw *, u32 *, u16, u16); int ixgbe_write_mbx(struct ixgbe_hw *, u32 *, u16, u16); int ixgbe_check_for_msg(struct ixgbe_hw *, u16); @@ -105,6 +107,18 @@ int ixgbe_check_for_rst(struct ixgbe_hw *, u16); void ixgbe_init_mbx_params_pf(struct ixgbe_hw *); #endif /* CONFIG_PCI_IOV */ +struct ixgbe_mbx_operations { + int (*init_params)(struct ixgbe_hw *hw); + int (*read)(struct ixgbe_hw *hw, u32 *msg, u16 size, u16 vf_number); + int (*write)(struct ixgbe_hw *hw, u32 *msg, u16 size, u16 vf_number); + int (*read_posted)(struct ixgbe_hw *hw, u32 *msg, u16 size, u16 mbx_id); + int (*write_posted)(struct ixgbe_hw *hw, u32 *msg, u16 size, + u16 mbx_id); + int (*check_for_msg)(struct ixgbe_hw *hw, u16 vf_number); + int (*check_for_ack)(struct ixgbe_hw *hw, u16 vf_number); + int (*check_for_rst)(struct ixgbe_hw *hw, u16 vf_number); +}; + extern const struct ixgbe_mbx_operations mbx_ops_generic; #endif /* _IXGBE_MBX_H_ */ diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c index e71715f5da22..9631559a5aea 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c @@ -18,6 +18,7 @@ #include "ixgbe.h" #include "ixgbe_type.h" +#include "ixgbe_mbx.h" #include "ixgbe_sriov.h" #ifdef CONFIG_PCI_IOV diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h index 346e3d9114a8..9baccacd02a1 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h @@ -3601,19 +3601,6 @@ struct ixgbe_phy_info { u32 nw_mng_if_sel; }; -#include "ixgbe_mbx.h" - -struct ixgbe_mbx_operations { - int (*init_params)(struct ixgbe_hw *hw); - int (*read)(struct ixgbe_hw *, u32 *, u16, u16); - int (*write)(struct ixgbe_hw *, u32 *, u16, u16); - int (*read_posted)(struct ixgbe_hw *, u32 *, u16, u16); - int (*write_posted)(struct ixgbe_hw *, u32 *, u16, u16); - int (*check_for_msg)(struct ixgbe_hw *, u16); - int (*check_for_ack)(struct ixgbe_hw *, u16); - int (*check_for_rst)(struct ixgbe_hw *, u16); -}; - struct ixgbe_mbx_stats { u32 msgs_tx; u32 msgs_rx; @@ -3623,6 +3610,8 @@ struct ixgbe_mbx_stats { u32 rsts; }; +struct ixgbe_mbx_operations; + struct ixgbe_mbx_info { const struct ixgbe_mbx_operations *ops; struct ixgbe_mbx_stats stats; diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c index f1ffa398f6df..81e1df83f136 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c @@ -6,6 +6,7 @@ #include #include "ixgbe.h" +#include "ixgbe_mbx.h" #include "ixgbe_phy.h" #include "ixgbe_x540.h" diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c index a5f644934445..d9a8cf018d3b 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c @@ -4,6 +4,7 @@ #include "ixgbe_x540.h" #include "ixgbe_type.h" #include "ixgbe_common.h" +#include "ixgbe_mbx.h" #include "ixgbe_phy.h" static int ixgbe_setup_kr_speed_x550em(struct ixgbe_hw *, ixgbe_link_speed); From patchwork Tue Nov 5 22:23:46 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 
7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 13863639 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9B282216DF2 for ; Tue, 5 Nov 2024 22:24:07 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845449; cv=none; b=WtqLsBNi2AeqAOG6NrKAL9drPzdprXfFUE/7Xb4gDh36PyGMNbBv15iogvHj3Xbz4jpKmhuRfAiSOgWra7t1UcDDVqW4lN7Sfs8RAFY3tcDUEUeBo25iaj0/ES56VezQh083uJ3rSVcWyzzSPJoQ3Teaj5nyZUgDssYuMZPp4go= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845449; c=relaxed/simple; bh=POPtQA4z4W0eeC2oRwOmJW+kOiHy0ZDICENVatY2/Zs=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=pJnqYI/8RfekjvrEVq2NvlC9C2LWaNQLLREnvfVlNoYCSFFgpAAreMY06cj04KiiSzGtAW24k+74fRcPfj0DRQAq3mKQKvp5LWunBca/ScZGdG00a8AlJx7LJMxhntA+T+Y9BNNEgjB0lkOSIyYJquBun9tmPToR+BJqEwC1Sww= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=GyWebpgv; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="GyWebpgv" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1730845448; x=1762381448; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=POPtQA4z4W0eeC2oRwOmJW+kOiHy0ZDICENVatY2/Zs=; b=GyWebpgvhdXUcojeLDoQdtI97W7ByDQdIOxt6wspK16vmzHANX7bZ4zd xzuLUXbMP9d+OGjG+oO/BQ9fP+lR8zYr49S/bRiTsQI84vQW0OBcaEL4G RQRMjt2nxvUlqVA6XdEIyy7e39dRx3Kozq1HyEpVPuNRfOXHSTIDjmG+o 8duRMLp3rotmvXnsWlqJmcM7pC4COSCyz8se835NpBZbuC7hMv1kuMNJq 4c3WhO5QwnGVe/P8unLXtPOmUDJ3yENhM9zeOjgHqj7z6TiE9hdpteHux uAJkkwJf8RQURROc3p7kTQv5At5UsGSpVBQ1Es+D3Aatqoz4vq3RS9mKn w==; X-CSE-ConnectionGUID: ZpxB4cgITMapHn2HrpfAPg== X-CSE-MsgGUID: XOem1xatSvu6FVyiODPHqw== X-IronPort-AV: E=McAfee;i="6700,10204,11222"; a="34314321" X-IronPort-AV: E=Sophos;i="6.11,199,1725346800"; d="scan'208";a="34314321" Received: from fmviesa009.fm.intel.com ([10.60.135.149]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Nov 2024 14:24:02 -0800 X-CSE-ConnectionGUID: YDUUg1MyQB+yx3q7XKG72Q== X-CSE-MsgGUID: uxtwq+h6QzmEZjLCQPwucg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.11,261,1725346800"; d="scan'208";a="84322470" Received: from anguy11-upstream.jf.intel.com ([10.166.9.133]) by fmviesa009.fm.intel.com with ESMTP; 05 Nov 2024 14:24:01 -0800 From: Tony Nguyen To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com, edumazet@google.com, andrew+netdev@lunn.ch, netdev@vger.kernel.org Cc: Vitaly Lifshits , anthony.l.nguyen@intel.com, dima.ruinskiy@intel.com, Mor Bar-Gabay Subject: [PATCH net-next 12/15] igc: remove autoneg parameter from igc_mac_info Date: Tue, 5 Nov 2024 14:23:46 -0800 Message-ID: <20241105222351.3320587-13-anthony.l.nguyen@intel.com> X-Mailer: git-send-email 2.46.0.522.gc50d79eeffbf In-Reply-To: 
<20241105222351.3320587-1-anthony.l.nguyen@intel.com> References: <20241105222351.3320587-1-anthony.l.nguyen@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Vitaly Lifshits Since the igc driver doesn't support forced speed configuration and its current related hardware doesn't support it either, there is no use of the mac.autoneg parameter. Moreover, in one case this usage might result in a NULL pointer dereference due to an uninitialized function pointer, phy.ops.force_speed_duplex. Therefore, remove this parameter from the igc code. Signed-off-by: Vitaly Lifshits Tested-by: Mor Bar-Gabay Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/igc/igc_diag.c | 3 +- drivers/net/ethernet/intel/igc/igc_ethtool.c | 13 +- drivers/net/ethernet/intel/igc/igc_hw.h | 1 - drivers/net/ethernet/intel/igc/igc_mac.c | 316 +++++++++---------- drivers/net/ethernet/intel/igc/igc_main.c | 1 - drivers/net/ethernet/intel/igc/igc_phy.c | 24 +- 6 files changed, 163 insertions(+), 195 deletions(-) diff --git a/drivers/net/ethernet/intel/igc/igc_diag.c b/drivers/net/ethernet/intel/igc/igc_diag.c index cc621970c0cd..a43d7244ee70 100644 --- a/drivers/net/ethernet/intel/igc/igc_diag.c +++ b/drivers/net/ethernet/intel/igc/igc_diag.c @@ -173,8 +173,7 @@ bool igc_link_test(struct igc_adapter *adapter, u64 *data) *data = 0; /* add delay to give enough time for autonegotioation to finish */ - if (adapter->hw.mac.autoneg) - ssleep(5); + ssleep(5); link_up = igc_has_link(adapter); if (!link_up) { diff --git a/drivers/net/ethernet/intel/igc/igc_ethtool.c b/drivers/net/ethernet/intel/igc/igc_ethtool.c index 5b0c6f433767..817838677817 100644 --- a/drivers/net/ethernet/intel/igc/igc_ethtool.c +++ b/drivers/net/ethernet/intel/igc/igc_ethtool.c @@ -1821,11 +1821,8 @@ static int igc_ethtool_get_link_ksettings(struct net_device *netdev, ethtool_link_ksettings_add_link_mode(cmd, advertising, 2500baseT_Full); /* set autoneg settings */ - if (hw->mac.autoneg == 1) { - ethtool_link_ksettings_add_link_mode(cmd, supported, Autoneg); - ethtool_link_ksettings_add_link_mode(cmd, advertising, - Autoneg); - } + ethtool_link_ksettings_add_link_mode(cmd, supported, Autoneg); + ethtool_link_ksettings_add_link_mode(cmd, advertising, Autoneg); /* Set pause flow control settings */ ethtool_link_ksettings_add_link_mode(cmd, supported, Pause); @@ -1878,10 +1875,7 @@ static int igc_ethtool_get_link_ksettings(struct net_device *netdev, cmd->base.duplex = DUPLEX_UNKNOWN; } cmd->base.speed = speed; - if (hw->mac.autoneg) - cmd->base.autoneg = AUTONEG_ENABLE; - else - cmd->base.autoneg = AUTONEG_DISABLE; + cmd->base.autoneg = AUTONEG_ENABLE; /* MDI-X => 2; MDI =>1; Invalid =>0 */ if (hw->phy.media_type == igc_media_type_copper) @@ -1955,7 +1949,6 @@ igc_ethtool_set_link_ksettings(struct net_device *netdev, advertised |= ADVERTISE_10_HALF; if (cmd->base.autoneg == AUTONEG_ENABLE) { - hw->mac.autoneg = 1; hw->phy.autoneg_advertised = advertised; if (adapter->fc_autoneg) hw->fc.requested_mode = igc_fc_default; diff --git a/drivers/net/ethernet/intel/igc/igc_hw.h b/drivers/net/ethernet/intel/igc/igc_hw.h index e1c572e0d4ef..d9d1a1a11daf 100644 --- a/drivers/net/ethernet/intel/igc/igc_hw.h +++ b/drivers/net/ethernet/intel/igc/igc_hw.h @@ -92,7 +92,6 @@ struct igc_mac_info { bool asf_firmware_present; bool arc_subsystem_valid; - bool autoneg; bool autoneg_failed; bool get_link_status; }; diff --git 
a/drivers/net/ethernet/intel/igc/igc_mac.c b/drivers/net/ethernet/intel/igc/igc_mac.c index a5c4b19d71a2..d344e0a1cd5e 100644 --- a/drivers/net/ethernet/intel/igc/igc_mac.c +++ b/drivers/net/ethernet/intel/igc/igc_mac.c @@ -386,14 +386,6 @@ s32 igc_check_for_copper_link(struct igc_hw *hw) */ igc_check_downshift(hw); - /* If we are forcing speed/duplex, then we simply return since - * we have already determined whether we have link or not. - */ - if (!mac->autoneg) { - ret_val = -IGC_ERR_CONFIG; - goto out; - } - /* Auto-Neg is enabled. Auto Speed Detection takes care * of MAC speed/duplex configuration. So we only need to * configure Collision Distance in the MAC. @@ -468,173 +460,171 @@ s32 igc_config_fc_after_link_up(struct igc_hw *hw) goto out; } - /* Check for the case where we have copper media and auto-neg is - * enabled. In this case, we need to check and see if Auto-Neg - * has completed, and if so, how the PHY and link partner has - * flow control configured. + /* In auto-neg, we need to check and see if Auto-Neg has completed, + * and if so, how the PHY and link partner has flow control + * configured. */ - if (mac->autoneg) { - /* Read the MII Status Register and check to see if AutoNeg - * has completed. We read this twice because this reg has - * some "sticky" (latched) bits. - */ - ret_val = hw->phy.ops.read_reg(hw, PHY_STATUS, - &mii_status_reg); - if (ret_val) - goto out; - ret_val = hw->phy.ops.read_reg(hw, PHY_STATUS, - &mii_status_reg); - if (ret_val) - goto out; - if (!(mii_status_reg & MII_SR_AUTONEG_COMPLETE)) { - hw_dbg("Copper PHY and Auto Neg has not completed.\n"); - goto out; - } + /* Read the MII Status Register and check to see if AutoNeg + * has completed. We read this twice because this reg has + * some "sticky" (latched) bits. + */ + ret_val = hw->phy.ops.read_reg(hw, PHY_STATUS, + &mii_status_reg); + if (ret_val) + goto out; + ret_val = hw->phy.ops.read_reg(hw, PHY_STATUS, + &mii_status_reg); + if (ret_val) + goto out; - /* The AutoNeg process has completed, so we now need to - * read both the Auto Negotiation Advertisement - * Register (Address 4) and the Auto_Negotiation Base - * Page Ability Register (Address 5) to determine how - * flow control was negotiated. - */ - ret_val = hw->phy.ops.read_reg(hw, PHY_AUTONEG_ADV, - &mii_nway_adv_reg); - if (ret_val) - goto out; - ret_val = hw->phy.ops.read_reg(hw, PHY_LP_ABILITY, - &mii_nway_lp_ability_reg); - if (ret_val) - goto out; - /* Two bits in the Auto Negotiation Advertisement Register - * (Address 4) and two bits in the Auto Negotiation Base - * Page Ability Register (Address 5) determine flow control - * for both the PHY and the link partner. The following - * table, taken out of the IEEE 802.3ab/D6.0 dated March 25, - * 1999, describes these PAUSE resolution bits and how flow - * control is determined based upon these settings. - * NOTE: DC = Don't Care - * - * LOCAL DEVICE | LINK PARTNER - * PAUSE | ASM_DIR | PAUSE | ASM_DIR | NIC Resolution - *-------|---------|-------|---------|-------------------- - * 0 | 0 | DC | DC | igc_fc_none - * 0 | 1 | 0 | DC | igc_fc_none - * 0 | 1 | 1 | 0 | igc_fc_none - * 0 | 1 | 1 | 1 | igc_fc_tx_pause - * 1 | 0 | 0 | DC | igc_fc_none - * 1 | DC | 1 | DC | igc_fc_full - * 1 | 1 | 0 | 0 | igc_fc_none - * 1 | 1 | 0 | 1 | igc_fc_rx_pause - * - * Are both PAUSE bits set to 1? If so, this implies - * Symmetric Flow Control is enabled at both ends. The - * ASM_DIR bits are irrelevant per the spec. 
- * - * For Symmetric Flow Control: - * - * LOCAL DEVICE | LINK PARTNER - * PAUSE | ASM_DIR | PAUSE | ASM_DIR | Result - *-------|---------|-------|---------|-------------------- - * 1 | DC | 1 | DC | IGC_fc_full - * - */ - if ((mii_nway_adv_reg & NWAY_AR_PAUSE) && - (mii_nway_lp_ability_reg & NWAY_LPAR_PAUSE)) { - /* Now we need to check if the user selected RX ONLY - * of pause frames. In this case, we had to advertise - * FULL flow control because we could not advertise RX - * ONLY. Hence, we must now check to see if we need to - * turn OFF the TRANSMISSION of PAUSE frames. - */ - if (hw->fc.requested_mode == igc_fc_full) { - hw->fc.current_mode = igc_fc_full; - hw_dbg("Flow Control = FULL.\n"); - } else { - hw->fc.current_mode = igc_fc_rx_pause; - hw_dbg("Flow Control = RX PAUSE frames only.\n"); - } - } + if (!(mii_status_reg & MII_SR_AUTONEG_COMPLETE)) { + hw_dbg("Copper PHY and Auto Neg has not completed.\n"); + goto out; + } - /* For receiving PAUSE frames ONLY. - * - * LOCAL DEVICE | LINK PARTNER - * PAUSE | ASM_DIR | PAUSE | ASM_DIR | Result - *-------|---------|-------|---------|-------------------- - * 0 | 1 | 1 | 1 | igc_fc_tx_pause - */ - else if (!(mii_nway_adv_reg & NWAY_AR_PAUSE) && - (mii_nway_adv_reg & NWAY_AR_ASM_DIR) && - (mii_nway_lp_ability_reg & NWAY_LPAR_PAUSE) && - (mii_nway_lp_ability_reg & NWAY_LPAR_ASM_DIR)) { - hw->fc.current_mode = igc_fc_tx_pause; - hw_dbg("Flow Control = TX PAUSE frames only.\n"); - } - /* For transmitting PAUSE frames ONLY. - * - * LOCAL DEVICE | LINK PARTNER - * PAUSE | ASM_DIR | PAUSE | ASM_DIR | Result - *-------|---------|-------|---------|-------------------- - * 1 | 1 | 0 | 1 | igc_fc_rx_pause - */ - else if ((mii_nway_adv_reg & NWAY_AR_PAUSE) && - (mii_nway_adv_reg & NWAY_AR_ASM_DIR) && - !(mii_nway_lp_ability_reg & NWAY_LPAR_PAUSE) && - (mii_nway_lp_ability_reg & NWAY_LPAR_ASM_DIR)) { - hw->fc.current_mode = igc_fc_rx_pause; - hw_dbg("Flow Control = RX PAUSE frames only.\n"); - } - /* Per the IEEE spec, at this point flow control should be - * disabled. However, we want to consider that we could - * be connected to a legacy switch that doesn't advertise - * desired flow control, but can be forced on the link - * partner. So if we advertised no flow control, that is - * what we will resolve to. If we advertised some kind of - * receive capability (Rx Pause Only or Full Flow Control) - * and the link partner advertised none, we will configure - * ourselves to enable Rx Flow Control only. We can do - * this safely for two reasons: If the link partner really - * didn't want flow control enabled, and we enable Rx, no - * harm done since we won't be receiving any PAUSE frames - * anyway. If the intent on the link partner was to have - * flow control enabled, then by us enabling RX only, we - * can at least receive pause frames and process them. - * This is a good idea because in most cases, since we are - * predominantly a server NIC, more times than not we will - * be asked to delay transmission of packets than asking - * our link partner to pause transmission of frames. + /* The AutoNeg process has completed, so we now need to + * read both the Auto Negotiation Advertisement + * Register (Address 4) and the Auto_Negotiation Base + * Page Ability Register (Address 5) to determine how + * flow control was negotiated. 
+ */ + ret_val = hw->phy.ops.read_reg(hw, PHY_AUTONEG_ADV, + &mii_nway_adv_reg); + if (ret_val) + goto out; + ret_val = hw->phy.ops.read_reg(hw, PHY_LP_ABILITY, + &mii_nway_lp_ability_reg); + if (ret_val) + goto out; + /* Two bits in the Auto Negotiation Advertisement Register + * (Address 4) and two bits in the Auto Negotiation Base + * Page Ability Register (Address 5) determine flow control + * for both the PHY and the link partner. The following + * table, taken out of the IEEE 802.3ab/D6.0 dated March 25, + * 1999, describes these PAUSE resolution bits and how flow + * control is determined based upon these settings. + * NOTE: DC = Don't Care + * + * LOCAL DEVICE | LINK PARTNER + * PAUSE | ASM_DIR | PAUSE | ASM_DIR | NIC Resolution + *-------|---------|-------|---------|-------------------- + * 0 | 0 | DC | DC | igc_fc_none + * 0 | 1 | 0 | DC | igc_fc_none + * 0 | 1 | 1 | 0 | igc_fc_none + * 0 | 1 | 1 | 1 | igc_fc_tx_pause + * 1 | 0 | 0 | DC | igc_fc_none + * 1 | DC | 1 | DC | igc_fc_full + * 1 | 1 | 0 | 0 | igc_fc_none + * 1 | 1 | 0 | 1 | igc_fc_rx_pause + * + * Are both PAUSE bits set to 1? If so, this implies + * Symmetric Flow Control is enabled at both ends. The + * ASM_DIR bits are irrelevant per the spec. + * + * For Symmetric Flow Control: + * + * LOCAL DEVICE | LINK PARTNER + * PAUSE | ASM_DIR | PAUSE | ASM_DIR | Result + *-------|---------|-------|---------|-------------------- + * 1 | DC | 1 | DC | IGC_fc_full + * + */ + if ((mii_nway_adv_reg & NWAY_AR_PAUSE) && + (mii_nway_lp_ability_reg & NWAY_LPAR_PAUSE)) { + /* Now we need to check if the user selected RX ONLY + * of pause frames. In this case, we had to advertise + * FULL flow control because we could not advertise RX + * ONLY. Hence, we must now check to see if we need to + * turn OFF the TRANSMISSION of PAUSE frames. */ - else if ((hw->fc.requested_mode == igc_fc_none) || - (hw->fc.requested_mode == igc_fc_tx_pause) || - (hw->fc.strict_ieee)) { - hw->fc.current_mode = igc_fc_none; - hw_dbg("Flow Control = NONE.\n"); + if (hw->fc.requested_mode == igc_fc_full) { + hw->fc.current_mode = igc_fc_full; + hw_dbg("Flow Control = FULL.\n"); } else { hw->fc.current_mode = igc_fc_rx_pause; hw_dbg("Flow Control = RX PAUSE frames only.\n"); } + } - /* Now we need to do one last check... If we auto- - * negotiated to HALF DUPLEX, flow control should not be - * enabled per IEEE 802.3 spec. - */ - ret_val = hw->mac.ops.get_speed_and_duplex(hw, &speed, &duplex); - if (ret_val) { - hw_dbg("Error getting link speed and duplex\n"); - goto out; - } + /* For receiving PAUSE frames ONLY. + * + * LOCAL DEVICE | LINK PARTNER + * PAUSE | ASM_DIR | PAUSE | ASM_DIR | Result + *-------|---------|-------|---------|-------------------- + * 0 | 1 | 1 | 1 | igc_fc_tx_pause + */ + else if (!(mii_nway_adv_reg & NWAY_AR_PAUSE) && + (mii_nway_adv_reg & NWAY_AR_ASM_DIR) && + (mii_nway_lp_ability_reg & NWAY_LPAR_PAUSE) && + (mii_nway_lp_ability_reg & NWAY_LPAR_ASM_DIR)) { + hw->fc.current_mode = igc_fc_tx_pause; + hw_dbg("Flow Control = TX PAUSE frames only.\n"); + } + /* For transmitting PAUSE frames ONLY. 
+ * + * LOCAL DEVICE | LINK PARTNER + * PAUSE | ASM_DIR | PAUSE | ASM_DIR | Result + *-------|---------|-------|---------|-------------------- + * 1 | 1 | 0 | 1 | igc_fc_rx_pause + */ + else if ((mii_nway_adv_reg & NWAY_AR_PAUSE) && + (mii_nway_adv_reg & NWAY_AR_ASM_DIR) && + !(mii_nway_lp_ability_reg & NWAY_LPAR_PAUSE) && + (mii_nway_lp_ability_reg & NWAY_LPAR_ASM_DIR)) { + hw->fc.current_mode = igc_fc_rx_pause; + hw_dbg("Flow Control = RX PAUSE frames only.\n"); + } + /* Per the IEEE spec, at this point flow control should be + * disabled. However, we want to consider that we could + * be connected to a legacy switch that doesn't advertise + * desired flow control, but can be forced on the link + * partner. So if we advertised no flow control, that is + * what we will resolve to. If we advertised some kind of + * receive capability (Rx Pause Only or Full Flow Control) + * and the link partner advertised none, we will configure + * ourselves to enable Rx Flow Control only. We can do + * this safely for two reasons: If the link partner really + * didn't want flow control enabled, and we enable Rx, no + * harm done since we won't be receiving any PAUSE frames + * anyway. If the intent on the link partner was to have + * flow control enabled, then by us enabling RX only, we + * can at least receive pause frames and process them. + * This is a good idea because in most cases, since we are + * predominantly a server NIC, more times than not we will + * be asked to delay transmission of packets than asking + * our link partner to pause transmission of frames. + */ + else if ((hw->fc.requested_mode == igc_fc_none) || + (hw->fc.requested_mode == igc_fc_tx_pause) || + (hw->fc.strict_ieee)) { + hw->fc.current_mode = igc_fc_none; + hw_dbg("Flow Control = NONE.\n"); + } else { + hw->fc.current_mode = igc_fc_rx_pause; + hw_dbg("Flow Control = RX PAUSE frames only.\n"); + } - if (duplex == HALF_DUPLEX) - hw->fc.current_mode = igc_fc_none; + /* Now we need to do one last check... If we auto- + * negotiated to HALF DUPLEX, flow control should not be + * enabled per IEEE 802.3 spec. + */ + ret_val = hw->mac.ops.get_speed_and_duplex(hw, &speed, &duplex); + if (ret_val) { + hw_dbg("Error getting link speed and duplex\n"); + goto out; + } - /* Now we call a subroutine to actually force the MAC - * controller to use the correct flow control settings. - */ - ret_val = igc_force_mac_fc(hw); - if (ret_val) { - hw_dbg("Error forcing flow control settings\n"); - goto out; - } + if (duplex == HALF_DUPLEX) + hw->fc.current_mode = igc_fc_none; + + /* Now we call a subroutine to actually force the MAC + * controller to use the correct flow control settings. 
+ */ + ret_val = igc_force_mac_fc(hw); + if (ret_val) { + hw_dbg("Error forcing flow control settings\n"); + goto out; } out: diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c index 6e70bca15db1..27872bdea9bd 100644 --- a/drivers/net/ethernet/intel/igc/igc_main.c +++ b/drivers/net/ethernet/intel/igc/igc_main.c @@ -7108,7 +7108,6 @@ static int igc_probe(struct pci_dev *pdev, /* Initialize link properties that are user-changeable */ adapter->fc_autoneg = true; - hw->mac.autoneg = true; hw->phy.autoneg_advertised = 0xaf; hw->fc.requested_mode = igc_fc_default; diff --git a/drivers/net/ethernet/intel/igc/igc_phy.c b/drivers/net/ethernet/intel/igc/igc_phy.c index 2801e5f24df9..6c4d204aecfa 100644 --- a/drivers/net/ethernet/intel/igc/igc_phy.c +++ b/drivers/net/ethernet/intel/igc/igc_phy.c @@ -494,24 +494,12 @@ s32 igc_setup_copper_link(struct igc_hw *hw) s32 ret_val = 0; bool link; - if (hw->mac.autoneg) { - /* Setup autoneg and flow control advertisement and perform - * autonegotiation. - */ - ret_val = igc_copper_link_autoneg(hw); - if (ret_val) - goto out; - } else { - /* PHY will be set to 10H, 10F, 100H or 100F - * depending on user settings. - */ - hw_dbg("Forcing Speed and Duplex\n"); - ret_val = hw->phy.ops.force_speed_duplex(hw); - if (ret_val) { - hw_dbg("Error Forcing Speed and Duplex\n"); - goto out; - } - } + /* Setup autoneg and flow control advertisement and perform + * autonegotiation. + */ + ret_val = igc_copper_link_autoneg(hw); + if (ret_val) + goto out; /* Check link status. Wait up to 100 microseconds for link to become * valid. From patchwork Tue Nov 5 22:23:47 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 13863640 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D3D4A216DF9 for ; Tue, 5 Nov 2024 22:24:07 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845449; cv=none; b=TPXF/B4dS5kmvF3nhInqvTSNAzXpR6JgReMw3Bx+5e0c4TPCWqeqlMh0hWEJeIJvfHEefu5EG3xR8sXXT2WSzbDw1XSBQux50VkVT7pu/VwvVbpyO3T4Ao7RJEryIwo2rHVKtR37yB4XzoMvf5/r6Ku9gvL6fyT3h7mgn7BSeg8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845449; c=relaxed/simple; bh=2122BFx7wZd5PY+V5o0vzfvpOOTUFdW+arY4RIKKIrM=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Dl9Oikv8MyGDSTbYi9cKEtZ8k4nc8WJn5EmyteRQmfSZIdPlMLOztDMi2BaM1GCxrBOm1zPvhSDt2kO0MdY9fNP1UMR7EMhk8NO9So4t9H7l6xvSKdaQYEDAON4EGzQRqo/zo3TPPIOVB+HkaGZ6nTgDkT6jMsuYqk/7XX8Z13s= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=eq4nIxG9; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="eq4nIxG9" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; 
i=@intel.com; q=dns/txt; s=Intel; t=1730845448; x=1762381448; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=2122BFx7wZd5PY+V5o0vzfvpOOTUFdW+arY4RIKKIrM=; b=eq4nIxG9635F1zvRII3m7CcrA9xKX7WepK31x8HWnloAPeX6QZjul+fA AEH8SKSAVpVpLX0GOVb/r7UlVSWxmhYAtSgjcPHw+tCaOKpOtBgM8Du6C +VB6dX8mF4cKGmLLiXuqcLI6GqW7O+P04pYZoai6YkejVRxFEPPro6q4l z0wfPaFz+LjJgMWkCy0we/L2ql/5N/zz57gGhOnewL5b/aZVfiXLZKoIS wc1n2KT+iM/NUqoCpTkVoVZyXqnTCcnQ1fAuPJAIa2ZNHyOJSbHONneW+ B/B9I84GwplqLsNGPfbNeP3I8sgxEpQi/ix1chEcrfbWwL9GyjngNDe71 A==; X-CSE-ConnectionGUID: sHGhAJPjScyCuzMXY9CUGg== X-CSE-MsgGUID: hLXU7C6zQd2ledmcOOQZ0g== X-IronPort-AV: E=McAfee;i="6700,10204,11222"; a="34314327" X-IronPort-AV: E=Sophos;i="6.11,199,1725346800"; d="scan'208";a="34314327" Received: from fmviesa009.fm.intel.com ([10.60.135.149]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Nov 2024 14:24:03 -0800 X-CSE-ConnectionGUID: k6c/d9oERuybEhmAeHlk3g== X-CSE-MsgGUID: wwysqthqQ/m9ikP85ZrDpA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.11,261,1725346800"; d="scan'208";a="84322474" Received: from anguy11-upstream.jf.intel.com ([10.166.9.133]) by fmviesa009.fm.intel.com with ESMTP; 05 Nov 2024 14:24:02 -0800 From: Tony Nguyen To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com, edumazet@google.com, andrew+netdev@lunn.ch, netdev@vger.kernel.org Cc: Johnny Park , anthony.l.nguyen@intel.com, horms@kernel.org, Przemek Kitszel Subject: [PATCH net-next 13/15] igb: Fix 2 typos in comments in igb_main.c Date: Tue, 5 Nov 2024 14:23:47 -0800 Message-ID: <20241105222351.3320587-14-anthony.l.nguyen@intel.com> X-Mailer: git-send-email 2.46.0.522.gc50d79eeffbf In-Reply-To: <20241105222351.3320587-1-anthony.l.nguyen@intel.com> References: <20241105222351.3320587-1-anthony.l.nguyen@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Johnny Park Fix 2 spelling mistakes in comments in `igb_main.c`. Signed-off-by: Johnny Park Acked-by: Przemek Kitszel Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/igb/igb_main.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c index b83df5f94b1f..37b674f8cbcd 100644 --- a/drivers/net/ethernet/intel/igb/igb_main.c +++ b/drivers/net/ethernet/intel/igb/igb_main.c @@ -1204,7 +1204,7 @@ static int igb_alloc_q_vector(struct igb_adapter *adapter, /* initialize pointer to rings */ ring = q_vector->ring; - /* intialize ITR */ + /* initialize ITR */ if (rxr_count) { /* rx or rx/tx vector */ if (!adapter->rx_itr_setting || adapter->rx_itr_setting > 3) @@ -3906,7 +3906,7 @@ static void igb_remove(struct pci_dev *pdev) * * This function initializes the vf specific data storage and then attempts to * allocate the VFs. The reason for ordering it this way is because it is much - * mor expensive time wise to disable SR-IOV than it is to allocate and free + * more expensive time wise to disable SR-IOV than it is to allocate and free * the memory for the VFs. 
**/ static void igb_probe_vfs(struct igb_adapter *adapter) From patchwork Tue Nov 5 22:23:48 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 13863641 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 533A5216E00 for ; Tue, 5 Nov 2024 22:24:08 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845449; cv=none; b=labI08gIuqz3L9LbcjcfMf/ahNcqhrRMmEr8WyL8V0ocwPaZfbppN6zxv9yjsZF4ATdjNtb52iKP3UU+Nk7RmGSTqvo3dzaGIbhSbOuca/BjnpIlOr+bHg2PiM0/PQO62nTJnl8u9dAkE2vBjEAWhYiStBrorrQInlO/d1hEXJ4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845449; c=relaxed/simple; bh=+qkYZqfvjzxp0CFLFUzJYCAAr85X49pQUa1hWU8bHvI=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Y2dzRiX11Z0H6+eZcnB/SRW0Jf31mLc3BChpQ7TtJVJFGIRRkmHCGeNX43Z8qxguj+YkUWI5OPkz3K2nseskCtE69dWr+SDnLvWPFboRy1vOj3KQ3XZT4VW+AzfHzxSaFmKPtr0x8atVRV3m+3RCr9047bSZiEunnIGZmTANLtw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=lkN7HR1Q; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="lkN7HR1Q" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1730845448; x=1762381448; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=+qkYZqfvjzxp0CFLFUzJYCAAr85X49pQUa1hWU8bHvI=; b=lkN7HR1QJ/Kqa15KiQw/1+31x2iPi4qWT5phgmGFKecs27yTvqAsDP9X i7qy8vi/825lFYEVzFZ3nopBinPbmRFM9yXlw2tsoYXqHLuDlbMq3yTPi hbc9kNQfWAA3EhFa1V0dylyuvrLWZrPDPY0+Y64KKL9kY/rNqMaCcJSfE XQbIHv9JL16fnTaTCpWKgrHdzmJr2FJA58BS/bk4I8hhsR1vT4lTRCBE1 jbbO3oZn6x3buSaOegIuHxGWB/qFvIUolsyZY851+d5LhsV1cBbyfqjih yxhO0UH/EJqeZPTHLlJsjKlpf9trmpu6E1GyWtLTN5esg5tybAkhA83ax w==; X-CSE-ConnectionGUID: DXMAYzjuQKqbJkwu47kiAg== X-CSE-MsgGUID: TY1oJ3IaRe6lARsuTvfjjA== X-IronPort-AV: E=McAfee;i="6700,10204,11222"; a="34314334" X-IronPort-AV: E=Sophos;i="6.11,199,1725346800"; d="scan'208";a="34314334" Received: from fmviesa009.fm.intel.com ([10.60.135.149]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Nov 2024 14:24:03 -0800 X-CSE-ConnectionGUID: cHTkNZjaQqyP9WgUoVkCXg== X-CSE-MsgGUID: Dyl0Ma0sSUKOo01GhbEnlg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.11,261,1725346800"; d="scan'208";a="84322477" Received: from anguy11-upstream.jf.intel.com ([10.166.9.133]) by fmviesa009.fm.intel.com with ESMTP; 05 Nov 2024 14:24:02 -0800 From: Tony Nguyen To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com, edumazet@google.com, andrew+netdev@lunn.ch, netdev@vger.kernel.org Cc: Wander Lairson Costa , anthony.l.nguyen@intel.com, przemyslaw.kitszel@intel.com, Paul Menzel Subject: [PATCH net-next 14/15] igbvf: remove unused spinlock Date: Tue, 5 Nov 
2024 14:23:48 -0800 Message-ID: <20241105222351.3320587-15-anthony.l.nguyen@intel.com> X-Mailer: git-send-email 2.46.0.522.gc50d79eeffbf In-Reply-To: <20241105222351.3320587-1-anthony.l.nguyen@intel.com> References: <20241105222351.3320587-1-anthony.l.nguyen@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Wander Lairson Costa tx_queue_lock and stats_lock are declared and initialized, but never used. Remove them. Signed-off-by: Wander Lairson Costa Reviewed-by: Paul Menzel Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/igbvf/igbvf.h | 3 --- drivers/net/ethernet/intel/igbvf/netdev.c | 3 --- 2 files changed, 6 deletions(-) diff --git a/drivers/net/ethernet/intel/igbvf/igbvf.h b/drivers/net/ethernet/intel/igbvf/igbvf.h index 6ad35a00a287..ca6e44245a7b 100644 --- a/drivers/net/ethernet/intel/igbvf/igbvf.h +++ b/drivers/net/ethernet/intel/igbvf/igbvf.h @@ -169,8 +169,6 @@ struct igbvf_adapter { u16 link_speed; u16 link_duplex; - spinlock_t tx_queue_lock; /* prevent concurrent tail updates */ - /* track device up/down/testing state */ unsigned long state; @@ -220,7 +218,6 @@ struct igbvf_adapter { /* OS defined structs */ struct net_device *netdev; struct pci_dev *pdev; - spinlock_t stats_lock; /* prevent concurrent stats updates */ /* structs defined in e1000_hw.h */ struct e1000_hw hw; diff --git a/drivers/net/ethernet/intel/igbvf/netdev.c b/drivers/net/ethernet/intel/igbvf/netdev.c index 925d7286a8ee..02044aa2181b 100644 --- a/drivers/net/ethernet/intel/igbvf/netdev.c +++ b/drivers/net/ethernet/intel/igbvf/netdev.c @@ -1656,12 +1656,9 @@ static int igbvf_sw_init(struct igbvf_adapter *adapter) if (igbvf_alloc_queues(adapter)) return -ENOMEM; - spin_lock_init(&adapter->tx_queue_lock); - /* Explicitly disable IRQ since the NIC can be in any state. 
*/ igbvf_irq_disable(adapter); - spin_lock_init(&adapter->stats_lock); spin_lock_init(&adapter->hw.mbx_lock); set_bit(__IGBVF_DOWN, &adapter->state); From patchwork Tue Nov 5 22:23:49 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 13863642 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 390022170C6 for ; Tue, 5 Nov 2024 22:24:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845454; cv=none; b=PiqelEe+IasG7ib5sQ9upX3GdWeGYZhthTgT6tHG3xoqeLexVAs8lup3k7sm2yHTZq90tZkEqylePBDA3iWJ0zidF1vnPA+XwbE71SG2tQiRVlT63lmOkRKeuBtC9dG54WjvvToa7FebK7jdD4U5lPWdiEbYAkVqASwBBN4DRIc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730845454; c=relaxed/simple; bh=3buNkZptrmm67XFSkWu1cSrwFgYDuF55Nz6t+nWwneA=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=n71VfQn4jVxS1sw3vd+FzuCH7/9FQHyERIhp+kKtc5719nSbSJLXONzOOAM+Paxe01MdbVEjKgIvs6fZY1Q/IhFfhYwAlMIxMxFwWxZWtoNDWjTUgOAVx/7rITaSycUl2JCDotk2bdOg5I0OYriSZJTV8IG4GT8eJOOz5GqFjWo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=I4edpNde; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="I4edpNde" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1730845450; x=1762381450; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=3buNkZptrmm67XFSkWu1cSrwFgYDuF55Nz6t+nWwneA=; b=I4edpNdegP5NmSiUpxsiVJrp3G/4cKDUXGdCMe3N3dGw4CM97iplgaih XNopQqNPQxphph9Mwp06lhz9aNE24Y+FlwUA8pq59duVDcQNjH652LXwl 1jLkIJ5sbIyxe2C5OpN9bCD35v9pLWqGiFJxqbXlNbO2tPcowF4/9C2aY W5tgfbvFSr/okAkFs6k2iJ8oYFmNPIgopbBB4wy8Amczjw5uC2mUhLmKu yyPx/DGe3DceckZhHxvY4rkaDoJPjIz5nIqURE9kEvl6Hp/ruth60mrlr B8qi37ZcR7bg0ieJYgDsdW3UuC6/gniAesds8p4ZXWQ8sSoZmhisUFT3b g==; X-CSE-ConnectionGUID: Jz32ND2hSxGaWdoR6cXJoQ== X-CSE-MsgGUID: WZ2GfZpSRKWuzlXz5DWJWA== X-IronPort-AV: E=McAfee;i="6700,10204,11222"; a="34314343" X-IronPort-AV: E=Sophos;i="6.11,199,1725346800"; d="scan'208";a="34314343" Received: from fmviesa009.fm.intel.com ([10.60.135.149]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Nov 2024 14:24:03 -0800 X-CSE-ConnectionGUID: 3tefEK3qR/C3d6p/eDwh7Q== X-CSE-MsgGUID: IsVA8e6/Tl+q5zlC8i7pQQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.11,261,1725346800"; d="scan'208";a="84322481" Received: from anguy11-upstream.jf.intel.com ([10.166.9.133]) by fmviesa009.fm.intel.com with ESMTP; 05 Nov 2024 14:24:02 -0800 From: Tony Nguyen To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com, edumazet@google.com, andrew+netdev@lunn.ch, netdev@vger.kernel.org Cc: Joe Damato , anthony.l.nguyen@intel.com, horms@kernel.org, 
jacob.e.keller@intel.com, przemyslaw.kitszel@intel.com, Dmitry Antipov Subject: [PATCH net-next 15/15] e1000: Hold RTNL when e1000_down can be called Date: Tue, 5 Nov 2024 14:23:49 -0800 Message-ID: <20241105222351.3320587-16-anthony.l.nguyen@intel.com> X-Mailer: git-send-email 2.46.0.522.gc50d79eeffbf In-Reply-To: <20241105222351.3320587-1-anthony.l.nguyen@intel.com> References: <20241105222351.3320587-1-anthony.l.nguyen@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Joe Damato e1000_down calls netif_queue_set_napi, which assumes that RTNL is held. There are a few paths for e1000_down to be called in e1000 where RTNL is not currently being held: - e1000_shutdown (pci shutdown) - e1000_suspend (power management) - e1000_reinit_locked (via e1000_reset_task delayed work) - e1000_io_error_detected (via pci error handler) Hold RTNL in three places to fix this issue: - e1000_reset_task: igc, igb, and e1000e all hold rtnl in this path. - e1000_io_error_detected (pci error handler): e1000e and ixgbe hold rtnl in this path. A patch has been posted for igc to do the same [1]. - __e1000_shutdown (which is called from both e1000_shutdown and e1000_suspend): igb, ixgbe, and e1000e all hold rtnl in the same path. The other paths which call e1000_down seemingly hold RTNL and are OK: - e1000_close (ndo_stop) - e1000_change_mtu (ndo_change_mtu) Based on the above analysis and mailing list discussion [2], I believe adding rtnl in the three places mentioned above is correct. Fixes: 8f7ff18a5ec7 ("e1000: Link NAPI instances to queues and IRQs") Reported-by: Dmitry Antipov Closes: https://lore.kernel.org/netdev/8cf62307-1965-46a0-a411-ff0080090ff9@yandex.ru/ Link: https://lore.kernel.org/netdev/20241022215246.307821-3-jdamato@fastly.com/ [1] Link: https://lore.kernel.org/netdev/ZxgVRX7Ne-lTjwiJ@LQ3V64L9R2/ [2] Signed-off-by: Joe Damato Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/e1000/e1000_main.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c index 4de9b156b2be..3f089c3d47b2 100644 --- a/drivers/net/ethernet/intel/e1000/e1000_main.c +++ b/drivers/net/ethernet/intel/e1000/e1000_main.c @@ -3509,7 +3509,9 @@ static void e1000_reset_task(struct work_struct *work) container_of(work, struct e1000_adapter, reset_task); e_err(drv, "Reset adapter\n"); + rtnl_lock(); e1000_reinit_locked(adapter); + rtnl_unlock(); } /** @@ -5074,7 +5076,9 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool *enable_wake) usleep_range(10000, 20000); WARN_ON(test_bit(__E1000_RESETTING, &adapter->flags)); + rtnl_lock(); e1000_down(adapter); + rtnl_unlock(); } status = er32(STATUS); @@ -5235,16 +5239,20 @@ static pci_ers_result_t e1000_io_error_detected(struct pci_dev *pdev, struct net_device *netdev = pci_get_drvdata(pdev); struct e1000_adapter *adapter = netdev_priv(netdev); + rtnl_lock(); netif_device_detach(netdev); - if (state == pci_channel_io_perm_failure) + if (state == pci_channel_io_perm_failure) { + rtnl_unlock(); return PCI_ERS_RESULT_DISCONNECT; + } if (netif_running(netdev)) e1000_down(adapter); if (!test_and_set_bit(__E1000_DISABLED, &adapter->flags)) pci_disable_device(pdev); + rtnl_unlock(); /* Request a slot reset. */ return PCI_ERS_RESULT_NEED_RESET;
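[Editorial note] To show the locking rule from the last patch in isolation: any asynchronous path (delayed work, PCI error handler, shutdown hook) that can reach a teardown routine which ends up calling netif_queue_set_napi() must take rtnl_lock() itself, because unlike ndo_stop or ndo_change_mtu it is not entered with RTNL already held. The following is a minimal sketch of that pattern, not e1000 code; my_adapter, my_down and my_reset_task are hypothetical names used only for illustration.

#include <linux/netdevice.h>
#include <linux/rtnetlink.h>
#include <linux/workqueue.h>

/* Hypothetical adapter with only the fields this sketch needs. */
struct my_adapter {
	struct net_device *netdev;
	struct work_struct reset_task;
};

/* Teardown path. In e1000 this is where netif_queue_set_napi() is
 * eventually reached, so the caller must already hold RTNL.
 */
static void my_down(struct my_adapter *adapter)
{
	ASSERT_RTNL();
	/* ... stop queues, unlink NAPI instances, free resources ... */
}

/* Delayed-work reset handler, one of the call paths the patch fixes.
 * RTNL must be taken here because a work item does not inherit it
 * from whoever scheduled the work.
 */
static void my_reset_task(struct work_struct *work)
{
	struct my_adapter *adapter =
		container_of(work, struct my_adapter, reset_task);

	rtnl_lock();
	my_down(adapter);
	rtnl_unlock();
}

ASSERT_RTNL() is a debug-time check only; the functional requirement is simply that rtnl_lock()/rtnl_unlock() bracket every call chain that can reach the teardown routine, which is exactly what the three hunks in the patch above do for e1000.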