From patchwork Wed Jun 15 16:10:31 2022
X-Patchwork-Id: 12882668
From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com, bjorn@kernel.org,
    Maciej Fijalkowski, Alexandr Lobakin
Subject: [PATCH v3 bpf-next 01/11] ice: compress branches in ice_set_features()
Date: Wed, 15 Jun 2022 18:10:31 +0200
Message-Id: <20220615161041.902916-2-maciej.fijalkowski@intel.com>
In-Reply-To: <20220615161041.902916-1-maciej.fijalkowski@intel.com>

Instead of a rather verbose comparison of the current netdev->features
bits with the incoming ones from the user, compress them via a helper
feature set that is the result of netdev->features XOR features. This
way the current, extensive branches:

	if (features & NETIF_F_BIT && !(netdev->features & NETIF_F_BIT))
		set_feature(true);
	else if (!(features & NETIF_F_BIT) &&
		 netdev->features & NETIF_F_BIT)
		set_feature(false);

can become:

	netdev_features_t changed = netdev->features ^ features;

	if (changed & NETIF_F_BIT)
		set_feature(!!(features & NETIF_F_BIT));

This is nothing new; several other drivers already use this approach,
and I find it much more convenient.
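For reference, the same delta check can be exercised as a tiny
standalone program; the feature bit value and the set_feature() helper
below are made-up stand-ins for illustration, not driver code:

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	#define NETIF_F_BIT (1ULL << 3)	/* hypothetical feature bit */

	static void set_feature(bool ena)
	{
		printf("feature %s\n", ena ? "enabled" : "disabled");
	}

	int main(void)
	{
		uint64_t old_features = NETIF_F_BIT;	/* currently on */
		uint64_t features = 0;			/* user requests off */
		uint64_t changed = old_features ^ features;

		/* XOR leaves a bit set only where old and new state differ,
		 * so a single branch replaces the enable/disable pair. */
		if (changed & NETIF_F_BIT)
			set_feature(!!(features & NETIF_F_BIT));
		return 0;
	}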
CC: Alexandr Lobakin
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_main.c | 40 +++++++++++------------
 1 file changed, 19 insertions(+), 21 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index e1cae253412c..23d1b1fc39fb 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -5910,44 +5910,41 @@ ice_set_vlan_features(struct net_device *netdev, netdev_features_t features)
 static int
 ice_set_features(struct net_device *netdev, netdev_features_t features)
 {
+	netdev_features_t changed = netdev->features ^ features;
 	struct ice_netdev_priv *np = netdev_priv(netdev);
 	struct ice_vsi *vsi = np->vsi;
 	struct ice_pf *pf = vsi->back;
 	int ret = 0;
 
 	/* Don't set any netdev advanced features with device in Safe Mode */
-	if (ice_is_safe_mode(vsi->back)) {
-		dev_err(ice_pf_to_dev(vsi->back), "Device is in Safe Mode - not enabling advanced netdev features\n");
+	if (ice_is_safe_mode(pf)) {
+		dev_err(ice_pf_to_dev(vsi->back),
+			"Device is in Safe Mode - not enabling advanced netdev features\n");
 		return ret;
 	}
 
 	/* Do not change setting during reset */
 	if (ice_is_reset_in_progress(pf->state)) {
-		dev_err(ice_pf_to_dev(vsi->back), "Device is resetting, changing advanced netdev features temporarily unavailable.\n");
+		dev_err(ice_pf_to_dev(pf),
+			"Device is resetting, changing advanced netdev features temporarily unavailable.\n");
 		return -EBUSY;
 	}
 
 	/* Multiple features can be changed in one call so keep features in
 	 * separate if/else statements to guarantee each feature is checked
 	 */
-	if (features & NETIF_F_RXHASH && !(netdev->features & NETIF_F_RXHASH))
-		ice_vsi_manage_rss_lut(vsi, true);
-	else if (!(features & NETIF_F_RXHASH) &&
-		 netdev->features & NETIF_F_RXHASH)
-		ice_vsi_manage_rss_lut(vsi, false);
+	if (changed & NETIF_F_RXHASH)
+		ice_vsi_manage_rss_lut(vsi, !!(features & NETIF_F_RXHASH));
 
 	ret = ice_set_vlan_features(netdev, features);
 	if (ret)
 		return ret;
 
-	if ((features & NETIF_F_NTUPLE) &&
-	    !(netdev->features & NETIF_F_NTUPLE)) {
-		ice_vsi_manage_fdir(vsi, true);
-		ice_init_arfs(vsi);
-	} else if (!(features & NETIF_F_NTUPLE) &&
-		   (netdev->features & NETIF_F_NTUPLE)) {
-		ice_vsi_manage_fdir(vsi, false);
-		ice_clear_arfs(vsi);
+	if (changed & NETIF_F_NTUPLE) {
+		bool ena = !!(features & NETIF_F_NTUPLE);
+
+		ice_vsi_manage_fdir(vsi, ena);
+		ena ? ice_init_arfs(vsi) : ice_clear_arfs(vsi);
 	}
 
 	/* don't turn off hw_tc_offload when ADQ is already enabled */
@@ -5956,11 +5953,12 @@ ice_set_features(struct net_device *netdev, netdev_features_t features)
 		return -EACCES;
 	}
 
-	if ((features & NETIF_F_HW_TC) &&
-	    !(netdev->features & NETIF_F_HW_TC))
-		set_bit(ICE_FLAG_CLS_FLOWER, pf->flags);
-	else
-		clear_bit(ICE_FLAG_CLS_FLOWER, pf->flags);
+	if (changed & NETIF_F_HW_TC) {
+		bool ena = !!(features & NETIF_F_HW_TC);
+
+		ena ? set_bit(ICE_FLAG_CLS_FLOWER, pf->flags) :
+		      clear_bit(ICE_FLAG_CLS_FLOWER, pf->flags);
+	}
 
 	return 0;
 }

From patchwork Wed Jun 15 16:10:32 2022
X-Patchwork-Id: 12882671
From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com, bjorn@kernel.org,
    Maciej Fijalkowski, Alexandr Lobakin
Subject: [PATCH v3 bpf-next 02/11] ice: allow toggling loopback mode via ndo_set_features callback
Date: Wed, 15 Jun 2022 18:10:32 +0200
Message-Id: <20220615161041.902916-3-maciej.fijalkowski@intel.com>
In-Reply-To: <20220615161041.902916-1-maciej.fijalkowski@intel.com>

Add support for NETIF_F_LOOPBACK. This feature can be set via:

	$ ethtool -K eth0 loopback <on|off>

The feature can be useful for local data path tests.
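A possible smoke test, assuming an up interface named eth0 (interface
name and the abridged output are illustrative):

	# ethtool -K eth0 loopback on
	# ethtool -k eth0 | grep loopback
	loopback: on
	# ethtool -K eth0 loopback off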
CC: Alexandr Lobakin
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_main.c | 24 +++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 23d1b1fc39fb..85d956517b2e 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -3358,6 +3358,7 @@ static void ice_set_netdev_features(struct net_device *netdev)
 	netdev->features |= netdev->hw_features;
 
 	netdev->hw_features |= NETIF_F_HW_TC;
+	netdev->hw_features |= NETIF_F_LOOPBACK;
 
 	/* encap and VLAN devices inherit default, csumo and tso features */
 	netdev->hw_enc_features |= dflt_features | csumo_features |
@@ -5902,6 +5903,25 @@ ice_set_vlan_features(struct net_device *netdev, netdev_features_t features)
 	return 0;
 }
 
+/**
+ * ice_set_loopback - turn on/off loopback mode on underlying PF
+ * @hw: ptr to ice_hw struct needed for AQ command
+ * @netdev: ptr to the netdev being adjusted
+ * @ena: flag to indicate the on/off setting
+ */
+static void
+ice_set_loopback(struct ice_hw *hw, struct net_device *netdev, bool ena)
+{
+	bool if_running = netif_running(netdev);
+
+	if (if_running)
+		ice_stop(netdev);
+	if (ice_aq_set_mac_loopback(hw, ena, NULL))
+		netdev_err(netdev, "Failed to toggle loopback state\n");
+	if (if_running)
+		ice_open(netdev);
+}
+
 /**
  * ice_set_features - set the netdev feature flags
  * @netdev: ptr to the netdev being adjusted
@@ -5960,6 +5980,10 @@ ice_set_features(struct net_device *netdev, netdev_features_t features)
 		clear_bit(ICE_FLAG_CLS_FLOWER, pf->flags);
 	}
 
+	if (changed & NETIF_F_LOOPBACK)
+		ice_set_loopback(&pf->hw, netdev,
+				 !!(features & NETIF_F_LOOPBACK));
+
 	return 0;
 }

From patchwork Wed Jun 15 16:10:33 2022
X-Patchwork-Id: 12882662
From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com, bjorn@kernel.org,
    Maciej Fijalkowski
Subject: [PATCH v3 bpf-next 03/11] ice: check DD bit on Rx descriptor rather than (EOP | RS)
Date: Wed, 15 Jun 2022 18:10:33 +0200
Message-Id: <20220615161041.902916-4-maciej.fijalkowski@intel.com>
In-Reply-To: <20220615161041.902916-1-maciej.fijalkowski@intel.com>

The Tx side sets the EOP and RS bits on descriptors to indicate that a
particular descriptor is the last one and needs to generate an IRQ once
it has been sent. These bits should not be checked on the completion
path, regardless of whether it is Tx or Rx. The DD bit serves this
purpose: it indicates that a particular descriptor is either ready for
Rx or was successfully Txed.

Look at the DD bit being set in ice_lbtest_receive_frames() instead of
the EOP and RS pair.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_ethtool.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
index 1e71b70f0e52..b6275a29fa0d 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
@@ -658,7 +658,7 @@ static int ice_lbtest_receive_frames(struct ice_rx_ring *rx_ring)
 		rx_desc = ICE_RX_DESC(rx_ring, i);
 
 		if (!(rx_desc->wb.status_error0 &
-		    cpu_to_le16(ICE_TX_DESC_CMD_EOP | ICE_TX_DESC_CMD_RS)))
+		    cpu_to_le16(BIT(ICE_RX_FLEX_DESC_STATUS0_DD_S))))
 			continue;
 
 		rx_buf = &rx_ring->rx_buf[i];
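To make the split of roles concrete, here is a minimal sketch of a
completion-path check; the struct layout and bit position are
illustrative stand-ins, not the driver's real descriptor definitions:

	#include <stdbool.h>
	#include <stdint.h>

	/* Illustrative: DD as status bit 0, mirroring
	 * ICE_RX_FLEX_DESC_STATUS0_DD_S in spirit. */
	#define RX_DESC_STATUS_DD	(1U << 0)

	struct rx_desc {
		uint16_t status_error0;	/* status word written back by HW */
	};

	/* EOP and RS are Tx-side commands ("last fragment", "report
	 * status"); only DD reports that a descriptor is actually done,
	 * so the completion path tests DD alone. */
	static bool rx_desc_done(const struct rx_desc *desc)
	{
		return desc->status_error0 & RX_DESC_STATUS_DD;
	}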
From patchwork Wed Jun 15 16:10:34 2022
X-Patchwork-Id: 12882666
From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com, bjorn@kernel.org,
    Maciej Fijalkowski
Subject: [PATCH v3 bpf-next 04/11] ice: do not setup vlan for loopback VSI
Date: Wed, 15 Jun 2022 18:10:34 +0200
Message-Id: <20220615161041.902916-5-maciej.fijalkowski@intel.com>
In-Reply-To: <20220615161041.902916-1-maciej.fijalkowski@intel.com>

Currently the loopback test is failing due to the error returned from
ice_vsi_vlan_setup(). Skip calling it when preparing the loopback VSI.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_main.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 85d956517b2e..418c1f6c1613 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -6019,10 +6019,12 @@ int ice_vsi_cfg(struct ice_vsi *vsi)
 	if (vsi->netdev) {
 		ice_set_rx_mode(vsi->netdev);
 
-		err = ice_vsi_vlan_setup(vsi);
+		if (vsi->type != ICE_VSI_LB) {
+			err = ice_vsi_vlan_setup(vsi);
 
-		if (err)
-			return err;
+			if (err)
+				return err;
+		}
 	}
 
 	ice_vsi_cfg_dcb_rings(vsi);

From patchwork Wed Jun 15 16:10:35 2022
X-Patchwork-Id: 12882661
From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com, bjorn@kernel.org,
    Maciej Fijalkowski
Subject: [PATCH v3 bpf-next 05/11] selftests: xsk: query for native XDP support
Date: Wed, 15 Jun 2022 18:10:35 +0200
Message-Id: <20220615161041.902916-6-maciej.fijalkowski@intel.com>
In-Reply-To: <20220615161041.902916-1-maciej.fijalkowski@intel.com>

Currently, xdpxceiver assumes that the underlying device supports XDP
in native mode; that is fine for now, since the tests can only run on a
veth pair. A future commit is going to allow running the test suite
against physical devices, so let us query whether the device is capable
of running XDP programs in native mode. This way, xdpxceiver will not
try to run TEST_MODE_DRV if the device being tested does not support
it.
Acked-by: Magnus Karlsson
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
 tools/testing/selftests/bpf/xdpxceiver.c | 36 ++++++++++++++++++++++--
 1 file changed, 34 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/bpf/xdpxceiver.c b/tools/testing/selftests/bpf/xdpxceiver.c
index e5992a6b5e09..a1e410f6a5d8 100644
--- a/tools/testing/selftests/bpf/xdpxceiver.c
+++ b/tools/testing/selftests/bpf/xdpxceiver.c
@@ -98,6 +98,8 @@
 #include
 #include
 #include
+#include
+#include
 #include "xdpxceiver.h"
 #include "../kselftest.h"
 
@@ -1605,10 +1607,37 @@ static void ifobject_delete(struct ifobject *ifobj)
 	free(ifobj);
 }
 
+static bool is_xdp_supported(struct ifobject *ifobject)
+{
+	int flags = XDP_FLAGS_DRV_MODE;
+
+	LIBBPF_OPTS(bpf_link_create_opts, opts, .flags = flags);
+	struct bpf_insn insns[2] = {
+		BPF_MOV64_IMM(BPF_REG_0, XDP_PASS),
+		BPF_EXIT_INSN()
+	};
+	int ifindex = if_nametoindex(ifobject->ifname);
+	int prog_fd, insn_cnt = ARRAY_SIZE(insns);
+	int err;
+
+	prog_fd = bpf_prog_load(BPF_PROG_TYPE_XDP, NULL, "GPL", insns, insn_cnt, NULL);
+	if (prog_fd < 0)
+		return false;
+
+	err = bpf_xdp_attach(ifindex, prog_fd, flags, NULL);
+	if (err)
+		return false;
+
+	bpf_xdp_detach(ifindex, flags, NULL);
+
+	return true;
+}
+
 int main(int argc, char **argv)
 {
 	struct pkt_stream *pkt_stream_default;
 	struct ifobject *ifobj_tx, *ifobj_rx;
+	int modes = TEST_MODE_SKB + 1;
 	u32 i, j, failed_tests = 0;
 	struct test_spec test;
 
@@ -1636,15 +1665,18 @@ int main(int argc, char **argv)
 	init_iface(ifobj_rx, MAC2, MAC1, IP2, IP1, UDP_PORT2, UDP_PORT1,
 		   worker_testapp_validate_rx);
 
+	if (is_xdp_supported(ifobj_tx))
+		modes++;
+
 	test_spec_init(&test, ifobj_tx, ifobj_rx, 0);
 	pkt_stream_default = pkt_stream_generate(ifobj_tx->umem, DEFAULT_PKT_CNT, PKT_SIZE);
 	if (!pkt_stream_default)
 		exit_with_error(ENOMEM);
 	test.pkt_stream_default = pkt_stream_default;
 
-	ksft_set_plan(TEST_MODE_MAX * TEST_TYPE_MAX);
+	ksft_set_plan(modes * TEST_TYPE_MAX);
 
-	for (i = 0; i < TEST_MODE_MAX; i++)
+	for (i = 0; i < modes; i++)
 		for (j = 0; j < TEST_TYPE_MAX; j++) {
 			test_spec_init(&test, ifobj_tx, ifobj_rx, i);
 			run_pkt_test(&test, i, j);

From patchwork Wed Jun 15 16:10:36 2022
X-Patchwork-Id: 12882670
From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com, bjorn@kernel.org,
    Maciej Fijalkowski
Subject: [PATCH v3 bpf-next 06/11] selftests: xsk: add missing close() on netns fd
Date: Wed, 15 Jun 2022 18:10:36 +0200
Message-Id: <20220615161041.902916-7-maciej.fijalkowski@intel.com>
In-Reply-To: <20220615161041.902916-1-maciej.fijalkowski@intel.com>

Commit 1034b03e54ac ("selftests: xsk: Simplify cleanup of ifobjects")
removed the close() on the netns fd, which is not correct, so let us
restore it.

Acked-by: Magnus Karlsson
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
 tools/testing/selftests/bpf/xdpxceiver.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/tools/testing/selftests/bpf/xdpxceiver.c b/tools/testing/selftests/bpf/xdpxceiver.c
index a1e410f6a5d8..81ad69ed5839 100644
--- a/tools/testing/selftests/bpf/xdpxceiver.c
+++ b/tools/testing/selftests/bpf/xdpxceiver.c
@@ -1591,6 +1591,8 @@ static struct ifobject *ifobject_create(void)
 	if (!ifobj->umem)
 		goto out_umem;
 
+	ifobj->ns_fd = -1;
+
 	return ifobj;
 
 out_umem:
@@ -1602,6 +1604,8 @@ static struct ifobject *ifobject_create(void)
 
 static void ifobject_delete(struct ifobject *ifobj)
 {
+	if (ifobj->ns_fd != -1)
+		close(ifobj->ns_fd);
 	free(ifobj->umem);
 	free(ifobj->xsk_arr);
 	free(ifobj);
From patchwork Wed Jun 15 16:10:37 2022
X-Patchwork-Id: 12882672
From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com, bjorn@kernel.org,
    Maciej Fijalkowski
Subject: [PATCH v3 bpf-next 07/11] selftests: xsk: introduce default Rx pkt stream
Date: Wed, 15 Jun 2022 18:10:37 +0200
Message-Id: <20220615161041.902916-8-maciej.fijalkowski@intel.com>
In-Reply-To: <20220615161041.902916-1-maciej.fijalkowski@intel.com>

In order to prepare xdpxceiver for physical device testing, let us
introduce a default Rx pkt stream. The reason is that physical device
testing will use a UMEM of doubled size, where half of it is used by Tx
and the other half by Rx. This means that pkt addresses will differ for
the Tx and Rx streams.

The Rx thread will initialize the xsk_umem_info::base_addr that is
added here, so that pkt_set(), when working on the Rx UMEM, will add
this offset and the second half of the UMEM space will be used. Note
that currently base_addr is 0 on both sides; a future commit will do
the mentioned initialization.

Previously, veth based testing worked on separate UMEMs, so a single
default stream was fine.
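A sketch of the address layout this prepares for; the frame count and
frame size below are assumptions standing in for DEFAULT_UMEM_BUFFERS
and XSK_UMEM__DEFAULT_FRAME_SIZE, not the selftest's actual values:

	#include <stdint.h>
	#include <stdio.h>

	#define NUM_FRAMES	1024	/* assumed DEFAULT_UMEM_BUFFERS */
	#define FRAME_SIZE	4096	/* assumed XSK_UMEM__DEFAULT_FRAME_SIZE */

	int main(void)
	{
		uint64_t half = (uint64_t)NUM_FRAMES * FRAME_SIZE;

		/* With a doubled UMEM, Tx owns [0, half) and Rx owns
		 * [half, 2 * half); Rx's base_addr is this offset, which
		 * pkt_set() adds to every Rx packet address. */
		printf("tx base: 0x0, rx base: 0x%llx\n",
		       (unsigned long long)half);
		return 0;
	}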
Acked-by: Magnus Karlsson
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
 tools/testing/selftests/bpf/xdpxceiver.c | 74 +++++++++++++++---------
 tools/testing/selftests/bpf/xdpxceiver.h |  4 +-
 2 files changed, 51 insertions(+), 27 deletions(-)

diff --git a/tools/testing/selftests/bpf/xdpxceiver.c b/tools/testing/selftests/bpf/xdpxceiver.c
index 81ad69ed5839..3d0731a80e4a 100644
--- a/tools/testing/selftests/bpf/xdpxceiver.c
+++ b/tools/testing/selftests/bpf/xdpxceiver.c
@@ -428,15 +428,16 @@ static void __test_spec_init(struct test_spec *test, struct ifobject *ifobj_tx,
 		ifobj->use_poll = false;
 		ifobj->use_fill_ring = true;
 		ifobj->release_rx = true;
-		ifobj->pkt_stream = test->pkt_stream_default;
 		ifobj->validation_func = NULL;
 
 		if (i == 0) {
 			ifobj->rx_on = false;
 			ifobj->tx_on = true;
+			ifobj->pkt_stream = test->tx_pkt_stream_default;
 		} else {
 			ifobj->rx_on = true;
 			ifobj->tx_on = false;
+			ifobj->pkt_stream = test->rx_pkt_stream_default;
 		}
 
 		memset(ifobj->umem, 0, sizeof(*ifobj->umem));
@@ -460,12 +461,15 @@ static void __test_spec_init(struct test_spec *test, struct ifobject *ifobj_tx,
 static void test_spec_init(struct test_spec *test, struct ifobject *ifobj_tx,
 			   struct ifobject *ifobj_rx, enum test_mode mode)
 {
-	struct pkt_stream *pkt_stream;
+	struct pkt_stream *tx_pkt_stream;
+	struct pkt_stream *rx_pkt_stream;
 	u32 i;
 
-	pkt_stream = test->pkt_stream_default;
+	tx_pkt_stream = test->tx_pkt_stream_default;
+	rx_pkt_stream = test->rx_pkt_stream_default;
 	memset(test, 0, sizeof(*test));
-	test->pkt_stream_default = pkt_stream;
+	test->tx_pkt_stream_default = tx_pkt_stream;
+	test->rx_pkt_stream_default = rx_pkt_stream;
 
 	for (i = 0; i < MAX_INTERFACES; i++) {
 		struct ifobject *ifobj = i ? ifobj_rx : ifobj_tx;
@@ -526,16 +530,17 @@ static void pkt_stream_delete(struct pkt_stream *pkt_stream)
 static void pkt_stream_restore_default(struct test_spec *test)
 {
 	struct pkt_stream *tx_pkt_stream = test->ifobj_tx->pkt_stream;
+	struct pkt_stream *rx_pkt_stream = test->ifobj_rx->pkt_stream;
 
-	if (tx_pkt_stream != test->pkt_stream_default) {
+	if (tx_pkt_stream != test->tx_pkt_stream_default) {
 		pkt_stream_delete(test->ifobj_tx->pkt_stream);
-		test->ifobj_tx->pkt_stream = test->pkt_stream_default;
+		test->ifobj_tx->pkt_stream = test->tx_pkt_stream_default;
 	}
 
-	if (test->ifobj_rx->pkt_stream != test->pkt_stream_default &&
-	    test->ifobj_rx->pkt_stream != tx_pkt_stream)
+	if (rx_pkt_stream != test->rx_pkt_stream_default) {
 		pkt_stream_delete(test->ifobj_rx->pkt_stream);
-	test->ifobj_rx->pkt_stream = test->pkt_stream_default;
+		test->ifobj_rx->pkt_stream = test->rx_pkt_stream_default;
+	}
 }
 
 static struct pkt_stream *__pkt_stream_alloc(u32 nb_pkts)
@@ -558,7 +563,7 @@ static struct pkt_stream *__pkt_stream_alloc(u32 nb_pkts)
 
 static void pkt_set(struct xsk_umem_info *umem, struct pkt *pkt, u64 addr, u32 len)
 {
-	pkt->addr = addr;
+	pkt->addr = addr + umem->base_addr;
 	pkt->len = len;
 	if (len > umem->frame_size - XDP_PACKET_HEADROOM - MIN_PKT_SIZE * 2 - umem->frame_headroom)
 		pkt->valid = false;
@@ -597,22 +602,29 @@ static void pkt_stream_replace(struct test_spec *test, u32 nb_pkts, u32 pkt_len)
 
 	pkt_stream = pkt_stream_generate(test->ifobj_tx->umem, nb_pkts, pkt_len);
 	test->ifobj_tx->pkt_stream = pkt_stream;
+
+	pkt_stream = pkt_stream_generate(test->ifobj_rx->umem, nb_pkts, pkt_len);
 	test->ifobj_rx->pkt_stream = pkt_stream;
 }
 
-static void pkt_stream_replace_half(struct test_spec *test, u32 pkt_len, int offset)
+static void __pkt_stream_replace_half(struct ifobject *ifobj, u32 pkt_len,
+				      int offset)
 {
-	struct xsk_umem_info *umem = test->ifobj_tx->umem;
+	struct xsk_umem_info *umem = ifobj->umem;
 	struct pkt_stream *pkt_stream;
 	u32 i;
 
-	pkt_stream = pkt_stream_clone(umem, test->pkt_stream_default);
-	for (i = 1; i < test->pkt_stream_default->nb_pkts; i += 2)
+	pkt_stream = pkt_stream_clone(umem, ifobj->pkt_stream);
+	for (i = 1; i < ifobj->pkt_stream->nb_pkts; i += 2)
 		pkt_set(umem, &pkt_stream->pkts[i],
 			(i % umem->num_frames) * umem->frame_size + offset, pkt_len);
 
-	test->ifobj_tx->pkt_stream = pkt_stream;
-	test->ifobj_rx->pkt_stream = pkt_stream;
+	ifobj->pkt_stream = pkt_stream;
+}
+
+static void pkt_stream_replace_half(struct test_spec *test, u32 pkt_len, int offset)
+{
+	__pkt_stream_replace_half(test->ifobj_tx, pkt_len, offset);
+	__pkt_stream_replace_half(test->ifobj_rx, pkt_len, offset);
 }
 
 static void pkt_stream_receive_half(struct test_spec *test)
@@ -654,7 +666,8 @@ static struct pkt *pkt_generate(struct ifobject *ifobject, u32 pkt_nb)
 	return pkt;
 }
 
-static void pkt_stream_generate_custom(struct test_spec *test, struct pkt *pkts, u32 nb_pkts)
+static void __pkt_stream_generate_custom(struct ifobject *ifobj,
+					 struct pkt *pkts, u32 nb_pkts)
 {
 	struct pkt_stream *pkt_stream;
 	u32 i;
@@ -663,15 +676,20 @@ static void pkt_stream_generate_custom(struct test_spec *test, struct pkt *pkts,
 	if (!pkt_stream)
 		exit_with_error(ENOMEM);
 
-	test->ifobj_tx->pkt_stream = pkt_stream;
-	test->ifobj_rx->pkt_stream = pkt_stream;
-
 	for (i = 0; i < nb_pkts; i++) {
-		pkt_stream->pkts[i].addr = pkts[i].addr;
+		pkt_stream->pkts[i].addr = pkts[i].addr + ifobj->umem->base_addr;
 		pkt_stream->pkts[i].len = pkts[i].len;
 		pkt_stream->pkts[i].payload = i;
 		pkt_stream->pkts[i].valid = pkts[i].valid;
 	}
+
+	ifobj->pkt_stream = pkt_stream;
+}
+
+static void pkt_stream_generate_custom(struct test_spec *test, struct pkt *pkts, u32 nb_pkts)
+{
+	__pkt_stream_generate_custom(test->ifobj_tx, pkts, nb_pkts);
+	__pkt_stream_generate_custom(test->ifobj_rx, pkts, nb_pkts);
 }
 
 static void pkt_dump(void *pkt, u32 len)
@@ -1639,7 +1657,8 @@ static bool is_xdp_supported(struct ifobject *ifobject)
 
 int main(int argc, char **argv)
 {
-	struct pkt_stream *pkt_stream_default;
+	struct pkt_stream *rx_pkt_stream_default;
+	struct pkt_stream *tx_pkt_stream_default;
 	struct ifobject *ifobj_tx, *ifobj_rx;
 	int modes = TEST_MODE_SKB + 1;
 	u32 i, j, failed_tests = 0;
@@ -1673,10 +1692,12 @@ int main(int argc, char **argv)
 		modes++;
 
 	test_spec_init(&test, ifobj_tx, ifobj_rx, 0);
-	pkt_stream_default = pkt_stream_generate(ifobj_tx->umem, DEFAULT_PKT_CNT, PKT_SIZE);
-	if (!pkt_stream_default)
+	tx_pkt_stream_default = pkt_stream_generate(ifobj_tx->umem, DEFAULT_PKT_CNT, PKT_SIZE);
+	rx_pkt_stream_default = pkt_stream_generate(ifobj_rx->umem, DEFAULT_PKT_CNT, PKT_SIZE);
+	if (!tx_pkt_stream_default || !rx_pkt_stream_default)
 		exit_with_error(ENOMEM);
-	test.pkt_stream_default = pkt_stream_default;
+	test.tx_pkt_stream_default = tx_pkt_stream_default;
+	test.rx_pkt_stream_default = rx_pkt_stream_default;
 
 	ksft_set_plan(modes * TEST_TYPE_MAX);
 
@@ -1690,7 +1711,8 @@ int main(int argc, char **argv)
 			failed_tests++;
 		}
 
-	pkt_stream_delete(pkt_stream_default);
+	pkt_stream_delete(tx_pkt_stream_default);
+	pkt_stream_delete(rx_pkt_stream_default);
 	ifobject_delete(ifobj_tx);
 	ifobject_delete(ifobj_rx);
 
diff --git a/tools/testing/selftests/bpf/xdpxceiver.h b/tools/testing/selftests/bpf/xdpxceiver.h
index 8f672b0fe0e1..ccfc829b2e5e 100644
--- a/tools/testing/selftests/bpf/xdpxceiver.h
+++ b/tools/testing/selftests/bpf/xdpxceiver.h
@@ -95,6 +95,7 @@ struct xsk_umem_info {
 	u32 frame_headroom;
 	void *buffer;
 	u32 frame_size;
+	u32 base_addr;
 	bool unaligned_mode;
 };
 
@@ -155,7 +156,8 @@ struct ifobject {
 struct test_spec {
 	struct ifobject *ifobj_tx;
 	struct ifobject *ifobj_rx;
-	struct pkt_stream *pkt_stream_default;
+	struct pkt_stream *tx_pkt_stream_default;
+	struct pkt_stream *rx_pkt_stream_default;
 	u16 total_steps;
 	u16 current_step;
 	u16 nb_sockets;

From patchwork Wed Jun 15 16:10:38 2022
X-Patchwork-Id: 12882665
From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com, bjorn@kernel.org,
    Maciej Fijalkowski
Subject: [PATCH v3 bpf-next 08/11] selftests: xsk: add support for executing tests on physical device
Date: Wed, 15 Jun 2022 18:10:38 +0200
Message-Id: <20220615161041.902916-9-maciej.fijalkowski@intel.com>
In-Reply-To: <20220615161041.902916-1-maciej.fijalkowski@intel.com>

Currently, the architecture of xdpxceiver is designed strictly for
conducting veth based tests. A veth pair is created together with a
network namespace, and one of the veth interfaces is moved to that
netns. Then, separate threads for Tx and Rx are spawned, which utilize
the described setup.
The infrastructure described in the paragraph above cannot be used for
testing AF_XDP support on physical devices. That testing will be
conducted on a single network interface and the same queue. Xdpxceiver
needs to be extended to distinguish between veth tests and physical
interface tests.

Since the same iface/queue id pair will be used by both the Tx and Rx
threads for physical device testing, the Tx thread, which happens to
run after the Rx thread, is going to create its XSK socket with the
shared umem flag. In order to track this setting throughout the
lifetime of the spawned threads, introduce a 'shared_umem' boolean to
struct ifobject and set it to true when xdpxceiver is run against a
physical device. In that case the UMEM size needs to be doubled, so
that half of it is used by the Rx thread and the other half by the Tx
thread. For two-step test types, the value of the XSKMAP element under
key 0 has to be updated, as there is now another socket for the second
step.

Also, to avoid race conditions when destroying XSK resources, move this
activity to the main thread, after the spawned Rx and Tx threads have
finished their job. This way it is possible to gracefully remove the
shared umem without introducing synchronization mechanisms.

To run the xsk selftests suite on a physical device, append "-i $IFACE"
when invoking test_xsk.sh; for veth based tests, simply skip it. When
"-i $IFACE" is in place, under the hood test_xsk.sh will use $IFACE for
both interfaces supplied to xdpxceiver, which in turn will interpret
this execution of the test suite as being for a physical device.

Note that currently this makes it possible to test only SKB and DRV
mode (in case the underlying device has native XDP support); ZC testing
support is added in a later patch.

Acked-by: Magnus Karlsson
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
 tools/testing/selftests/bpf/test_xsk.sh  |  52 +++++--
 tools/testing/selftests/bpf/xdpxceiver.c | 189 ++++++++++++++---------
 tools/testing/selftests/bpf/xdpxceiver.h |   1 +
 3 files changed, 156 insertions(+), 86 deletions(-)

diff --git a/tools/testing/selftests/bpf/test_xsk.sh b/tools/testing/selftests/bpf/test_xsk.sh
index 567500299231..19b24cce5414 100755
--- a/tools/testing/selftests/bpf/test_xsk.sh
+++ b/tools/testing/selftests/bpf/test_xsk.sh
@@ -73,14 +73,20 @@
 #
 # Run and dump packet contents:
 #   sudo ./test_xsk.sh -D
+#
+# Run test suite for physical device in loopback mode
+#   sudo ./test_xsk.sh -i IFACE
 
 . xsk_prereqs.sh
 
-while getopts "vD" flag
+ETH=""
+
+while getopts "vDi:" flag
 do
 	case "${flag}" in
 		v) verbose=1;;
 		D) dump_pkts=1;;
+		i) ETH=${OPTARG};;
 	esac
 done
 
@@ -132,18 +138,25 @@ setup_vethPairs() {
 	ip link set ${VETH0} up
 }
 
-validate_root_exec
-validate_veth_support ${VETH0}
-validate_ip_utility
-setup_vethPairs
-
-retval=$?
-if [ $retval -ne 0 ]; then
-	test_status $retval "${TEST_NAME}"
-	cleanup_exit ${VETH0} ${VETH1} ${NS1}
-	exit $retval
+if [ ! -z $ETH ]; then
+	VETH0=${ETH}
+	VETH1=${ETH}
+	NS1=""
+else
+	validate_root_exec
+	validate_veth_support ${VETH0}
+	validate_ip_utility
+	setup_vethPairs
+
+	retval=$?
+	if [ $retval -ne 0 ]; then
+		test_status $retval "${TEST_NAME}"
+		cleanup_exit ${VETH0} ${VETH1} ${NS1}
+		exit $retval
+	fi
 fi
+
 if [[ $verbose -eq 1 ]]; then
 	ARGS+="-v "
 fi
@@ -152,26 +165,33 @@ if [[ $dump_pkts -eq 1 ]]; then
 	ARGS="-D "
 fi
 
+retval=$?
 test_status $retval "${TEST_NAME}"
 
 ## START TESTS
 
 statusList=()
 
-TEST_NAME="XSK_SELFTESTS_SOFTIRQ"
+TEST_NAME="XSK_SELFTESTS_${VETH0}_SOFTIRQ"
 
 execxdpxceiver
 
-cleanup_exit ${VETH0} ${VETH1} ${NS1}
-TEST_NAME="XSK_SELFTESTS_BUSY_POLL"
+if [ -z $ETH ]; then
+	cleanup_exit ${VETH0} ${VETH1} ${NS1}
+fi
+TEST_NAME="XSK_SELFTESTS_${VETH0}_BUSY_POLL"
 busy_poll=1
 
-setup_vethPairs
+if [ -z $ETH ]; then
+	setup_vethPairs
+fi
 execxdpxceiver
 
 ## END TESTS
 
-cleanup_exit ${VETH0} ${VETH1} ${NS1}
+if [ -z $ETH ]; then
+	cleanup_exit ${VETH0} ${VETH1} ${NS1}
+fi
 
 failures=0
 
 echo -e "\nSummary:"
diff --git a/tools/testing/selftests/bpf/xdpxceiver.c b/tools/testing/selftests/bpf/xdpxceiver.c
index 3d0731a80e4a..de4cf0432243 100644
--- a/tools/testing/selftests/bpf/xdpxceiver.c
+++ b/tools/testing/selftests/bpf/xdpxceiver.c
@@ -296,8 +296,8 @@ static void enable_busy_poll(struct xsk_socket_info *xsk)
 		exit_with_error(errno);
 }
 
-static int xsk_configure_socket(struct xsk_socket_info *xsk, struct xsk_umem_info *umem,
-				struct ifobject *ifobject, bool shared)
+static int __xsk_configure_socket(struct xsk_socket_info *xsk, struct xsk_umem_info *umem,
+				  struct ifobject *ifobject, bool shared)
 {
 	struct xsk_socket_config cfg = {};
 	struct xsk_ring_cons *rxr;
@@ -443,6 +443,9 @@ static void __test_spec_init(struct test_spec *test, struct ifobject *ifobj_tx,
 		memset(ifobj->umem, 0, sizeof(*ifobj->umem));
 		ifobj->umem->num_frames = DEFAULT_UMEM_BUFFERS;
 		ifobj->umem->frame_size = XSK_UMEM__DEFAULT_FRAME_SIZE;
+		if (ifobj->shared_umem && ifobj->rx_on)
+			ifobj->umem->base_addr = DEFAULT_UMEM_BUFFERS *
+				XSK_UMEM__DEFAULT_FRAME_SIZE;
 
 		for (j = 0; j < MAX_SOCKETS; j++) {
 			memset(&ifobj->xsk_arr[j], 0, sizeof(ifobj->xsk_arr[j]));
@@ -1101,19 +1104,85 @@ static int validate_tx_invalid_descs(struct ifobject *ifobject)
 	return TEST_PASS;
 }
 
+static void xsk_configure_socket(struct test_spec *test, struct ifobject *ifobject,
+				 struct xsk_umem_info *umem, bool tx)
+{
+	int i, ret;
+
+	for (i = 0; i < test->nb_sockets; i++) {
+		bool shared = (ifobject->shared_umem && tx) ? true : !!i;
+		u32 ctr = 0;
+
+		while (ctr++ < SOCK_RECONF_CTR) {
+			ret = __xsk_configure_socket(&ifobject->xsk_arr[i], umem,
+						     ifobject, shared);
+			if (!ret)
+				break;
+
+			/* Retry if it fails as xsk_socket__create() is asynchronous */
+			if (ctr >= SOCK_RECONF_CTR)
+				exit_with_error(-ret);
+			usleep(USLEEP_MAX);
+		}
+		if (ifobject->busy_poll)
+			enable_busy_poll(&ifobject->xsk_arr[i]);
+	}
+}
+
+static void thread_common_ops_tx(struct test_spec *test, struct ifobject *ifobject)
+{
+	xsk_configure_socket(test, ifobject, test->ifobj_rx->umem, true);
+	ifobject->xsk = &ifobject->xsk_arr[0];
+	ifobject->xsk_map_fd = test->ifobj_rx->xsk_map_fd;
+	memcpy(ifobject->umem, test->ifobj_rx->umem, sizeof(struct xsk_umem_info));
+}
+
+static void xsk_populate_fill_ring(struct xsk_umem_info *umem, struct pkt_stream *pkt_stream)
+{
+	u32 idx = 0, i, buffers_to_fill;
+	int ret;
+
+	if (umem->num_frames < XSK_RING_PROD__DEFAULT_NUM_DESCS)
+		buffers_to_fill = umem->num_frames;
+	else
+		buffers_to_fill = XSK_RING_PROD__DEFAULT_NUM_DESCS;
+
+	ret = xsk_ring_prod__reserve(&umem->fq, buffers_to_fill, &idx);
+	if (ret != buffers_to_fill)
+		exit_with_error(ENOSPC);
+	for (i = 0; i < buffers_to_fill; i++) {
+		u64 addr;
+
+		if (pkt_stream->use_addr_for_fill) {
+			struct pkt *pkt = pkt_stream_get_pkt(pkt_stream, i);
+
+			if (!pkt)
+				break;
+			addr = pkt->addr;
+		} else {
+			addr = i * umem->frame_size;
+		}
+
+		*xsk_ring_prod__fill_addr(&umem->fq, idx++) = addr;
+	}
+	xsk_ring_prod__submit(&umem->fq, buffers_to_fill);
+}
+
 static void thread_common_ops(struct test_spec *test, struct ifobject *ifobject)
 {
 	u64 umem_sz = ifobject->umem->num_frames * ifobject->umem->frame_size;
 	int mmap_flags = MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE;
 	int ret, ifindex;
 	void *bufs;
-	u32 i;
 
 	ifobject->ns_fd = switch_namespace(ifobject->nsname);
 
 	if (ifobject->umem->unaligned_mode)
 		mmap_flags |= MAP_HUGETLB;
 
+	if (ifobject->shared_umem)
+		umem_sz *= 2;
+
 	bufs = mmap(NULL, umem_sz, PROT_READ | PROT_WRITE, mmap_flags, -1, 0);
 	if (bufs == MAP_FAILED)
 		exit_with_error(errno);
@@ -1122,24 +1191,9 @@ static void thread_common_ops(struct test_spec *test, struct ifobject *ifobject)
 	if (ret)
 		exit_with_error(-ret);
 
-	for (i = 0; i < test->nb_sockets; i++) {
-		u32 ctr = 0;
-
-		while (ctr++ < SOCK_RECONF_CTR) {
-			ret = xsk_configure_socket(&ifobject->xsk_arr[i], ifobject->umem,
-						   ifobject, !!i);
-			if (!ret)
-				break;
-
-			/* Retry if it fails as xsk_socket__create() is asynchronous */
-			if (ctr >= SOCK_RECONF_CTR)
-				exit_with_error(-ret);
-			usleep(USLEEP_MAX);
-		}
+	xsk_populate_fill_ring(ifobject->umem, ifobject->pkt_stream);
 
-		if (ifobject->busy_poll)
-			enable_busy_poll(&ifobject->xsk_arr[i]);
-	}
+	xsk_configure_socket(test, ifobject, ifobject->umem, false);
 
 	ifobject->xsk = &ifobject->xsk_arr[0];
 
@@ -1159,22 +1213,18 @@ static void thread_common_ops(struct test_spec *test, struct ifobject *ifobject)
 		exit_with_error(-ret);
 }
 
-static void testapp_cleanup_xsk_res(struct ifobject *ifobj)
-{
-	print_verbose("Destroying socket\n");
-	xsk_socket__delete(ifobj->xsk->xsk);
-	munmap(ifobj->umem->buffer, ifobj->umem->num_frames * ifobj->umem->frame_size);
-	xsk_umem__delete(ifobj->umem->umem);
-}
-
 static void *worker_testapp_validate_tx(void *arg)
 {
 	struct test_spec *test = (struct test_spec *)arg;
 	struct ifobject *ifobject = test->ifobj_tx;
 	int err;
 
-	if (test->current_step == 1)
-		thread_common_ops(test, ifobject);
+	if (test->current_step == 1) {
+		if (!ifobject->shared_umem)
+			thread_common_ops(test, ifobject);
+		else
+			thread_common_ops_tx(test, ifobject);
+	}
 
 	print_verbose("Sending %d packets on interface %s\n", ifobject->pkt_stream->nb_pkts,
 		      ifobject->ifname);
@@ -1185,53 +1235,23 @@ static void *worker_testapp_validate_tx(void *arg)
 	if (err)
 		report_failure(test);
 
-	if (test->total_steps == test->current_step || err)
-		testapp_cleanup_xsk_res(ifobject);
 	pthread_exit(NULL);
 }
 
-static void xsk_populate_fill_ring(struct xsk_umem_info *umem, struct pkt_stream *pkt_stream)
-{
-	u32 idx = 0, i, buffers_to_fill;
-	int ret;
-
-	if (umem->num_frames < XSK_RING_PROD__DEFAULT_NUM_DESCS)
-		buffers_to_fill = umem->num_frames;
-	else
-		buffers_to_fill = XSK_RING_PROD__DEFAULT_NUM_DESCS;
-
-	ret = xsk_ring_prod__reserve(&umem->fq, buffers_to_fill, &idx);
-	if (ret != buffers_to_fill)
-		exit_with_error(ENOSPC);
-	for (i = 0; i < buffers_to_fill; i++) {
-		u64 addr;
-
-		if (pkt_stream->use_addr_for_fill) {
-			struct pkt *pkt = pkt_stream_get_pkt(pkt_stream, i);
-
-			if (!pkt)
-				break;
-			addr = pkt->addr;
-		} else {
-			addr = i * umem->frame_size;
-		}
-
-		*xsk_ring_prod__fill_addr(&umem->fq, idx++) = addr;
-	}
-	xsk_ring_prod__submit(&umem->fq, buffers_to_fill);
-}
-
 static void *worker_testapp_validate_rx(void *arg)
 {
 	struct test_spec *test = (struct test_spec *)arg;
 	struct ifobject *ifobject = test->ifobj_rx;
 	struct pollfd fds = { };
+	int id = 0;
 	int err;
 
-	if (test->current_step == 1)
+	if (test->current_step == 1) {
 		thread_common_ops(test, ifobject);
-
-	xsk_populate_fill_ring(ifobject->umem, ifobject->pkt_stream);
+	} else {
+		bpf_map_delete_elem(ifobject->xsk_map_fd, &id);
+		xsk_socket__update_xskmap(ifobject->xsk->xsk, ifobject->xsk_map_fd);
+	}
 
 	fds.fd = xsk_socket__fd(ifobject->xsk->xsk);
 	fds.events = POLLIN;
@@ -1249,11 +1269,20 @@ static void *worker_testapp_validate_rx(void *arg)
 		pthread_mutex_unlock(&pacing_mutex);
 	}
 
-	if (test->total_steps == test->current_step || err)
-		testapp_cleanup_xsk_res(ifobject);
 	pthread_exit(NULL);
 }
 
+static void testapp_clean_xsk_umem(struct ifobject *ifobj)
+{
+	u64 umem_sz = ifobj->umem->num_frames * ifobj->umem->frame_size;
+
+	if (ifobj->shared_umem)
+		umem_sz *= 2;
+
+	xsk_umem__delete(ifobj->umem->umem);
+	munmap(ifobj->umem->buffer, umem_sz);
+}
+
 static int testapp_validate_traffic(struct test_spec *test)
 {
 	struct ifobject *ifobj_tx = test->ifobj_tx;
@@ -1280,6 +1309,14 @@ static int testapp_validate_traffic(struct test_spec *test)
 	pthread_join(t1, NULL);
 	pthread_join(t0, NULL);
 
+	if (test->total_steps == test->current_step || test->fail) {
+		xsk_socket__delete(ifobj_tx->xsk->xsk);
+		xsk_socket__delete(ifobj_rx->xsk->xsk);
+		testapp_clean_xsk_umem(ifobj_rx);
+		if (!ifobj_tx->shared_umem)
+			testapp_clean_xsk_umem(ifobj_tx);
+	}
+
 	return !!test->fail;
 }
 
@@ -1359,9 +1396,9 @@ static void testapp_headroom(struct test_spec *test)
 static void testapp_stats_rx_dropped(struct test_spec *test)
 {
 	test_spec_set_name(test, "STAT_RX_DROPPED");
+	pkt_stream_replace_half(test, MIN_PKT_SIZE * 4, 0);
 	test->ifobj_rx->umem->frame_headroom = test->ifobj_rx->umem->frame_size -
 		XDP_PACKET_HEADROOM - MIN_PKT_SIZE * 3;
-	pkt_stream_replace_half(test, MIN_PKT_SIZE * 4, 0);
 	pkt_stream_receive_half(test);
 	test->ifobj_rx->validation_func = validate_rx_dropped;
 	testapp_validate_traffic(test);
@@ -1484,6 +1521,11 @@ static void testapp_invalid_desc(struct test_spec *test)
 		pkts[7].valid = false;
 	}
 
+	if (test->ifobj_tx->shared_umem) {
+		pkts[4].addr += UMEM_SIZE;
+		pkts[5].addr += UMEM_SIZE;
+	}
+
 	pkt_stream_generate_custom(test, pkts, ARRAY_SIZE(pkts));
 	testapp_validate_traffic(test);
 	pkt_stream_restore_default(test);
@@ -1624,7 +1666,6 @@ static void ifobject_delete(struct ifobject *ifobj)
 {
 	if (ifobj->ns_fd != -1)
 		close(ifobj->ns_fd);
-	free(ifobj->umem);
 	free(ifobj->xsk_arr);
 	free(ifobj);
 }
@@ -1663,6 +1704,7 @@ int main(int argc, char **argv)
 	int modes = TEST_MODE_SKB + 1;
 	u32 i, j, failed_tests = 0;
 	struct test_spec test;
+	bool shared_umem;
 
 	/* Use libbpf 1.0 API mode */
 	libbpf_set_strict_mode(LIBBPF_STRICT_ALL);
@@ -1677,6 +1719,10 @@ int main(int argc, char **argv)
 	setlocale(LC_ALL, "");
 
 	parse_command_line(ifobj_tx, ifobj_rx, argc, argv);
+	shared_umem = !strcmp(ifobj_tx->ifname, ifobj_rx->ifname);
+
+	ifobj_tx->shared_umem = shared_umem;
+	ifobj_rx->shared_umem = shared_umem;
 
 	if (!validate_interface(ifobj_tx) || !validate_interface(ifobj_rx)) {
 		usage(basename(argv[0]));
@@ -1713,6 +1759,9 @@ int main(int argc, char **argv)
 
 	pkt_stream_delete(tx_pkt_stream_default);
 	pkt_stream_delete(rx_pkt_stream_default);
+	free(ifobj_rx->umem);
+	if (!ifobj_tx->shared_umem)
+		free(ifobj_tx->umem);
 	ifobject_delete(ifobj_tx);
 	ifobject_delete(ifobj_rx);
 
diff --git a/tools/testing/selftests/bpf/xdpxceiver.h b/tools/testing/selftests/bpf/xdpxceiver.h
index ccfc829b2e5e..b7aa6c7cf2be 100644
--- a/tools/testing/selftests/bpf/xdpxceiver.h
+++ b/tools/testing/selftests/bpf/xdpxceiver.h
@@ -149,6 +149,7 @@ struct ifobject {
 	bool busy_poll;
 	bool use_fill_ring;
 	bool release_rx;
+	bool shared_umem;
 	u8 dst_mac[ETH_ALEN];
 	u8 src_mac[ETH_ALEN];
 };
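Concretely, a run against a physical interface could look like this
(the interface name is only an example):

	# veth pair, as before
	sudo ./test_xsk.sh

	# same suite, Tx and Rx on one physical interface/queue in loopback
	sudo ./test_xsk.sh -i ens786f0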
From patchwork Wed Jun 15 16:10:39 2022
From: Maciej Fijalkowski
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com, bjorn@kernel.org, Maciej Fijalkowski
Subject: [PATCH v3 bpf-next 09/11] selftests: xsk: rely on pkts_in_flight in wait_for_tx_completion()
Date: Wed, 15 Jun 2022 18:10:39 +0200
Message-Id: <20220615161041.902916-10-maciej.fijalkowski@intel.com>
In-Reply-To: <20220615161041.902916-1-maciej.fijalkowski@intel.com>

Some drivers that implement AF_XDP Zero Copy support (like ice) take a
lazy approach to cleaning Tx descriptors. For ZC, when a descriptor is
cleaned, it is placed onto the AF_XDP completion queue. This means that
the current implementation of wait_for_tx_completion() in xdpxceiver
can end up in an infinite loop, as some descriptors may never reach the
CQ. Change the function to rely on pkts_in_flight instead.

Acked-by: Magnus Karlsson
Signed-off-by: Maciej Fijalkowski
---
 tools/testing/selftests/bpf/xdpxceiver.c | 3 ++-
 tools/testing/selftests/bpf/xdpxceiver.h | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/bpf/xdpxceiver.c b/tools/testing/selftests/bpf/xdpxceiver.c
index de4cf0432243..13a3b2ac2399 100644
--- a/tools/testing/selftests/bpf/xdpxceiver.c
+++ b/tools/testing/selftests/bpf/xdpxceiver.c
@@ -965,7 +965,7 @@ static int __send_pkts(struct ifobject *ifobject, u32 *pkt_nb)
 
 static void wait_for_tx_completion(struct xsk_socket_info *xsk)
 {
-	while (xsk->outstanding_tx)
+	while (pkts_in_flight)
 		complete_pkts(xsk, BATCH_SIZE);
 }
 
@@ -1269,6 +1269,7 @@ static void *worker_testapp_validate_rx(void *arg)
 		pthread_mutex_unlock(&pacing_mutex);
 	}
 
+	pkts_in_flight = 0;
 	pthread_exit(NULL);
 }
 
diff --git a/tools/testing/selftests/bpf/xdpxceiver.h b/tools/testing/selftests/bpf/xdpxceiver.h
index b7aa6c7cf2be..f364a92675f8 100644
--- a/tools/testing/selftests/bpf/xdpxceiver.h
+++ b/tools/testing/selftests/bpf/xdpxceiver.h
@@ -170,6 +170,6 @@ pthread_barrier_t barr;
 pthread_mutex_t pacing_mutex = PTHREAD_MUTEX_INITIALIZER;
 pthread_cond_t pacing_cond = PTHREAD_COND_INITIALIZER;
 
-int pkts_in_flight;
+volatile int pkts_in_flight;
 
 #endif /* XDPXCEIVER_H */
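The waiting scheme this patch introduces can be reduced to the sketch below. The selftest itself spins on the volatile int above, paced by the mutex; this standalone version swaps in a C11 atomic purely to make the cross-thread visibility requirement explicit. The function names and the stubbed-out completion step are illustrative, not the selftest's own.

#include <stdatomic.h>
#include <stdio.h>

/* Packets sent but not yet accounted for by the Rx thread; plays the
 * role of the selftest's pkts_in_flight, as an atomic rather than a
 * volatile int.
 */
static atomic_int pkts_in_flight;

/* Stand-in for complete_pkts(): drain a batch of Tx completions. */
static void drain_completions(void)
{
	/* peek/release on the completion ring would go here */
}

static void wait_for_tx_completion(void)
{
	/* Wait until the Rx thread has seen every packet. Spinning on a
	 * per-socket outstanding_tx counter instead can loop forever on
	 * drivers that clean Tx descriptors lazily, since some
	 * completions may never show up while we wait.
	 */
	while (atomic_load(&pkts_in_flight))
		drain_completions();
}

int main(void)
{
	atomic_store(&pkts_in_flight, 0); /* the Rx thread zeroes it when done */
	wait_for_tx_completion();
	puts("all packets accounted for");
	return 0;
}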
From patchwork Wed Jun 15 16:10:40 2022
From: Maciej Fijalkowski
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com, bjorn@kernel.org, Maciej Fijalkowski
Subject: [PATCH v3 bpf-next 10/11] selftests: xsk: remove struct xsk_socket_info::outstanding_tx
Date: Wed, 15 Jun 2022 18:10:40 +0200
Message-Id: <20220615161041.902916-11-maciej.fijalkowski@intel.com>
In-Reply-To: <20220615161041.902916-1-maciej.fijalkowski@intel.com>

The previous change turns xsk->outstanding_tx into dead code, so let's
remove it.
Acked-by: Magnus Karlsson
Signed-off-by: Maciej Fijalkowski
---
 tools/testing/selftests/bpf/xdpxceiver.c | 20 +++-----------------
 tools/testing/selftests/bpf/xdpxceiver.h | 1 -
 2 files changed, 3 insertions(+), 18 deletions(-)

diff --git a/tools/testing/selftests/bpf/xdpxceiver.c b/tools/testing/selftests/bpf/xdpxceiver.c
index 13a3b2ac2399..ade9d87e7a7c 100644
--- a/tools/testing/selftests/bpf/xdpxceiver.c
+++ b/tools/testing/selftests/bpf/xdpxceiver.c
@@ -815,7 +815,7 @@ static void kick_rx(struct xsk_socket_info *xsk)
 		exit_with_error(errno);
 }
 
-static int complete_pkts(struct xsk_socket_info *xsk, int batch_size)
+static void complete_pkts(struct xsk_socket_info *xsk, int batch_size)
 {
 	unsigned int rcvd;
 	u32 idx;
@@ -824,20 +824,8 @@ static int complete_pkts(struct xsk_socket_info *xsk, int batch_size)
 		kick_tx(xsk);
 
 	rcvd = xsk_ring_cons__peek(&xsk->umem->cq, batch_size, &idx);
-	if (rcvd) {
-		if (rcvd > xsk->outstanding_tx) {
-			u64 addr = *xsk_ring_cons__comp_addr(&xsk->umem->cq, idx + rcvd - 1);
-
-			ksft_print_msg("[%s] Too many packets completed\n", __func__);
-			ksft_print_msg("Last completion address: %llx\n", addr);
-			return TEST_FAILURE;
-		}
-
+	if (rcvd)
 		xsk_ring_cons__release(&xsk->umem->cq, rcvd);
-		xsk->outstanding_tx -= rcvd;
-	}
-
-	return TEST_PASS;
 }
 
 static int receive_pkts(struct ifobject *ifobj, struct pollfd *fds)
@@ -955,9 +943,7 @@ static int __send_pkts(struct ifobject *ifobject, u32 *pkt_nb)
 		pthread_mutex_unlock(&pacing_mutex);
 
 	xsk_ring_prod__submit(&xsk->tx, i);
-	xsk->outstanding_tx += valid_pkts;
-	if (complete_pkts(xsk, i))
-		return TEST_FAILURE;
+	complete_pkts(xsk, i);
 
 	usleep(10);
 	return TEST_PASS;
diff --git a/tools/testing/selftests/bpf/xdpxceiver.h b/tools/testing/selftests/bpf/xdpxceiver.h
index f364a92675f8..12b792004163 100644
--- a/tools/testing/selftests/bpf/xdpxceiver.h
+++ b/tools/testing/selftests/bpf/xdpxceiver.h
@@ -104,7 +104,6 @@ struct xsk_socket_info {
 	struct xsk_ring_prod tx;
 	struct xsk_umem_info *umem;
 	struct xsk_socket *xsk;
-	u32 outstanding_tx;
 	u32 rxqsize;
 };
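With the counter gone, the completion path collapses to the plain peek/release idiom on the completion ring. A minimal sketch, assuming libbpf's <bpf/xsk.h> helpers and eliding the selftest's need-wakeup kick:

#include <bpf/xsk.h>

/* Consume whatever Tx completions the kernel has produced, up to
 * batch_size; nothing is cross-checked against a counter anymore.
 */
static void drain_cq(struct xsk_ring_cons *cq, __u32 batch_size)
{
	__u32 idx, rcvd;

	rcvd = xsk_ring_cons__peek(cq, batch_size, &idx);
	if (rcvd)
		xsk_ring_cons__release(cq, rcvd);
}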
From patchwork Wed Jun 15 16:10:41 2022
From: Maciej Fijalkowski
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com, bjorn@kernel.org, Maciej Fijalkowski
Subject: [PATCH v3 bpf-next 11/11] selftests: xsk: add support for zero copy testing
Date: Wed, 15 Jun 2022 18:10:41 +0200
Message-Id: <20220615161041.902916-12-maciej.fijalkowski@intel.com>
In-Reply-To: <20220615161041.902916-1-maciej.fijalkowski@intel.com>

Introduce a new mode to xdpxceiver for testing the AF_XDP zero copy
support of the driver that serves the underlying physical device. When
setting up the test suite, determine whether the driver has ZC support
by trying to bind an XSK ZC socket to the interface. If that succeeds,
interpret it as ZC support being in place and run the softirq and busy
poll tests in zero copy mode as well.

Note that the Rx dropped tests are skipped, since the ZC path does not
touch the rx_dropped stat at all.

Acked-by: Magnus Karlsson
Signed-off-by: Maciej Fijalkowski
---
 tools/testing/selftests/bpf/xdpxceiver.c | 76 ++++++++++++++++++++++--
 tools/testing/selftests/bpf/xdpxceiver.h | 2 +
 2 files changed, 74 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/bpf/xdpxceiver.c b/tools/testing/selftests/bpf/xdpxceiver.c
index ade9d87e7a7c..66bfb365b656 100644
--- a/tools/testing/selftests/bpf/xdpxceiver.c
+++ b/tools/testing/selftests/bpf/xdpxceiver.c
@@ -124,9 +124,20 @@ static void __exit_with_error(int error, const char *file, const char *func, int
 }
 
 #define exit_with_error(error) __exit_with_error(error, __FILE__, __func__, __LINE__)
-
-#define mode_string(test) (test)->ifobj_tx->xdp_flags & XDP_FLAGS_SKB_MODE ? "SKB" : "DRV"
"BUSY-POLL " : "" +static char *mode_string(struct test_spec *test) +{ + switch (test->mode) { + case TEST_MODE_SKB: + return "SKB"; + case TEST_MODE_DRV: + return "DRV"; + case TEST_MODE_ZC: + return "ZC"; + default: + return "BOGUS"; + } +} static void report_failure(struct test_spec *test) { @@ -317,6 +328,51 @@ static int __xsk_configure_socket(struct xsk_socket_info *xsk, struct xsk_umem_i return xsk_socket__create(&xsk->xsk, ifobject->ifname, 0, umem->umem, rxr, txr, &cfg); } +static bool ifobj_zc_avail(struct ifobject *ifobject) +{ + size_t umem_sz = DEFAULT_UMEM_BUFFERS * XSK_UMEM__DEFAULT_FRAME_SIZE; + int mmap_flags = MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE; + struct xsk_socket_info *xsk; + struct xsk_umem_info *umem; + bool zc_avail = false; + void *bufs; + int ret; + + bufs = mmap(NULL, umem_sz, PROT_READ | PROT_WRITE, mmap_flags, -1, 0); + if (bufs == MAP_FAILED) + exit_with_error(errno); + + umem = calloc(1, sizeof(struct xsk_umem_info)); + if (!umem) { + munmap(bufs, umem_sz); + exit_with_error(-ENOMEM); + } + umem->frame_size = XSK_UMEM__DEFAULT_FRAME_SIZE; + ret = xsk_configure_umem(umem, bufs, umem_sz); + if (ret) + exit_with_error(-ret); + + xsk = calloc(1, sizeof(struct xsk_socket_info)); + if (!xsk) + goto out; + ifobject->xdp_flags = XDP_FLAGS_UPDATE_IF_NOEXIST; + ifobject->xdp_flags |= XDP_FLAGS_DRV_MODE; + ifobject->bind_flags = XDP_USE_NEED_WAKEUP | XDP_ZEROCOPY; + ifobject->rx_on = true; + xsk->rxqsize = XSK_RING_CONS__DEFAULT_NUM_DESCS; + ret = __xsk_configure_socket(xsk, umem, ifobject, false); + if (!ret) + zc_avail = true; + + xsk_socket__delete(xsk->xsk); + free(xsk); +out: + munmap(umem->buffer, umem_sz); + xsk_umem__delete(umem->umem); + free(umem); + return zc_avail; +} + static struct option long_options[] = { {"interface", required_argument, 0, 'i'}, {"busy-poll", no_argument, 0, 'b'}, @@ -483,9 +539,14 @@ static void test_spec_init(struct test_spec *test, struct ifobject *ifobj_tx, else ifobj->xdp_flags |= XDP_FLAGS_DRV_MODE; - ifobj->bind_flags = XDP_USE_NEED_WAKEUP | XDP_COPY; + ifobj->bind_flags = XDP_USE_NEED_WAKEUP; + if (mode == TEST_MODE_ZC) + ifobj->bind_flags |= XDP_ZEROCOPY; + else + ifobj->bind_flags |= XDP_COPY; } + test->mode = mode; __test_spec_init(test, ifobj_tx, ifobj_rx); } @@ -1543,6 +1604,10 @@ static void run_pkt_test(struct test_spec *test, enum test_mode mode, enum test_ { switch (type) { case TEST_TYPE_STATS_RX_DROPPED: + if (mode == TEST_MODE_ZC) { + ksft_test_result_skip("Can not run RX_DROPPED test for ZC mode\n"); + return; + } testapp_stats_rx_dropped(test); break; case TEST_TYPE_STATS_TX_INVALID_DESCS: @@ -1721,8 +1786,11 @@ int main(int argc, char **argv) init_iface(ifobj_rx, MAC2, MAC1, IP2, IP1, UDP_PORT2, UDP_PORT1, worker_testapp_validate_rx); - if (is_xdp_supported(ifobj_tx)) + if (is_xdp_supported(ifobj_tx)) { modes++; + if (ifobj_zc_avail(ifobj_tx)) + modes++; + } test_spec_init(&test, ifobj_tx, ifobj_rx, 0); tx_pkt_stream_default = pkt_stream_generate(ifobj_tx->umem, DEFAULT_PKT_CNT, PKT_SIZE); diff --git a/tools/testing/selftests/bpf/xdpxceiver.h b/tools/testing/selftests/bpf/xdpxceiver.h index 12b792004163..a86331c6b0c5 100644 --- a/tools/testing/selftests/bpf/xdpxceiver.h +++ b/tools/testing/selftests/bpf/xdpxceiver.h @@ -61,6 +61,7 @@ enum test_mode { TEST_MODE_SKB, TEST_MODE_DRV, + TEST_MODE_ZC, TEST_MODE_MAX }; @@ -162,6 +163,7 @@ struct test_spec { u16 current_step; u16 nb_sockets; bool fail; + enum test_mode mode; char name[MAX_TEST_NAME_SIZE]; };