From patchwork Thu May 18 10:25:00 2023
X-Patchwork-Submitter: Wojciech Drewek
X-Patchwork-Id: 13246418
X-Patchwork-Delegate: kuba@kernel.org
From: Wojciech Drewek <wojciech.drewek@intel.com>
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, alexandr.lobakin@intel.com,
 david.m.ertman@intel.com, michal.swiatkowski@linux.intel.com,
 marcin.szycik@linux.intel.com, pawel.chmielewski@intel.com,
 sridhar.samudrala@intel.com
Subject: [PATCH iwl-next v2 01/10] ice: Minor switchdev fixes
Date: Thu, 18 May 2023 12:25:00 +0200
Message-Id: <20230518102509.20913-2-wojciech.drewek@intel.com>
In-Reply-To: <20230518102509.20913-1-wojciech.drewek@intel.com>
References: <20230518102509.20913-1-wojciech.drewek@intel.com>

Introduce a few fixes that are needed for bridge offload to work
properly.

- Skip adv rule removal in ice_eswitch_disable_switchdev. Advanced
  rules for the ctrl VSI will be removed anyway when the VSI is cleaned
  up; there is no need to do it explicitly.
- Don't allow changing the promisc mode in switchdev mode. When
  switchdev is configured, the PF netdev is set to be the default VSI.
  This is needed for the slow-path to work correctly.
  All the unmatched packets will be directed to the PF netdev. It is
  possible that this setting might be overwritten by ndo_set_rx_mode.
  Prevent this by checking if switchdev is enabled in ice_set_rx_mode.
- Disable VLAN pruning for the uplink VSI. In switchdev mode, the
  uplink VSI is configured to be the default VSI, which means it will
  receive all unmatched packets. In order to receive VLAN packets as
  well, we need to disable VLAN pruning. This is done by the
  dis_rx_filtering VLAN op.
- There is a possibility that ice_eswitch_port_start_xmit might be
  called while some resources are still not allocated, which might
  cause a NULL pointer dereference. Fix this by checking whether the
  switchdev configuration has finished.

Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com>
---
v2: enclose bitops into separate set of braces,
    move ice_is_switchdev_running check to ice_set_rx_mode
    from ice_vsi_sync_fltr
---
 drivers/net/ethernet/intel/ice/ice_eswitch.c | 14 +++++++++++++-
 drivers/net/ethernet/intel/ice/ice_main.c    |  4 ++--
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c
index ad0a007b7398..bfd003135fc8 100644
--- a/drivers/net/ethernet/intel/ice/ice_eswitch.c
+++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c
@@ -103,6 +103,10 @@ static int ice_eswitch_setup_env(struct ice_pf *pf)
 		rule_added = true;
 	}
 
+	vlan_ops = ice_get_compat_vsi_vlan_ops(uplink_vsi);
+	if (vlan_ops->dis_rx_filtering(uplink_vsi))
+		goto err_dis_rx;
+
 	if (ice_vsi_update_security(uplink_vsi, ice_vsi_ctx_set_allow_override))
 		goto err_override_uplink;
 
@@ -114,6 +118,8 @@ static int ice_eswitch_setup_env(struct ice_pf *pf)
 err_override_control:
 	ice_vsi_update_security(uplink_vsi, ice_vsi_ctx_clear_allow_override);
 err_override_uplink:
+	vlan_ops->ena_rx_filtering(uplink_vsi);
+err_dis_rx:
 	if (rule_added)
 		ice_clear_dflt_vsi(uplink_vsi);
 err_def_rx:
@@ -331,6 +337,9 @@ ice_eswitch_port_start_xmit(struct sk_buff *skb, struct net_device *netdev)
 	np = netdev_priv(netdev);
 	vsi = np->vsi;
 
+	if (!vsi || !ice_is_switchdev_running(vsi->back))
+		return NETDEV_TX_BUSY;
+
 	if (ice_is_reset_in_progress(vsi->back->state) ||
 	    test_bit(ICE_VF_DIS, vsi->back->state))
 		return NETDEV_TX_BUSY;
 
@@ -378,9 +387,13 @@ static void ice_eswitch_release_env(struct ice_pf *pf)
 {
 	struct ice_vsi *uplink_vsi = pf->switchdev.uplink_vsi;
 	struct ice_vsi *ctrl_vsi = pf->switchdev.control_vsi;
+	struct ice_vsi_vlan_ops *vlan_ops;
+
+	vlan_ops = ice_get_compat_vsi_vlan_ops(uplink_vsi);
 
 	ice_vsi_update_security(ctrl_vsi, ice_vsi_ctx_clear_allow_override);
 	ice_vsi_update_security(uplink_vsi, ice_vsi_ctx_clear_allow_override);
+	vlan_ops->ena_rx_filtering(uplink_vsi);
 	ice_clear_dflt_vsi(uplink_vsi);
 	ice_fltr_add_mac_and_broadcast(uplink_vsi,
 				       uplink_vsi->port_info->mac.perm_addr,
@@ -503,7 +516,6 @@ static void ice_eswitch_disable_switchdev(struct ice_pf *pf)
 
 	ice_eswitch_napi_disable(pf);
 	ice_eswitch_release_env(pf);
-	ice_rem_adv_rule_for_vsi(&pf->hw, ctrl_vsi->idx);
 	ice_eswitch_release_reprs(pf, ctrl_vsi);
 	ice_vsi_release(ctrl_vsi);
 	ice_repr_rem_from_all_vfs(pf);
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index b0d1e6116eb9..80b2b4d39278 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -385,7 +385,7 @@ static int ice_vsi_sync_fltr(struct ice_vsi *vsi)
 	}
 	err = 0;
 	/* check for changes in promiscuous modes */
-	if (changed_flags & IFF_ALLMULTI) {
+	if ((changed_flags & IFF_ALLMULTI)) {
 		if (vsi->current_netdev_flags &
		    IFF_ALLMULTI) {
			err = ice_set_promisc(vsi, ICE_MCAST_PROMISC_BITS);
			if (err) {
@@ -5767,7 +5767,7 @@ static void ice_set_rx_mode(struct net_device *netdev)
 	struct ice_netdev_priv *np = netdev_priv(netdev);
 	struct ice_vsi *vsi = np->vsi;
 
-	if (!vsi)
+	if (!vsi || ice_is_switchdev_running(vsi->back))
 		return;
 
 	/* Set the flags to synchronize filters

From patchwork Thu May 18 10:25:01 2023
X-Patchwork-Submitter: Wojciech Drewek
X-Patchwork-Id: 13246419
X-Patchwork-Delegate: kuba@kernel.org
From: Wojciech Drewek <wojciech.drewek@intel.com>
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, alexandr.lobakin@intel.com,
 david.m.ertman@intel.com, michal.swiatkowski@linux.intel.com,
 marcin.szycik@linux.intel.com, pawel.chmielewski@intel.com,
 sridhar.samudrala@intel.com
Subject: [PATCH iwl-next v2 02/10] ice: Unset src prune on uplink VSI
Date: Thu, 18 May 2023 12:25:01 +0200
Message-Id: <20230518102509.20913-3-wojciech.drewek@intel.com>
In-Reply-To: <20230518102509.20913-1-wojciech.drewek@intel.com>
References: <20230518102509.20913-1-wojciech.drewek@intel.com>

In switchdev mode, the uplink VSI is supposed to receive all packets that
were not matched by existing filters. If the ICE_AQ_VSI_SW_FLAG_LOCAL_LB
bit is unset and we have a filter associated with the uplink VSI which
matches on dst_mac equal to MAC1, then packets with src_mac equal to
MAC1 will be pruned from reaching the uplink VSI. Fix this by updating
the uplink VSI with the ICE_AQ_VSI_SW_FLAG_LOCAL_LB bit set when
configuring switchdev mode.

Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com>
---
v2: fix @ctx declaration in ice_vsi_update_local_lb
---
 drivers/net/ethernet/intel/ice/ice_eswitch.c |  6 +++++
 drivers/net/ethernet/intel/ice/ice_lib.c     | 25 ++++++++++++++++++++
 drivers/net/ethernet/intel/ice/ice_lib.h     |  1 +
 3 files changed, 32 insertions(+)

diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c
index bfd003135fc8..4fe235da1182 100644
--- a/drivers/net/ethernet/intel/ice/ice_eswitch.c
+++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c
@@ -113,8 +113,13 @@ static int ice_eswitch_setup_env(struct ice_pf *pf)
 	if (ice_vsi_update_security(ctrl_vsi, ice_vsi_ctx_set_allow_override))
 		goto err_override_control;
 
+	if (ice_vsi_update_local_lb(uplink_vsi, true))
+		goto err_override_local_lb;
+
 	return 0;
 
+err_override_local_lb:
+	ice_vsi_update_security(ctrl_vsi, ice_vsi_ctx_clear_allow_override);
 err_override_control:
 	ice_vsi_update_security(uplink_vsi, ice_vsi_ctx_clear_allow_override);
 err_override_uplink:
@@ -391,6 +396,7 @@ static void ice_eswitch_release_env(struct ice_pf *pf)
 
 	vlan_ops = ice_get_compat_vsi_vlan_ops(uplink_vsi);
 
+	ice_vsi_update_local_lb(uplink_vsi, false);
 	ice_vsi_update_security(ctrl_vsi, ice_vsi_ctx_clear_allow_override);
 	ice_vsi_update_security(uplink_vsi, ice_vsi_ctx_clear_allow_override);
 	vlan_ops->ena_rx_filtering(uplink_vsi);
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 5ddb95d1073a..e8142bea2eb2 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -4117,3 +4117,28 @@ void ice_vsi_ctx_clear_allow_override(struct ice_vsi_ctx *ctx)
 {
 	ctx->info.sec_flags &= ~ICE_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD;
 }
+
+/**
+ * ice_vsi_update_local_lb - update sw block in VSI with local loopback bit
+ * @vsi: pointer to VSI structure
+ * @set: set or unset the bit
+ */
+int
+ice_vsi_update_local_lb(struct ice_vsi *vsi, bool set)
+{
+	struct ice_vsi_ctx ctx = {
+		.info = vsi->info,
+	};
+
+	ctx.info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_SW_VALID);
+	if (set)
+		ctx.info.sw_flags |= ICE_AQ_VSI_SW_FLAG_LOCAL_LB;
+	else
+		ctx.info.sw_flags &= ~ICE_AQ_VSI_SW_FLAG_LOCAL_LB;
+
+	if (ice_update_vsi(&vsi->back->hw, vsi->idx, &ctx, NULL))
+		return -ENODEV;
+
+	vsi->info = ctx.info;
+	return 0;
+}
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h
index e985766e6bb5..1628385a9672 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_lib.h
@@ -157,6 +157,7 @@ void ice_vsi_ctx_clear_antispoof(struct ice_vsi_ctx *ctx);
 void ice_vsi_ctx_set_allow_override(struct ice_vsi_ctx *ctx);
 void ice_vsi_ctx_clear_allow_override(struct ice_vsi_ctx *ctx);
+int ice_vsi_update_local_lb(struct ice_vsi *vsi, bool set);
 int ice_vsi_add_vlan_zero(struct ice_vsi *vsi);
 int ice_vsi_del_vlan_zero(struct ice_vsi *vsi);
 bool ice_vsi_has_non_zero_vlans(struct ice_vsi *vsi);
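The new helper follows the driver's usual read-modify-write idiom for VSI
context updates: copy the cached context, mark only the section being
changed as valid, push it to firmware, and refresh the cache on success.
A minimal sketch of that idiom, generalized to any sw_flags bit
(ice_vsi_update_sw_flag is a hypothetical name; the series itself only
adds ice_vsi_update_local_lb shown above):

/* Sketch: toggle a single bit in the VSI switch section and push it to
 * FW. Hypothetical generalization of ice_vsi_update_local_lb() above.
 */
static int ice_vsi_update_sw_flag(struct ice_vsi *vsi, u8 flag, bool set)
{
	struct ice_vsi_ctx ctx = {
		.info = vsi->info,	/* start from the cached context */
	};

	/* Only the switch section is marked valid, so firmware leaves
	 * every other section of the VSI context untouched.
	 */
	ctx.info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_SW_VALID);
	if (set)
		ctx.info.sw_flags |= flag;
	else
		ctx.info.sw_flags &= ~flag;

	if (ice_update_vsi(&vsi->back->hw, vsi->idx, &ctx, NULL))
		return -ENODEV;

	/* Keep the cached copy in sync with what firmware accepted */
	vsi->info = ctx.info;
	return 0;
}

Calling this with ICE_AQ_VSI_SW_FLAG_LOCAL_LB and set=true on switchdev
setup, and set=false on release, matches the pairing in the hunks above.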
From patchwork Thu May 18 10:25:02 2023
X-Patchwork-Submitter: Wojciech Drewek
X-Patchwork-Id: 13246422
X-Patchwork-Delegate: kuba@kernel.org
From: Wojciech Drewek <wojciech.drewek@intel.com>
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, alexandr.lobakin@intel.com,
 david.m.ertman@intel.com, michal.swiatkowski@linux.intel.com,
 marcin.szycik@linux.intel.com, pawel.chmielewski@intel.com,
 sridhar.samudrala@intel.com
Subject: [PATCH iwl-next v2 03/10] ice: Implement basic eswitch bridge setup
Date: Thu, 18 May 2023 12:25:02 +0200
Message-Id: <20230518102509.20913-4-wojciech.drewek@intel.com>
In-Reply-To: <20230518102509.20913-1-wojciech.drewek@intel.com>
References: <20230518102509.20913-1-wojciech.drewek@intel.com>

With this patch, the ice driver is able to track whether the port
representors or the uplink port were added to a Linux bridge in
switchdev mode. Listen for NETDEV_CHANGEUPPER events in order to detect
this.

The ice_esw_br data structure reflects the Linux bridge and stores all
the ports of the bridge (ice_esw_br_port) in an xarray. It is created
when the first port is added to the bridge and freed once the last port
is removed. Note that only one bridge is supported per eswitch.

A bridge port (ice_esw_br_port) can be either a VF port representor
port or the uplink port (ice_esw_br_port_type).
In both cases the bridge port holds a reference to the VSI: the VF's
VSI in the case of the PR, and the uplink VSI in the case of the
uplink. The VSI's index is used as the index into the xarray in which
the ports are stored.

Add a check which prevents configuring switchdev mode if the uplink is
already added to any bridge. This is needed because we need to listen
for NETDEV_CHANGEUPPER events to record if the uplink was added to the
bridge. The netdevice notifier is registered after the eswitch mode is
changed to switchdev.

Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com>
---
v2: fix structure holes, wrapping improvements
---
 drivers/net/ethernet/intel/ice/Makefile       |   2 +-
 drivers/net/ethernet/intel/ice/ice.h          |   4 +-
 drivers/net/ethernet/intel/ice/ice_eswitch.c  |  23 +-
 .../net/ethernet/intel/ice/ice_eswitch_br.c   | 381 ++++++++++++++++++
 .../net/ethernet/intel/ice/ice_eswitch_br.h   |  42 ++
 drivers/net/ethernet/intel/ice/ice_main.c     |   2 +-
 drivers/net/ethernet/intel/ice/ice_repr.c     |   2 +-
 drivers/net/ethernet/intel/ice/ice_repr.h     |   3 +-
 8 files changed, 450 insertions(+), 9 deletions(-)
 create mode 100644 drivers/net/ethernet/intel/ice/ice_eswitch_br.c
 create mode 100644 drivers/net/ethernet/intel/ice/ice_eswitch_br.h

diff --git a/drivers/net/ethernet/intel/ice/Makefile b/drivers/net/ethernet/intel/ice/Makefile
index 817977e3039d..960277d78e09 100644
--- a/drivers/net/ethernet/intel/ice/Makefile
+++ b/drivers/net/ethernet/intel/ice/Makefile
@@ -47,5 +47,5 @@ ice-$(CONFIG_PTP_1588_CLOCK) += ice_ptp.o ice_ptp_hw.o
 ice-$(CONFIG_DCB) += ice_dcb.o ice_dcb_nl.o ice_dcb_lib.o
 ice-$(CONFIG_RFS_ACCEL) += ice_arfs.o
 ice-$(CONFIG_XDP_SOCKETS) += ice_xsk.o
-ice-$(CONFIG_ICE_SWITCHDEV) += ice_eswitch.o
+ice-$(CONFIG_ICE_SWITCHDEV) += ice_eswitch.o ice_eswitch_br.o
 ice-$(CONFIG_GNSS) += ice_gnss.o
diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index b4bca1d964a9..8876ed8ba268 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -511,6 +511,7 @@ enum ice_pf_flags {
 struct ice_switchdev_info {
 	struct ice_vsi *control_vsi;
 	struct ice_vsi *uplink_vsi;
+	struct ice_esw_br_offloads *br_offloads;
 	bool is_running;
 };
 
@@ -619,6 +620,7 @@ struct ice_pf {
 	struct ice_lag *lag; /* Link Aggregation information */
 
 	struct ice_switchdev_info switchdev;
+	struct ice_esw_br_port *br_port;
 
 #define ICE_INVALID_AGG_NODE_ID 0
 #define ICE_PF_AGG_NODE_ID_START 1
@@ -846,7 +848,7 @@ static inline bool ice_is_adq_active(struct ice_pf *pf)
 	return false;
 }
 
-bool netif_is_ice(struct net_device *dev);
+bool netif_is_ice(const struct net_device *dev);
 int ice_vsi_setup_tx_rings(struct ice_vsi *vsi);
 int ice_vsi_setup_rx_rings(struct ice_vsi *vsi);
 int ice_vsi_open_ctrl(struct ice_vsi *vsi);
diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c
index 4fe235da1182..c2e3289a0bb4 100644
--- a/drivers/net/ethernet/intel/ice/ice_eswitch.c
+++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c
@@ -4,6 +4,7 @@
 #include "ice.h"
 #include "ice_lib.h"
 #include "ice_eswitch.h"
+#include "ice_eswitch_br.h"
 #include "ice_fltr.h"
 #include "ice_repr.h"
 #include "ice_devlink.h"
@@ -474,16 +475,24 @@ static void ice_eswitch_napi_disable(struct ice_pf *pf)
  */
 static int ice_eswitch_enable_switchdev(struct ice_pf *pf)
 {
-	struct ice_vsi *ctrl_vsi;
+	struct ice_vsi *ctrl_vsi, *uplink_vsi;
+
+	uplink_vsi = ice_get_main_vsi(pf);
+	if (!uplink_vsi)
+		return -ENODEV;
+
+	if (netif_is_any_bridge_port(uplink_vsi->netdev)) {
+		dev_err(ice_pf_to_dev(pf),
+			"Uplink port cannot be a bridge port\n");
return -EINVAL; + } pf->switchdev.control_vsi = ice_eswitch_vsi_setup(pf, pf->hw.port_info); if (!pf->switchdev.control_vsi) return -ENODEV; ctrl_vsi = pf->switchdev.control_vsi; - pf->switchdev.uplink_vsi = ice_get_main_vsi(pf); - if (!pf->switchdev.uplink_vsi) - goto err_vsi; + pf->switchdev.uplink_vsi = uplink_vsi; if (ice_eswitch_setup_env(pf)) goto err_vsi; @@ -499,10 +508,15 @@ static int ice_eswitch_enable_switchdev(struct ice_pf *pf) if (ice_vsi_open(ctrl_vsi)) goto err_setup_reprs; + if (ice_eswitch_br_offloads_init(pf)) + goto err_br_offloads; + ice_eswitch_napi_enable(pf); return 0; +err_br_offloads: + ice_vsi_close(ctrl_vsi); err_setup_reprs: ice_repr_rem_from_all_vfs(pf); err_repr_add: @@ -521,6 +535,7 @@ static void ice_eswitch_disable_switchdev(struct ice_pf *pf) struct ice_vsi *ctrl_vsi = pf->switchdev.control_vsi; ice_eswitch_napi_disable(pf); + ice_eswitch_br_offloads_deinit(pf); ice_eswitch_release_env(pf); ice_eswitch_release_reprs(pf, ctrl_vsi); ice_vsi_release(ctrl_vsi); diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c new file mode 100644 index 000000000000..1e5fd9c5a8b6 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c @@ -0,0 +1,381 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2023, Intel Corporation. */ + +#include "ice.h" +#include "ice_eswitch_br.h" +#include "ice_repr.h" + +static bool ice_eswitch_br_is_dev_valid(const struct net_device *dev) +{ + /* Accept only PF netdev and PRs */ + return ice_is_port_repr_netdev(dev) || netif_is_ice(dev); +} + +static struct ice_esw_br_port * +ice_eswitch_br_netdev_to_port(struct net_device *dev) +{ + if (ice_is_port_repr_netdev(dev)) { + struct ice_repr *repr = ice_netdev_to_repr(dev); + + return repr->br_port; + } else if (netif_is_ice(dev)) { + struct ice_pf *pf = ice_netdev_to_pf(dev); + + return pf->br_port; + } + + return NULL; +} + +static void +ice_eswitch_br_port_deinit(struct ice_esw_br *bridge, + struct ice_esw_br_port *br_port) +{ + struct ice_vsi *vsi = br_port->vsi; + + if (br_port->type == ICE_ESWITCH_BR_UPLINK_PORT && vsi->back) + vsi->back->br_port = NULL; + else if (vsi->vf) + vsi->vf->repr->br_port = NULL; + + xa_erase(&bridge->ports, br_port->vsi_idx); + kfree(br_port); +} + +static struct ice_esw_br_port * +ice_eswitch_br_port_init(struct ice_esw_br *bridge) +{ + struct ice_esw_br_port *br_port; + + br_port = kzalloc(sizeof(*br_port), GFP_KERNEL); + if (!br_port) + return ERR_PTR(-ENOMEM); + + br_port->bridge = bridge; + + return br_port; +} + +static int +ice_eswitch_br_vf_repr_port_init(struct ice_esw_br *bridge, + struct ice_repr *repr) +{ + struct ice_esw_br_port *br_port; + int err; + + br_port = ice_eswitch_br_port_init(bridge); + if (IS_ERR(br_port)) + return PTR_ERR(br_port); + + br_port->vsi = repr->src_vsi; + br_port->vsi_idx = br_port->vsi->idx; + br_port->type = ICE_ESWITCH_BR_VF_REPR_PORT; + repr->br_port = br_port; + + err = xa_insert(&bridge->ports, br_port->vsi_idx, br_port, GFP_KERNEL); + if (err) { + ice_eswitch_br_port_deinit(bridge, br_port); + return err; + } + + return 0; +} + +static int +ice_eswitch_br_uplink_port_init(struct ice_esw_br *bridge, struct ice_pf *pf) +{ + struct ice_vsi *vsi = pf->switchdev.uplink_vsi; + struct ice_esw_br_port *br_port; + int err; + + br_port = ice_eswitch_br_port_init(bridge); + if (IS_ERR(br_port)) + return PTR_ERR(br_port); + + br_port->vsi = vsi; + br_port->vsi_idx = br_port->vsi->idx; + br_port->type = ICE_ESWITCH_BR_UPLINK_PORT; + pf->br_port = br_port; 
+ + err = xa_insert(&bridge->ports, br_port->vsi_idx, br_port, GFP_KERNEL); + if (err) { + ice_eswitch_br_port_deinit(bridge, br_port); + return err; + } + + return 0; +} + +static void +ice_eswitch_br_ports_flush(struct ice_esw_br *bridge) +{ + struct ice_esw_br_port *port; + unsigned long i; + + xa_for_each(&bridge->ports, i, port) + ice_eswitch_br_port_deinit(bridge, port); +} + +static void +ice_eswitch_br_deinit(struct ice_esw_br_offloads *br_offloads, + struct ice_esw_br *bridge) +{ + if (!bridge) + return; + + /* Cleanup all the ports that were added asynchronously + * through NETDEV_CHANGEUPPER event. + */ + ice_eswitch_br_ports_flush(bridge); + WARN_ON(!xa_empty(&bridge->ports)); + xa_destroy(&bridge->ports); + br_offloads->bridge = NULL; + kfree(bridge); +} + +static struct ice_esw_br * +ice_eswitch_br_init(struct ice_esw_br_offloads *br_offloads, int ifindex) +{ + struct ice_esw_br *bridge; + + bridge = kzalloc(sizeof(*bridge), GFP_KERNEL); + if (!bridge) + return ERR_PTR(-ENOMEM); + + bridge->br_offloads = br_offloads; + bridge->ifindex = ifindex; + xa_init(&bridge->ports); + br_offloads->bridge = bridge; + + return bridge; +} + +static struct ice_esw_br * +ice_eswitch_br_get(struct ice_esw_br_offloads *br_offloads, int ifindex, + struct netlink_ext_ack *extack) +{ + struct ice_esw_br *bridge = br_offloads->bridge; + + if (bridge) { + if (bridge->ifindex != ifindex) { + NL_SET_ERR_MSG_MOD(extack, + "Only one bridge is supported per eswitch"); + return ERR_PTR(-EOPNOTSUPP); + } + return bridge; + } + + /* Create the bridge if it doesn't exist yet */ + bridge = ice_eswitch_br_init(br_offloads, ifindex); + if (IS_ERR(bridge)) + NL_SET_ERR_MSG_MOD(extack, "Failed to init the bridge"); + + return bridge; +} + +static void +ice_eswitch_br_verify_deinit(struct ice_esw_br_offloads *br_offloads, + struct ice_esw_br *bridge) +{ + /* Remove the bridge if it exists and there are no ports left */ + if (!bridge || !xa_empty(&bridge->ports)) + return; + + ice_eswitch_br_deinit(br_offloads, bridge); +} + +static int +ice_eswitch_br_port_unlink(struct ice_esw_br_offloads *br_offloads, + struct net_device *dev, int ifindex, + struct netlink_ext_ack *extack) +{ + struct ice_esw_br_port *br_port = ice_eswitch_br_netdev_to_port(dev); + + if (!br_port) { + NL_SET_ERR_MSG_MOD(extack, + "Port representor is not attached to any bridge"); + return -EINVAL; + } + + if (br_port->bridge->ifindex != ifindex) { + NL_SET_ERR_MSG_MOD(extack, + "Port representor is attached to another bridge"); + return -EINVAL; + } + + ice_eswitch_br_port_deinit(br_port->bridge, br_port); + ice_eswitch_br_verify_deinit(br_offloads, br_port->bridge); + + return 0; +} + +static int +ice_eswitch_br_port_link(struct ice_esw_br_offloads *br_offloads, + struct net_device *dev, int ifindex, + struct netlink_ext_ack *extack) +{ + struct ice_esw_br *bridge; + int err; + + if (ice_eswitch_br_netdev_to_port(dev)) { + NL_SET_ERR_MSG_MOD(extack, + "Port is already attached to the bridge"); + return -EINVAL; + } + + bridge = ice_eswitch_br_get(br_offloads, ifindex, extack); + if (IS_ERR(bridge)) + return PTR_ERR(bridge); + + if (ice_is_port_repr_netdev(dev)) { + struct ice_repr *repr = ice_netdev_to_repr(dev); + + err = ice_eswitch_br_vf_repr_port_init(bridge, repr); + } else { + struct ice_pf *pf = ice_netdev_to_pf(dev); + + err = ice_eswitch_br_uplink_port_init(bridge, pf); + } + if (err) { + NL_SET_ERR_MSG_MOD(extack, "Failed to init bridge port"); + goto err_port_init; + } + + return 0; + +err_port_init: + 
ice_eswitch_br_verify_deinit(br_offloads, bridge); + return err; +} + +static int +ice_eswitch_br_port_changeupper(struct notifier_block *nb, void *ptr) +{ + struct net_device *dev = netdev_notifier_info_to_dev(ptr); + struct netdev_notifier_changeupper_info *info = ptr; + struct ice_esw_br_offloads *br_offloads; + struct netlink_ext_ack *extack; + struct net_device *upper; + + br_offloads = ice_nb_to_br_offloads(nb, netdev_nb); + + if (!ice_eswitch_br_is_dev_valid(dev)) + return 0; + + upper = info->upper_dev; + if (!netif_is_bridge_master(upper)) + return 0; + + extack = netdev_notifier_info_to_extack(&info->info); + + if (info->linking) + return ice_eswitch_br_port_link(br_offloads, dev, + upper->ifindex, extack); + else + return ice_eswitch_br_port_unlink(br_offloads, dev, + upper->ifindex, extack); +} + +static int +ice_eswitch_br_port_event(struct notifier_block *nb, + unsigned long event, void *ptr) +{ + int err = 0; + + switch (event) { + case NETDEV_CHANGEUPPER: + err = ice_eswitch_br_port_changeupper(nb, ptr); + break; + } + + return notifier_from_errno(err); +} + +static void +ice_eswitch_br_offloads_dealloc(struct ice_pf *pf) +{ + struct ice_esw_br_offloads *br_offloads = pf->switchdev.br_offloads; + + ASSERT_RTNL(); + + if (!br_offloads) + return; + + ice_eswitch_br_deinit(br_offloads, br_offloads->bridge); + + pf->switchdev.br_offloads = NULL; + kfree(br_offloads); +} + +static struct ice_esw_br_offloads * +ice_eswitch_br_offloads_alloc(struct ice_pf *pf) +{ + struct ice_esw_br_offloads *br_offloads; + + ASSERT_RTNL(); + + if (pf->switchdev.br_offloads) + return ERR_PTR(-EEXIST); + + br_offloads = kzalloc(sizeof(*br_offloads), GFP_KERNEL); + if (!br_offloads) + return ERR_PTR(-ENOMEM); + + pf->switchdev.br_offloads = br_offloads; + br_offloads->pf = pf; + + return br_offloads; +} + +void +ice_eswitch_br_offloads_deinit(struct ice_pf *pf) +{ + struct ice_esw_br_offloads *br_offloads; + + br_offloads = pf->switchdev.br_offloads; + if (!br_offloads) + return; + + unregister_netdevice_notifier(&br_offloads->netdev_nb); + /* Although notifier block is unregistered just before, + * so we don't get any new events, some events might be + * already in progress. Hold the rtnl lock and wait for + * them to finished. + */ + rtnl_lock(); + ice_eswitch_br_offloads_dealloc(pf); + rtnl_unlock(); +} + +int +ice_eswitch_br_offloads_init(struct ice_pf *pf) +{ + struct ice_esw_br_offloads *br_offloads; + struct device *dev = ice_pf_to_dev(pf); + int err; + + rtnl_lock(); + br_offloads = ice_eswitch_br_offloads_alloc(pf); + rtnl_unlock(); + if (IS_ERR(br_offloads)) { + dev_err(dev, "Failed to init eswitch bridge\n"); + return PTR_ERR(br_offloads); + } + + br_offloads->netdev_nb.notifier_call = ice_eswitch_br_port_event; + err = register_netdevice_notifier(&br_offloads->netdev_nb); + if (err) { + dev_err(dev, + "Failed to register bridge port event notifier\n"); + goto err_reg_netdev_nb; + } + + return 0; + +err_reg_netdev_nb: + rtnl_lock(); + ice_eswitch_br_offloads_dealloc(pf); + rtnl_unlock(); + + return err; +} diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch_br.h b/drivers/net/ethernet/intel/ice/ice_eswitch_br.h new file mode 100644 index 000000000000..3ad28a17298f --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_eswitch_br.h @@ -0,0 +1,42 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2023, Intel Corporation. 
 */
+
+#ifndef _ICE_ESWITCH_BR_H_
+#define _ICE_ESWITCH_BR_H_
+
+enum ice_esw_br_port_type {
+	ICE_ESWITCH_BR_UPLINK_PORT = 0,
+	ICE_ESWITCH_BR_VF_REPR_PORT = 1,
+};
+
+struct ice_esw_br_port {
+	struct ice_esw_br *bridge;
+	struct ice_vsi *vsi;
+	enum ice_esw_br_port_type type;
+	u16 vsi_idx;
+};
+
+struct ice_esw_br {
+	struct ice_esw_br_offloads *br_offloads;
+	struct xarray ports;
+
+	int ifindex;
+};
+
+struct ice_esw_br_offloads {
+	struct ice_pf *pf;
+	struct ice_esw_br *bridge;
+	struct notifier_block netdev_nb;
+};
+
+#define ice_nb_to_br_offloads(nb, nb_name) \
+	container_of(nb, \
+		     struct ice_esw_br_offloads, \
+		     nb_name)
+
+void
+ice_eswitch_br_offloads_deinit(struct ice_pf *pf);
+int
+ice_eswitch_br_offloads_init(struct ice_pf *pf);
+
+#endif /* _ICE_ESWITCH_BR_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 80b2b4d39278..7aa0bec05186 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -80,7 +80,7 @@ ice_indr_setup_tc_cb(struct net_device *netdev, struct Qdisc *sch, void *data,
 		     void (*cleanup)(struct flow_block_cb *block_cb));
 
-bool netif_is_ice(struct net_device *dev)
+bool netif_is_ice(const struct net_device *dev)
 {
 	return dev && (dev->netdev_ops == &ice_netdev_ops);
 }
diff --git a/drivers/net/ethernet/intel/ice/ice_repr.c b/drivers/net/ethernet/intel/ice/ice_repr.c
index e30e12321abd..c686ac0935eb 100644
--- a/drivers/net/ethernet/intel/ice/ice_repr.c
+++ b/drivers/net/ethernet/intel/ice/ice_repr.c
@@ -254,7 +254,7 @@ static const struct net_device_ops ice_repr_netdev_ops = {
  * ice_is_port_repr_netdev - Check if a given netdevice is a port representor netdev
  * @netdev: pointer to netdev
  */
-bool ice_is_port_repr_netdev(struct net_device *netdev)
+bool ice_is_port_repr_netdev(const struct net_device *netdev)
 {
 	return netdev && (netdev->netdev_ops == &ice_repr_netdev_ops);
 }
diff --git a/drivers/net/ethernet/intel/ice/ice_repr.h b/drivers/net/ethernet/intel/ice/ice_repr.h
index 9c2a6f496b3b..e1ee2d2c1d2d 100644
--- a/drivers/net/ethernet/intel/ice/ice_repr.h
+++ b/drivers/net/ethernet/intel/ice/ice_repr.h
@@ -12,6 +12,7 @@ struct ice_repr {
 	struct ice_q_vector *q_vector;
 	struct net_device *netdev;
 	struct metadata_dst *dst;
+	struct ice_esw_br_port *br_port;
 #ifdef CONFIG_ICE_SWITCHDEV
 	/* info about slow path rule */
 	struct ice_rule_query_data sp_rule;
@@ -27,5 +28,5 @@ void ice_repr_stop_tx_queues(struct ice_repr *repr);
 void ice_repr_set_traffic_vsi(struct ice_repr *repr, struct ice_vsi *vsi);
 
 struct ice_repr *ice_netdev_to_repr(struct net_device *netdev);
-bool ice_is_port_repr_netdev(struct net_device *netdev);
+bool ice_is_port_repr_netdev(const struct net_device *netdev);
 #endif

From patchwork Thu May 18 10:25:03 2023
X-Patchwork-Submitter: Wojciech Drewek
X-Patchwork-Id: 13246421
X-Patchwork-Delegate: kuba@kernel.org
From: Wojciech Drewek <wojciech.drewek@intel.com>
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, alexandr.lobakin@intel.com,
 david.m.ertman@intel.com, michal.swiatkowski@linux.intel.com,
 marcin.szycik@linux.intel.com, pawel.chmielewski@intel.com,
 sridhar.samudrala@intel.com
Subject: [PATCH iwl-next v2 04/10] ice: Switchdev FDB events support
Date: Thu, 18 May 2023 12:25:03 +0200
Message-Id: <20230518102509.20913-5-wojciech.drewek@intel.com>
In-Reply-To: <20230518102509.20913-1-wojciech.drewek@intel.com>
References: <20230518102509.20913-1-wojciech.drewek@intel.com>

Listen for SWITCHDEV_FDB_{ADD|DEL}_TO_DEVICE events while in switchdev
mode. Accept these events on both uplink and VF PR ports. Add HW rules
from a newly created workqueue. FDB entries are stored in an rhashtable
for lookup when removing an entry, and in a list for cleanup purposes.

The direction of the HW rule depends on the type of the port on which
the FDB event was received:
- ICE_ESWITCH_BR_UPLINK_PORT: TX rule that forwards the packet to the
  LAN (egress).
- ICE_ESWITCH_BR_VF_REPR_PORT: RX rule that forwards the packet to the
  VF associated with the port representor.

In both cases the rule matches on the dst MAC address. All the FDB
entries are stored in the bridge structure. When a port is removed, all
the FDB entries associated with this port are removed as well. This is
achieved thanks to the reference to the port that each FDB entry holds.

In the fwd rule we use only one lookup type (MAC address), but the
lkups_cnt variable is already introduced because we will have more
lookups in the subsequent patches.
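Because switchdev FDB notifiers run in atomic context while rule
programming sleeps, the handler only snapshots the event and defers the
HW work. A condensed sketch of that handoff, using the structures and
helpers this patch adds (the sketch's own function name is
hypothetical):

/* Condensed sketch of the notifier-to-workqueue handoff implemented
 * below: copy the FDB info under GFP_ATOMIC, pin the netdev, and let
 * the ordered workqueue program the HW rules in sleepable context.
 */
static int
ice_eswitch_br_fdb_defer(struct ice_esw_br_offloads *br_offloads,
			 struct net_device *dev, unsigned long event,
			 struct switchdev_notifier_fdb_info *fdb_info)
{
	struct ice_esw_br_port *br_port = ice_eswitch_br_netdev_to_port(dev);
	struct ice_esw_br_fdb_work *work;

	work = ice_eswitch_br_fdb_work_alloc(fdb_info, br_port, dev, event);
	if (IS_ERR(work))
		return notifier_from_errno(PTR_ERR(work));

	dev_hold(dev);	/* released by the work handler via dev_put() */
	queue_work(br_offloads->wq, &work->work);

	return NOTIFY_DONE;
}

Using an ordered workqueue also guarantees that an ADD and a later DEL
for the same entry are processed in the order they were received.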
Signed-off-by: Wojciech Drewek --- v2: declare-time initialization, code style nitpicks, use typeof instead of full type name in the container_of macro, use PTR_ERR_OR_ZERO in ice_eswitch_br_flow_create --- .../net/ethernet/intel/ice/ice_eswitch_br.c | 448 +++++++++++++++++- .../net/ethernet/intel/ice/ice_eswitch_br.h | 46 ++ 2 files changed, 493 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c index 1e5fd9c5a8b6..0fc4807e1d0c 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c +++ b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c @@ -4,6 +4,14 @@ #include "ice.h" #include "ice_eswitch_br.h" #include "ice_repr.h" +#include "ice_switch.h" + +static const struct rhashtable_params ice_fdb_ht_params = { + .key_offset = offsetof(struct ice_esw_br_fdb_entry, data), + .key_len = sizeof(struct ice_esw_br_fdb_data), + .head_offset = offsetof(struct ice_esw_br_fdb_entry, ht_node), + .automatic_shrinking = true, +}; static bool ice_eswitch_br_is_dev_valid(const struct net_device *dev) { @@ -27,15 +35,421 @@ ice_eswitch_br_netdev_to_port(struct net_device *dev) return NULL; } +static void +ice_eswitch_br_ingress_rule_setup(struct ice_adv_lkup_elem *list, + struct ice_adv_rule_info *rule_info, + const unsigned char *mac, + u8 pf_id, u16 vf_vsi_idx) +{ + list[0].type = ICE_MAC_OFOS; + ether_addr_copy(list[0].h_u.eth_hdr.dst_addr, mac); + eth_broadcast_addr(list[0].m_u.eth_hdr.dst_addr); + + rule_info->sw_act.vsi_handle = vf_vsi_idx; + rule_info->sw_act.flag |= ICE_FLTR_RX; + rule_info->sw_act.src = pf_id; + rule_info->priority = 5; +} + +static void +ice_eswitch_br_egress_rule_setup(struct ice_adv_lkup_elem *list, + struct ice_adv_rule_info *rule_info, + const unsigned char *mac, + u16 pf_vsi_idx) +{ + list[0].type = ICE_MAC_OFOS; + ether_addr_copy(list[0].h_u.eth_hdr.dst_addr, mac); + eth_broadcast_addr(list[0].m_u.eth_hdr.dst_addr); + + rule_info->sw_act.vsi_handle = pf_vsi_idx; + rule_info->sw_act.flag |= ICE_FLTR_TX; + rule_info->flags_info.act = ICE_SINGLE_ACT_LAN_ENABLE; + rule_info->flags_info.act_valid = true; + rule_info->priority = 5; +} + +static int +ice_eswitch_br_rule_delete(struct ice_hw *hw, struct ice_rule_query_data *rule) +{ + int err; + + if (!rule) + return -EINVAL; + + err = ice_rem_adv_rule_by_id(hw, rule); + kfree(rule); + + return err; +} + +static struct ice_rule_query_data * +ice_eswitch_br_fwd_rule_create(struct ice_hw *hw, int vsi_idx, int port_type, + const unsigned char *mac) +{ + struct ice_adv_rule_info rule_info = { 0 }; + struct ice_rule_query_data *rule; + struct ice_adv_lkup_elem *list; + u16 lkups_cnt = 1; + int err; + + rule = kzalloc(sizeof(*rule), GFP_KERNEL); + if (!rule) + return ERR_PTR(-ENOMEM); + + list = kcalloc(lkups_cnt, sizeof(*list), GFP_ATOMIC); + if (!list) { + err = -ENOMEM; + goto err_list_alloc; + } + + switch (port_type) { + case ICE_ESWITCH_BR_UPLINK_PORT: + ice_eswitch_br_egress_rule_setup(list, &rule_info, mac, + vsi_idx); + break; + case ICE_ESWITCH_BR_VF_REPR_PORT: + ice_eswitch_br_ingress_rule_setup(list, &rule_info, mac, + hw->pf_id, vsi_idx); + break; + default: + err = -EINVAL; + goto err_add_rule; + } + + rule_info.sw_act.fltr_act = ICE_FWD_TO_VSI; + + err = ice_add_adv_rule(hw, list, lkups_cnt, &rule_info, rule); + if (err) + goto err_add_rule; + + kfree(list); + + return rule; + +err_add_rule: + kfree(list); +err_list_alloc: + kfree(rule); + + return ERR_PTR(err); +} + +static struct ice_esw_br_flow * +ice_eswitch_br_flow_create(struct 
device *dev, struct ice_hw *hw, int vsi_idx, + int port_type, const unsigned char *mac) +{ + struct ice_rule_query_data *fwd_rule; + struct ice_esw_br_flow *flow; + int err; + + flow = kzalloc(sizeof(*flow), GFP_KERNEL); + if (!flow) + return ERR_PTR(-ENOMEM); + + fwd_rule = ice_eswitch_br_fwd_rule_create(hw, vsi_idx, port_type, mac); + err = PTR_ERR_OR_ZERO(fwd_rule); + if (err) { + dev_err(dev, "Failed to create eswitch bridge %sgress forward rule, err: %d\n", + port_type == ICE_ESWITCH_BR_UPLINK_PORT ? "e" : "in", + err); + goto err_fwd_rule; + } + + flow->fwd_rule = fwd_rule; + + return flow; + +err_fwd_rule: + kfree(flow); + + return ERR_PTR(err); +} + +static struct ice_esw_br_fdb_entry * +ice_eswitch_br_fdb_find(struct ice_esw_br *bridge, const unsigned char *mac, + u16 vid) +{ + struct ice_esw_br_fdb_data data = { + .vid = vid, + }; + + ether_addr_copy(data.addr, mac); + return rhashtable_lookup_fast(&bridge->fdb_ht, &data, + ice_fdb_ht_params); +} + +static void +ice_eswitch_br_flow_delete(struct ice_pf *pf, struct ice_esw_br_flow *flow) +{ + struct device *dev = ice_pf_to_dev(pf); + int err; + + err = ice_eswitch_br_rule_delete(&pf->hw, flow->fwd_rule); + if (err) + dev_err(dev, "Failed to delete FDB forward rule, err: %d\n", + err); + + kfree(flow); +} + +static void +ice_eswitch_br_fdb_entry_delete(struct ice_esw_br *bridge, + struct ice_esw_br_fdb_entry *fdb_entry) +{ + struct ice_pf *pf = bridge->br_offloads->pf; + + rhashtable_remove_fast(&bridge->fdb_ht, &fdb_entry->ht_node, + ice_fdb_ht_params); + list_del(&fdb_entry->list); + + ice_eswitch_br_flow_delete(pf, fdb_entry->flow); + + kfree(fdb_entry); +} + +static void +ice_eswitch_br_fdb_offload_notify(struct net_device *dev, + const unsigned char *mac, u16 vid, + unsigned long val) +{ + struct switchdev_notifier_fdb_info fdb_info = { + .addr = mac, + .vid = vid, + .offloaded = true, + }; + + call_switchdev_notifiers(val, dev, &fdb_info.info, NULL); +} + +static void +ice_eswitch_br_fdb_entry_notify_and_cleanup(struct ice_esw_br *bridge, + struct ice_esw_br_fdb_entry *entry) +{ + if (!(entry->flags & ICE_ESWITCH_BR_FDB_ADDED_BY_USER)) + ice_eswitch_br_fdb_offload_notify(entry->dev, entry->data.addr, + entry->data.vid, + SWITCHDEV_FDB_DEL_TO_BRIDGE); + ice_eswitch_br_fdb_entry_delete(bridge, entry); +} + +static void +ice_eswitch_br_fdb_entry_find_and_delete(struct ice_esw_br *bridge, + const unsigned char *mac, u16 vid) +{ + struct ice_pf *pf = bridge->br_offloads->pf; + struct ice_esw_br_fdb_entry *fdb_entry; + struct device *dev = ice_pf_to_dev(pf); + + fdb_entry = ice_eswitch_br_fdb_find(bridge, mac, vid); + if (!fdb_entry) { + dev_err(dev, "FDB entry with mac: %pM and vid: %u not found\n", + mac, vid); + return; + } + + ice_eswitch_br_fdb_entry_notify_and_cleanup(bridge, fdb_entry); +} + +static void +ice_eswitch_br_fdb_entry_create(struct net_device *netdev, + struct ice_esw_br_port *br_port, + bool added_by_user, + const unsigned char *mac, u16 vid) +{ + struct ice_esw_br *bridge = br_port->bridge; + struct ice_pf *pf = bridge->br_offloads->pf; + struct device *dev = ice_pf_to_dev(pf); + struct ice_esw_br_fdb_entry *fdb_entry; + struct ice_esw_br_flow *flow; + struct ice_hw *hw = &pf->hw; + unsigned long event; + int err; + + fdb_entry = ice_eswitch_br_fdb_find(bridge, mac, vid); + if (fdb_entry) + ice_eswitch_br_fdb_entry_notify_and_cleanup(bridge, fdb_entry); + + fdb_entry = kzalloc(sizeof(*fdb_entry), GFP_KERNEL); + if (!fdb_entry) { + err = -ENOMEM; + goto err_exit; + } + + flow = ice_eswitch_br_flow_create(dev, 
hw, br_port->vsi_idx, + br_port->type, mac); + if (IS_ERR(flow)) { + err = PTR_ERR(flow); + goto err_add_flow; + } + + ether_addr_copy(fdb_entry->data.addr, mac); + fdb_entry->data.vid = vid; + fdb_entry->br_port = br_port; + fdb_entry->flow = flow; + fdb_entry->dev = netdev; + event = SWITCHDEV_FDB_ADD_TO_BRIDGE; + + if (added_by_user) { + fdb_entry->flags |= ICE_ESWITCH_BR_FDB_ADDED_BY_USER; + event = SWITCHDEV_FDB_OFFLOADED; + } + + err = rhashtable_insert_fast(&bridge->fdb_ht, &fdb_entry->ht_node, + ice_fdb_ht_params); + if (err) + goto err_fdb_insert; + + list_add(&fdb_entry->list, &bridge->fdb_list); + + ice_eswitch_br_fdb_offload_notify(netdev, mac, vid, event); + + return; + +err_fdb_insert: + ice_eswitch_br_flow_delete(pf, flow); +err_add_flow: + kfree(fdb_entry); +err_exit: + dev_err(dev, "Failed to create fdb entry, err: %d\n", err); +} + +static void +ice_eswitch_br_fdb_work_dealloc(struct ice_esw_br_fdb_work *fdb_work) +{ + kfree(fdb_work->fdb_info.addr); + kfree(fdb_work); +} + +static void +ice_eswitch_br_fdb_event_work(struct work_struct *work) +{ + struct ice_esw_br_fdb_work *fdb_work = ice_work_to_fdb_work(work); + bool added_by_user = fdb_work->fdb_info.added_by_user; + struct ice_esw_br_port *br_port = fdb_work->br_port; + const unsigned char *mac = fdb_work->fdb_info.addr; + u16 vid = fdb_work->fdb_info.vid; + + rtnl_lock(); + + if (!br_port || !br_port->bridge) + goto err_exit; + + switch (fdb_work->event) { + case SWITCHDEV_FDB_ADD_TO_DEVICE: + ice_eswitch_br_fdb_entry_create(fdb_work->dev, br_port, + added_by_user, mac, vid); + break; + case SWITCHDEV_FDB_DEL_TO_DEVICE: + ice_eswitch_br_fdb_entry_find_and_delete(br_port->bridge, + mac, vid); + break; + default: + goto err_exit; + } + +err_exit: + rtnl_unlock(); + dev_put(fdb_work->dev); + ice_eswitch_br_fdb_work_dealloc(fdb_work); +} + +static struct ice_esw_br_fdb_work * +ice_eswitch_br_fdb_work_alloc(struct switchdev_notifier_fdb_info *fdb_info, + struct ice_esw_br_port *br_port, + struct net_device *dev, + unsigned long event) +{ + struct ice_esw_br_fdb_work *work; + unsigned char *mac; + + work = kzalloc(sizeof(*work), GFP_ATOMIC); + if (!work) + return ERR_PTR(-ENOMEM); + + INIT_WORK(&work->work, ice_eswitch_br_fdb_event_work); + memcpy(&work->fdb_info, fdb_info, sizeof(work->fdb_info)); + + mac = kzalloc(ETH_ALEN, GFP_ATOMIC); + if (!mac) { + kfree(work); + return ERR_PTR(-ENOMEM); + } + + ether_addr_copy(mac, fdb_info->addr); + work->fdb_info.addr = mac; + work->br_port = br_port; + work->event = event; + work->dev = dev; + + return work; +} + +static int +ice_eswitch_br_switchdev_event(struct notifier_block *nb, + unsigned long event, void *ptr) +{ + struct net_device *dev = switchdev_notifier_info_to_dev(ptr); + struct switchdev_notifier_fdb_info *fdb_info; + struct switchdev_notifier_info *info = ptr; + struct ice_esw_br_offloads *br_offloads; + struct ice_esw_br_fdb_work *work; + struct ice_esw_br_port *br_port; + struct netlink_ext_ack *extack; + struct net_device *upper; + + br_offloads = ice_nb_to_br_offloads(nb, switchdev_nb); + extack = switchdev_notifier_info_to_extack(ptr); + + upper = netdev_master_upper_dev_get_rcu(dev); + if (!upper) + return NOTIFY_DONE; + + if (!netif_is_bridge_master(upper)) + return NOTIFY_DONE; + + if (!ice_eswitch_br_is_dev_valid(dev)) + return NOTIFY_DONE; + + br_port = ice_eswitch_br_netdev_to_port(dev); + if (!br_port) + return NOTIFY_DONE; + + switch (event) { + case SWITCHDEV_FDB_ADD_TO_DEVICE: + case SWITCHDEV_FDB_DEL_TO_DEVICE: + fdb_info = container_of(info, 
typeof(*fdb_info), info); + + work = ice_eswitch_br_fdb_work_alloc(fdb_info, br_port, dev, + event); + if (IS_ERR(work)) { + NL_SET_ERR_MSG_MOD(extack, "Failed to init switchdev fdb work"); + return notifier_from_errno(PTR_ERR(work)); + } + dev_hold(dev); + + queue_work(br_offloads->wq, &work->work); + break; + default: + break; + } + return NOTIFY_DONE; +} + static void ice_eswitch_br_port_deinit(struct ice_esw_br *bridge, struct ice_esw_br_port *br_port) { + struct ice_esw_br_fdb_entry *fdb_entry, *tmp; struct ice_vsi *vsi = br_port->vsi; + list_for_each_entry_safe(fdb_entry, tmp, &bridge->fdb_list, list) { + if (br_port == fdb_entry->br_port) + ice_eswitch_br_fdb_entry_delete(bridge, fdb_entry); + } + if (br_port->type == ICE_ESWITCH_BR_UPLINK_PORT && vsi->back) vsi->back->br_port = NULL; - else if (vsi->vf) + else if (vsi->vf && vsi->vf->repr) vsi->vf->repr->br_port = NULL; xa_erase(&bridge->ports, br_port->vsi_idx); @@ -129,6 +543,8 @@ ice_eswitch_br_deinit(struct ice_esw_br_offloads *br_offloads, ice_eswitch_br_ports_flush(bridge); WARN_ON(!xa_empty(&bridge->ports)); xa_destroy(&bridge->ports); + rhashtable_destroy(&bridge->fdb_ht); + br_offloads->bridge = NULL; kfree(bridge); } @@ -137,11 +553,19 @@ static struct ice_esw_br * ice_eswitch_br_init(struct ice_esw_br_offloads *br_offloads, int ifindex) { struct ice_esw_br *bridge; + int err; bridge = kzalloc(sizeof(*bridge), GFP_KERNEL); if (!bridge) return ERR_PTR(-ENOMEM); + err = rhashtable_init(&bridge->fdb_ht, &ice_fdb_ht_params); + if (err) { + kfree(bridge); + return ERR_PTR(err); + } + + INIT_LIST_HEAD(&bridge->fdb_list); bridge->br_offloads = br_offloads; bridge->ifindex = ifindex; xa_init(&bridge->ports); @@ -337,6 +761,8 @@ ice_eswitch_br_offloads_deinit(struct ice_pf *pf) return; unregister_netdevice_notifier(&br_offloads->netdev_nb); + unregister_switchdev_notifier(&br_offloads->switchdev_nb); + destroy_workqueue(br_offloads->wq); /* Although notifier block is unregistered just before, * so we don't get any new events, some events might be * already in progress. 
Hold the rtnl lock and wait for
@@ -362,6 +788,22 @@ ice_eswitch_br_offloads_init(struct ice_pf *pf)
 		return PTR_ERR(br_offloads);
 	}
 
+	br_offloads->wq = alloc_ordered_workqueue("ice_bridge_wq", 0);
+	if (!br_offloads->wq) {
+		err = -ENOMEM;
+		dev_err(dev, "Failed to allocate bridge workqueue\n");
+		goto err_alloc_wq;
+	}
+
+	br_offloads->switchdev_nb.notifier_call =
+		ice_eswitch_br_switchdev_event;
+	err = register_switchdev_notifier(&br_offloads->switchdev_nb);
+	if (err) {
+		dev_err(dev,
+			"Failed to register switchdev notifier\n");
+		goto err_reg_switchdev_nb;
+	}
+
 	br_offloads->netdev_nb.notifier_call = ice_eswitch_br_port_event;
 	err = register_netdevice_notifier(&br_offloads->netdev_nb);
 	if (err) {
@@ -373,6 +815,10 @@ ice_eswitch_br_offloads_init(struct ice_pf *pf)
 	return 0;
 
 err_reg_netdev_nb:
+	unregister_switchdev_notifier(&br_offloads->switchdev_nb);
+err_reg_switchdev_nb:
+	destroy_workqueue(br_offloads->wq);
+err_alloc_wq:
 	rtnl_lock();
 	ice_eswitch_br_offloads_dealloc(pf);
 	rtnl_unlock();
diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch_br.h b/drivers/net/ethernet/intel/ice/ice_eswitch_br.h
index 3ad28a17298f..6fcacf545b98 100644
--- a/drivers/net/ethernet/intel/ice/ice_eswitch_br.h
+++ b/drivers/net/ethernet/intel/ice/ice_eswitch_br.h
@@ -4,6 +4,33 @@
 #ifndef _ICE_ESWITCH_BR_H_
 #define _ICE_ESWITCH_BR_H_
 
+#include <linux/rhashtable.h>
+
+struct ice_esw_br_fdb_data {
+	unsigned char addr[ETH_ALEN];
+	u16 vid;
+};
+
+struct ice_esw_br_flow {
+	struct ice_rule_query_data *fwd_rule;
+};
+
+enum {
+	ICE_ESWITCH_BR_FDB_ADDED_BY_USER = BIT(0),
+};
+
+struct ice_esw_br_fdb_entry {
+	struct ice_esw_br_fdb_data data;
+	struct rhash_head ht_node;
+	struct list_head list;
+
+	int flags;
+
+	struct net_device *dev;
+	struct ice_esw_br_port *br_port;
+	struct ice_esw_br_flow *flow;
+};
+
 enum ice_esw_br_port_type {
 	ICE_ESWITCH_BR_UPLINK_PORT = 0,
 	ICE_ESWITCH_BR_VF_REPR_PORT = 1,
@@ -20,6 +47,9 @@ struct ice_esw_br {
 	struct ice_esw_br_offloads *br_offloads;
 	struct xarray ports;
 
+	struct rhashtable fdb_ht;
+	struct list_head fdb_list;
+
 	int ifindex;
 };
 
@@ -27,6 +57,17 @@ struct ice_esw_br_offloads {
 	struct ice_pf *pf;
 	struct ice_esw_br *bridge;
 	struct notifier_block netdev_nb;
+	struct notifier_block switchdev_nb;
+
+	struct workqueue_struct *wq;
+};
+
+struct ice_esw_br_fdb_work {
+	struct work_struct work;
+	struct switchdev_notifier_fdb_info fdb_info;
+	struct ice_esw_br_port *br_port;
+	struct net_device *dev;
+	unsigned long event;
 };
 
 #define ice_nb_to_br_offloads(nb, nb_name) \
@@ -34,6 +75,11 @@ struct ice_esw_br_offloads {
 		     struct ice_esw_br_offloads, \
 		     nb_name)
 
+#define ice_work_to_fdb_work(w) \
+	container_of(w, \
+		     struct ice_esw_br_fdb_work, \
+		     work)
+
 void
 ice_eswitch_br_offloads_deinit(struct ice_pf *pf);
 int

From patchwork Thu May 18 10:25:04 2023
X-Patchwork-Submitter: Wojciech Drewek
X-Patchwork-Id: 13246424
X-Patchwork-Delegate: kuba@kernel.org
From: Wojciech Drewek <wojciech.drewek@intel.com>
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, alexandr.lobakin@intel.com,
 david.m.ertman@intel.com, michal.swiatkowski@linux.intel.com,
 marcin.szycik@linux.intel.com, pawel.chmielewski@intel.com,
 sridhar.samudrala@intel.com
Subject: [PATCH iwl-next v2 05/10] ice: Add guard rule when creating FDB in
 switchdev
Date: Thu, 18 May 2023 12:25:04 +0200
Message-Id: <20230518102509.20913-6-wojciech.drewek@intel.com>
In-Reply-To: <20230518102509.20913-1-wojciech.drewek@intel.com>
References: <20230518102509.20913-1-wojciech.drewek@intel.com>

From: Marcin Szycik <marcin.szycik@linux.intel.com>

Introduce a new "guard" rule upon FDB entry creation. It matches on
src_mac, has the valid bit unset, allow_pass_l2 set, and has a nop
action. The previously introduced "forward" rule matches on dst_mac,
has the valid bit set, need_pass_l2 set, and has a forward action. With
these rules, a packet will be offloaded only if an FDB entry exists in
both directions (RX and TX).

Let's assume the link partner sends a packet to VF1: src_mac = LP_MAC,
dst_mac = VF1_MAC. The bridge adds an FDB entry and two rules are
created:
1. Guard rule matching on src_mac == LP_MAC
2. Forward rule matching on dst_mac == LP_MAC

Now VF1 responds with src_mac = VF1_MAC, dst_mac = LP_MAC. Before this
change, only one rule with dst_mac == LP_MAC would have existed, and
the packet would have been offloaded, meaning the bridge wouldn't add
an FDB entry in the opposite direction. Now, the forward rule matches
(dst_mac == LP_MAC), but it has need_pass_l2 set and there is no guard
rule with src_mac == VF1_MAC, so the packet goes through the slow path
and the bridge adds the FDB entry. Two rules are created:
1. Guard rule matching on src_mac == VF1_MAC
2. Forward rule matching on dst_mac == VF1_MAC

Further packets in both directions will be offloaded.
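Expressed in terms of the rule-info fields used in the diff below, the
two rules of a pair differ only in the matched address, the pass-L2
flag, and the action. A side-by-side sketch (a simplification: the real
code also sets the RX/TX direction flags and the MAC lookup list per
port type, as the diff shows):

/* Sketch of the rule pair created per FDB entry; direction flags and
 * lookup-list setup are omitted for brevity.
 */
static void ice_esw_br_rule_pair_sketch(struct ice_adv_rule_info *guard,
					struct ice_adv_rule_info *fwd,
					u16 vsi_idx)
{
	/* Guard: matches on src_mac, performs no forwarding itself; it
	 * only marks the source direction as "known" (allow_pass_l2).
	 */
	guard->allow_pass_l2 = true;
	guard->sw_act.fltr_act = ICE_NOP;
	guard->sw_act.vsi_handle = vsi_idx;
	guard->priority = 5;

	/* Forward: matches on dst_mac and forwards to the VSI, but only
	 * takes effect when a matching guard rule exists (need_pass_l2);
	 * otherwise the packet falls back to the slow path.
	 */
	fwd->need_pass_l2 = true;
	fwd->sw_act.fltr_act = ICE_FWD_TO_VSI;
	fwd->sw_act.vsi_handle = vsi_idx;
	fwd->priority = 5;
}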
The same sequence holds in the opposite direction (i.e. when VF1 is
the first to send a packet out).

Signed-off-by: Marcin Szycik
Signed-off-by: Wojciech Drewek
---
v2: init err with -ENOMEM in ice_eswitch_br_guard_rule_create,
    use FIELD_PREP in ice_add_adv_rule, use @content var in
    ice_add_sw_recipe
---
 .../net/ethernet/intel/ice/ice_eswitch_br.c | 62 +++++++++++-
 .../net/ethernet/intel/ice/ice_eswitch_br.h |  1 +
 drivers/net/ethernet/intel/ice/ice_switch.c | 95 ++++++++++++-------
 drivers/net/ethernet/intel/ice/ice_switch.h |  5 +
 drivers/net/ethernet/intel/ice/ice_type.h   |  1 +
 5 files changed, 129 insertions(+), 35 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
index 0fc4807e1d0c..9941b0a5aaba 100644
--- a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
+++ b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
@@ -116,6 +116,8 @@ ice_eswitch_br_fwd_rule_create(struct ice_hw *hw, int vsi_idx, int port_type,
         goto err_add_rule;
     }
 
+    rule_info.need_pass_l2 = true;
+
     rule_info.sw_act.fltr_act = ICE_FWD_TO_VSI;
 
     err = ice_add_adv_rule(hw, list, lkups_cnt, &rule_info, rule);
@@ -134,11 +136,52 @@ ice_eswitch_br_fwd_rule_create(struct ice_hw *hw, int vsi_idx, int port_type,
     return ERR_PTR(err);
 }
 
+static struct ice_rule_query_data *
+ice_eswitch_br_guard_rule_create(struct ice_hw *hw, u16 vsi_idx,
+                 const unsigned char *mac)
+{
+    struct ice_adv_rule_info rule_info = { 0 };
+    struct ice_rule_query_data *rule;
+    struct ice_adv_lkup_elem *list;
+    const u16 lkups_cnt = 1;
+    int err = -ENOMEM;
+
+    rule = kzalloc(sizeof(*rule), GFP_KERNEL);
+    if (!rule)
+        goto err_exit;
+
+    list = kcalloc(lkups_cnt, sizeof(*list), GFP_ATOMIC);
+    if (!list)
+        goto err_list_alloc;
+
+    list[0].type = ICE_MAC_OFOS;
+    ether_addr_copy(list[0].h_u.eth_hdr.src_addr, mac);
+    eth_broadcast_addr(list[0].m_u.eth_hdr.src_addr);
+
+    rule_info.allow_pass_l2 = true;
+    rule_info.sw_act.vsi_handle = vsi_idx;
+    rule_info.sw_act.fltr_act = ICE_NOP;
+    rule_info.priority = 5;
+
+    err = ice_add_adv_rule(hw, list, lkups_cnt, &rule_info, rule);
+    if (err)
+        goto err_add_rule;
+
+    return rule;
+
+err_add_rule:
+    kfree(list);
+err_list_alloc:
+    kfree(rule);
+err_exit:
+    return ERR_PTR(err);
+}
+
 static struct ice_esw_br_flow *
 ice_eswitch_br_flow_create(struct device *dev, struct ice_hw *hw, int vsi_idx,
                int port_type, const unsigned char *mac)
 {
-    struct ice_rule_query_data *fwd_rule;
+    struct ice_rule_query_data *fwd_rule, *guard_rule;
     struct ice_esw_br_flow *flow;
     int err;
 
@@ -155,10 +198,22 @@ ice_eswitch_br_flow_create(struct device *dev, struct ice_hw *hw, int vsi_idx,
         goto err_fwd_rule;
     }
 
+    guard_rule = ice_eswitch_br_guard_rule_create(hw, vsi_idx, mac);
+    err = PTR_ERR_OR_ZERO(guard_rule);
+    if (err) {
+        dev_err(dev, "Failed to create eswitch bridge %sgress guard rule, err: %d\n",
+            port_type == ICE_ESWITCH_BR_UPLINK_PORT ?
+            "e" : "in",
+            err);
+        goto err_guard_rule;
+    }
+
     flow->fwd_rule = fwd_rule;
+    flow->guard_rule = guard_rule;
 
     return flow;
 
+err_guard_rule:
+    ice_eswitch_br_rule_delete(hw, fwd_rule);
 err_fwd_rule:
     kfree(flow);
 
@@ -189,6 +244,11 @@ ice_eswitch_br_flow_delete(struct ice_pf *pf, struct ice_esw_br_flow *flow)
         dev_err(dev, "Failed to delete FDB forward rule, err: %d\n",
             err);
 
+    err = ice_eswitch_br_rule_delete(&pf->hw, flow->guard_rule);
+    if (err)
+        dev_err(dev, "Failed to delete FDB guard rule, err: %d\n",
+            err);
+
     kfree(flow);
 }
 
diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch_br.h b/drivers/net/ethernet/intel/ice/ice_eswitch_br.h
index 6fcacf545b98..7afd00cdea9a 100644
--- a/drivers/net/ethernet/intel/ice/ice_eswitch_br.h
+++ b/drivers/net/ethernet/intel/ice/ice_eswitch_br.h
@@ -13,6 +13,7 @@ struct ice_esw_br_fdb_data {
 
 struct ice_esw_br_flow {
     struct ice_rule_query_data *fwd_rule;
+    struct ice_rule_query_data *guard_rule;
 };
 
 enum {
diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
index 2ea9e1ae5517..6b435e36b2fc 100644
--- a/drivers/net/ethernet/intel/ice/ice_switch.c
+++ b/drivers/net/ethernet/intel/ice/ice_switch.c
@@ -2277,6 +2277,10 @@ ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid,
     /* Propagate some data to the recipe database */
     recps[idx].is_root = !!is_root;
     recps[idx].priority = root_bufs.content.act_ctrl_fwd_priority;
+    recps[idx].need_pass_l2 = root_bufs.content.act_ctrl &
+                  ICE_AQ_RECIPE_ACT_NEED_PASS_L2;
+    recps[idx].allow_pass_l2 = root_bufs.content.act_ctrl &
+                   ICE_AQ_RECIPE_ACT_ALLOW_PASS_L2;
     bitmap_zero(recps[idx].res_idxs, ICE_MAX_FV_WORDS);
     if (root_bufs.content.result_indx & ICE_AQ_RECIPE_RESULT_EN) {
         recps[idx].chain_idx = root_bufs.content.result_indx &
@@ -4624,7 +4628,7 @@ static struct ice_protocol_entry ice_prot_id_tbl[ICE_PROTOCOL_LAST] = {
  */
 static u16
 ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts,
-          enum ice_sw_tunnel_type tun_type)
+          const struct ice_adv_rule_info *rinfo)
 {
     bool refresh_required = true;
     struct ice_sw_recipe *recp;
@@ -4685,9 +4689,12 @@ ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts,
         }
         /* If for "i"th recipe the found was never set to false
          * then it means we found our match
-         * Also tun type of recipe needs to be checked
+         * Also tun type and *_pass_l2 of recipe needs to be
+         * checked
          */
-        if (found && recp[i].tun_type == tun_type)
+        if (found && recp[i].tun_type == rinfo->tun_type &&
+            recp[i].need_pass_l2 == rinfo->need_pass_l2 &&
+            recp[i].allow_pass_l2 == rinfo->allow_pass_l2)
             return i; /* Return the recipe ID */
     }
 }
@@ -4957,6 +4964,7 @@ ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm,
           unsigned long *profiles)
 {
     DECLARE_BITMAP(result_idx_bm, ICE_MAX_FV_WORDS);
+    struct ice_aqc_recipe_content *content;
     struct ice_aqc_recipe_data_elem *tmp;
     struct ice_aqc_recipe_data_elem *buf;
     struct ice_recp_grp_entry *entry;
@@ -5017,6 +5025,8 @@ ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm,
         if (status)
             goto err_unroll;
 
+        content = &buf[recps].content;
+
         /* Clear the result index of the located recipe, as this will be
          * updated, if needed, later in the recipe creation process.
          */
@@ -5027,26 +5037,24 @@ ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm,
         /* if the recipe is a non-root recipe RID should be programmed
          * as 0 for the rules to be applied correctly.
         */
-        buf[recps].content.rid = 0;
-        memset(&buf[recps].content.lkup_indx, 0,
-               sizeof(buf[recps].content.lkup_indx));
+        content->rid = 0;
+        memset(&content->lkup_indx, 0,
+               sizeof(content->lkup_indx));
 
         /* All recipes use look-up index 0 to match switch ID. */
-        buf[recps].content.lkup_indx[0] = ICE_AQ_SW_ID_LKUP_IDX;
-        buf[recps].content.mask[0] =
-            cpu_to_le16(ICE_AQ_SW_ID_LKUP_MASK);
+        content->lkup_indx[0] = ICE_AQ_SW_ID_LKUP_IDX;
+        content->mask[0] = cpu_to_le16(ICE_AQ_SW_ID_LKUP_MASK);
 
         /* Setup lkup_indx 1..4 to INVALID/ignore and set the mask
          * to be 0
          */
         for (i = 1; i <= ICE_NUM_WORDS_RECIPE; i++) {
-            buf[recps].content.lkup_indx[i] = 0x80;
-            buf[recps].content.mask[i] = 0;
+            content->lkup_indx[i] = 0x80;
+            content->mask[i] = 0;
         }
 
         for (i = 0; i < entry->r_group.n_val_pairs; i++) {
-            buf[recps].content.lkup_indx[i + 1] = entry->fv_idx[i];
-            buf[recps].content.mask[i + 1] =
-                cpu_to_le16(entry->fv_mask[i]);
+            content->lkup_indx[i + 1] = entry->fv_idx[i];
+            content->mask[i + 1] = cpu_to_le16(entry->fv_mask[i]);
         }
 
         if (rm->n_grp_count > 1) {
@@ -5060,7 +5068,7 @@ ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm,
             }
 
             entry->chain_idx = chain_idx;
-            buf[recps].content.result_indx =
+            content->result_indx =
                 ICE_AQ_RECIPE_RESULT_EN |
                 ((chain_idx << ICE_AQ_RECIPE_RESULT_DATA_S) &
                  ICE_AQ_RECIPE_RESULT_DATA_M);
@@ -5074,7 +5082,13 @@ ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm,
                ICE_MAX_NUM_RECIPES);
         set_bit(buf[recps].recipe_indx,
             (unsigned long *)buf[recps].recipe_bitmap);
-        buf[recps].content.act_ctrl_fwd_priority = rm->priority;
+        content->act_ctrl_fwd_priority = rm->priority;
+
+        if (rm->need_pass_l2)
+            content->act_ctrl |= ICE_AQ_RECIPE_ACT_NEED_PASS_L2;
+
+        if (rm->allow_pass_l2)
+            content->act_ctrl |= ICE_AQ_RECIPE_ACT_ALLOW_PASS_L2;
         recps++;
     }
 
@@ -5112,9 +5126,11 @@ ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm,
         if (status)
             goto err_unroll;
 
+        content = &buf[recps].content;
+
         buf[recps].recipe_indx = (u8)rid;
-        buf[recps].content.rid = (u8)rid;
-        buf[recps].content.rid |= ICE_AQ_RECIPE_ID_IS_ROOT;
+        content->rid = (u8)rid;
+        content->rid |= ICE_AQ_RECIPE_ID_IS_ROOT;
         /* the new entry created should also be part of rg_list to
          * make sure we have complete recipe
          */
@@ -5126,16 +5142,13 @@ ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm,
             goto err_unroll;
         }
         last_chain_entry->rid = rid;
-        memset(&buf[recps].content.lkup_indx, 0,
-               sizeof(buf[recps].content.lkup_indx));
+        memset(&content->lkup_indx, 0, sizeof(content->lkup_indx));
         /* All recipes use look-up index 0 to match switch ID.
         */
-        buf[recps].content.lkup_indx[0] = ICE_AQ_SW_ID_LKUP_IDX;
-        buf[recps].content.mask[0] =
-            cpu_to_le16(ICE_AQ_SW_ID_LKUP_MASK);
+        content->lkup_indx[0] = ICE_AQ_SW_ID_LKUP_IDX;
+        content->mask[0] = cpu_to_le16(ICE_AQ_SW_ID_LKUP_MASK);
         for (i = 1; i <= ICE_NUM_WORDS_RECIPE; i++) {
-            buf[recps].content.lkup_indx[i] =
-                ICE_AQ_RECIPE_LKUP_IGNORE;
-            buf[recps].content.mask[i] = 0;
+            content->lkup_indx[i] = ICE_AQ_RECIPE_LKUP_IGNORE;
+            content->mask[i] = 0;
         }
 
         i = 1;
@@ -5147,8 +5160,8 @@ ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm,
         last_chain_entry->chain_idx = ICE_INVAL_CHAIN_IND;
         list_for_each_entry(entry, &rm->rg_list, l_entry) {
             last_chain_entry->fv_idx[i] = entry->chain_idx;
-            buf[recps].content.lkup_indx[i] = entry->chain_idx;
-            buf[recps].content.mask[i++] = cpu_to_le16(0xFFFF);
+            content->lkup_indx[i] = entry->chain_idx;
+            content->mask[i++] = cpu_to_le16(0xFFFF);
             set_bit(entry->rid, rm->r_bitmap);
         }
         list_add(&last_chain_entry->l_entry, &rm->rg_list);
@@ -5160,7 +5173,7 @@ ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm,
             status = -EINVAL;
             goto err_unroll;
         }
-        buf[recps].content.act_ctrl_fwd_priority = rm->priority;
+        content->act_ctrl_fwd_priority = rm->priority;
 
         recps++;
         rm->root_rid = (u8)rid;
@@ -5225,6 +5238,8 @@ ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm,
         recp->priority = buf[buf_idx].content.act_ctrl_fwd_priority;
         recp->n_grp_count = rm->n_grp_count;
         recp->tun_type = rm->tun_type;
+        recp->need_pass_l2 = rm->need_pass_l2;
+        recp->allow_pass_l2 = rm->allow_pass_l2;
         recp->recp_created = true;
     }
     rm->root_buf = buf;
@@ -5393,6 +5408,9 @@ ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
     /* set the recipe priority if specified */
     rm->priority = (u8)rinfo->priority;
 
+    rm->need_pass_l2 = rinfo->need_pass_l2;
+    rm->allow_pass_l2 = rinfo->allow_pass_l2;
+
     /* Find offsets from the field vector. Pick the first one for all the
      * recipes.
      */
@@ -5408,7 +5426,7 @@ ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
     }
 
     /* Look for a recipe which matches our requested fv / mask list */
-    *rid = ice_find_recp(hw, lkup_exts, rinfo->tun_type);
+    *rid = ice_find_recp(hw, lkup_exts, rinfo);
     if (*rid < ICE_MAX_NUM_RECIPES)
         /* Success if found a recipe that match the existing criteria */
         goto err_unroll;
@@ -5846,7 +5864,9 @@ static bool ice_rules_equal(const struct ice_adv_rule_info *first,
     return first->sw_act.flag == second->sw_act.flag &&
            first->tun_type == second->tun_type &&
            first->vlan_type == second->vlan_type &&
-           first->src_vsi == second->src_vsi;
+           first->src_vsi == second->src_vsi &&
+           first->need_pass_l2 == second->need_pass_l2 &&
+           first->allow_pass_l2 == second->allow_pass_l2;
 }
 
 /**
@@ -6085,7 +6105,8 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
     if (!(rinfo->sw_act.fltr_act == ICE_FWD_TO_VSI ||
           rinfo->sw_act.fltr_act == ICE_FWD_TO_Q ||
           rinfo->sw_act.fltr_act == ICE_FWD_TO_QGRP ||
-          rinfo->sw_act.fltr_act == ICE_DROP_PACKET)) {
+          rinfo->sw_act.fltr_act == ICE_DROP_PACKET ||
+          rinfo->sw_act.fltr_act == ICE_NOP)) {
         status = -EIO;
         goto free_pkt_profile;
     }
@@ -6096,7 +6117,8 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
         goto free_pkt_profile;
     }
 
-    if (rinfo->sw_act.fltr_act == ICE_FWD_TO_VSI)
+    if (rinfo->sw_act.fltr_act == ICE_FWD_TO_VSI ||
+        rinfo->sw_act.fltr_act == ICE_NOP)
         rinfo->sw_act.fwd_id.hw_vsi_id =
             ice_get_hw_vsi_num(hw, vsi_handle);
 
@@ -6166,6 +6188,11 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
         act |= ICE_SINGLE_ACT_VSI_FORWARDING | ICE_SINGLE_ACT_DROP |
                ICE_SINGLE_ACT_VALID_BIT;
         break;
+    case ICE_NOP:
+        act |= FIELD_PREP(ICE_SINGLE_ACT_VSI_ID_M,
+                  rinfo->sw_act.fwd_id.hw_vsi_id);
+        act &= ~ICE_SINGLE_ACT_VALID_BIT;
+        break;
     default:
         status = -EIO;
         goto err_ice_add_adv_rule;
@@ -6446,7 +6473,7 @@ ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
         return -EIO;
     }
 
-    rid = ice_find_recp(hw, &lkup_exts, rinfo->tun_type);
+    rid = ice_find_recp(hw, &lkup_exts, rinfo);
     /* If did not find a recipe that match the existing criteria */
     if (rid == ICE_MAX_NUM_RECIPES)
         return -EINVAL;
diff --git a/drivers/net/ethernet/intel/ice/ice_switch.h b/drivers/net/ethernet/intel/ice/ice_switch.h
index c84b56fe84a5..838a2823b3dc 100644
--- a/drivers/net/ethernet/intel/ice/ice_switch.h
+++ b/drivers/net/ethernet/intel/ice/ice_switch.h
@@ -191,6 +191,8 @@ struct ice_adv_rule_info {
     u16 vlan_type;
     u16 fltr_rule_id;
     u32 priority;
+    u16 need_pass_l2:1;
+    u16 allow_pass_l2:1;
     u16 src_vsi;
     struct ice_sw_act_ctrl sw_act;
     struct ice_adv_rule_flags_info flags_info;
@@ -254,6 +256,9 @@ struct ice_sw_recipe {
      */
     u8 priority;
 
+    u8 need_pass_l2:1;
+    u8 allow_pass_l2:1;
+
     struct list_head rg_list;
 
     /* AQ buffer associated with this recipe */
diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
index 5602695243a8..96977d6fc149 100644
--- a/drivers/net/ethernet/intel/ice/ice_type.h
+++ b/drivers/net/ethernet/intel/ice/ice_type.h
@@ -1034,6 +1034,7 @@ enum ice_sw_fwd_act_type {
     ICE_FWD_TO_Q,
     ICE_FWD_TO_QGRP,
     ICE_DROP_PACKET,
+    ICE_NOP,
     ICE_INVAL_ACT
 };

From patchwork Thu May 18 10:25:05 2023
X-Patchwork-Submitter: Wojciech Drewek
X-Patchwork-Id: 13246420
From: Wojciech Drewek
Subject: [PATCH iwl-next v2 06/10] ice: Accept LAG netdevs in bridge offloads
Date: Thu, 18 May 2023 12:25:05 +0200
Message-Id: <20230518102509.20913-7-wojciech.drewek@intel.com>

Allow LAG interfaces to be used in bridge offload using
netif_is_lag_master. In this case, search for the ice netdev in the
list of the LAG's lower devices.
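A possible usage sketch, in the style of the configuration steps used elsewhere in this series (interface names are placeholders, not taken from the patch):

    # Assumed setup: eth0 is the ice PF in switchdev mode, eth1 is a
    # VF port representor; bond0/br0 names are illustrative.
    ip link add bond0 type bond mode active-backup
    ip link set eth0 down
    ip link set eth0 master bond0
    ip link set eth0 up
    ip link add br0 type bridge
    ip link set bond0 master br0   # accepted via netif_is_lag_master()
    ip link set eth1 master br0    # VF port representor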
Signed-off-by: Wojciech Drewek
---
v2: braces added in ice_eswitch_br_get_uplnik_from_lag,
    use else in ice_eswitch_br_netdev_to_port and
    ice_eswitch_br_port_link
---
 .../net/ethernet/intel/ice/ice_eswitch_br.c | 47 +++++++++++++++++--
 1 file changed, 42 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
index 9941b0a5aaba..7b7eecec8497 100644
--- a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
+++ b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
@@ -15,8 +15,23 @@ static const struct rhashtable_params ice_fdb_ht_params = {
 
 static bool ice_eswitch_br_is_dev_valid(const struct net_device *dev)
 {
-    /* Accept only PF netdev and PRs */
-    return ice_is_port_repr_netdev(dev) || netif_is_ice(dev);
+    /* Accept only PF netdev, PRs and LAG */
+    return ice_is_port_repr_netdev(dev) || netif_is_ice(dev) ||
+           netif_is_lag_master(dev);
+}
+
+static struct net_device *
+ice_eswitch_br_get_uplnik_from_lag(struct net_device *lag_dev)
+{
+    struct net_device *lower;
+    struct list_head *iter;
+
+    netdev_for_each_lower_dev(lag_dev, lower, iter) {
+        if (netif_is_ice(lower))
+            return lower;
+    }
+
+    return NULL;
 }
 
 static struct ice_esw_br_port *
@@ -26,8 +41,19 @@ ice_eswitch_br_netdev_to_port(struct net_device *dev)
         struct ice_repr *repr = ice_netdev_to_repr(dev);
 
         return repr->br_port;
-    } else if (netif_is_ice(dev)) {
-        struct ice_pf *pf = ice_netdev_to_pf(dev);
+    } else if (netif_is_ice(dev) || netif_is_lag_master(dev)) {
+        struct net_device *ice_dev;
+        struct ice_pf *pf;
+
+        if (netif_is_lag_master(dev))
+            ice_dev = ice_eswitch_br_get_uplnik_from_lag(dev);
+        else
+            ice_dev = dev;
+
+        if (!ice_dev)
+            return NULL;
+
+        pf = ice_netdev_to_pf(ice_dev);
 
         return pf->br_port;
     }
@@ -716,7 +742,18 @@ ice_eswitch_br_port_link(struct ice_esw_br_offloads *br_offloads,
 
         err = ice_eswitch_br_vf_repr_port_init(bridge, repr);
     } else {
-        struct ice_pf *pf = ice_netdev_to_pf(dev);
+        struct net_device *ice_dev;
+        struct ice_pf *pf;
+
+        if (netif_is_lag_master(dev))
+            ice_dev = ice_eswitch_br_get_uplnik_from_lag(dev);
+        else
+            ice_dev = dev;
+
+        if (!ice_dev)
+            return 0;
+
+        pf = ice_netdev_to_pf(ice_dev);
 
         err = ice_eswitch_br_uplink_port_init(bridge, pf);
     }

From patchwork Thu May 18 10:25:06 2023
X-Patchwork-Submitter: Wojciech Drewek
X-Patchwork-Id: 13246426
From: Wojciech Drewek
Subject: [PATCH iwl-next v2 07/10] ice: Add VLAN FDB support in switchdev mode
Date: Thu, 18 May 2023 12:25:06 +0200
Message-Id: <20230518102509.20913-8-wojciech.drewek@intel.com>

From: Marcin Szycik

Add support for matching on VLAN tag in bridge offloads. Currently
only trunk mode is supported.

To enable VLAN filtering (existing FDB entries will be deleted):
  ip link set $BR type bridge vlan_filtering 1

To add VLANs to bridge in trunk mode:
  bridge vlan add dev $PF1 vid 110-111
  bridge vlan add dev $VF1_PR vid 110-111

Signed-off-by: Marcin Szycik
Signed-off-by: Wojciech Drewek
---
v2: introduce ice_eswitch_is_vid_valid, remove vlan bool arg,
    introduce better log msg
---
 .../net/ethernet/intel/ice/ice_eswitch_br.c | 317 +++++++++++++++++-
 .../net/ethernet/intel/ice/ice_eswitch_br.h |  12 +
 2 files changed, 317 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
index 7b7eecec8497..223514d12491 100644
--- a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
+++ b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
@@ -13,6 +13,15 @@ static const struct rhashtable_params ice_fdb_ht_params = {
     .automatic_shrinking = true,
 };
 
+static inline bool ice_eswitch_is_vid_valid(u16 vid)
+{
+    /* In trunk VLAN mode, for untagged traffic the bridge sends requests
+     * to offload VLAN 1 with pvid and untagged flags set. Since these
+     * flags are not supported, add a MAC filter instead.
+     */
+    return vid > 1;
+}
+
 static bool ice_eswitch_br_is_dev_valid(const struct net_device *dev)
 {
     /* Accept only PF netdev, PRs and LAG */
@@ -64,13 +73,19 @@ ice_eswitch_br_netdev_to_port(struct net_device *dev)
 static void
 ice_eswitch_br_ingress_rule_setup(struct ice_adv_lkup_elem *list,
                   struct ice_adv_rule_info *rule_info,
-                  const unsigned char *mac,
+                  const unsigned char *mac, u16 vid,
                   u8 pf_id, u16 vf_vsi_idx)
 {
     list[0].type = ICE_MAC_OFOS;
     ether_addr_copy(list[0].h_u.eth_hdr.dst_addr, mac);
     eth_broadcast_addr(list[0].m_u.eth_hdr.dst_addr);
 
+    if (ice_eswitch_is_vid_valid(vid)) {
+        list[1].type = ICE_VLAN_OFOS;
+        list[1].h_u.vlan_hdr.vlan = cpu_to_be16(vid & VLAN_VID_MASK);
+        list[1].m_u.vlan_hdr.vlan = cpu_to_be16(0xFFFF);
+    }
+
     rule_info->sw_act.vsi_handle = vf_vsi_idx;
     rule_info->sw_act.flag |= ICE_FLTR_RX;
     rule_info->sw_act.src = pf_id;
@@ -80,13 +95,19 @@ ice_eswitch_br_ingress_rule_setup(struct ice_adv_lkup_elem *list,
 static void
 ice_eswitch_br_egress_rule_setup(struct ice_adv_lkup_elem *list,
                  struct ice_adv_rule_info *rule_info,
-                 const unsigned char *mac,
+                 const unsigned char *mac, u16 vid,
                  u16 pf_vsi_idx)
 {
     list[0].type = ICE_MAC_OFOS;
     ether_addr_copy(list[0].h_u.eth_hdr.dst_addr, mac);
     eth_broadcast_addr(list[0].m_u.eth_hdr.dst_addr);
 
+    if (ice_eswitch_is_vid_valid(vid)) {
+        list[1].type = ICE_VLAN_OFOS;
+        list[1].h_u.vlan_hdr.vlan = cpu_to_be16(vid & VLAN_VID_MASK);
+        list[1].m_u.vlan_hdr.vlan = cpu_to_be16(0xFFFF);
+    }
+
     rule_info->sw_act.vsi_handle = pf_vsi_idx;
     rule_info->sw_act.flag |= ICE_FLTR_TX;
     rule_info->flags_info.act = ICE_SINGLE_ACT_LAN_ENABLE;
@@ -110,14 +131,19 @@ ice_eswitch_br_rule_delete(struct ice_hw *hw, struct ice_rule_query_data *rule)
 
 static struct ice_rule_query_data *
 ice_eswitch_br_fwd_rule_create(struct ice_hw *hw, int vsi_idx, int port_type,
-                   const unsigned char *mac)
+                   const unsigned char *mac, u16 vid)
 {
     struct ice_adv_rule_info rule_info = { 0 };
     struct ice_rule_query_data *rule;
     struct ice_adv_lkup_elem *list;
-    u16 lkups_cnt = 1;
+    u16 lkups_cnt;
     int err;
 
+    if (ice_eswitch_is_vid_valid(vid))
+        lkups_cnt = 2;
+    else
+        lkups_cnt = 1;
+
     rule = kzalloc(sizeof(*rule), GFP_KERNEL);
     if (!rule)
         return ERR_PTR(-ENOMEM);
@@ -131,11 +157,11 @@ ice_eswitch_br_fwd_rule_create(struct ice_hw *hw, int vsi_idx, int port_type,
     switch (port_type) {
     case ICE_ESWITCH_BR_UPLINK_PORT:
         ice_eswitch_br_egress_rule_setup(list, &rule_info, mac,
-                         vsi_idx);
+                         vid, vsi_idx);
         break;
     case ICE_ESWITCH_BR_VF_REPR_PORT:
         ice_eswitch_br_ingress_rule_setup(list, &rule_info, mac,
-                          hw->pf_id, vsi_idx);
+                          vid, hw->pf_id, vsi_idx);
         break;
     default:
         err = -EINVAL;
@@ -164,13 +190,18 @@ ice_eswitch_br_fwd_rule_create(struct ice_hw *hw, int vsi_idx, int port_type,
 
 static struct ice_rule_query_data *
 ice_eswitch_br_guard_rule_create(struct ice_hw *hw, u16 vsi_idx,
-                 const unsigned char *mac)
+                 const unsigned char *mac, u16 vid)
 {
     struct ice_adv_rule_info rule_info = { 0 };
     struct ice_rule_query_data *rule;
     struct ice_adv_lkup_elem *list;
-    const u16 lkups_cnt = 1;
     int err = -ENOMEM;
+    u16 lkups_cnt;
+
+    if (ice_eswitch_is_vid_valid(vid))
+        lkups_cnt = 2;
+    else
+        lkups_cnt = 1;
 
     rule = kzalloc(sizeof(*rule), GFP_KERNEL);
     if (!rule)
@@ -184,6 +215,12 @@ ice_eswitch_br_guard_rule_create(struct ice_hw *hw, u16 vsi_idx,
     ether_addr_copy(list[0].h_u.eth_hdr.src_addr, mac);
     eth_broadcast_addr(list[0].m_u.eth_hdr.src_addr);
 
+    if (ice_eswitch_is_vid_valid(vid)) {
+        list[1].type = ICE_VLAN_OFOS;
+        list[1].h_u.vlan_hdr.vlan = cpu_to_be16(vid & VLAN_VID_MASK);
+        list[1].m_u.vlan_hdr.vlan =
+            cpu_to_be16(0xFFFF);
+    }
+
     rule_info.allow_pass_l2 = true;
     rule_info.sw_act.vsi_handle = vsi_idx;
     rule_info.sw_act.fltr_act = ICE_NOP;
@@ -205,7 +242,7 @@ ice_eswitch_br_guard_rule_create(struct ice_hw *hw, u16 vsi_idx,
 
 static struct ice_esw_br_flow *
 ice_eswitch_br_flow_create(struct device *dev, struct ice_hw *hw, int vsi_idx,
-               int port_type, const unsigned char *mac)
+               int port_type, const unsigned char *mac, u16 vid)
 {
     struct ice_rule_query_data *fwd_rule, *guard_rule;
     struct ice_esw_br_flow *flow;
@@ -215,7 +252,8 @@ ice_eswitch_br_flow_create(struct device *dev, struct ice_hw *hw, int vsi_idx,
     if (!flow)
         return ERR_PTR(-ENOMEM);
 
-    fwd_rule = ice_eswitch_br_fwd_rule_create(hw, vsi_idx, port_type, mac);
+    fwd_rule = ice_eswitch_br_fwd_rule_create(hw, vsi_idx, port_type, mac,
+                          vid);
     err = PTR_ERR_OR_ZERO(fwd_rule);
     if (err) {
         dev_err(dev, "Failed to create eswitch bridge %sgress forward rule, err: %d\n",
@@ -224,7 +262,7 @@ ice_eswitch_br_flow_create(struct device *dev, struct ice_hw *hw, int vsi_idx,
         goto err_fwd_rule;
     }
 
-    guard_rule = ice_eswitch_br_guard_rule_create(hw, vsi_idx, mac);
+    guard_rule = ice_eswitch_br_guard_rule_create(hw, vsi_idx, mac, vid);
     err = PTR_ERR_OR_ZERO(guard_rule);
     if (err) {
         dev_err(dev, "Failed to create eswitch bridge %sgress guard rule, err: %d\n",
@@ -278,6 +316,30 @@ ice_eswitch_br_flow_delete(struct ice_pf *pf, struct ice_esw_br_flow *flow)
     kfree(flow);
 }
 
+static struct ice_esw_br_vlan *
+ice_esw_br_port_vlan_lookup(struct ice_esw_br *bridge, u16 vsi_idx, u16 vid)
+{
+    struct ice_pf *pf = bridge->br_offloads->pf;
+    struct device *dev = ice_pf_to_dev(pf);
+    struct ice_esw_br_port *port;
+    struct ice_esw_br_vlan *vlan;
+
+    port = xa_load(&bridge->ports, vsi_idx);
+    if (!port) {
+        dev_info(dev, "Bridge port lookup failed (vsi=%u)\n", vsi_idx);
+        return ERR_PTR(-EINVAL);
+    }
+
+    vlan = xa_load(&port->vlans, vid);
+    if (!vlan) {
+        dev_info(dev, "Bridge port vlan metadata lookup failed (vsi=%u)\n",
+             vsi_idx);
+        return ERR_PTR(-EINVAL);
+    }
+
+    return vlan;
+}
+
 static void
 ice_eswitch_br_fdb_entry_delete(struct ice_esw_br *bridge,
                 struct ice_esw_br_fdb_entry *fdb_entry)
@@ -347,10 +409,25 @@ ice_eswitch_br_fdb_entry_create(struct net_device *netdev,
     struct device *dev = ice_pf_to_dev(pf);
     struct ice_esw_br_fdb_entry *fdb_entry;
     struct ice_esw_br_flow *flow;
+    struct ice_esw_br_vlan *vlan;
     struct ice_hw *hw = &pf->hw;
     unsigned long event;
     int err;
 
+    /* untagged filtering is not yet supported */
+    if (!(bridge->flags & ICE_ESWITCH_BR_VLAN_FILTERING) && vid)
+        return;
+
+    if ((bridge->flags & ICE_ESWITCH_BR_VLAN_FILTERING)) {
+        vlan = ice_esw_br_port_vlan_lookup(bridge, br_port->vsi_idx,
+                           vid);
+        if (IS_ERR(vlan)) {
+            dev_err(dev, "Failed to find vlan lookup, err: %ld\n",
+                PTR_ERR(vlan));
+            return;
+        }
+    }
+
     fdb_entry = ice_eswitch_br_fdb_find(bridge, mac, vid);
     if (fdb_entry)
         ice_eswitch_br_fdb_entry_notify_and_cleanup(bridge, fdb_entry);
@@ -362,7 +439,7 @@ ice_eswitch_br_fdb_entry_create(struct net_device *netdev,
     }
 
     flow = ice_eswitch_br_flow_create(dev, hw, br_port->vsi_idx,
-                      br_port->type, mac);
+                      br_port->type, mac, vid);
     if (IS_ERR(flow)) {
         err = PTR_ERR(flow);
         goto err_add_flow;
@@ -521,6 +598,207 @@ ice_eswitch_br_switchdev_event(struct notifier_block *nb,
     return NOTIFY_DONE;
 }
 
+static void ice_eswitch_br_fdb_flush(struct ice_esw_br *bridge)
+{
+    struct ice_esw_br_fdb_entry *entry, *tmp;
+
+    list_for_each_entry_safe(entry, tmp, &bridge->fdb_list, list)
+        ice_eswitch_br_fdb_entry_notify_and_cleanup(bridge, entry);
+}
+
+static void
+ice_eswitch_br_vlan_filtering_set(struct ice_esw_br *bridge, bool enable)
+{
+    if (enable == !!(bridge->flags & ICE_ESWITCH_BR_VLAN_FILTERING))
+        return;
+
+    ice_eswitch_br_fdb_flush(bridge);
+    if (enable)
+        bridge->flags |= ICE_ESWITCH_BR_VLAN_FILTERING;
+    else
+        bridge->flags &= ~ICE_ESWITCH_BR_VLAN_FILTERING;
+}
+
+static void
+ice_eswitch_br_vlan_cleanup(struct ice_esw_br_port *port,
+                struct ice_esw_br_vlan *vlan)
+{
+    xa_erase(&port->vlans, vlan->vid);
+    kfree(vlan);
+}
+
+static void ice_eswitch_br_port_vlans_flush(struct ice_esw_br_port *port)
+{
+    struct ice_esw_br_vlan *vlan;
+    unsigned long index;
+
+    xa_for_each(&port->vlans, index, vlan)
+        ice_eswitch_br_vlan_cleanup(port, vlan);
+}
+
+static struct ice_esw_br_vlan *
+ice_eswitch_br_vlan_create(u16 vid, u16 flags, struct ice_esw_br_port *port)
+{
+    struct ice_esw_br_vlan *vlan;
+    int err;
+
+    vlan = kzalloc(sizeof(*vlan), GFP_KERNEL);
+    if (!vlan)
+        return ERR_PTR(-ENOMEM);
+
+    vlan->vid = vid;
+    vlan->flags = flags;
+
+    err = xa_insert(&port->vlans, vlan->vid, vlan, GFP_KERNEL);
+    if (err) {
+        kfree(vlan);
+        return ERR_PTR(err);
+    }
+
+    return vlan;
+}
+
+static int
+ice_eswitch_br_port_vlan_add(struct ice_esw_br *bridge, u16 vsi_idx, u16 vid,
+                 u16 flags, struct netlink_ext_ack *extack)
+{
+    struct ice_esw_br_port *port;
+    struct ice_esw_br_vlan *vlan;
+
+    port = xa_load(&bridge->ports, vsi_idx);
+    if (!port)
+        return -EINVAL;
+
+    vlan = xa_load(&port->vlans, vid);
+    if (vlan) {
+        if (vlan->flags == flags)
+            return 0;
+
+        ice_eswitch_br_vlan_cleanup(port, vlan);
+    }
+
+    vlan = ice_eswitch_br_vlan_create(vid, flags, port);
+    if (IS_ERR(vlan)) {
+        NL_SET_ERR_MSG_FMT_MOD(extack, "Failed to create VLAN entry, vid: %u, vsi: %u",
+                       vid, vsi_idx);
+        return PTR_ERR(vlan);
+    }
+
+    return 0;
+}
+
+static void
+ice_eswitch_br_port_vlan_del(struct ice_esw_br *bridge, u16 vsi_idx, u16 vid)
+{
+    struct ice_esw_br_port *port;
+    struct ice_esw_br_vlan *vlan;
+
+    port = xa_load(&bridge->ports, vsi_idx);
+    if (!port)
+        return;
+
+    vlan = xa_load(&port->vlans, vid);
+    if (!vlan)
+        return;
+
+    ice_eswitch_br_vlan_cleanup(port, vlan);
+}
+
+static int
+ice_eswitch_br_port_obj_add(struct net_device *netdev, const void *ctx,
+                const struct switchdev_obj *obj,
+                struct netlink_ext_ack *extack)
+{
+    struct ice_esw_br_port *br_port = ice_eswitch_br_netdev_to_port(netdev);
+    struct switchdev_obj_port_vlan *vlan;
+    int err;
+
+    if (!br_port)
+        return -EINVAL;
+
+    switch (obj->id) {
+    case SWITCHDEV_OBJ_ID_PORT_VLAN:
+        vlan = SWITCHDEV_OBJ_PORT_VLAN(obj);
+        err = ice_eswitch_br_port_vlan_add(br_port->bridge,
+                           br_port->vsi_idx, vlan->vid,
+                           vlan->flags, extack);
+        return err;
+    default:
+        return -EOPNOTSUPP;
+    }
+}
+
+static int
+ice_eswitch_br_port_obj_del(struct net_device *netdev, const void *ctx,
+                const struct switchdev_obj *obj)
+{
+    struct ice_esw_br_port *br_port = ice_eswitch_br_netdev_to_port(netdev);
+    struct switchdev_obj_port_vlan *vlan;
+
+    if (!br_port)
+        return -EINVAL;
+
+    switch (obj->id) {
+    case SWITCHDEV_OBJ_ID_PORT_VLAN:
+        vlan = SWITCHDEV_OBJ_PORT_VLAN(obj);
+        ice_eswitch_br_port_vlan_del(br_port->bridge, br_port->vsi_idx,
+                         vlan->vid);
+        return 0;
+    default:
+        return -EOPNOTSUPP;
+    }
+}
+
+static int
+ice_eswitch_br_port_obj_attr_set(struct net_device *netdev, const void *ctx,
+                 const struct switchdev_attr *attr,
+                 struct netlink_ext_ack *extack)
+{
+    struct ice_esw_br_port *br_port = ice_eswitch_br_netdev_to_port(netdev);
+
+    if (!br_port)
+        return -EINVAL;
+
+    switch (attr->id) {
+    case SWITCHDEV_ATTR_ID_BRIDGE_VLAN_FILTERING:
+        ice_eswitch_br_vlan_filtering_set(br_port->bridge,
+                          attr->u.vlan_filtering);
+        return 0;
+    default:
+        return -EOPNOTSUPP;
+    }
+}
+
+static int
+ice_eswitch_br_event_blocking(struct notifier_block *nb, unsigned long event,
+                  void *ptr)
+{
+    struct net_device *dev = switchdev_notifier_info_to_dev(ptr);
+    int err;
+
+    switch (event) {
+    case SWITCHDEV_PORT_OBJ_ADD:
+        err = switchdev_handle_port_obj_add(dev, ptr,
+                            ice_eswitch_br_is_dev_valid,
+                            ice_eswitch_br_port_obj_add);
+        break;
+    case SWITCHDEV_PORT_OBJ_DEL:
+        err = switchdev_handle_port_obj_del(dev, ptr,
+                            ice_eswitch_br_is_dev_valid,
+                            ice_eswitch_br_port_obj_del);
+        break;
+    case SWITCHDEV_PORT_ATTR_SET:
+        err = switchdev_handle_port_attr_set(dev, ptr,
+                             ice_eswitch_br_is_dev_valid,
+                             ice_eswitch_br_port_obj_attr_set);
+        break;
+    default:
+        err = 0;
+    }
+
+    return notifier_from_errno(err);
+}
+
 static void
 ice_eswitch_br_port_deinit(struct ice_esw_br *bridge,
                struct ice_esw_br_port *br_port)
@@ -539,6 +817,7 @@ ice_eswitch_br_port_deinit(struct ice_esw_br *bridge,
         vsi->vf->repr->br_port = NULL;
 
     xa_erase(&bridge->ports, br_port->vsi_idx);
+    ice_eswitch_br_port_vlans_flush(br_port);
     kfree(br_port);
 }
 
@@ -551,6 +830,8 @@ ice_eswitch_br_port_init(struct ice_esw_br *bridge)
     if (!br_port)
         return ERR_PTR(-ENOMEM);
 
+    xa_init(&br_port->vlans);
+
     br_port->bridge = bridge;
 
     return br_port;
@@ -858,6 +1139,7 @@ ice_eswitch_br_offloads_deinit(struct ice_pf *pf)
         return;
 
     unregister_netdevice_notifier(&br_offloads->netdev_nb);
+    unregister_switchdev_blocking_notifier(&br_offloads->switchdev_blk);
     unregister_switchdev_notifier(&br_offloads->switchdev_nb);
     destroy_workqueue(br_offloads->wq);
     /* Although notifier block is unregistered just before,
@@ -901,6 +1183,15 @@ ice_eswitch_br_offloads_init(struct ice_pf *pf)
         goto err_reg_switchdev_nb;
     }
 
+    br_offloads->switchdev_blk.notifier_call =
+        ice_eswitch_br_event_blocking;
+    err = register_switchdev_blocking_notifier(&br_offloads->switchdev_blk);
+    if (err) {
+        dev_err(dev,
+            "Failed to register bridge blocking switchdev notifier\n");
+        goto err_reg_switchdev_blk;
+    }
+
     br_offloads->netdev_nb.notifier_call = ice_eswitch_br_port_event;
     err = register_netdevice_notifier(&br_offloads->netdev_nb);
     if (err) {
@@ -912,6 +1203,8 @@ ice_eswitch_br_offloads_init(struct ice_pf *pf)
     return 0;
 
 err_reg_netdev_nb:
+    unregister_switchdev_blocking_notifier(&br_offloads->switchdev_blk);
+err_reg_switchdev_blk:
     unregister_switchdev_notifier(&br_offloads->switchdev_nb);
 err_reg_switchdev_nb:
     destroy_workqueue(br_offloads->wq);
diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch_br.h b/drivers/net/ethernet/intel/ice/ice_eswitch_br.h
index 7afd00cdea9a..917b8c9c9e19 100644
--- a/drivers/net/ethernet/intel/ice/ice_eswitch_br.h
+++ b/drivers/net/ethernet/intel/ice/ice_eswitch_br.h
@@ -42,6 +42,11 @@ struct ice_esw_br_port {
     struct ice_vsi *vsi;
     enum ice_esw_br_port_type type;
     u16 vsi_idx;
+    struct xarray vlans;
+};
+
+enum {
+    ICE_ESWITCH_BR_VLAN_FILTERING = BIT(0),
 };
 
 struct ice_esw_br {
@@ -52,12 +57,14 @@ struct ice_esw_br {
     struct list_head fdb_list;
 
     int ifindex;
+    u32 flags;
 };
 
 struct ice_esw_br_offloads {
     struct ice_pf *pf;
     struct ice_esw_br *bridge;
     struct notifier_block netdev_nb;
+    struct notifier_block switchdev_blk;
     struct notifier_block switchdev_nb;
 
     struct workqueue_struct *wq;
@@ -71,6 +78,11 @@ struct ice_esw_br_fdb_work {
     unsigned long event;
 };
 
+struct ice_esw_br_vlan {
+    u16 vid;
+    u16 flags;
+};
+
 #define ice_nb_to_br_offloads(nb, nb_name) \
     container_of(nb, \
              struct ice_esw_br_offloads, \
From patchwork Thu May 18 10:25:07 2023
X-Patchwork-Submitter: Wojciech Drewek
X-Patchwork-Id: 13246427
From: Wojciech Drewek
Subject: [PATCH iwl-next v2 08/10] ice: implement bridge port vlan
Date: Thu, 18 May 2023 12:25:07 +0200
Message-Id: <20230518102509.20913-9-wojciech.drewek@intel.com>

From: Michal Swiatkowski

Port VLAN in this case means a push and pop VLAN action on a specific
vid.
There are a few limitations in hardware:
- push and pop can't be used separately
- if a port VLAN is used there can't be any trunk VLANs, because the
  pop action is done on all traffic received by the VSI in port VLAN
  mode
- port VLAN mode on the uplink port isn't supported

Reflect these limitations in code using dev_info to inform the user
about unsupported configuration.

In bridge mode there is a need to configure a port VLAN without
resetting VFs. To do that, implement the ice_port_vlan_on/off()
functions. They only configure the correct vlan_ops to allow setting
a port VLAN.

We also need to clear the port VLAN without resetting the VF, which
is not supported right now. Change it by implementing the
clear_port_vlan ops. As the previous VLAN configuration isn't always
the same, store the current config while creating the port VLAN and
restore it in the clear function.

Configuration steps:
- configure switchdev with bridge
- #bridge vlan add dev eth0 vid 120 pvid untagged
- #bridge vlan add dev eth1 vid 120 pvid untagged
- ping from VF0 to VF1

Signed-off-by: Michal Swiatkowski
Signed-off-by: Wojciech Drewek
---
v2: minor code style changes, optimize port vlan ops initialization
    in ice_port_vlan_on and ice_port_vlan_off, replace WARN_ON with
    WARN_ON_ONCE
---
 drivers/net/ethernet/intel/ice/ice.h          |   1 +
 .../net/ethernet/intel/ice/ice_eswitch_br.c   |  89 ++++++++-
 .../net/ethernet/intel/ice/ice_eswitch_br.h   |   1 +
 .../ethernet/intel/ice/ice_vf_vsi_vlan_ops.c  | 186 ++++++++++--------
 .../ethernet/intel/ice/ice_vf_vsi_vlan_ops.h  |   3 +
 .../net/ethernet/intel/ice/ice_vsi_vlan_lib.c |  84 +++++++-
 .../net/ethernet/intel/ice/ice_vsi_vlan_lib.h |   8 +
 .../net/ethernet/intel/ice/ice_vsi_vlan_ops.h |   1 +
 8 files changed, 283 insertions(+), 90 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 8876ed8ba268..176e281dfa24 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -370,6 +370,7 @@ struct ice_vsi {
     u16 rx_buf_len;
 
     struct ice_aqc_vsi_props info;   /* VSI properties */
+    struct ice_vsi_vlan_info vlan_info; /* vlan config to be restored */
 
     /* VSI stats */
     struct rtnl_link_stats64 net_stats;
diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
index 223514d12491..e4a8c9ac5e5d 100644
--- a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
+++ b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
@@ -5,6 +5,8 @@
 #include "ice_eswitch_br.h"
 #include "ice_repr.h"
 #include "ice_switch.h"
+#include "ice_vlan.h"
+#include "ice_vf_vsi_vlan_ops.h"
 
 static const struct rhashtable_params ice_fdb_ht_params = {
     .key_offset = offsetof(struct ice_esw_br_fdb_entry, data),
@@ -619,11 +621,27 @@ ice_eswitch_br_vlan_filtering_set(struct ice_esw_br *bridge, bool enable)
         bridge->flags &= ~ICE_ESWITCH_BR_VLAN_FILTERING;
 }
 
+static void
+ice_eswitch_br_clear_pvid(struct ice_esw_br_port *port)
+{
+    struct ice_vsi_vlan_ops *vlan_ops;
+
+    vlan_ops = ice_get_compat_vsi_vlan_ops(port->vsi);
+
+    vlan_ops->clear_port_vlan(port->vsi);
+
+    ice_vf_vsi_disable_port_vlan(port->vsi);
+
+    port->pvid = 0;
+}
+
 static void
 ice_eswitch_br_vlan_cleanup(struct ice_esw_br_port *port,
                 struct ice_esw_br_vlan *vlan)
 {
     xa_erase(&port->vlans, vlan->vid);
+    if (port->pvid == vlan->vid)
+        ice_eswitch_br_clear_pvid(port);
     kfree(vlan);
 }
 
@@ -636,9 +654,50 @@ static void ice_eswitch_br_port_vlans_flush(struct ice_esw_br_port *port)
         ice_eswitch_br_vlan_cleanup(port, vlan);
 }
 
+static int
+ice_eswitch_br_set_pvid(struct ice_esw_br_port *port,
+            struct ice_esw_br_vlan *vlan)
+{
+    struct ice_vlan port_vlan = ICE_VLAN(ETH_P_8021Q, vlan->vid, 0);
+    struct device *dev = ice_pf_to_dev(port->vsi->back);
+    struct ice_vsi_vlan_ops *vlan_ops;
+    int err;
+
+    if (port->pvid == vlan->vid || vlan->vid == 1)
+        return 0;
+
+    /* Setting port vlan on uplink isn't supported by hw */
+    if (port->type == ICE_ESWITCH_BR_UPLINK_PORT)
+        return -EOPNOTSUPP;
+
+    if (port->pvid) {
+        dev_info(dev,
+             "Port VLAN (vsi=%u, vid=%u) already exists on the port, remove it before adding new one\n",
+             port->vsi_idx, port->pvid);
+        return -EEXIST;
+    }
+
+    ice_vf_vsi_enable_port_vlan(port->vsi);
+
+    vlan_ops = ice_get_compat_vsi_vlan_ops(port->vsi);
+    err = vlan_ops->set_port_vlan(port->vsi, &port_vlan);
+    if (err)
+        return err;
+
+    err = vlan_ops->add_vlan(port->vsi, &port_vlan);
+    if (err)
+        return err;
+
+    ice_eswitch_br_port_vlans_flush(port);
+    port->pvid = vlan->vid;
+
+    return 0;
+}
+
 static struct ice_esw_br_vlan *
 ice_eswitch_br_vlan_create(u16 vid, u16 flags, struct ice_esw_br_port *port)
 {
+    struct device *dev = ice_pf_to_dev(port->vsi->back);
     struct ice_esw_br_vlan *vlan;
     int err;
 
@@ -648,14 +707,29 @@ ice_eswitch_br_vlan_create(u16 vid, u16 flags, struct ice_esw_br_port *port)
     vlan->vid = vid;
     vlan->flags = flags;
 
+    if ((flags & BRIDGE_VLAN_INFO_PVID) &&
+        (flags & BRIDGE_VLAN_INFO_UNTAGGED)) {
+        err = ice_eswitch_br_set_pvid(port, vlan);
+        if (err)
+            goto err_set_pvid;
+    } else if ((flags & BRIDGE_VLAN_INFO_PVID) ||
+           (flags & BRIDGE_VLAN_INFO_UNTAGGED)) {
+        dev_info(dev, "VLAN push and pop are supported only simultaneously\n");
+        return ERR_PTR(-EOPNOTSUPP);
+    }
 
     err = xa_insert(&port->vlans, vlan->vid, vlan, GFP_KERNEL);
-    if (err) {
-        kfree(vlan);
-        return ERR_PTR(err);
-    }
+    if (err)
+        goto err_insert;
 
     return vlan;
+
+err_insert:
+    if (port->pvid)
+        ice_eswitch_br_clear_pvid(port);
+err_set_pvid:
+    kfree(vlan);
+    return ERR_PTR(err);
 }
 
 static int
@@ -669,6 +743,13 @@ ice_eswitch_br_port_vlan_add(struct ice_esw_br *bridge, u16 vsi_idx, u16 vid,
     if (!port)
         return -EINVAL;
 
+    if (port->pvid) {
+        dev_info(ice_pf_to_dev(port->vsi->back),
+             "Port VLAN (vsi=%u, vid=%d) exists on the port, remove it to add trunk VLANs\n",
+             port->vsi_idx, port->pvid);
+        return -EEXIST;
+    }
+
     vlan = xa_load(&port->vlans, vid);
     if (vlan) {
         if (vlan->flags == flags)
diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch_br.h b/drivers/net/ethernet/intel/ice/ice_eswitch_br.h
index 917b8c9c9e19..8c108fc25b2f 100644
--- a/drivers/net/ethernet/intel/ice/ice_eswitch_br.h
+++ b/drivers/net/ethernet/intel/ice/ice_eswitch_br.h
@@ -42,6 +42,7 @@ struct ice_esw_br_port {
     struct ice_vsi *vsi;
     enum ice_esw_br_port_type type;
     u16 vsi_idx;
+    u16 pvid;
     struct xarray vlans;
 };
 
diff --git a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c
index b1ffb81893d4..d7b10dc67f03 100644
--- a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c
+++ b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c
@@ -21,6 +21,99 @@ noop_vlan(struct ice_vsi __always_unused *vsi)
     return 0;
 }
 
+static void ice_port_vlan_on(struct ice_vsi *vsi)
+{
+    struct ice_vsi_vlan_ops *vlan_ops;
+    struct ice_pf *pf = vsi->back;
+
+    if (ice_is_dvm_ena(&pf->hw)) {
+        vlan_ops = &vsi->outer_vlan_ops;
+
+        /* setup outer VLAN ops */
+        vlan_ops->set_port_vlan = ice_vsi_set_outer_port_vlan;
+        vlan_ops->clear_port_vlan = ice_vsi_clear_outer_port_vlan;
+
+        /* setup inner VLAN ops */
+        vlan_ops = &vsi->inner_vlan_ops;
+        vlan_ops->add_vlan = noop_vlan_arg;
+        vlan_ops->del_vlan = noop_vlan_arg;
+        vlan_ops->ena_stripping = ice_vsi_ena_inner_stripping;
+        vlan_ops->dis_stripping = ice_vsi_dis_inner_stripping;
+        vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion;
+        vlan_ops->dis_insertion = ice_vsi_dis_inner_insertion;
+    } else {
+        vlan_ops = &vsi->inner_vlan_ops;
+
+        vlan_ops->set_port_vlan = ice_vsi_set_inner_port_vlan;
+        vlan_ops->clear_port_vlan = ice_vsi_clear_inner_port_vlan;
+    }
+    vlan_ops->ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering;
+}
+
+static void ice_port_vlan_off(struct ice_vsi *vsi)
+{
+    struct ice_vsi_vlan_ops *vlan_ops;
+    struct ice_pf *pf = vsi->back;
+
+    /* setup inner VLAN ops */
+    vlan_ops = &vsi->inner_vlan_ops;
+
+    vlan_ops->ena_stripping = ice_vsi_ena_inner_stripping;
+    vlan_ops->dis_stripping = ice_vsi_dis_inner_stripping;
+    vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion;
+    vlan_ops->dis_insertion = ice_vsi_dis_inner_insertion;
+
+    if (ice_is_dvm_ena(&pf->hw)) {
+        vlan_ops = &vsi->outer_vlan_ops;
+
+        vlan_ops->del_vlan = ice_vsi_del_vlan;
+        vlan_ops->ena_stripping = ice_vsi_ena_outer_stripping;
+        vlan_ops->dis_stripping = ice_vsi_dis_outer_stripping;
+        vlan_ops->ena_insertion = ice_vsi_ena_outer_insertion;
+        vlan_ops->dis_insertion = ice_vsi_dis_outer_insertion;
+    } else {
+        vlan_ops->del_vlan = ice_vsi_del_vlan;
+    }
+
+    if (!test_bit(ICE_FLAG_VF_VLAN_PRUNING, pf->flags))
+        vlan_ops->ena_rx_filtering = noop_vlan;
+    else
+        vlan_ops->ena_rx_filtering =
+            ice_vsi_ena_rx_vlan_filtering;
+}
+
+/**
+ * ice_vf_vsi_enable_port_vlan - Set VSI VLAN ops to support port VLAN
+ * @vsi: VF's VSI being configured
+ *
+ * The function won't create port VLAN, it only allows to create port VLAN
+ * using VLAN ops on the VF VSI.
+ */
+void ice_vf_vsi_enable_port_vlan(struct ice_vsi *vsi)
+{
+    if (WARN_ON_ONCE(!vsi->vf))
+        return;
+
+    ice_port_vlan_on(vsi);
+}
+
+/**
+ * ice_vf_vsi_disable_port_vlan - Clear VSI support for creating port VLAN
+ * @vsi: VF's VSI being configured
+ *
+ * The function should be called after removing port VLAN on VSI
+ * (using VLAN ops)
+ */
+void ice_vf_vsi_disable_port_vlan(struct ice_vsi *vsi)
+{
+    if (WARN_ON_ONCE(!vsi->vf))
+        return;
+
+    ice_port_vlan_off(vsi);
+}
+
 /**
  * ice_vf_vsi_init_vlan_ops - Initialize default VSI VLAN ops for VF VSI
  * @vsi: VF's VSI being configured
@@ -39,91 +132,18 @@ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi)
     if (WARN_ON(!vf))
         return;
 
-    if (ice_is_dvm_ena(&pf->hw)) {
-        vlan_ops = &vsi->outer_vlan_ops;
+    if (ice_vf_is_port_vlan_ena(vf))
+        ice_port_vlan_on(vsi);
+    else
+        ice_port_vlan_off(vsi);
 
-        /* outer VLAN ops regardless of port VLAN config */
-        vlan_ops->add_vlan = ice_vsi_add_vlan;
-        vlan_ops->ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering;
-        vlan_ops->dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering;
-
-        if (ice_vf_is_port_vlan_ena(vf)) {
-            /* setup outer VLAN ops */
-            vlan_ops->set_port_vlan = ice_vsi_set_outer_port_vlan;
-            /* all Rx traffic should be in the domain of the
-             * assigned port VLAN, so prevent disabling Rx VLAN
-             * filtering
-             */
-            vlan_ops->dis_rx_filtering = noop_vlan;
-            vlan_ops->ena_rx_filtering =
-                ice_vsi_ena_rx_vlan_filtering;
-
-            /* setup inner VLAN ops */
-            vlan_ops = &vsi->inner_vlan_ops;
-            vlan_ops->add_vlan = noop_vlan_arg;
-            vlan_ops->del_vlan = noop_vlan_arg;
-            vlan_ops->ena_stripping = ice_vsi_ena_inner_stripping;
-            vlan_ops->dis_stripping = ice_vsi_dis_inner_stripping;
-            vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion;
-            vlan_ops->dis_insertion = ice_vsi_dis_inner_insertion;
-        } else {
-            vlan_ops->dis_rx_filtering =
-                ice_vsi_dis_rx_vlan_filtering;
-
-            if (!test_bit(ICE_FLAG_VF_VLAN_PRUNING, pf->flags))
-                vlan_ops->ena_rx_filtering = noop_vlan;
-            else
-                vlan_ops->ena_rx_filtering =
-                    ice_vsi_ena_rx_vlan_filtering;
-
-            vlan_ops->del_vlan = ice_vsi_del_vlan;
-            vlan_ops->ena_stripping = ice_vsi_ena_outer_stripping;
-            vlan_ops->dis_stripping = ice_vsi_dis_outer_stripping;
-            vlan_ops->ena_insertion = ice_vsi_ena_outer_insertion;
-            vlan_ops->dis_insertion = ice_vsi_dis_outer_insertion;
-
-            /* setup inner VLAN ops */
-            vlan_ops = &vsi->inner_vlan_ops;
-
-            vlan_ops->ena_stripping = ice_vsi_ena_inner_stripping;
-            vlan_ops->dis_stripping = ice_vsi_dis_inner_stripping;
-            vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion;
-            vlan_ops->dis_insertion = ice_vsi_dis_inner_insertion;
-        }
-    } else {
-        vlan_ops = &vsi->inner_vlan_ops;
+    vlan_ops = ice_is_dvm_ena(&pf->hw) ?
+        &vsi->outer_vlan_ops : &vsi->inner_vlan_ops;
 
-        /* inner VLAN ops regardless of port VLAN config */
-        vlan_ops->add_vlan = ice_vsi_add_vlan;
-        vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering;
-        vlan_ops->ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering;
-        vlan_ops->dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering;
-
-        if (ice_vf_is_port_vlan_ena(vf)) {
-            vlan_ops->set_port_vlan = ice_vsi_set_inner_port_vlan;
-            vlan_ops->ena_rx_filtering =
-                ice_vsi_ena_rx_vlan_filtering;
-            /* all Rx traffic should be in the domain of the
-             * assigned port VLAN, so prevent disabling Rx VLAN
-             * filtering
-             */
-            vlan_ops->dis_rx_filtering = noop_vlan;
-        } else {
-            vlan_ops->dis_rx_filtering =
-                ice_vsi_dis_rx_vlan_filtering;
-            if (!test_bit(ICE_FLAG_VF_VLAN_PRUNING, pf->flags))
-                vlan_ops->ena_rx_filtering = noop_vlan;
-            else
-                vlan_ops->ena_rx_filtering =
-                    ice_vsi_ena_rx_vlan_filtering;
-
-            vlan_ops->del_vlan = ice_vsi_del_vlan;
-            vlan_ops->ena_stripping = ice_vsi_ena_inner_stripping;
-            vlan_ops->dis_stripping = ice_vsi_dis_inner_stripping;
-            vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion;
-            vlan_ops->dis_insertion = ice_vsi_dis_inner_insertion;
-        }
-    }
+    vlan_ops->add_vlan = ice_vsi_add_vlan;
+    vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering;
+    vlan_ops->ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering;
+    vlan_ops->dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering;
 }
 
 /**
diff --git a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h
index 875a4e615f39..845330b49608 100644
--- a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h
+++ b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h
@@ -11,6 +11,9 @@ struct ice_vsi;
 void ice_vf_vsi_cfg_dvm_legacy_vlan_mode(struct ice_vsi *vsi);
 void ice_vf_vsi_cfg_svm_legacy_vlan_mode(struct ice_vsi *vsi);
 
+void ice_vf_vsi_enable_port_vlan(struct ice_vsi *vsi);
+void ice_vf_vsi_disable_port_vlan(struct ice_vsi *vsi);
+
 #ifdef CONFIG_PCI_IOV
 void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi);
 #else
diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c
index 5b4a0abb4607..76266e709a39 100644
--- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c
@@ -202,6 +202,24 @@ int ice_vsi_dis_inner_insertion(struct ice_vsi *vsi)
     return ice_vsi_manage_vlan_insertion(vsi);
 }
 
+static void
+ice_save_vlan_info(struct ice_aqc_vsi_props *info,
+           struct ice_vsi_vlan_info *vlan)
+{
+    vlan->sw_flags2 = info->sw_flags2;
+    vlan->inner_vlan_flags = info->inner_vlan_flags;
+    vlan->outer_vlan_flags = info->outer_vlan_flags;
+}
+
+static void
+ice_restore_vlan_info(struct ice_aqc_vsi_props *info,
+              struct ice_vsi_vlan_info *vlan)
+{
+    info->sw_flags2 = vlan->sw_flags2;
+    info->inner_vlan_flags = vlan->inner_vlan_flags;
+    info->outer_vlan_flags = vlan->outer_vlan_flags;
+}
+
 /**
  * __ice_vsi_set_inner_port_vlan - set port VLAN VSI context settings to enable a port VLAN
  * @vsi: the VSI to update
@@ -218,6 +236,7 @@ static int __ice_vsi_set_inner_port_vlan(struct ice_vsi *vsi, u16 pvid_info)
     if (!ctxt)
         return -ENOMEM;
 
+    ice_save_vlan_info(&vsi->info, &vsi->vlan_info);
     ctxt->info = vsi->info;
     info = &ctxt->info;
     info->inner_vlan_flags = ICE_AQ_VSI_INNER_VLAN_TX_MODE_ACCEPTUNTAGGED |
@@ -259,6 +278,33 @@ int ice_vsi_set_inner_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan)
     return __ice_vsi_set_inner_port_vlan(vsi, port_vlan_info);
 }
 
+int ice_vsi_clear_inner_port_vlan(struct ice_vsi *vsi)
+{
+    struct ice_hw *hw = &vsi->back->hw;
+    struct ice_aqc_vsi_props *info;
+    struct ice_vsi_ctx *ctxt;
+    int ret;
+
+    ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL);
+    if (!ctxt)
+        return -ENOMEM;
+
+    ice_restore_vlan_info(&vsi->info, &vsi->vlan_info);
+    vsi->info.port_based_inner_vlan = 0;
+    ctxt->info = vsi->info;
+    info = &ctxt->info;
+    info->valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID |
+                       ICE_AQ_VSI_PROP_SW_VALID);
+
+    ret = ice_update_vsi(hw, vsi->idx, ctxt, NULL);
+    if (ret)
+        dev_err(ice_hw_to_dev(hw), "update VSI for port VLAN failed, err %d aq_err %s\n",
+            ret, ice_aq_str(hw->adminq.sq_last_status));
+
+    kfree(ctxt);
+    return ret;
+}
+
 /**
  * ice_cfg_vlan_pruning - enable or disable VLAN pruning on the VSI
  * @vsi: VSI to enable or disable VLAN pruning on
@@ -647,6 +693,7 @@ __ice_vsi_set_outer_port_vlan(struct ice_vsi *vsi, u16 vlan_info, u16 tpid)
     if (!ctxt)
         return -ENOMEM;
 
+    ice_save_vlan_info(&vsi->info, &vsi->vlan_info);
     ctxt->info = vsi->info;
 
     ctxt->info.sw_flags2 |= ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA;
@@ -689,9 +736,6 @@ __ice_vsi_set_outer_port_vlan(struct ice_vsi *vsi, u16 vlan_info, u16 tpid)
  * used if DVM is supported. Also, this function should never be called directly
  * as it should be part of ice_vsi_vlan_ops if it's needed.
  *
- * This function does not support clearing the port VLAN as there is currently
- * no use case for this.
- *
  * Use the ice_vlan structure passed in to set this VSI in a port VLAN.
  */
 int ice_vsi_set_outer_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan)
@@ -705,3 +749,37 @@ int ice_vsi_set_outer_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan)
 
     return __ice_vsi_set_outer_port_vlan(vsi, port_vlan_info, vlan->tpid);
 }
+
+/**
+ * ice_vsi_clear_outer_port_vlan - clear outer port vlan
+ * @vsi: VSI to configure
+ *
+ * The function is restoring previously set vlan config (saved in
+ * vsi->vlan_info). Setting happens in port vlan configuration.
+ */
+int ice_vsi_clear_outer_port_vlan(struct ice_vsi *vsi)
+{
+	struct ice_hw *hw = &vsi->back->hw;
+	struct ice_vsi_ctx *ctxt;
+	int err;
+
+	ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL);
+	if (!ctxt)
+		return -ENOMEM;
+
+	ice_restore_vlan_info(&vsi->info, &vsi->vlan_info);
+	vsi->info.port_based_outer_vlan = 0;
+	ctxt->info = vsi->info;
+
+	ctxt->info.valid_sections =
+		cpu_to_le16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID |
+			    ICE_AQ_VSI_PROP_SW_VALID);
+
+	err = ice_update_vsi(hw, vsi->idx, ctxt, NULL);
+	if (err)
+		dev_err(ice_pf_to_dev(vsi->back), "update VSI for clearing outer port based VLAN failed, err %d aq_err %s\n",
+			err, ice_aq_str(hw->adminq.sq_last_status));
+
+	kfree(ctxt);
+	return err;
+}
diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h
index f459909490ec..f0d84d11bd5b 100644
--- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h
@@ -7,6 +7,12 @@
 #include <linux/types.h>
 #include "ice_vlan.h"

+struct ice_vsi_vlan_info {
+	u8 sw_flags2;
+	u8 inner_vlan_flags;
+	u8 outer_vlan_flags;
+};
+
 struct ice_vsi;

 int ice_vsi_add_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan);
@@ -17,6 +23,7 @@ int ice_vsi_dis_inner_stripping(struct ice_vsi *vsi);
 int ice_vsi_ena_inner_insertion(struct ice_vsi *vsi, u16 tpid);
 int ice_vsi_dis_inner_insertion(struct ice_vsi *vsi);
 int ice_vsi_set_inner_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan);
+int ice_vsi_clear_inner_port_vlan(struct ice_vsi *vsi);

 int ice_vsi_ena_rx_vlan_filtering(struct ice_vsi *vsi);
 int ice_vsi_dis_rx_vlan_filtering(struct ice_vsi *vsi);
@@ -28,5 +35,6 @@ int ice_vsi_dis_outer_stripping(struct ice_vsi *vsi);
 int ice_vsi_ena_outer_insertion(struct ice_vsi *vsi, u16 tpid);
 int ice_vsi_dis_outer_insertion(struct ice_vsi *vsi);
 int ice_vsi_set_outer_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan);
+int ice_vsi_clear_outer_port_vlan(struct ice_vsi *vsi);

 #endif /* _ICE_VSI_VLAN_LIB_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h
index 5b47568f6256..b2d2330dedcb 100644
--- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h
+++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h
@@ -21,6 +21,7 @@ struct ice_vsi_vlan_ops {
 	int (*ena_tx_filtering)(struct ice_vsi *vsi);
 	int (*dis_tx_filtering)(struct ice_vsi *vsi);
 	int (*set_port_vlan)(struct ice_vsi *vsi, struct ice_vlan *vlan);
+	int (*clear_port_vlan)(struct ice_vsi *vsi);
 };

 void ice_vsi_init_vlan_ops(struct ice_vsi *vsi);
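[Editorial note] To make the intent of the new clear_port_vlan op concrete: set_port_vlan now snapshots the three VLAN-related VSI context fields before overwriting them, and clear_port_vlan writes the snapshot back before zeroing the port VLAN id. A minimal standalone sketch of that save/restore pattern, using simplified types and hypothetical names rather than the driver's real structures:

	#include <stdint.h>

	/* Stand-ins for the VSI context fields a port VLAN overwrites. */
	struct props    { uint8_t sw_flags2, inner_flags, outer_flags; uint16_t port_vlan; };
	struct snapshot { uint8_t sw_flags2, inner_flags, outer_flags; };

	/* Taken just before the port VLAN config is written. */
	static void save_vlan_cfg(struct snapshot *s, const struct props *p)
	{
		s->sw_flags2 = p->sw_flags2;	/* pruning flag lives here */
		s->inner_flags = p->inner_flags;
		s->outer_flags = p->outer_flags;
	}

	/* Clearing the port VLAN restores the pre-port-VLAN config. */
	static void clear_port_vlan(struct props *p, const struct snapshot *s)
	{
		p->sw_flags2 = s->sw_flags2;
		p->inner_flags = s->inner_flags;
		p->outer_flags = s->outer_flags;
		p->port_vlan = 0;		/* then drop the port VLAN id */
	}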
From patchwork Thu May 18 10:25:08 2023
X-Patchwork-Submitter: Wojciech Drewek
X-Patchwork-Id: 13246423
X-Patchwork-Delegate: kuba@kernel.org
From: Wojciech Drewek <wojciech.drewek@intel.com>
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, alexandr.lobakin@intel.com,
    david.m.ertman@intel.com, michal.swiatkowski@linux.intel.com,
    marcin.szycik@linux.intel.com, pawel.chmielewski@intel.com,
    sridhar.samudrala@intel.com
Subject: [PATCH iwl-next v2 09/10] ice: implement static version of ageing
Date: Thu, 18 May 2023 12:25:08 +0200
Message-Id: <20230518102509.20913-10-wojciech.drewek@intel.com>
In-Reply-To: <20230518102509.20913-1-wojciech.drewek@intel.com>
References: <20230518102509.20913-1-wojciech.drewek@intel.com>

From: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>

Always remove FDB entries when the ageing time has expired. Allow the
user to set the ageing time via the bridge port object attribute.
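[Editorial note] The expiry rule implemented below: an entry ages out once jiffies moves past last_use + ageing_time, and user-added entries are treated as static and never aged. A standalone sketch of that test with plain unsigned arithmetic (hypothetical names; the driver itself uses time_is_after_eq_jiffies()):

	#include <stdbool.h>

	/* Unsigned subtraction keeps the comparison correct across
	 * counter wraparound, the same property the kernel's
	 * time_after() family relies on.
	 */
	static bool fdb_entry_expired(unsigned long now, unsigned long last_use,
				      unsigned long ageing_time, bool added_by_user)
	{
		if (added_by_user)	/* static entry: never aged */
			return false;

		return now - last_use >= ageing_time;
	}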
Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com>
---
v2: use msecs_to_jiffies in the definition of ICE_ESW_BRIDGE_UPDATE_INTERVAL
---
 .../net/ethernet/intel/ice/ice_eswitch_br.c | 48 +++++++++++++++++++
 .../net/ethernet/intel/ice/ice_eswitch_br.h | 10 ++++
 2 files changed, 58 insertions(+)

diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
index e4a8c9ac5e5d..41c47dec5d32 100644
--- a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
+++ b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
@@ -8,6 +8,8 @@
 #include "ice_vlan.h"
 #include "ice_vf_vsi_vlan_ops.h"

+#define ICE_ESW_BRIDGE_UPDATE_INTERVAL msecs_to_jiffies(1000)
+
 static const struct rhashtable_params ice_fdb_ht_params = {
 	.key_offset = offsetof(struct ice_esw_br_fdb_entry, data),
 	.key_len = sizeof(struct ice_esw_br_fdb_data),
@@ -452,6 +454,7 @@ ice_eswitch_br_fdb_entry_create(struct net_device *netdev,
 	fdb_entry->br_port = br_port;
 	fdb_entry->flow = flow;
 	fdb_entry->dev = netdev;
+	fdb_entry->last_use = jiffies;
 	event = SWITCHDEV_FDB_ADD_TO_BRIDGE;

 	if (added_by_user) {
@@ -845,6 +848,10 @@ ice_eswitch_br_port_obj_attr_set(struct net_device *netdev, const void *ctx,
 		ice_eswitch_br_vlan_filtering_set(br_port->bridge,
 						  attr->u.vlan_filtering);
 		return 0;
+	case SWITCHDEV_ATTR_ID_BRIDGE_AGEING_TIME:
+		br_port->bridge->ageing_time =
+			clock_t_to_jiffies(attr->u.ageing_time);
+		return 0;
 	default:
 		return -EOPNOTSUPP;
 	}
@@ -1016,6 +1023,7 @@ ice_eswitch_br_init(struct ice_esw_br_offloads *br_offloads, int ifindex)
 	INIT_LIST_HEAD(&bridge->fdb_list);
 	bridge->br_offloads = br_offloads;
 	bridge->ifindex = ifindex;
+	bridge->ageing_time = clock_t_to_jiffies(BR_DEFAULT_AGEING_TIME);
 	xa_init(&bridge->ports);
 	br_offloads->bridge = bridge;

@@ -1219,6 +1227,7 @@ ice_eswitch_br_offloads_deinit(struct ice_pf *pf)
 	if (!br_offloads)
 		return;

+	cancel_delayed_work_sync(&br_offloads->update_work);
 	unregister_netdevice_notifier(&br_offloads->netdev_nb);
 	unregister_switchdev_blocking_notifier(&br_offloads->switchdev_blk);
 	unregister_switchdev_notifier(&br_offloads->switchdev_nb);
@@ -1233,6 +1242,40 @@ ice_eswitch_br_offloads_deinit(struct ice_pf *pf)
 	rtnl_unlock();
 }

+static void ice_eswitch_br_update(struct ice_esw_br_offloads *br_offloads)
+{
+	struct ice_esw_br *bridge = br_offloads->bridge;
+	struct ice_esw_br_fdb_entry *entry, *tmp;
+
+	if (!bridge)
+		return;
+
+	rtnl_lock();
+	list_for_each_entry_safe(entry, tmp, &bridge->fdb_list, list) {
+		if (entry->flags & ICE_ESWITCH_BR_FDB_ADDED_BY_USER)
+			continue;
+
+		if (time_is_after_eq_jiffies(entry->last_use +
+					     bridge->ageing_time))
+			continue;
+
+		ice_eswitch_br_fdb_entry_notify_and_cleanup(bridge, entry);
+	}
+	rtnl_unlock();
+}
+
+static void ice_eswitch_br_update_work(struct work_struct *work)
+{
+	struct ice_esw_br_offloads *br_offloads;
+
+	br_offloads = ice_work_to_br_offloads(work);
+
+	ice_eswitch_br_update(br_offloads);
+
+	queue_delayed_work(br_offloads->wq, &br_offloads->update_work,
+			   ICE_ESW_BRIDGE_UPDATE_INTERVAL);
+}
+
 int
 ice_eswitch_br_offloads_init(struct ice_pf *pf)
 {
@@ -1281,6 +1324,11 @@ ice_eswitch_br_offloads_init(struct ice_pf *pf)
 		goto err_reg_netdev_nb;
 	}

+	INIT_DELAYED_WORK(&br_offloads->update_work,
+			  ice_eswitch_br_update_work);
+	queue_delayed_work(br_offloads->wq, &br_offloads->update_work,
+			   ICE_ESW_BRIDGE_UPDATE_INTERVAL);
+
 	return 0;

 err_reg_netdev_nb:
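[Editorial note] The update_work added above follows the common self-rearming delayed-work shape: init once, queue once, have the handler re-queue itself, and cancel synchronously before the workqueue is torn down. A generic sketch of that pattern with illustrative "my_*" names, not driver code:

	#include <linux/workqueue.h>

	struct my_ctx {
		struct workqueue_struct *wq;
		struct delayed_work work;
	};

	static void my_work_fn(struct work_struct *work)
	{
		/* delayed_work embeds its work_struct at .work */
		struct my_ctx *ctx = container_of(work, struct my_ctx, work.work);

		/* ... do one periodic sweep ... */

		queue_delayed_work(ctx->wq, &ctx->work, msecs_to_jiffies(1000));
	}

	static void my_start(struct my_ctx *ctx)
	{
		INIT_DELAYED_WORK(&ctx->work, my_work_fn);
		queue_delayed_work(ctx->wq, &ctx->work, msecs_to_jiffies(1000));
	}

	static void my_stop(struct my_ctx *ctx)
	{
		/* must run before destroy_workqueue(ctx->wq) */
		cancel_delayed_work_sync(&ctx->work);
	}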
diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch_br.h b/drivers/net/ethernet/intel/ice/ice_eswitch_br.h
index 8c108fc25b2f..d17f7a6fefc4 100644
--- a/drivers/net/ethernet/intel/ice/ice_eswitch_br.h
+++ b/drivers/net/ethernet/intel/ice/ice_eswitch_br.h
@@ -5,6 +5,7 @@
 #define _ICE_ESWITCH_BR_H_

 #include <linux/rhashtable.h>
+#include <linux/workqueue.h>

 struct ice_esw_br_fdb_data {
 	unsigned char addr[ETH_ALEN];
@@ -30,6 +31,8 @@ struct ice_esw_br_fdb_entry {
 	struct net_device *dev;
 	struct ice_esw_br_port *br_port;
 	struct ice_esw_br_flow *flow;
+
+	unsigned long last_use;
 };

 enum ice_esw_br_port_type {
@@ -59,6 +62,7 @@ struct ice_esw_br {
 	int ifindex;

 	u32 flags;
+	unsigned long ageing_time;
 };

 struct ice_esw_br_offloads {
@@ -69,6 +73,7 @@ struct ice_esw_br_offloads {
 	struct notifier_block switchdev_nb;

 	struct workqueue_struct *wq;
+	struct delayed_work update_work;
 };

 struct ice_esw_br_fdb_work {
@@ -89,6 +94,11 @@ struct ice_esw_br_vlan {
 			     struct ice_esw_br_offloads, \
 			     nb_name)

+#define ice_work_to_br_offloads(w) \
+	container_of(w, \
+		     struct ice_esw_br_offloads, \
+		     update_work.work)
+
 #define ice_work_to_fdb_work(w) \
 	container_of(w, \
 		     struct ice_esw_br_fdb_work, \
From patchwork Thu May 18 10:25:09 2023
X-Patchwork-Submitter: Wojciech Drewek
X-Patchwork-Id: 13246425
X-Patchwork-Delegate: kuba@kernel.org
From: Wojciech Drewek <wojciech.drewek@intel.com>
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, alexandr.lobakin@intel.com,
    david.m.ertman@intel.com, michal.swiatkowski@linux.intel.com,
    marcin.szycik@linux.intel.com, pawel.chmielewski@intel.com,
    sridhar.samudrala@intel.com
Subject: [PATCH iwl-next v2 10/10] ice: add tracepoints for the switchdev bridge
Date: Thu, 18 May 2023 12:25:09 +0200
Message-Id: <20230518102509.20913-11-wojciech.drewek@intel.com>
In-Reply-To: <20230518102509.20913-1-wojciech.drewek@intel.com>
References: <20230518102509.20913-1-wojciech.drewek@intel.com>

From: Pawel Chmielewski <pawel.chmielewski@intel.com>

Add tracepoints for the following events:
- Add FDB entry
- Delete FDB entry
- Create bridge VLAN
- Cleanup bridge VLAN
- Link port to the bridge
- Unlink port from the bridge
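[Editorial note] The diff below uses the standard class/event split from <linux/tracepoint.h>: DECLARE_EVENT_CLASS captures the record layout and formatting once, and each DEFINE_EVENT stamps out a named tracepoint that shares it. A minimal illustration of the shape, with hypothetical names:

	DECLARE_EVENT_CLASS(my_template,
			    TP_PROTO(int id),
			    TP_ARGS(id),
			    TP_STRUCT__entry(__field(int, id)),
			    TP_fast_assign(__entry->id = id;),
			    TP_printk("id=%d", __entry->id)
	);

	/* Reuses the class layout; callers invoke trace_my_created(id). */
	DEFINE_EVENT(my_template, my_created,
		     TP_PROTO(int id),
		     TP_ARGS(id)
	);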
"ice_eswitch_br.h" /* ice_trace() macro enables shared code to refer to trace points * like: @@ -240,6 +241,95 @@ DEFINE_TX_TSTAMP_OP_EVENT(ice_tx_tstamp_fw_req); DEFINE_TX_TSTAMP_OP_EVENT(ice_tx_tstamp_fw_done); DEFINE_TX_TSTAMP_OP_EVENT(ice_tx_tstamp_complete); +DECLARE_EVENT_CLASS(ice_esw_br_fdb_template, + TP_PROTO(struct ice_esw_br_fdb_entry *fdb), + TP_ARGS(fdb), + TP_STRUCT__entry(__array(char, dev_name, IFNAMSIZ) + __array(unsigned char, addr, ETH_ALEN) + __field(u16, vid) + __field(int, flags)), + TP_fast_assign(strscpy(__entry->dev_name, + netdev_name(fdb->dev), + IFNAMSIZ); + memcpy(__entry->addr, fdb->data.addr, ETH_ALEN); + __entry->vid = fdb->data.vid; + __entry->flags = fdb->flags;), + TP_printk("net_device=%s addr=%pM vid=%u flags=%x", + __entry->dev_name, + __entry->addr, + __entry->vid, + __entry->flags) +); + +DEFINE_EVENT(ice_esw_br_fdb_template, + ice_eswitch_br_fdb_entry_create, + TP_PROTO(struct ice_esw_br_fdb_entry *fdb), + TP_ARGS(fdb) +); + +DEFINE_EVENT(ice_esw_br_fdb_template, + ice_eswitch_br_fdb_entry_find_and_delete, + TP_PROTO(struct ice_esw_br_fdb_entry *fdb), + TP_ARGS(fdb) +); + +DECLARE_EVENT_CLASS(ice_esw_br_vlan_template, + TP_PROTO(struct ice_esw_br_vlan *vlan), + TP_ARGS(vlan), + TP_STRUCT__entry(__field(u16, vid) + __field(u16, flags)), + TP_fast_assign(__entry->vid = vlan->vid; + __entry->flags = vlan->flags;), + TP_printk("vid=%u flags=%x", + __entry->vid, + __entry->flags) +); + +DEFINE_EVENT(ice_esw_br_vlan_template, + ice_eswitch_br_vlan_create, + TP_PROTO(struct ice_esw_br_vlan *vlan), + TP_ARGS(vlan) +); + +DEFINE_EVENT(ice_esw_br_vlan_template, + ice_eswitch_br_vlan_cleanup, + TP_PROTO(struct ice_esw_br_vlan *vlan), + TP_ARGS(vlan) +); + +#define ICE_ESW_BR_PORT_NAME_L 16 + +DECLARE_EVENT_CLASS(ice_esw_br_port_template, + TP_PROTO(struct ice_esw_br_port *port), + TP_ARGS(port), + TP_STRUCT__entry(__field(u16, vport_num) + __array(char, port_type, ICE_ESW_BR_PORT_NAME_L)), + TP_fast_assign(__entry->vport_num = port->vsi_idx; + if (port->type == ICE_ESWITCH_BR_UPLINK_PORT) + strscpy(__entry->port_type, + "Uplink", + ICE_ESW_BR_PORT_NAME_L); + else + strscpy(__entry->port_type, + "VF Representor", + ICE_ESW_BR_PORT_NAME_L);), + TP_printk("vport_num=%u port type=%s", + __entry->vport_num, + __entry->port_type) +); + +DEFINE_EVENT(ice_esw_br_port_template, + ice_eswitch_br_port_link, + TP_PROTO(struct ice_esw_br_port *port), + TP_ARGS(port) +); + +DEFINE_EVENT(ice_esw_br_port_template, + ice_eswitch_br_port_unlink, + TP_PROTO(struct ice_esw_br_port *port), + TP_ARGS(port) +); + /* End tracepoints */ #endif /* _ICE_TRACE_H_ */