From patchwork Mon Apr 17 09:34:07 2023
X-Patchwork-Submitter: Wojciech Drewek
X-Patchwork-Id: 13213568
X-Patchwork-Delegate: kuba@kernel.org
From: Wojciech Drewek
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, alexandr.lobakin@intel.com, david.m.ertman@intel.com,
    michal.swiatkowski@linux.intel.com, marcin.szycik@linux.intel.com,
    pawel.chmielewski@intel.com, sridhar.samudrala@intel.com
Subject: [PATCH net-next 07/12] ice: Accept LAG netdevs in bridge offloads
Date: Mon, 17 Apr 2023 11:34:07 +0200
Message-Id: <20230417093412.12161-8-wojciech.drewek@intel.com>
In-Reply-To: <20230417093412.12161-1-wojciech.drewek@intel.com>
References: <20230417093412.12161-1-wojciech.drewek@intel.com>
X-Mailing-List: netdev@vger.kernel.org

Allow LAG interfaces to be used as bridge ports in bridge offloads by
recognizing them with netif_is_lag_master(). When the port is a LAG
device, search the LAG's lower devices for the ice netdev and use it as
the uplink.
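For illustration only (not part of this patch), a minimal sketch of the
lookup the commit message describes: given a LAG master, walk its lower
devices with netdev_for_each_lower_dev() and return the one backed by the
ice driver. The function name example_find_ice_lower() is hypothetical;
netif_is_ice() is the ice driver's private check.

#include <linux/netdevice.h>

static struct net_device *example_find_ice_lower(struct net_device *lag_dev)
{
	struct net_device *lower;	/* current lower device in the walk */
	struct list_head *iter;		/* iterator state for the adjacency list */

	/* Only LAG masters (bond/team) have lower devices worth inspecting. */
	if (!netif_is_lag_master(lag_dev))
		return NULL;

	/* Walk the LAG's lower devices; the caller is expected to hold RTNL. */
	netdev_for_each_lower_dev(lag_dev, lower, iter)
		if (netif_is_ice(lower))	/* ice-driver-private helper */
			return lower;

	return NULL;	/* no ice uplink enslaved to this LAG */
}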
Signed-off-by: Wojciech Drewek
---
 .../net/ethernet/intel/ice/ice_eswitch_br.c   | 40 ++++++++++++++++---
 1 file changed, 35 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
index 82b5eb2020cd..49381e4bf62a 100644
--- a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
+++ b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
@@ -15,8 +15,21 @@ static const struct rhashtable_params ice_fdb_ht_params = {
 
 static bool ice_eswitch_br_is_dev_valid(const struct net_device *dev)
 {
-	/* Accept only PF netdev and PRs */
-	return ice_is_port_repr_netdev(dev) || netif_is_ice(dev);
+	/* Accept only PF netdev, PRs and LAG */
+	return ice_is_port_repr_netdev(dev) || netif_is_ice(dev) ||
+	       netif_is_lag_master(dev);
+}
+
+static struct net_device *
+ice_eswitch_br_get_uplink_from_lag(struct net_device *lag_dev)
+{
+	struct net_device *lower;
+	struct list_head *iter;
+
+	netdev_for_each_lower_dev(lag_dev, lower, iter)
+		if (netif_is_ice(lower))
+			return lower;
+	return NULL;
 }
 
 static struct ice_esw_br_port *
@@ -26,8 +39,16 @@ ice_eswitch_br_netdev_to_port(struct net_device *dev)
 		struct ice_repr *repr = ice_netdev_to_repr(dev);
 
 		return repr->br_port;
-	} else if (netif_is_ice(dev)) {
-		struct ice_pf *pf = ice_netdev_to_pf(dev);
+	} else if (netif_is_ice(dev) || netif_is_lag_master(dev)) {
+		struct net_device *ice_dev = dev;
+		struct ice_pf *pf;
+
+		if (netif_is_lag_master(dev))
+			ice_dev = ice_eswitch_br_get_uplink_from_lag(dev);
+		if (!ice_dev)
+			return NULL;
+
+		pf = ice_netdev_to_pf(ice_dev);
 
 		return pf->br_port;
 	}
@@ -719,7 +740,16 @@ ice_eswitch_br_port_link(struct ice_esw_br_offloads *br_offloads,
 
 		err = ice_eswitch_br_vf_repr_port_init(bridge, repr);
 	} else {
-		struct ice_pf *pf = ice_netdev_to_pf(dev);
+		struct net_device *ice_dev = dev;
+		struct ice_pf *pf;
+
+		if (netif_is_lag_master(dev))
+			ice_dev = ice_eswitch_br_get_uplink_from_lag(dev);
+
+		if (!ice_dev)
+			return 0;
+
+		pf = ice_netdev_to_pf(ice_dev);
 
 		err = ice_eswitch_br_uplink_port_init(bridge, pf);
 	}