From patchwork Thu Sep 8 19:33:35 2022
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 12970554
From: Lorenzo Bianconi
To: netdev@vger.kernel.org
Subject: [PATCH net-next 01/12] arm64: dts: mediatek: mt7986: add support for
 Wireless Ethernet Dispatch
Date: Thu, 8 Sep 2022 21:33:35 +0200

Introduce wed0 and wed1 nodes in order to enable offloading of the traffic
forwarded between ethernet and wireless devices on the mt7986 chipset.
Co-developed-by: Bo Jiao
Signed-off-by: Bo Jiao
Signed-off-by: Lorenzo Bianconi
---
 arch/arm64/boot/dts/mediatek/mt7986a.dtsi | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/arch/arm64/boot/dts/mediatek/mt7986a.dtsi b/arch/arm64/boot/dts/mediatek/mt7986a.dtsi
index e3a407d03551..419d056b8369 100644
--- a/arch/arm64/boot/dts/mediatek/mt7986a.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt7986a.dtsi
@@ -222,6 +222,25 @@ ethsys: syscon@15000000 {
                        #reset-cells = <1>;
                };

+               wed_pcie: wed_pcie@10003000 {
+                       compatible = "mediatek,wed";
+                       reg = <0 0x10003000 0 0x10>;
+               };
+
+               wed0: wed@15010000 {
+                       compatible = "mediatek,wed", "syscon";
+                       reg = <0 0x15010000 0 0x1000>;
+                       interrupt-parent = <&gic>;
+                       interrupts = ;
+               };
+
+               wed1: wed@15011000 {
+                       compatible = "mediatek,wed", "syscon";
+                       reg = <0 0x15011000 0 0x1000>;
+                       interrupt-parent = <&gic>;
+                       interrupts = ;
+               };
+
                eth: ethernet@15100000 {
                        compatible = "mediatek,mt7986-eth";
                        reg = <0 0x15100000 0 0x80000>;
@@ -256,6 +275,7 @@ eth: ethernet@15100000 {
                                 <&apmixedsys CLK_APMIXED_SGMPLL>;
                        mediatek,ethsys = <&ethsys>;
                        mediatek,sgmiisys = <&sgmiisys0>, <&sgmiisys1>;
+                       mediatek,wed = <&wed0>, <&wed1>;
                        #reset-cells = <1>;
                        #address-cells = <1>;
                        #size-cells = <0>;
From patchwork Thu Sep 8 19:33:36 2022
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 12970555
From: Lorenzo Bianconi
To: netdev@vger.kernel.org
Subject: [PATCH net-next 02/12] dt-bindings: net: mediatek: add WED binding
 for MT7986 eth driver
Date: Thu, 8 Sep 2022 21:33:36 +0200

Document the binding for the Wireless Ethernet Dispatch core used by the
MT7986 ethernet driver.

Signed-off-by: Lorenzo Bianconi
---
 Documentation/devicetree/bindings/net/mediatek,net.yaml | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/Documentation/devicetree/bindings/net/mediatek,net.yaml b/Documentation/devicetree/bindings/net/mediatek,net.yaml
index f5564ecddb62..0116f100ef19 100644
--- a/Documentation/devicetree/bindings/net/mediatek,net.yaml
+++ b/Documentation/devicetree/bindings/net/mediatek,net.yaml
@@ -238,6 +238,15 @@ allOf:
             minItems: 2
             maxItems: 2

+  mediatek,wed:
+    $ref: /schemas/types.yaml#/definitions/phandle-array
+    minItems: 2
+    maxItems: 2
+    items:
+      maxItems: 1
+    description:
+      List of phandles to wireless ethernet dispatch nodes.
+
 patternProperties:
   "^mac@[0-1]$":
     type: object
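[Editor's note] For reference, a consumer of the new property looks like the
devicetree fragment below. This is a minimal sketch assembled from the
wed0/wed1 nodes and the mediatek,wed phandle array added in patch 01 of this
series; the interrupt, clock and remaining ethernet properties of the real
nodes are omitted, so treat anything not shown in that patch as illustrative
only.

	/* Illustrative fragment: two WED instances referenced by the
	 * ethernet node through the mediatek,wed phandle array documented
	 * above. */
	wed0: wed@15010000 {
		compatible = "mediatek,wed", "syscon";
		reg = <0 0x15010000 0 0x1000>;
	};

	wed1: wed@15011000 {
		compatible = "mediatek,wed", "syscon";
		reg = <0 0x15011000 0 0x1000>;
	};

	eth: ethernet@15100000 {
		compatible = "mediatek,mt7986-eth";
		reg = <0 0x15100000 0 0x80000>;
		mediatek,wed = <&wed0>, <&wed1>;
	};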
From patchwork Thu Sep 8 19:33:37 2022
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 12970556
From: Lorenzo Bianconi
To: netdev@vger.kernel.org
Subject: [PATCH net-next 03/12] net: ethernet: mtk_eth_soc: move gdma_to_ppe
 and ppe_base definitions in mtk register map
Date: Thu, 8 Sep 2022 21:33:37 +0200
Message-Id: <95938fc9cbe0223714be2658a49ca58e9baace00.1662661555.git.lorenzo@kernel.org>

This is a preliminary patch to introduce support for the mt7986 hw packet
engine.

Signed-off-by: Lorenzo Bianconi
---
 drivers/net/ethernet/mediatek/mtk_eth_soc.c | 15 +++++++++++----
 drivers/net/ethernet/mediatek/mtk_eth_soc.h |  3 ++-
 drivers/net/ethernet/mediatek/mtk_ppe.h     |  2 --
 3 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
index c19c67a480ae..b2b92fe2a96a 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
@@ -73,6 +73,8 @@ static const struct mtk_reg_map mtk_reg_map = {
                .fq_blen = 0x1b2c,
        },
        .gdm1_cnt = 0x2400,
+       .gdma_to_ppe = 0x4444,
+       .ppe_base = 0x0c00,
 };

 static const struct mtk_reg_map mt7628_reg_map = {
@@ -126,6 +128,8 @@ static const struct mtk_reg_map mt7986_reg_map = {
                .fq_blen = 0x472c,
        },
        .gdm1_cnt = 0x1c00,
+       .gdma_to_ppe = 0x3333,
+       .ppe_base = 0x2000,
 };

 /* strings used by ethtool */
@@ -2978,21 +2982,22 @@ static int mtk_open(struct net_device *dev)

        /* we run 2 netdevs on the same dma ring so we only bring it up once */
        if (!refcount_read(&eth->dma_refcnt)) {
+               const struct mtk_soc_data *soc = eth->soc;
                u32 gdm_config = MTK_GDMA_TO_PDMA;

                err = mtk_start_dma(eth);
                if (err)
                        return err;

-               if (eth->soc->offload_version && mtk_ppe_start(eth->ppe) == 0)
-                       gdm_config = MTK_GDMA_TO_PPE;
+               if (soc->offload_version && mtk_ppe_start(eth->ppe) == 0)
+                       gdm_config = soc->reg_map->gdma_to_ppe;

                mtk_gdm_config(eth, gdm_config);

                napi_enable(&eth->tx_napi);
                napi_enable(&eth->rx_napi);
                mtk_tx_irq_enable(eth, MTK_TX_DONE_INT);
-               mtk_rx_irq_enable(eth, eth->soc->txrx.rx_irq_done_mask);
+               mtk_rx_irq_enable(eth, soc->txrx.rx_irq_done_mask);
                refcount_set(&eth->dma_refcnt, 1);
        }
        else
@@ -4098,7 +4103,9 @@ static int mtk_probe(struct platform_device *pdev)
        }

        if (eth->soc->offload_version) {
-               eth->ppe = mtk_ppe_init(eth, eth->base + MTK_ETH_PPE_BASE, 2);
+               u32 ppe_addr = eth->soc->reg_map->ppe_base;
+
+               eth->ppe = mtk_ppe_init(eth, eth->base + ppe_addr, 2);
                if (!eth->ppe) {
                        err = -ENOMEM;
                        goto err_free_dev;
diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
index ecf85e9ed824..2617cbecdfca 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
@@ -105,7 +105,6 @@
 #define MTK_GDMA_TCS_EN                BIT(21)
 #define MTK_GDMA_UCS_EN                BIT(20)
 #define MTK_GDMA_TO_PDMA       0x0
-#define MTK_GDMA_TO_PPE                0x4444
 #define MTK_GDMA_DROP_ALL      0x7777

 /* Unicast Filter MAC Address Register - Low */
@@ -955,6 +954,8 @@ struct mtk_reg_map {
                u32 fq_blen;    /* fq free page buffer length */
        } qdma;
        u32 gdm1_cnt;
+       u32 gdma_to_ppe;
+       u32 ppe_base;
 };

 /* struct mtk_eth_data - This is the structure holding all differences
diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.h b/drivers/net/ethernet/mediatek/mtk_ppe.h
index 8f786c47b61a..bb079e3c0417 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe.h
+++ b/drivers/net/ethernet/mediatek/mtk_ppe.h
@@ -8,8 +8,6 @@
 #include
 #include

-#define MTK_ETH_PPE_BASE               0xc00
-
 #define MTK_PPE_ENTRIES_SHIFT          3
 #define MTK_PPE_ENTRIES                        (1024 << MTK_PPE_ENTRIES_SHIFT)
 #define MTK_PPE_HASH_MASK              (MTK_PPE_ENTRIES - 1)

From patchwork Thu Sep 8 19:33:38 2022
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 12970557
From: Lorenzo Bianconi
To: netdev@vger.kernel.org
Subject: [PATCH net-next 04/12] net: ethernet: mtk_eth_soc: move ppe table
 hash offset to mtk_soc_data structure
Date: Thu, 8 Sep 2022 21:33:38 +0200

This is a preliminary patch to introduce support for the mt7986 hw packet
engine.
Co-developed-by: Bo Jiao
Signed-off-by: Bo Jiao
Co-developed-by: Sujuan Chen
Signed-off-by: Sujuan Chen
Signed-off-by: Lorenzo Bianconi
---
 drivers/net/ethernet/mediatek/mtk_eth_soc.c |  4 ++++
 drivers/net/ethernet/mediatek/mtk_eth_soc.h |  2 ++
 drivers/net/ethernet/mediatek/mtk_ppe.c     | 24 +++++++++++++++------
 drivers/net/ethernet/mediatek/mtk_ppe.h     |  2 +-
 4 files changed, 25 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
index b2b92fe2a96a..d09717d4f3be 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
@@ -4201,6 +4201,7 @@ static const struct mtk_soc_data mt7621_data = {
        .required_clks = MT7621_CLKS_BITMAP,
        .required_pctl = false,
        .offload_version = 2,
+       .hash_offset = 2,
        .txrx = {
                .txd_size = sizeof(struct mtk_tx_dma),
                .rxd_size = sizeof(struct mtk_rx_dma),
@@ -4219,6 +4220,7 @@ static const struct mtk_soc_data mt7622_data = {
        .required_clks = MT7622_CLKS_BITMAP,
        .required_pctl = false,
        .offload_version = 2,
+       .hash_offset = 2,
        .txrx = {
                .txd_size = sizeof(struct mtk_tx_dma),
                .rxd_size = sizeof(struct mtk_rx_dma),
@@ -4236,6 +4238,7 @@ static const struct mtk_soc_data mt7623_data = {
        .required_clks = MT7623_CLKS_BITMAP,
        .required_pctl = true,
        .offload_version = 2,
+       .hash_offset = 2,
        .txrx = {
                .txd_size = sizeof(struct mtk_tx_dma),
                .rxd_size = sizeof(struct mtk_rx_dma),
@@ -4269,6 +4272,7 @@ static const struct mtk_soc_data mt7986_data = {
        .caps = MT7986_CAPS,
        .required_clks = MT7986_CLKS_BITMAP,
        .required_pctl = false,
+       .hash_offset = 4,
        .txrx = {
                .txd_size = sizeof(struct mtk_tx_dma_v2),
                .rxd_size = sizeof(struct mtk_rx_dma_v2),
diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
index 2617cbecdfca..6c5e144cb9f0 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
@@ -969,6 +969,7 @@ struct mtk_reg_map {
  *                             the target SoC
  * @required_pctl              A bool value to show whether the SoC requires
  *                             the extra setup for those pins used by GMAC.
+ * @hash_offset                        Flow table hash offset.
  * @txd_size                   Tx DMA descriptor size.
  * @rxd_size                   Rx DMA descriptor size.
  * @rx_irq_done_mask           Rx irq done register mask.
@@ -983,6 +984,7 @@ struct mtk_soc_data {
        u32             required_clks;
        bool            required_pctl;
        u8              offload_version;
+       u8              hash_offset;
        netdev_features_t hw_features;
        struct {
                u32     txd_size;
diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.c b/drivers/net/ethernet/mediatek/mtk_ppe.c
index cfe804bc8d20..1cc7d8338722 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe.c
+++ b/drivers/net/ethernet/mediatek/mtk_ppe.c
@@ -88,7 +88,7 @@ static void mtk_ppe_cache_enable(struct mtk_ppe *ppe, bool enable)
                  enable * MTK_PPE_CACHE_CTL_EN);
 }

-static u32 mtk_ppe_hash_entry(struct mtk_foe_entry *e)
+static u32 mtk_ppe_hash_entry(struct mtk_eth *eth, struct mtk_foe_entry *e)
 {
        u32 hv1, hv2, hv3;
        u32 hash;
@@ -122,7 +122,7 @@ static u32 mtk_ppe_hash_entry(struct mtk_foe_entry *e)
        hash = (hash >> 24) | ((hash & 0xffffff) << 8);
        hash ^= hv1 ^ hv2 ^ hv3;
        hash ^= hash >> 16;
-       hash <<= 1;
+       hash <<= (ffs(eth->soc->hash_offset) - 1);
        hash &= MTK_PPE_ENTRIES - 1;

        return hash;
@@ -540,15 +540,16 @@ mtk_foe_entry_commit_l2(struct mtk_ppe *ppe, struct mtk_flow_entry *entry)
 int mtk_foe_entry_commit(struct mtk_ppe *ppe, struct mtk_flow_entry *entry)
 {
        int type = FIELD_GET(MTK_FOE_IB1_PACKET_TYPE, entry->data.ib1);
+       const struct mtk_soc_data *soc = ppe->eth->soc;
        u32 hash;

        if (type == MTK_PPE_PKT_TYPE_BRIDGE)
                return mtk_foe_entry_commit_l2(ppe, entry);

-       hash = mtk_ppe_hash_entry(&entry->data);
+       hash = mtk_ppe_hash_entry(ppe->eth, &entry->data);
        entry->hash = 0xffff;
        spin_lock_bh(&ppe_lock);
-       hlist_add_head(&entry->list, &ppe->foe_flow[hash / 2]);
+       hlist_add_head(&entry->list, &ppe->foe_flow[hash / soc->hash_offset]);
        spin_unlock_bh(&ppe_lock);

        return 0;
@@ -558,6 +559,7 @@ static void
 mtk_foe_entry_commit_subflow(struct mtk_ppe *ppe, struct mtk_flow_entry *entry,
                             u16 hash)
 {
+       const struct mtk_soc_data *soc = ppe->eth->soc;
        struct mtk_flow_entry *flow_info;
        struct mtk_foe_entry foe, *hwe;
        struct mtk_foe_mac_info *l2;
@@ -572,7 +574,8 @@ mtk_foe_entry_commit_subflow(struct mtk_ppe *ppe, struct mtk_flow_entry *entry,
        flow_info->l2_data.base_flow = entry;
        flow_info->type = MTK_FLOW_TYPE_L2_SUBFLOW;
        flow_info->hash = hash;
-       hlist_add_head(&flow_info->list, &ppe->foe_flow[hash / 2]);
+       hlist_add_head(&flow_info->list,
+                      &ppe->foe_flow[hash / soc->hash_offset]);
        hlist_add_head(&flow_info->l2_data.list, &entry->l2_flows);

        hwe = &ppe->foe_table[hash];
@@ -596,7 +599,8 @@ mtk_foe_entry_commit_subflow(struct mtk_ppe *ppe, struct mtk_flow_entry *entry,

 void __mtk_ppe_check_skb(struct mtk_ppe *ppe, struct sk_buff *skb, u16 hash)
 {
-       struct hlist_head *head = &ppe->foe_flow[hash / 2];
+       const struct mtk_soc_data *soc = ppe->eth->soc;
+       struct hlist_head *head = &ppe->foe_flow[hash / soc->hash_offset];
        struct mtk_foe_entry *hwe = &ppe->foe_table[hash];
        struct mtk_flow_entry *entry;
        struct mtk_foe_bridge key = {};
@@ -680,9 +684,11 @@ int mtk_foe_entry_idle_time(struct mtk_ppe *ppe, struct mtk_flow_entry *entry)
 struct mtk_ppe *mtk_ppe_init(struct mtk_eth *eth, void __iomem *base,
                 int version)
 {
+       const struct mtk_soc_data *soc = eth->soc;
        struct device *dev = eth->dev;
        struct mtk_foe_entry *foe;
        struct mtk_ppe *ppe;
+       u32 foe_flow_size;

        ppe = devm_kzalloc(dev, sizeof(*ppe), GFP_KERNEL);
        if (!ppe)
@@ -705,6 +711,12 @@ struct mtk_ppe *mtk_ppe_init(struct mtk_eth *eth, void __iomem *base,

        ppe->foe_table = foe;

+       foe_flow_size = (MTK_PPE_ENTRIES / soc->hash_offset) *
+                       sizeof(*ppe->foe_flow);
+       ppe->foe_flow = devm_kzalloc(dev, foe_flow_size, GFP_KERNEL);
+       if (!ppe->foe_flow)
+               return NULL;
+
        mtk_ppe_debugfs_init(ppe);

        return ppe;
diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.h b/drivers/net/ethernet/mediatek/mtk_ppe.h
index bb079e3c0417..22efed6599c2 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe.h
+++ b/drivers/net/ethernet/mediatek/mtk_ppe.h
@@ -270,7 +270,7 @@ struct mtk_ppe {
        dma_addr_t foe_phys;

        u16 foe_check_time[MTK_PPE_ENTRIES];
-       struct hlist_head foe_flow[MTK_PPE_ENTRIES / 2];
+       struct hlist_head *foe_flow;

        struct rhashtable l2_flows;
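[Editor's note] To make the hash_offset arithmetic above concrete, here is a
small standalone C sketch of the two expressions this patch touches: the
final left shift in mtk_ppe_hash_entry() and the foe_flow bucket index. The
helper names and the userspace ffs() are illustrative only, not driver code;
the constants mirror the values set per SoC in this patch (2 for
mt7621/mt7622/mt7623, 4 for mt7986).

	#include <stdio.h>
	#include <strings.h>	/* ffs() */

	#define MTK_PPE_ENTRIES	(1024u << 3)

	/* Mirrors "hash <<= (ffs(hash_offset) - 1)": shift by 1 when
	 * hash_offset is 2, by 2 when it is 4, so the hardware hash is
	 * aligned to a group of hash_offset consecutive entries. */
	static unsigned int foe_hash_align(unsigned int hash,
					   unsigned int hash_offset)
	{
		hash <<= (ffs((int)hash_offset) - 1);
		return hash & (MTK_PPE_ENTRIES - 1);
	}

	/* Mirrors the foe_flow lookup: one software hlist bucket covers a
	 * group of hash_offset hardware entries, so the bucket array
	 * shrinks to MTK_PPE_ENTRIES / hash_offset entries. */
	static unsigned int foe_flow_bucket(unsigned int hash,
					    unsigned int hash_offset)
	{
		return hash / hash_offset;
	}

	int main(void)
	{
		unsigned int h = 0x1a3;

		printf("hash_offset=2: entry %u, bucket %u of %u\n",
		       foe_hash_align(h, 2),
		       foe_flow_bucket(foe_hash_align(h, 2), 2),
		       MTK_PPE_ENTRIES / 2);
		printf("hash_offset=4: entry %u, bucket %u of %u\n",
		       foe_hash_align(h, 4),
		       foe_flow_bucket(foe_hash_align(h, 4), 4),
		       MTK_PPE_ENTRIES / 4);
		return 0;
	}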
From patchwork Thu Sep 8 19:33:39 2022
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 12970559
From: Lorenzo Bianconi
To: netdev@vger.kernel.org
Subject: [PATCH net-next 05/12] net: ethernet: mtk_eth_soc: add the
 capability to run multiple ppe
Date: Thu, 8 Sep 2022 21:33:39 +0200

The mt7986 chipset supports multiple packet engines for wlan <-> eth
packet forwarding.

Co-developed-by: Bo Jiao
Signed-off-by: Bo Jiao
Co-developed-by: Sujuan Chen
Signed-off-by: Sujuan Chen
Signed-off-by: Lorenzo Bianconi
---
 drivers/net/ethernet/mediatek/mtk_eth_soc.c   | 35 ++++++++++++-------
 drivers/net/ethernet/mediatek/mtk_eth_soc.h   |  2 +-
 drivers/net/ethernet/mediatek/mtk_ppe.c       | 14 +++++---
 drivers/net/ethernet/mediatek/mtk_ppe.h       |  9 +++--
 .../net/ethernet/mediatek/mtk_ppe_debugfs.c   |  8 ++---
 .../net/ethernet/mediatek/mtk_ppe_offload.c   | 13 +++----
 6 files changed, 48 insertions(+), 33 deletions(-)

diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
index d09717d4f3be..bbafe5598b14 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
@@ -1919,7 +1919,7 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget,

                reason = FIELD_GET(MTK_RXD4_PPE_CPU_REASON, trxd.rxd4);
                if (reason == MTK_PPE_CPU_REASON_HIT_UNBIND_RATE_REACHED)
-                       mtk_ppe_check_skb(eth->ppe, skb, hash);
+                       mtk_ppe_check_skb(eth->ppe[0], skb, hash);

                if (netdev->features & NETIF_F_HW_VLAN_CTAG_RX) {
                        if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) {
@@ -2983,15 +2983,18 @@ static int mtk_open(struct net_device *dev)
        /* we run 2 netdevs on the same dma ring so we only bring it up once */
        if (!refcount_read(&eth->dma_refcnt)) {
                const struct mtk_soc_data *soc = eth->soc;
-               u32 gdm_config = MTK_GDMA_TO_PDMA;
+               u32 gdm_config;
+               int i;

                err = mtk_start_dma(eth);
                if (err)
                        return err;

-               if (soc->offload_version && mtk_ppe_start(eth->ppe) == 0)
-                       gdm_config = soc->reg_map->gdma_to_ppe;
+               for (i = 0; i < ARRAY_SIZE(eth->ppe); i++)
+                       mtk_ppe_start(eth->ppe[i]);
+               gdm_config = soc->offload_version ? soc->reg_map->gdma_to_ppe
+                                                 : MTK_GDMA_TO_PDMA;
                mtk_gdm_config(eth, gdm_config);

                napi_enable(&eth->tx_napi);
@@ -3035,6 +3038,7 @@ static int mtk_stop(struct net_device *dev)
 {
        struct mtk_mac *mac = netdev_priv(dev);
        struct mtk_eth *eth = mac->hw;
+       int i;

        phylink_stop(mac->phylink);

@@ -3062,8 +3066,8 @@ static int mtk_stop(struct net_device *dev)

        mtk_dma_free(eth);

-       if (eth->soc->offload_version)
-               mtk_ppe_stop(eth->ppe);
+       for (i = 0; i < ARRAY_SIZE(eth->ppe); i++)
+               mtk_ppe_stop(eth->ppe[i]);

        return 0;
 }
@@ -4103,12 +4107,19 @@ static int mtk_probe(struct platform_device *pdev)
        }

        if (eth->soc->offload_version) {
-               u32 ppe_addr = eth->soc->reg_map->ppe_base;
-
-               eth->ppe = mtk_ppe_init(eth, eth->base + ppe_addr, 2);
-               if (!eth->ppe) {
-                       err = -ENOMEM;
-                       goto err_free_dev;
+               u32 num_ppe;
+
+               num_ppe = MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2) ? 2 : 1;
+               num_ppe = min_t(u32, ARRAY_SIZE(eth->ppe), num_ppe);
+               for (i = 0; i < num_ppe; i++) {
+                       u32 ppe_addr = eth->soc->reg_map->ppe_base + i * 0x400;
+
+                       eth->ppe[i] = mtk_ppe_init(eth, eth->base + ppe_addr,
+                                                  eth->soc->offload_version, i);
+                       if (!eth->ppe[i]) {
+                               err = -ENOMEM;
+                               goto err_free_dev;
+                       }
                }

                err = mtk_eth_offload_init(eth);
diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
index 6c5e144cb9f0..54448795159d 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
@@ -1114,7 +1114,7 @@ struct mtk_eth {

        int                             ip_align;

-       struct mtk_ppe                  *ppe;
+       struct mtk_ppe                  *ppe[2];
        struct rhashtable               flow_table;

        struct bpf_prog                 __rcu *prog;
diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.c b/drivers/net/ethernet/mediatek/mtk_ppe.c
index 1cc7d8338722..687d365b601a 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe.c
+++ b/drivers/net/ethernet/mediatek/mtk_ppe.c
@@ -682,7 +682,7 @@ int mtk_foe_entry_idle_time(struct mtk_ppe *ppe, struct mtk_flow_entry *entry)
 }

 struct mtk_ppe *mtk_ppe_init(struct mtk_eth *eth, void __iomem *base,
-                int version)
+                int version, int index)
 {
        const struct mtk_soc_data *soc = eth->soc;
        struct device *dev = eth->dev;
@@ -717,7 +717,7 @@ struct mtk_ppe *mtk_ppe_init(struct mtk_eth *eth, void __iomem *base,
        if (!ppe->foe_flow)
                return NULL;

-       mtk_ppe_debugfs_init(ppe);
+       mtk_ppe_debugfs_init(ppe, index);

        return ppe;
 }
@@ -738,10 +738,13 @@ static void mtk_ppe_init_foe_table(struct mtk_ppe *ppe)
                        ppe->foe_table[i + skip[k]].ib1 |= MTK_FOE_IB1_STATIC;
 }

-int mtk_ppe_start(struct mtk_ppe *ppe)
+void mtk_ppe_start(struct mtk_ppe *ppe)
 {
        u32 val;

+       if (!ppe)
+               return;
+
        mtk_ppe_init_foe_table(ppe);
        ppe_w32(ppe, MTK_PPE_TB_BASE, ppe->foe_phys);

@@ -809,8 +812,6 @@ int mtk_ppe_start(struct mtk_ppe *ppe)
        ppe_w32(ppe, MTK_PPE_GLO_CFG, val);

        ppe_w32(ppe, MTK_PPE_DEFAULT_CPU_PORT, 0);
-
-       return 0;
 }

 int mtk_ppe_stop(struct mtk_ppe *ppe)
@@ -818,6 +819,9 @@ int mtk_ppe_stop(struct mtk_ppe *ppe)
        u32 val;
        int i;

+       if (!ppe)
+               return 0;
+
        for (i = 0; i < MTK_PPE_ENTRIES; i++)
                ppe->foe_table[i].ib1 = FIELD_PREP(MTK_FOE_IB1_STATE,
                                                   MTK_FOE_STATE_INVALID);
diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.h b/drivers/net/ethernet/mediatek/mtk_ppe.h
index 22efed6599c2..4c31d854e986 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe.h
+++ b/drivers/net/ethernet/mediatek/mtk_ppe.h
@@ -247,6 +247,7 @@ struct mtk_flow_entry {
        };
        u8 type;
        s8 wed_index;
+       u8 ppe_index;
        u16 hash;
        union {
                struct mtk_foe_entry data;
@@ -265,6 +266,7 @@ struct mtk_ppe {
        struct device *dev;
        void __iomem *base;
        int version;
+       char dirname[5];

        struct mtk_foe_entry *foe_table;
        dma_addr_t foe_phys;
@@ -277,8 +279,9 @@ struct mtk_ppe {
        void *acct_table;
 };

-struct mtk_ppe *mtk_ppe_init(struct mtk_eth *eth, void __iomem *base, int version);
-int mtk_ppe_start(struct mtk_ppe *ppe);
+struct mtk_ppe *mtk_ppe_init(struct mtk_eth *eth, void __iomem *base,
+                            int version, int index);
+void mtk_ppe_start(struct mtk_ppe *ppe);
 int mtk_ppe_stop(struct mtk_ppe *ppe);

 void __mtk_ppe_check_skb(struct mtk_ppe *ppe, struct sk_buff *skb, u16 hash);
@@ -320,6 +323,6 @@ int mtk_foe_entry_set_wdma(struct mtk_foe_entry *entry, int wdma_idx, int txq,
 int mtk_foe_entry_commit(struct mtk_ppe *ppe, struct mtk_flow_entry *entry);
 void mtk_foe_entry_clear(struct mtk_ppe *ppe, struct mtk_flow_entry *entry);
 int mtk_foe_entry_idle_time(struct mtk_ppe *ppe, struct mtk_flow_entry *entry);
-int mtk_ppe_debugfs_init(struct mtk_ppe *ppe);
+int mtk_ppe_debugfs_init(struct mtk_ppe *ppe, int index);

 #endif
diff --git a/drivers/net/ethernet/mediatek/mtk_ppe_debugfs.c b/drivers/net/ethernet/mediatek/mtk_ppe_debugfs.c
index eb0b598f14e4..0868226ccc27 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe_debugfs.c
+++ b/drivers/net/ethernet/mediatek/mtk_ppe_debugfs.c
@@ -187,7 +187,7 @@ mtk_ppe_debugfs_foe_open_bind(struct inode *inode, struct file *file)
                           inode->i_private);
 }

-int mtk_ppe_debugfs_init(struct mtk_ppe *ppe)
+int mtk_ppe_debugfs_init(struct mtk_ppe *ppe, int index)
 {
        static const struct file_operations fops_all = {
                .open = mtk_ppe_debugfs_foe_open_all,
@@ -195,17 +195,17 @@ int mtk_ppe_debugfs_init(struct mtk_ppe *ppe)
                .llseek = seq_lseek,
                .release = single_release,
        };
-
        static const struct file_operations fops_bind = {
                .open = mtk_ppe_debugfs_foe_open_bind,
                .read = seq_read,
                .llseek = seq_lseek,
                .release = single_release,
        };
-
        struct dentry *root;

-       root = debugfs_create_dir("mtk_ppe", NULL);
+       snprintf(ppe->dirname, sizeof(ppe->dirname), "ppe%d", index);
+
+       root = debugfs_create_dir(ppe->dirname, NULL);
        debugfs_create_file("entries", S_IRUGO, root, ppe, &fops_all);
        debugfs_create_file("bind", S_IRUGO, root, ppe, &fops_bind);

diff --git a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c
index 25dc3c3aa31d..0324e7750065 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c
+++ b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c
@@ -434,7 +434,7 @@ mtk_flow_offload_replace(struct mtk_eth *eth, struct flow_cls_offload *f)
        memcpy(&entry->data, &foe, sizeof(entry->data));
        entry->wed_index = wed_index;

-       err = mtk_foe_entry_commit(eth->ppe, entry);
+       err = mtk_foe_entry_commit(eth->ppe[entry->ppe_index], entry);
        if (err < 0)
                goto free;

@@ -446,7 +446,7 @@ mtk_flow_offload_replace(struct mtk_eth *eth, struct flow_cls_offload *f)
        return 0;

 clear:
-       mtk_foe_entry_clear(eth->ppe, entry);
+       mtk_foe_entry_clear(eth->ppe[entry->ppe_index], entry);
 free:
        kfree(entry);
        if (wed_index >= 0)
@@ -464,7 +464,7 @@ mtk_flow_offload_destroy(struct mtk_eth *eth, struct flow_cls_offload *f)
        if (!entry)
                return -ENOENT;

-       mtk_foe_entry_clear(eth->ppe, entry);
+       mtk_foe_entry_clear(eth->ppe[entry->ppe_index], entry);
        rhashtable_remove_fast(&eth->flow_table, &entry->node,
                               mtk_flow_ht_params);
        if (entry->wed_index >= 0)
@@ -485,7 +485,7 @@ mtk_flow_offload_stats(struct mtk_eth *eth, struct flow_cls_offload *f)
        if (!entry)
                return -ENOENT;

-       idle = mtk_foe_entry_idle_time(eth->ppe, entry);
+       idle = mtk_foe_entry_idle_time(eth->ppe[entry->ppe_index], entry);
        f->stats.lastused = jiffies - idle * HZ;

        return 0;
@@ -537,7 +537,7 @@ mtk_eth_setup_tc_block(struct net_device *dev, struct flow_block_offload *f)
        struct flow_block_cb *block_cb;
        flow_setup_cb_t *cb;

-       if (!eth->ppe || !eth->ppe->foe_table)
+       if (!eth->soc->offload_version)
                return -EOPNOTSUPP;

        if (f->binder_type != FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
@@ -589,8 +589,5 @@ int mtk_eth_setup_tc(struct net_device *dev, enum tc_setup_type type,

 int mtk_eth_offload_init(struct mtk_eth *eth)
 {
-       if (!eth->ppe || !eth->ppe->foe_table)
-               return 0;
-
        return rhashtable_init(&eth->flow_table, &mtk_flow_ht_params);
 }
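[Editor's note] Two details of this patch are worth seeing in isolation: each
extra packet engine sits at a fixed 0x400 stride above the per-SoC ppe_base,
and eth->ppe[] slots that are never populated stay NULL, which is why
mtk_ppe_start()/mtk_ppe_stop() grew early NULL checks. The standalone C
sketch below only illustrates that addressing and NULL-handling pattern with
invented types and values; it is not driver code.

	#include <stdio.h>
	#include <stddef.h>

	#define MAX_PPE 2

	struct fake_ppe { unsigned long base; };

	/* Illustrative only: mirrors "ppe_base + i * 0x400" in mtk_probe(). */
	static void probe_ppes(struct fake_ppe *slots[MAX_PPE],
			       unsigned long io_base, unsigned long ppe_base,
			       int num_ppe)
	{
		static struct fake_ppe storage[MAX_PPE];
		int i;

		for (i = 0; i < num_ppe && i < MAX_PPE; i++) {
			storage[i].base = io_base + ppe_base + i * 0x400;
			slots[i] = &storage[i];
		}
	}

	/* NULL-tolerant start, like mtk_ppe_start() after this patch. */
	static void start_ppe(struct fake_ppe *ppe)
	{
		if (!ppe)
			return;
		printf("starting PPE at 0x%lx\n", ppe->base);
	}

	int main(void)
	{
		struct fake_ppe *ppe[MAX_PPE] = { NULL, NULL };
		int i;

		/* A NETSYS_V2 SoC such as mt7986 would pass 2, older SoCs 1. */
		probe_ppes(ppe, 0x15100000UL, 0x2000UL, 1);

		for (i = 0; i < MAX_PPE; i++)
			start_ppe(ppe[i]);	/* ppe[1] is NULL, skipped */
		return 0;
	}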
From patchwork Thu Sep 8 19:33:40 2022
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 12970558
From: Lorenzo Bianconi
To: netdev@vger.kernel.org
Subject: [PATCH net-next 06/12] net: ethernet: mtk_eth_soc: move wdma_base
 definitions in mtk register map
Date: Thu, 8 Sep 2022 21:33:40 +0200

This is a preliminary patch to introduce mt7986 wed support.
Signed-off-by: Lorenzo Bianconi
---
 drivers/net/ethernet/mediatek/mtk_eth_soc.c | 16 ++++++++++------
 drivers/net/ethernet/mediatek/mtk_eth_soc.h |  4 +---
 2 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
index bbafe5598b14..f289b994e7d5 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
@@ -75,6 +75,10 @@ static const struct mtk_reg_map mtk_reg_map = {
        .gdm1_cnt = 0x2400,
        .gdma_to_ppe = 0x4444,
        .ppe_base = 0x0c00,
+       .wdma_base = {
+               [0] = 0x2800,
+               [1] = 0x2c00,
+       },
 };

 static const struct mtk_reg_map mt7628_reg_map = {
@@ -130,6 +134,10 @@ static const struct mtk_reg_map mt7986_reg_map = {
        .gdm1_cnt = 0x1c00,
        .gdma_to_ppe = 0x3333,
        .ppe_base = 0x2000,
+       .wdma_base = {
+               [0] = 0x4800,
+               [1] = 0x4c00,
+       },
 };

 /* strings used by ethtool */
@@ -4019,16 +4027,12 @@ static int mtk_probe(struct platform_device *pdev)
        for (i = 0;; i++) {
                struct device_node *np = of_parse_phandle(pdev->dev.of_node,
                                                          "mediatek,wed", i);
-               static const u32 wdma_regs[] = {
-                       MTK_WDMA0_BASE,
-                       MTK_WDMA1_BASE
-               };
                void __iomem *wdma;

-               if (!np || i >= ARRAY_SIZE(wdma_regs))
+               if (!np || i >= ARRAY_SIZE(eth->soc->reg_map->wdma_base))
                        break;

-               wdma = eth->base + wdma_regs[i];
+               wdma = eth->base + eth->soc->reg_map->wdma_base[i];
                mtk_wed_add_hw(np, eth, wdma, i);
        }

diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
index 54448795159d..39a0361ca989 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
@@ -268,9 +268,6 @@
 #define TX_DMA_FPORT_MASK_V2   0xf
 #define TX_DMA_SWC_V2          BIT(30)

-#define MTK_WDMA0_BASE         0x2800
-#define MTK_WDMA1_BASE         0x2c00
-
 /* QDMA descriptor txd4 */
 #define TX_DMA_CHKSUM          (0x7 << 29)
 #define TX_DMA_TSO             BIT(28)
@@ -956,6 +953,7 @@ struct mtk_reg_map {
        u32 gdm1_cnt;
        u32 gdma_to_ppe;
        u32 ppe_base;
+       u32 wdma_base[2];
 };

 /* struct mtk_eth_data - This is the structure holding all differences
From patchwork Thu Sep 8 19:33:41 2022
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 12970560
From: Lorenzo Bianconi
To: netdev@vger.kernel.org
Subject: [PATCH net-next 07/12] net: ethernet: mtk_eth_soc: add
 foe_entry_size to mtk_eth_soc
Date: Thu, 8 Sep 2022 21:33:41 +0200

Introduce foe_entry_size in the mtk_soc_data structure since mt7986 relies
on a bigger mtk_foe_entry data structure.

Signed-off-by: Lorenzo Bianconi
---
 drivers/net/ethernet/mediatek/mtk_eth_soc.c |  3 +
 drivers/net/ethernet/mediatek/mtk_eth_soc.h | 10 ++++
 drivers/net/ethernet/mediatek/mtk_ppe.c     | 55 +++++++++++--------
 drivers/net/ethernet/mediatek/mtk_ppe.h     |  2 +-
 .../net/ethernet/mediatek/mtk_ppe_debugfs.c |  2 +-
 5 files changed, 48 insertions(+), 24 deletions(-)

diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
index f289b994e7d5..b4bccd42a22c 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
@@ -4217,6 +4217,7 @@ static const struct mtk_soc_data mt7621_data = {
        .required_pctl = false,
        .offload_version = 2,
        .hash_offset = 2,
+       .foe_entry_size = sizeof(struct mtk_foe_entry),
        .txrx = {
                .txd_size = sizeof(struct mtk_tx_dma),
                .rxd_size = sizeof(struct mtk_rx_dma),
@@ -4236,6 +4237,7 @@ static const struct mtk_soc_data mt7622_data = {
        .required_pctl = false,
        .offload_version = 2,
        .hash_offset = 2,
+       .foe_entry_size = sizeof(struct mtk_foe_entry),
        .txrx = {
                .txd_size = sizeof(struct mtk_tx_dma),
                .rxd_size = sizeof(struct mtk_rx_dma),
@@ -4254,6 +4256,7 @@ static const struct mtk_soc_data mt7623_data = {
        .required_pctl = true,
        .offload_version = 2,
        .hash_offset = 2,
+       .foe_entry_size = sizeof(struct mtk_foe_entry),
        .txrx = {
                .txd_size = sizeof(struct mtk_tx_dma),
                .rxd_size = sizeof(struct mtk_rx_dma),
diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
index 39a0361ca989..08236e054616 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
@@ -968,6 +968,7 @@ struct mtk_reg_map {
  * @required_pctl              A bool value to show whether the SoC requires
  *                             the extra setup for those pins used by GMAC.
  * @hash_offset                        Flow table hash offset.
+ * @foe_entry_size             Foe table entry size.
  * @txd_size                   Tx DMA descriptor size.
  * @rxd_size                   Rx DMA descriptor size.
  * @rx_irq_done_mask           Rx irq done register mask.
@@ -983,6 +984,7 @@ struct mtk_soc_data {
        bool            required_pctl;
        u8              offload_version;
        u8              hash_offset;
+       u16             foe_entry_size;
        netdev_features_t hw_features;
        struct {
                u32     txd_size;
@@ -1143,6 +1145,14 @@ struct mtk_mac {
 /* the struct describing the SoC. these are declared in the soc_xyz.c files */
 extern const struct of_device_id of_mtk_match[];

+static inline struct mtk_foe_entry *
+mtk_foe_get_entry(struct mtk_ppe *ppe, u16 hash)
+{
+       const struct mtk_soc_data *soc = ppe->eth->soc;
+
+       return ppe->foe_table + hash * soc->foe_entry_size;
+}
+
 /* read the hardware status register */
 void mtk_stats_update_mac(struct mtk_mac *mac);

diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.c b/drivers/net/ethernet/mediatek/mtk_ppe.c
index 687d365b601a..8c52cfc7ce76 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe.c
+++ b/drivers/net/ethernet/mediatek/mtk_ppe.c
@@ -410,9 +410,10 @@ __mtk_foe_entry_clear(struct mtk_ppe *ppe, struct mtk_flow_entry *entry)

        hlist_del_init(&entry->list);
        if (entry->hash != 0xffff) {
-               ppe->foe_table[entry->hash].ib1 &= ~MTK_FOE_IB1_STATE;
-               ppe->foe_table[entry->hash].ib1 |= FIELD_PREP(MTK_FOE_IB1_STATE,
-                                                             MTK_FOE_STATE_UNBIND);
+               struct mtk_foe_entry *hwe = mtk_foe_get_entry(ppe, entry->hash);
+
+               hwe->ib1 &= ~MTK_FOE_IB1_STATE;
+               hwe->ib1 |= FIELD_PREP(MTK_FOE_IB1_STATE, MTK_FOE_STATE_UNBIND);
                dma_wmb();
        }
        entry->hash = 0xffff;
@@ -451,7 +452,7 @@ mtk_flow_entry_update_l2(struct mtk_ppe *ppe, struct mtk_flow_entry *entry)
                int cur_idle;
                u32 ib1;

-               hwe = &ppe->foe_table[cur->hash];
+               hwe = mtk_foe_get_entry(ppe, cur->hash);
                ib1 = READ_ONCE(hwe->ib1);

                if (FIELD_GET(MTK_FOE_IB1_STATE, ib1) != MTK_FOE_STATE_BIND) {
@@ -473,8 +474,8 @@ mtk_flow_entry_update_l2(struct mtk_ppe *ppe, struct mtk_flow_entry *entry)
 static void
 mtk_flow_entry_update(struct mtk_ppe *ppe, struct mtk_flow_entry *entry)
 {
+       struct mtk_foe_entry foe = {};
        struct mtk_foe_entry *hwe;
-       struct mtk_foe_entry foe;

        spin_lock_bh(&ppe_lock);

@@ -486,8 +487,8 @@ mtk_flow_entry_update(struct mtk_ppe *ppe, struct mtk_flow_entry *entry)
        if (entry->hash == 0xffff)
                goto out;

-       hwe = &ppe->foe_table[entry->hash];
-       memcpy(&foe, hwe, sizeof(foe));
+       hwe = mtk_foe_get_entry(ppe, entry->hash);
+       memcpy(&foe, hwe, ppe->eth->soc->foe_entry_size);
        if (!mtk_flow_entry_match(entry, &foe)) {
                entry->hash = 0xffff;
                goto out;
@@ -511,8 +512,8 @@ __mtk_foe_entry_commit(struct mtk_ppe *ppe, struct mtk_foe_entry *entry,
        entry->ib1 &= ~MTK_FOE_IB1_BIND_TIMESTAMP;
        entry->ib1 |= FIELD_PREP(MTK_FOE_IB1_BIND_TIMESTAMP, timestamp);

-       hwe = &ppe->foe_table[hash];
-       memcpy(&hwe->data, &entry->data, sizeof(hwe->data));
+       hwe = mtk_foe_get_entry(ppe, hash);
+       memcpy(&hwe->data, &entry->data, ppe->eth->soc->foe_entry_size);
        wmb();
        hwe->ib1 = entry->ib1;

@@ -561,7 +562,7 @@ mtk_foe_entry_commit_subflow(struct mtk_ppe *ppe, struct mtk_flow_entry *entry,
 {
        const struct mtk_soc_data *soc = ppe->eth->soc;
        struct mtk_flow_entry *flow_info;
-       struct mtk_foe_entry foe, *hwe;
+       struct mtk_foe_entry foe = {}, *hwe;
        struct mtk_foe_mac_info *l2;
        u32 ib1_mask = MTK_FOE_IB1_PACKET_TYPE | MTK_FOE_IB1_UDP;
        int type;
@@ -578,8 +579,8 @@ mtk_foe_entry_commit_subflow(struct mtk_ppe *ppe, struct mtk_flow_entry *entry,
                       &ppe->foe_flow[hash / soc->hash_offset]);
        hlist_add_head(&flow_info->l2_data.list, &entry->l2_flows);

-       hwe = &ppe->foe_table[hash];
-       memcpy(&foe, hwe, sizeof(foe));
+       hwe = mtk_foe_get_entry(ppe, hash);
+       memcpy(&foe, hwe, soc->foe_entry_size);
        foe.ib1 &= ib1_mask;
        foe.ib1 |= entry->data.ib1 & ~ib1_mask;

@@ -601,7 +602,7 @@ void __mtk_ppe_check_skb(struct mtk_ppe *ppe, struct sk_buff *skb, u16 hash)
 {
        const struct mtk_soc_data *soc = ppe->eth->soc;
        struct hlist_head *head = &ppe->foe_flow[hash / soc->hash_offset];
-       struct mtk_foe_entry *hwe = &ppe->foe_table[hash];
+       struct mtk_foe_entry *hwe = mtk_foe_get_entry(ppe, hash);
        struct mtk_flow_entry *entry;
        struct mtk_foe_bridge key = {};
@@ -686,9 +687,9 @@ struct mtk_ppe *mtk_ppe_init(struct mtk_eth *eth, void __iomem *base,
 {
        const struct mtk_soc_data *soc = eth->soc;
        struct device *dev = eth->dev;
-       struct mtk_foe_entry *foe;
        struct mtk_ppe *ppe;
        u32 foe_flow_size;
+       void *foe;

        ppe = devm_kzalloc(dev, sizeof(*ppe), GFP_KERNEL);
        if (!ppe)
@@ -704,7 +705,8 @@ struct mtk_ppe *mtk_ppe_init(struct mtk_eth *eth, void __iomem *base,
        ppe->dev = dev;
        ppe->version = version;

-       foe = dmam_alloc_coherent(ppe->dev, MTK_PPE_ENTRIES * sizeof(*foe),
+       foe = dmam_alloc_coherent(ppe->dev,
+                                 MTK_PPE_ENTRIES * soc->foe_entry_size,
                                  &ppe->foe_phys, GFP_KERNEL);
        if (!foe)
                return NULL;
@@ -727,15 +729,21 @@ static void mtk_ppe_init_foe_table(struct mtk_ppe *ppe)
        static const u8 skip[] = { 12, 25, 38, 51, 76, 89, 102 };
        int i, k;

-       memset(ppe->foe_table, 0, MTK_PPE_ENTRIES * sizeof(*ppe->foe_table));
+       memset(ppe->foe_table, 0,
+              MTK_PPE_ENTRIES * ppe->eth->soc->foe_entry_size);

        if (!IS_ENABLED(CONFIG_SOC_MT7621))
                return;

        /* skip all entries that cross the 1024 byte boundary */
-       for (i = 0; i < MTK_PPE_ENTRIES; i += 128)
-               for (k = 0; k < ARRAY_SIZE(skip); k++)
-                       ppe->foe_table[i + skip[k]].ib1 |= MTK_FOE_IB1_STATIC;
+       for (i = 0; i < MTK_PPE_ENTRIES; i += 128) {
+               for (k = 0; k < ARRAY_SIZE(skip); k++) {
+                       struct mtk_foe_entry *hwe;
+
+                       hwe = mtk_foe_get_entry(ppe, i + skip[k]);
+                       hwe->ib1 |= MTK_FOE_IB1_STATIC;
+               }
+       }
 }

 void mtk_ppe_start(struct mtk_ppe *ppe)
@@ -822,9 +830,12 @@ int mtk_ppe_stop(struct mtk_ppe *ppe)
        if (!ppe)
                return 0;

-       for (i = 0; i < MTK_PPE_ENTRIES; i++)
-               ppe->foe_table[i].ib1 = FIELD_PREP(MTK_FOE_IB1_STATE,
-                                                  MTK_FOE_STATE_INVALID);
+       for (i = 0; i < MTK_PPE_ENTRIES; i++) {
+               struct mtk_foe_entry *hwe = mtk_foe_get_entry(ppe, i);
+
+               hwe->ib1 = FIELD_PREP(MTK_FOE_IB1_STATE,
+                                     MTK_FOE_STATE_INVALID);
+       }

        mtk_ppe_cache_enable(ppe, false);

diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.h b/drivers/net/ethernet/mediatek/mtk_ppe.h
index 4c31d854e986..6d4c91acd1a5 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe.h
+++ b/drivers/net/ethernet/mediatek/mtk_ppe.h
@@ -268,7 +268,7 @@ struct mtk_ppe {
        int version;
        char dirname[5];

-       struct mtk_foe_entry *foe_table;
+       void *foe_table;
        dma_addr_t foe_phys;

        u16 foe_check_time[MTK_PPE_ENTRIES];
diff --git a/drivers/net/ethernet/mediatek/mtk_ppe_debugfs.c b/drivers/net/ethernet/mediatek/mtk_ppe_debugfs.c
index 0868226ccc27..ec49829ab32d 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe_debugfs.c
+++ b/drivers/net/ethernet/mediatek/mtk_ppe_debugfs.c
@@ -79,7 +79,7 @@ mtk_ppe_debugfs_foe_show(struct seq_file *m, void *private, bool bind)
        int i;

        for (i = 0; i < MTK_PPE_ENTRIES; i++) {
-               struct mtk_foe_entry *entry = &ppe->foe_table[i];
+               struct mtk_foe_entry *entry = mtk_foe_get_entry(ppe, i);
                struct mtk_foe_mac_info *l2;
                struct mtk_flow_addr_info ai = {};
                unsigned char h_source[ETH_ALEN];
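[Editor's note] The core idea of this patch is that the FOE table stops
being an array of one fixed struct and becomes an opaque buffer indexed with
a per-SoC entry size, so mt7986's larger entry layout can share the same
code paths. The standalone C sketch below shows that addressing pattern with
invented sizes; mtk_foe_get_entry() in the diff above is the real helper,
everything else here is illustrative only.

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <stdint.h>

	/* Invented sizes: a smaller "v1" entry vs a larger "v2" entry. */
	#define ENTRY_SIZE_V1	64
	#define ENTRY_SIZE_V2	96
	#define NUM_ENTRIES	8

	/* Same idea as mtk_foe_get_entry(): byte-based indexing into an
	 * opaque table, with the entry size coming from per-SoC data
	 * instead of sizeof() on a single struct type. */
	static void *foe_get_entry(void *table, uint16_t hash, size_t entry_size)
	{
		return (uint8_t *)table + (size_t)hash * entry_size;
	}

	int main(void)
	{
		size_t entry_size = ENTRY_SIZE_V2;	/* pretend mt7986 */
		void *table = calloc(NUM_ENTRIES, entry_size);

		if (!table)
			return 1;

		/* Mark entry 3; only the per-SoC size decides where it lands. */
		memset(foe_get_entry(table, 3, entry_size), 0xff, entry_size);

		printf("entry 3 starts %zu bytes into the table\n",
		       (size_t)((uint8_t *)foe_get_entry(table, 3, entry_size) -
				(uint8_t *)table));
		free(table);
		return 0;
	}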
From patchwork Thu Sep 8 19:33:42 2022
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 12970561
From: Lorenzo Bianconi
To: netdev@vger.kernel.org
Subject: [PATCH net-next 08/12] net: ethernet: mtk_eth_soc: add foe info in
 mtk_soc_data structure
Date: Thu, 8 Sep 2022 21:33:42 +0200
Message-Id: <0d0bfa99e313c0b00bf75f943f58b6fe552ed004.1662661555.git.lorenzo@kernel.org>

Introduce a foe struct in mtk_soc_data as a container for the chip-related
foe table definitions. This is a preliminary patch to enable mt7986 wed
support.
Signed-off-by: Lorenzo Bianconi
Reported-by: kernel test robot
---
 drivers/net/ethernet/mediatek/mtk_eth_soc.c   |  70 +++++++-
 drivers/net/ethernet/mediatek/mtk_eth_soc.h   |  27 ++-
 drivers/net/ethernet/mediatek/mtk_ppe.c       | 161 ++++++++++--------
 drivers/net/ethernet/mediatek/mtk_ppe.h       |  29 ++--
 .../net/ethernet/mediatek/mtk_ppe_offload.c   |  34 ++--
 5 files changed, 208 insertions(+), 113 deletions(-)

diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
index b4bccd42a22c..8aa0a61b35fc 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
@@ -4216,8 +4216,6 @@ static const struct mtk_soc_data mt7621_data = {
        .required_clks = MT7621_CLKS_BITMAP,
        .required_pctl = false,
        .offload_version = 2,
-       .hash_offset = 2,
-       .foe_entry_size = sizeof(struct mtk_foe_entry),
        .txrx = {
                .txd_size = sizeof(struct mtk_tx_dma),
                .rxd_size = sizeof(struct mtk_rx_dma),
@@ -4226,6 +4224,26 @@ static const struct mtk_soc_data mt7621_data = {
                .dma_max_len = MTK_TX_DMA_BUF_LEN,
                .dma_len_offset = 16,
        },
+       .foe = {
+               .entry_size = sizeof(struct mtk_foe_entry),
+               .hash_offset = 2,
+               .ib1 = {
+                       .bind_ppoe = BIT(19),
+                       .bind_vlan_tag = BIT(20),
+                       .bind_cache = BIT(22),
+                       .bind_ttl = BIT(24),
+                       .bind_ts = GENMASK(14, 0),
+                       .bind_vlan_layer = GENMASK(18, 16),
+                       .pkt_type = GENMASK(27, 25),
+               },
+               .ib2 = {
+                       .multicast = BIT(8),
+                       .wdma_winfo = BIT(17),
+                       .port_ag = GENMASK(23, 18),
+                       .port_mg = GENMASK(17, 12),
+                       .dst_port = GENMASK(7, 5),
+               },
+       },
 };

 static const struct mtk_soc_data mt7622_data = {
@@ -4236,8 +4254,6 @@ static const struct mtk_soc_data mt7622_data = {
        .required_clks = MT7622_CLKS_BITMAP,
        .required_pctl = false,
        .offload_version = 2,
-       .hash_offset = 2,
-       .foe_entry_size = sizeof(struct mtk_foe_entry),
        .txrx = {
                .txd_size = sizeof(struct mtk_tx_dma),
                .rxd_size = sizeof(struct mtk_rx_dma),
@@ -4246,6 +4262,26 @@ static const struct mtk_soc_data mt7622_data = {
                .dma_max_len = MTK_TX_DMA_BUF_LEN,
                .dma_len_offset = 16,
        },
+       .foe = {
+               .entry_size = sizeof(struct mtk_foe_entry),
+               .hash_offset = 2,
+               .ib1 = {
+                       .bind_ppoe = BIT(19),
+                       .bind_vlan_tag = BIT(20),
+                       .bind_cache = BIT(22),
+                       .bind_ttl = BIT(24),
+                       .bind_ts = GENMASK(14, 0),
+                       .bind_vlan_layer = GENMASK(18, 16),
+                       .pkt_type = GENMASK(27, 25),
+               },
+               .ib2 = {
+                       .multicast = BIT(8),
+                       .wdma_winfo = BIT(17),
+                       .port_ag = GENMASK(23, 18),
+                       .port_mg = GENMASK(17, 12),
+                       .dst_port = GENMASK(7, 5),
+               },
+       },
 };

 static const struct mtk_soc_data mt7623_data = {
@@ -4255,8 +4291,6 @@ static const struct mtk_soc_data mt7623_data = {
        .required_clks = MT7623_CLKS_BITMAP,
        .required_pctl = true,
        .offload_version = 2,
-       .hash_offset = 2,
-       .foe_entry_size = sizeof(struct mtk_foe_entry),
        .txrx = {
                .txd_size = sizeof(struct mtk_tx_dma),
                .rxd_size = sizeof(struct mtk_rx_dma),
@@ -4265,6 +4299,26 @@ static const struct mtk_soc_data mt7623_data = {
                .dma_max_len = MTK_TX_DMA_BUF_LEN,
                .dma_len_offset = 16,
        },
+       .foe = {
+               .entry_size = sizeof(struct mtk_foe_entry),
+               .hash_offset = 2,
+               .ib1 = {
+                       .bind_ppoe = BIT(19),
+                       .bind_vlan_tag = BIT(20),
+                       .bind_cache = BIT(22),
+                       .bind_ttl = BIT(24),
+                       .bind_ts = GENMASK(14, 0),
+                       .bind_vlan_layer = GENMASK(18, 16),
+                       .pkt_type = GENMASK(27, 25),
+               },
+               .ib2 = {
+                       .multicast = BIT(8),
+                       .wdma_winfo = BIT(17),
+                       .port_ag = GENMASK(23, 18),
+                       .port_mg = GENMASK(17, 12),
+                       .dst_port = GENMASK(7, 5),
+               },
+       },
 };

 static const struct mtk_soc_data mt7629_data = {
@@ -4290,7 +4344,6 @@ static const struct mtk_soc_data mt7986_data = {
= MT7986_CAPS, .required_clks = MT7986_CLKS_BITMAP, .required_pctl = false, - .hash_offset = 4, .txrx = { .txd_size = sizeof(struct mtk_tx_dma_v2), .rxd_size = sizeof(struct mtk_rx_dma_v2), @@ -4299,6 +4352,9 @@ static const struct mtk_soc_data mt7986_data = { .dma_max_len = MTK_TX_DMA_BUF_LEN_V2, .dma_len_offset = 8, }, + .foe = { + .hash_offset = 4, + }, }; static const struct mtk_soc_data rt5350_data = { diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h index 08236e054616..6d0b080c2048 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h @@ -967,14 +967,13 @@ struct mtk_reg_map { * the target SoC * @required_pctl A bool value to show whether the SoC requires * the extra setup for those pins used by GMAC. - * @hash_offset Flow table hash offset. - * @foe_entry_size Foe table entry size. * @txd_size Tx DMA descriptor size. * @rxd_size Rx DMA descriptor size. * @rx_irq_done_mask Rx irq done register mask. * @rx_dma_l4_valid Rx DMA valid register mask. * @dma_max_len Max DMA tx/rx buffer length. * @dma_len_offset Tx/Rx DMA length field offset. + * @foe Foe table chip info. */ struct mtk_soc_data { const struct mtk_reg_map *reg_map; @@ -983,8 +982,6 @@ struct mtk_soc_data { u32 required_clks; bool required_pctl; u8 offload_version; - u8 hash_offset; - u16 foe_entry_size; netdev_features_t hw_features; struct { u32 txd_size; @@ -994,6 +991,26 @@ struct mtk_soc_data { u32 dma_max_len; u32 dma_len_offset; } txrx; + struct { + u16 entry_size; + u8 hash_offset; + struct { + u32 bind_ppoe; + u32 bind_vlan_tag; + u32 bind_cache; + u32 bind_ttl; + u32 bind_ts; + u32 bind_vlan_layer; + u32 pkt_type; + } ib1; + struct { + u32 wdma_winfo; + u32 port_ag; + u32 port_mg; + u16 multicast; + u16 dst_port; + } ib2; + } foe; }; /* currently no SoC has more than 2 macs */ @@ -1150,7 +1167,7 @@ mtk_foe_get_entry(struct mtk_ppe *ppe, u16 hash) { const struct mtk_soc_data *soc = ppe->eth->soc; - return ppe->foe_table + hash * soc->foe_entry_size; + return ppe->foe_table + hash * soc->foe.entry_size; } /* read the hardware status register */ diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.c b/drivers/net/ethernet/mediatek/mtk_ppe.c index 8c52cfc7ce76..4248a3b78aa6 100644 --- a/drivers/net/ethernet/mediatek/mtk_ppe.c +++ b/drivers/net/ethernet/mediatek/mtk_ppe.c @@ -56,7 +56,7 @@ static u32 ppe_clear(struct mtk_ppe *ppe, u32 reg, u32 val) static u32 mtk_eth_timestamp(struct mtk_eth *eth) { - return mtk_r32(eth, 0x0010) & MTK_FOE_IB1_BIND_TIMESTAMP; + return mtk_r32(eth, 0x0010) & eth->soc->foe.ib1.bind_ts; } static int mtk_ppe_wait_busy(struct mtk_ppe *ppe) @@ -93,7 +93,7 @@ static u32 mtk_ppe_hash_entry(struct mtk_eth *eth, struct mtk_foe_entry *e) u32 hv1, hv2, hv3; u32 hash; - switch (FIELD_GET(MTK_FOE_IB1_PACKET_TYPE, e->ib1)) { + switch (MTK_FIELD_GET(eth->soc->foe.ib1.pkt_type, e->ib1)) { case MTK_PPE_PKT_TYPE_IPV4_ROUTE: case MTK_PPE_PKT_TYPE_IPV4_HNAPT: hv1 = e->ipv4.orig.ports; @@ -122,16 +122,16 @@ static u32 mtk_ppe_hash_entry(struct mtk_eth *eth, struct mtk_foe_entry *e) hash = (hash >> 24) | ((hash & 0xffffff) << 8); hash ^= hv1 ^ hv2 ^ hv3; hash ^= hash >> 16; - hash <<= (ffs(eth->soc->hash_offset) - 1); + hash <<= (ffs(eth->soc->foe.hash_offset) - 1); hash &= MTK_PPE_ENTRIES - 1; return hash; } static inline struct mtk_foe_mac_info * -mtk_foe_entry_l2(struct mtk_foe_entry *entry) +mtk_foe_entry_l2(struct mtk_eth *eth, struct mtk_foe_entry *entry) { - int type = 
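(For illustration only, not part of the series: the per-SoC foe.hash_offset shown above drives two related calculations in mtk_ppe.c - the computed hash is spread over the hardware table by shifting it left by ffs(hash_offset) - 1, and each software foe_flow[] bucket then covers hash_offset consecutive hardware entries, i.e. bucket = hash / hash_offset. A minimal userspace sketch of that arithmetic; the sample hash is arbitrary and MTK_PPE_ENTRIES is 0x4000 as in mtk_ppe.h.)

#include <stdio.h>
#include <strings.h>	/* ffs() */

#define MTK_PPE_ENTRIES	16384	/* 0x4000 in mtk_ppe.h */

int main(void)
{
	unsigned int hash_offset = 4;	/* 2 on the older SoCs, 4 on mt7986 */
	unsigned int hash = 0x1234;	/* arbitrary example value */

	hash <<= ffs(hash_offset) - 1;	/* spreading done in mtk_ppe_hash_entry() */
	hash &= MTK_PPE_ENTRIES - 1;

	printf("foe_flow buckets: %u, hw hash: 0x%x, bucket: %u\n",
	       MTK_PPE_ENTRIES / hash_offset, hash, hash / hash_offset);
	return 0;
}

(The same ratio also sizes the bucket array: foe_flow_size = (MTK_PPE_ENTRIES / soc->foe.hash_offset) * sizeof(*ppe->foe_flow) later in this patch.)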
FIELD_GET(MTK_FOE_IB1_PACKET_TYPE, entry->ib1); + int type = MTK_FIELD_GET(eth->soc->foe.ib1.pkt_type, entry->ib1); if (type == MTK_PPE_PKT_TYPE_BRIDGE) return &entry->bridge.l2; @@ -143,9 +143,9 @@ mtk_foe_entry_l2(struct mtk_foe_entry *entry) } static inline u32 * -mtk_foe_entry_ib2(struct mtk_foe_entry *entry) +mtk_foe_entry_ib2(struct mtk_eth *eth, struct mtk_foe_entry *entry) { - int type = FIELD_GET(MTK_FOE_IB1_PACKET_TYPE, entry->ib1); + int type = MTK_FIELD_GET(eth->soc->foe.ib1.pkt_type, entry->ib1); if (type == MTK_PPE_PKT_TYPE_BRIDGE) return &entry->bridge.ib2; @@ -156,8 +156,9 @@ mtk_foe_entry_ib2(struct mtk_foe_entry *entry) return &entry->ipv4.ib2; } -int mtk_foe_entry_prepare(struct mtk_foe_entry *entry, int type, int l4proto, - u8 pse_port, u8 *src_mac, u8 *dest_mac) +int mtk_foe_entry_prepare(struct mtk_eth *eth, struct mtk_foe_entry *entry, + int type, int l4proto, u8 pse_port, u8 *src_mac, + u8 *dest_mac) { struct mtk_foe_mac_info *l2; u32 ports_pad, val; @@ -165,18 +166,18 @@ int mtk_foe_entry_prepare(struct mtk_foe_entry *entry, int type, int l4proto, memset(entry, 0, sizeof(*entry)); val = FIELD_PREP(MTK_FOE_IB1_STATE, MTK_FOE_STATE_BIND) | - FIELD_PREP(MTK_FOE_IB1_PACKET_TYPE, type) | + MTK_FIELD_PREP(eth->soc->foe.ib1.pkt_type, type) | FIELD_PREP(MTK_FOE_IB1_UDP, l4proto == IPPROTO_UDP) | - MTK_FOE_IB1_BIND_TTL | - MTK_FOE_IB1_BIND_CACHE; + eth->soc->foe.ib1.bind_ttl | + eth->soc->foe.ib1.bind_cache; entry->ib1 = val; - val = FIELD_PREP(MTK_FOE_IB2_PORT_MG, 0x3f) | - FIELD_PREP(MTK_FOE_IB2_PORT_AG, 0x1f) | - FIELD_PREP(MTK_FOE_IB2_DEST_PORT, pse_port); + val = MTK_FIELD_PREP(eth->soc->foe.ib2.port_mg, 0x3f) | + MTK_FIELD_PREP(eth->soc->foe.ib2.port_ag, 0x1f) | + MTK_FIELD_PREP(eth->soc->foe.ib2.dst_port, pse_port); if (is_multicast_ether_addr(dest_mac)) - val |= MTK_FOE_IB2_MULTICAST; + val |= eth->soc->foe.ib2.multicast; ports_pad = 0xa5a5a500 | (l4proto & 0xff); if (type == MTK_PPE_PKT_TYPE_IPV4_ROUTE) @@ -210,24 +211,26 @@ int mtk_foe_entry_prepare(struct mtk_foe_entry *entry, int type, int l4proto, return 0; } -int mtk_foe_entry_set_pse_port(struct mtk_foe_entry *entry, u8 port) +int mtk_foe_entry_set_pse_port(struct mtk_eth *eth, + struct mtk_foe_entry *entry, u8 port) { - u32 *ib2 = mtk_foe_entry_ib2(entry); + u32 *ib2 = mtk_foe_entry_ib2(eth, entry); u32 val; val = *ib2; - val &= ~MTK_FOE_IB2_DEST_PORT; - val |= FIELD_PREP(MTK_FOE_IB2_DEST_PORT, port); + val &= ~eth->soc->foe.ib2.dst_port; + val |= MTK_FIELD_PREP(eth->soc->foe.ib2.dst_port, port); *ib2 = val; return 0; } -int mtk_foe_entry_set_ipv4_tuple(struct mtk_foe_entry *entry, bool egress, +int mtk_foe_entry_set_ipv4_tuple(struct mtk_eth *eth, + struct mtk_foe_entry *entry, bool egress, __be32 src_addr, __be16 src_port, __be32 dest_addr, __be16 dest_port) { - int type = FIELD_GET(MTK_FOE_IB1_PACKET_TYPE, entry->ib1); + int type = MTK_FIELD_GET(eth->soc->foe.ib1.pkt_type, entry->ib1); struct mtk_ipv4_tuple *t; switch (type) { @@ -262,11 +265,12 @@ int mtk_foe_entry_set_ipv4_tuple(struct mtk_foe_entry *entry, bool egress, return 0; } -int mtk_foe_entry_set_ipv6_tuple(struct mtk_foe_entry *entry, +int mtk_foe_entry_set_ipv6_tuple(struct mtk_eth *eth, + struct mtk_foe_entry *entry, __be32 *src_addr, __be16 src_port, __be32 *dest_addr, __be16 dest_port) { - int type = FIELD_GET(MTK_FOE_IB1_PACKET_TYPE, entry->ib1); + int type = MTK_FIELD_GET(eth->soc->foe.ib1.pkt_type, entry->ib1); u32 *src, *dest; int i; @@ -297,39 +301,45 @@ int mtk_foe_entry_set_ipv6_tuple(struct mtk_foe_entry *entry, return 0; } -int 
mtk_foe_entry_set_dsa(struct mtk_foe_entry *entry, int port) +int mtk_foe_entry_set_dsa(struct mtk_eth *eth, struct mtk_foe_entry *entry, + int port) { - struct mtk_foe_mac_info *l2 = mtk_foe_entry_l2(entry); + struct mtk_foe_mac_info *l2 = mtk_foe_entry_l2(eth, entry); + const struct mtk_soc_data *soc = eth->soc; l2->etype = BIT(port); - if (!(entry->ib1 & MTK_FOE_IB1_BIND_VLAN_LAYER)) - entry->ib1 |= FIELD_PREP(MTK_FOE_IB1_BIND_VLAN_LAYER, 1); + if (!(entry->ib1 & soc->foe.ib1.bind_vlan_layer)) + entry->ib1 |= MTK_FIELD_PREP(soc->foe.ib1.bind_vlan_layer, 1); else l2->etype |= BIT(8); - entry->ib1 &= ~MTK_FOE_IB1_BIND_VLAN_TAG; + entry->ib1 &= ~soc->foe.ib1.bind_vlan_tag; return 0; } -int mtk_foe_entry_set_vlan(struct mtk_foe_entry *entry, int vid) +int mtk_foe_entry_set_vlan(struct mtk_eth *eth, struct mtk_foe_entry *entry, + int vid) { - struct mtk_foe_mac_info *l2 = mtk_foe_entry_l2(entry); + struct mtk_foe_mac_info *l2 = mtk_foe_entry_l2(eth, entry); + const struct mtk_soc_data *soc = eth->soc; - switch (FIELD_GET(MTK_FOE_IB1_BIND_VLAN_LAYER, entry->ib1)) { + switch (MTK_FIELD_GET(soc->foe.ib1.bind_vlan_layer, entry->ib1)) { case 0: - entry->ib1 |= MTK_FOE_IB1_BIND_VLAN_TAG | - FIELD_PREP(MTK_FOE_IB1_BIND_VLAN_LAYER, 1); + entry->ib1 |= soc->foe.ib1.bind_vlan_tag | + MTK_FIELD_PREP(soc->foe.ib1.bind_vlan_layer, 1); l2->vlan1 = vid; return 0; case 1: - if (!(entry->ib1 & MTK_FOE_IB1_BIND_VLAN_TAG)) { + if (!(entry->ib1 & soc->foe.ib1.bind_vlan_tag)) { l2->vlan1 = vid; l2->etype |= BIT(8); } else { l2->vlan2 = vid; - entry->ib1 += FIELD_PREP(MTK_FOE_IB1_BIND_VLAN_LAYER, 1); + entry->ib1 += + MTK_FIELD_PREP(soc->foe.ib1.bind_vlan_layer, + 1); } return 0; default: @@ -337,28 +347,29 @@ int mtk_foe_entry_set_vlan(struct mtk_foe_entry *entry, int vid) } } -int mtk_foe_entry_set_pppoe(struct mtk_foe_entry *entry, int sid) +int mtk_foe_entry_set_pppoe(struct mtk_eth *eth, struct mtk_foe_entry *entry, + int sid) { - struct mtk_foe_mac_info *l2 = mtk_foe_entry_l2(entry); + struct mtk_foe_mac_info *l2 = mtk_foe_entry_l2(eth, entry); - if (!(entry->ib1 & MTK_FOE_IB1_BIND_VLAN_LAYER) || - (entry->ib1 & MTK_FOE_IB1_BIND_VLAN_TAG)) + if (!(entry->ib1 & eth->soc->foe.ib1.bind_vlan_layer) || + (entry->ib1 & eth->soc->foe.ib1.bind_vlan_tag)) l2->etype = ETH_P_PPP_SES; - entry->ib1 |= MTK_FOE_IB1_BIND_PPPOE; + entry->ib1 |= eth->soc->foe.ib1.bind_ppoe; l2->pppoe_id = sid; return 0; } -int mtk_foe_entry_set_wdma(struct mtk_foe_entry *entry, int wdma_idx, int txq, - int bss, int wcid) +int mtk_foe_entry_set_wdma(struct mtk_eth *eth, struct mtk_foe_entry *entry, + int wdma_idx, int txq, int bss, int wcid) { - struct mtk_foe_mac_info *l2 = mtk_foe_entry_l2(entry); - u32 *ib2 = mtk_foe_entry_ib2(entry); + struct mtk_foe_mac_info *l2 = mtk_foe_entry_l2(eth, entry); + u32 *ib2 = mtk_foe_entry_ib2(eth, entry); - *ib2 &= ~MTK_FOE_IB2_PORT_MG; - *ib2 |= MTK_FOE_IB2_WDMA_WINFO; + *ib2 &= ~eth->soc->foe.ib2.port_mg; + *ib2 |= eth->soc->foe.ib2.wdma_winfo; if (wdma_idx) *ib2 |= MTK_FOE_IB2_WDMA_DEVIDX; @@ -376,14 +387,15 @@ static inline bool mtk_foe_entry_usable(struct mtk_foe_entry *entry) } static bool -mtk_flow_entry_match(struct mtk_flow_entry *entry, struct mtk_foe_entry *data) +mtk_flow_entry_match(struct mtk_eth *eth, struct mtk_flow_entry *entry, + struct mtk_foe_entry *data) { int type, len; if ((data->ib1 ^ entry->data.ib1) & MTK_FOE_IB1_UDP) return false; - type = FIELD_GET(MTK_FOE_IB1_PACKET_TYPE, entry->data.ib1); + type = MTK_FIELD_GET(eth->soc->foe.ib1.pkt_type, entry->data.ib1); if (type > 
MTK_PPE_PKT_TYPE_IPV4_DSLITE) len = offsetof(struct mtk_foe_entry, ipv6._rsv); else @@ -430,11 +442,11 @@ static int __mtk_foe_entry_idle_time(struct mtk_ppe *ppe, u32 ib1) u16 timestamp; u16 now; - now = mtk_eth_timestamp(ppe->eth) & MTK_FOE_IB1_BIND_TIMESTAMP; - timestamp = ib1 & MTK_FOE_IB1_BIND_TIMESTAMP; + now = mtk_eth_timestamp(ppe->eth) & ppe->eth->soc->foe.ib1.bind_ts; + timestamp = ib1 & ppe->eth->soc->foe.ib1.bind_ts; if (timestamp > now) - return MTK_FOE_IB1_BIND_TIMESTAMP + 1 - timestamp + now; + return ppe->eth->soc->foe.ib1.bind_ts + 1 - timestamp + now; else return now - timestamp; } @@ -466,8 +478,8 @@ mtk_flow_entry_update_l2(struct mtk_ppe *ppe, struct mtk_flow_entry *entry) continue; idle = cur_idle; - entry->data.ib1 &= ~MTK_FOE_IB1_BIND_TIMESTAMP; - entry->data.ib1 |= hwe->ib1 & MTK_FOE_IB1_BIND_TIMESTAMP; + entry->data.ib1 &= ~ppe->eth->soc->foe.ib1.bind_ts; + entry->data.ib1 |= hwe->ib1 & ppe->eth->soc->foe.ib1.bind_ts; } } @@ -488,8 +500,8 @@ mtk_flow_entry_update(struct mtk_ppe *ppe, struct mtk_flow_entry *entry) goto out; hwe = mtk_foe_get_entry(ppe, entry->hash); - memcpy(&foe, hwe, ppe->eth->soc->foe_entry_size); - if (!mtk_flow_entry_match(entry, &foe)) { + memcpy(&foe, hwe, ppe->eth->soc->foe.entry_size); + if (!mtk_flow_entry_match(ppe->eth, entry, &foe)) { entry->hash = 0xffff; goto out; } @@ -508,12 +520,12 @@ __mtk_foe_entry_commit(struct mtk_ppe *ppe, struct mtk_foe_entry *entry, u16 timestamp; timestamp = mtk_eth_timestamp(ppe->eth); - timestamp &= MTK_FOE_IB1_BIND_TIMESTAMP; - entry->ib1 &= ~MTK_FOE_IB1_BIND_TIMESTAMP; - entry->ib1 |= FIELD_PREP(MTK_FOE_IB1_BIND_TIMESTAMP, timestamp); - + timestamp &= ppe->eth->soc->foe.ib1.bind_ts; + entry->ib1 &= ~ppe->eth->soc->foe.ib1.bind_ts; + entry->ib1 |= MTK_FIELD_PREP(ppe->eth->soc->foe.ib1.bind_ts, + timestamp); hwe = mtk_foe_get_entry(ppe, hash); - memcpy(&hwe->data, &entry->data, ppe->eth->soc->foe_entry_size); + memcpy(&hwe->data, &entry->data, ppe->eth->soc->foe.entry_size); wmb(); hwe->ib1 = entry->ib1; @@ -540,8 +552,8 @@ mtk_foe_entry_commit_l2(struct mtk_ppe *ppe, struct mtk_flow_entry *entry) int mtk_foe_entry_commit(struct mtk_ppe *ppe, struct mtk_flow_entry *entry) { - int type = FIELD_GET(MTK_FOE_IB1_PACKET_TYPE, entry->data.ib1); const struct mtk_soc_data *soc = ppe->eth->soc; + int type = MTK_FIELD_GET(soc->foe.ib1.pkt_type, entry->data.ib1); u32 hash; if (type == MTK_PPE_PKT_TYPE_BRIDGE) @@ -550,7 +562,8 @@ int mtk_foe_entry_commit(struct mtk_ppe *ppe, struct mtk_flow_entry *entry) hash = mtk_ppe_hash_entry(ppe->eth, &entry->data); entry->hash = 0xffff; spin_lock_bh(&ppe_lock); - hlist_add_head(&entry->list, &ppe->foe_flow[hash / soc->hash_offset]); + hlist_add_head(&entry->list, + &ppe->foe_flow[hash / soc->foe.hash_offset]); spin_unlock_bh(&ppe_lock); return 0; @@ -564,7 +577,7 @@ mtk_foe_entry_commit_subflow(struct mtk_ppe *ppe, struct mtk_flow_entry *entry, struct mtk_flow_entry *flow_info; struct mtk_foe_entry foe = {}, *hwe; struct mtk_foe_mac_info *l2; - u32 ib1_mask = MTK_FOE_IB1_PACKET_TYPE | MTK_FOE_IB1_UDP; + u32 ib1_mask = soc->foe.ib1.pkt_type | MTK_FOE_IB1_UDP; int type; flow_info = kzalloc(offsetof(struct mtk_flow_entry, l2_data.end), @@ -576,24 +589,24 @@ mtk_foe_entry_commit_subflow(struct mtk_ppe *ppe, struct mtk_flow_entry *entry, flow_info->type = MTK_FLOW_TYPE_L2_SUBFLOW; flow_info->hash = hash; hlist_add_head(&flow_info->list, - &ppe->foe_flow[hash / soc->hash_offset]); + &ppe->foe_flow[hash / soc->foe.hash_offset]); hlist_add_head(&flow_info->l2_data.list, 
&entry->l2_flows); hwe = mtk_foe_get_entry(ppe, hash); - memcpy(&foe, hwe, soc->foe_entry_size); + memcpy(&foe, hwe, soc->foe.entry_size); foe.ib1 &= ib1_mask; foe.ib1 |= entry->data.ib1 & ~ib1_mask; - l2 = mtk_foe_entry_l2(&foe); + l2 = mtk_foe_entry_l2(ppe->eth, &foe); memcpy(l2, &entry->data.bridge.l2, sizeof(*l2)); - type = FIELD_GET(MTK_FOE_IB1_PACKET_TYPE, foe.ib1); + type = MTK_FIELD_GET(soc->foe.ib1.pkt_type, foe.ib1); if (type == MTK_PPE_PKT_TYPE_IPV4_HNAPT) memcpy(&foe.ipv4.new, &foe.ipv4.orig, sizeof(foe.ipv4.new)); else if (type >= MTK_PPE_PKT_TYPE_IPV6_ROUTE_3T && l2->etype == ETH_P_IP) l2->etype = ETH_P_IPV6; - *mtk_foe_entry_ib2(&foe) = entry->data.bridge.ib2; + *mtk_foe_entry_ib2(ppe->eth, &foe) = entry->data.bridge.ib2; __mtk_foe_entry_commit(ppe, &foe, hash); } @@ -601,7 +614,7 @@ mtk_foe_entry_commit_subflow(struct mtk_ppe *ppe, struct mtk_flow_entry *entry, void __mtk_ppe_check_skb(struct mtk_ppe *ppe, struct sk_buff *skb, u16 hash) { const struct mtk_soc_data *soc = ppe->eth->soc; - struct hlist_head *head = &ppe->foe_flow[hash / soc->hash_offset]; + struct hlist_head *head = &ppe->foe_flow[hash / soc->foe.hash_offset]; struct mtk_foe_entry *hwe = mtk_foe_get_entry(ppe, hash); struct mtk_flow_entry *entry; struct mtk_foe_bridge key = {}; @@ -626,7 +639,7 @@ void __mtk_ppe_check_skb(struct mtk_ppe *ppe, struct sk_buff *skb, u16 hash) continue; } - if (found || !mtk_flow_entry_match(entry, hwe)) { + if (found || !mtk_flow_entry_match(ppe->eth, entry, hwe)) { if (entry->hash != 0xffff) entry->hash = 0xffff; continue; @@ -706,14 +719,14 @@ struct mtk_ppe *mtk_ppe_init(struct mtk_eth *eth, void __iomem *base, ppe->version = version; foe = dmam_alloc_coherent(ppe->dev, - MTK_PPE_ENTRIES * soc->foe_entry_size, + MTK_PPE_ENTRIES * soc->foe.entry_size, &ppe->foe_phys, GFP_KERNEL); if (!foe) return NULL; ppe->foe_table = foe; - foe_flow_size = (MTK_PPE_ENTRIES / soc->hash_offset) * + foe_flow_size = (MTK_PPE_ENTRIES / soc->foe.hash_offset) * sizeof(*ppe->foe_flow); ppe->foe_flow = devm_kzalloc(dev, foe_flow_size, GFP_KERNEL); if (!ppe->foe_flow) @@ -730,7 +743,7 @@ static void mtk_ppe_init_foe_table(struct mtk_ppe *ppe) int i, k; memset(ppe->foe_table, 0, - MTK_PPE_ENTRIES * ppe->eth->soc->foe_entry_size); + MTK_PPE_ENTRIES * ppe->eth->soc->foe.entry_size); if (!IS_ENABLED(CONFIG_SOC_MT7621)) return; diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.h b/drivers/net/ethernet/mediatek/mtk_ppe.h index 6d4c91acd1a5..a364f45edf38 100644 --- a/drivers/net/ethernet/mediatek/mtk_ppe.h +++ b/drivers/net/ethernet/mediatek/mtk_ppe.h @@ -61,6 +61,8 @@ enum { #define MTK_FOE_VLAN2_WINFO_WCID GENMASK(13, 6) #define MTK_FOE_VLAN2_WINFO_RING GENMASK(15, 14) +#define MTK_FIELD_PREP(mask, val) (((typeof(mask))(val) << __bf_shf(mask)) & (mask)) +#define MTK_FIELD_GET(mask, val) ((typeof(mask))(((val) & (mask)) >> __bf_shf(mask))) enum { MTK_FOE_STATE_INVALID, MTK_FOE_STATE_UNBIND, @@ -306,20 +308,27 @@ mtk_ppe_check_skb(struct mtk_ppe *ppe, struct sk_buff *skb, u16 hash) __mtk_ppe_check_skb(ppe, skb, hash); } -int mtk_foe_entry_prepare(struct mtk_foe_entry *entry, int type, int l4proto, - u8 pse_port, u8 *src_mac, u8 *dest_mac); -int mtk_foe_entry_set_pse_port(struct mtk_foe_entry *entry, u8 port); -int mtk_foe_entry_set_ipv4_tuple(struct mtk_foe_entry *entry, bool orig, +int mtk_foe_entry_prepare(struct mtk_eth *eth, struct mtk_foe_entry *entry, + int type, int l4proto, u8 pse_port, u8 *src_mac, + u8 *dest_mac); +int mtk_foe_entry_set_pse_port(struct mtk_eth *eth, + struct mtk_foe_entry *entry, 
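(For illustration only, not part of the series: the MTK_FIELD_PREP()/MTK_FIELD_GET() macros added to mtk_ppe.h above mirror FIELD_PREP()/FIELD_GET() from <linux/bitfield.h> but take the mask as a runtime value - here the per-SoC masks stored in eth->soc->foe - which is why they cannot perform the usual compile-time mask checks. A self-contained userspace sketch of their behaviour; 0x0e000000 is GENMASK(27, 25), the ib1.pkt_type field used for mt7621/mt7622/mt7623 in this patch.)

#include <stdint.h>
#include <stdio.h>

/* same definitions as in the patch; __bf_shf() as in <linux/bitfield.h> */
#define __bf_shf(x)		(__builtin_ffsll(x) - 1)
#define MTK_FIELD_PREP(mask, val) (((typeof(mask))(val) << __bf_shf(mask)) & (mask))
#define MTK_FIELD_GET(mask, val)  ((typeof(mask))(((val) & (mask)) >> __bf_shf(mask)))

int main(void)
{
	uint32_t pkt_type = 0x0e000000;	/* GENMASK(27, 25) */
	uint32_t ib1 = MTK_FIELD_PREP(pkt_type, 5);

	printf("ib1 = 0x%08x, pkt_type = %u\n", ib1, MTK_FIELD_GET(pkt_type, ib1));
	return 0;
}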
u8 port); +int mtk_foe_entry_set_ipv4_tuple(struct mtk_eth *eth, + struct mtk_foe_entry *entry, bool orig, __be32 src_addr, __be16 src_port, __be32 dest_addr, __be16 dest_port); -int mtk_foe_entry_set_ipv6_tuple(struct mtk_foe_entry *entry, +int mtk_foe_entry_set_ipv6_tuple(struct mtk_eth *eth, + struct mtk_foe_entry *entry, __be32 *src_addr, __be16 src_port, __be32 *dest_addr, __be16 dest_port); -int mtk_foe_entry_set_dsa(struct mtk_foe_entry *entry, int port); -int mtk_foe_entry_set_vlan(struct mtk_foe_entry *entry, int vid); -int mtk_foe_entry_set_pppoe(struct mtk_foe_entry *entry, int sid); -int mtk_foe_entry_set_wdma(struct mtk_foe_entry *entry, int wdma_idx, int txq, - int bss, int wcid); +int mtk_foe_entry_set_dsa(struct mtk_eth *eth, struct mtk_foe_entry *entry, + int port); +int mtk_foe_entry_set_vlan(struct mtk_eth *eth, struct mtk_foe_entry *entry, + int vid); +int mtk_foe_entry_set_pppoe(struct mtk_eth *eth, struct mtk_foe_entry *entry, + int sid); +int mtk_foe_entry_set_wdma(struct mtk_eth *eth, struct mtk_foe_entry *entry, + int wdma_idx, int txq, int bss, int wcid); int mtk_foe_entry_commit(struct mtk_ppe *ppe, struct mtk_flow_entry *entry); void mtk_foe_entry_clear(struct mtk_ppe *ppe, struct mtk_flow_entry *entry); int mtk_foe_entry_idle_time(struct mtk_ppe *ppe, struct mtk_flow_entry *entry); diff --git a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c index 0324e7750065..56c49ac712b9 100644 --- a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c +++ b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c @@ -52,18 +52,19 @@ static const struct rhashtable_params mtk_flow_ht_params = { }; static int -mtk_flow_set_ipv4_addr(struct mtk_foe_entry *foe, struct mtk_flow_data *data, - bool egress) +mtk_flow_set_ipv4_addr(struct mtk_eth *eth, struct mtk_foe_entry *foe, + struct mtk_flow_data *data, bool egress) { - return mtk_foe_entry_set_ipv4_tuple(foe, egress, + return mtk_foe_entry_set_ipv4_tuple(eth, foe, egress, data->v4.src_addr, data->src_port, data->v4.dst_addr, data->dst_port); } static int -mtk_flow_set_ipv6_addr(struct mtk_foe_entry *foe, struct mtk_flow_data *data) +mtk_flow_set_ipv6_addr(struct mtk_eth *eth, struct mtk_foe_entry *foe, + struct mtk_flow_data *data) { - return mtk_foe_entry_set_ipv6_tuple(foe, + return mtk_foe_entry_set_ipv6_tuple(eth, foe, data->v6.src_addr.s6_addr32, data->src_port, data->v6.dst_addr.s6_addr32, data->dst_port); } @@ -190,8 +191,8 @@ mtk_flow_set_output_device(struct mtk_eth *eth, struct mtk_foe_entry *foe, int pse_port, dsa_port; if (mtk_flow_get_wdma_info(dev, dest_mac, &info) == 0) { - mtk_foe_entry_set_wdma(foe, info.wdma_idx, info.queue, info.bss, - info.wcid); + mtk_foe_entry_set_wdma(eth, foe, info.wdma_idx, info.queue, + info.bss, info.wcid); pse_port = 3; *wed_index = info.wdma_idx; goto out; @@ -199,7 +200,7 @@ mtk_flow_set_output_device(struct mtk_eth *eth, struct mtk_foe_entry *foe, dsa_port = mtk_flow_get_dsa_port(&dev); if (dsa_port >= 0) - mtk_foe_entry_set_dsa(foe, dsa_port); + mtk_foe_entry_set_dsa(eth, foe, dsa_port); if (dev == eth->netdev[0]) pse_port = 1; @@ -209,7 +210,7 @@ mtk_flow_set_output_device(struct mtk_eth *eth, struct mtk_foe_entry *foe, return -EOPNOTSUPP; out: - mtk_foe_entry_set_pse_port(foe, pse_port); + mtk_foe_entry_set_pse_port(eth, foe, pse_port); return 0; } @@ -333,9 +334,8 @@ mtk_flow_offload_replace(struct mtk_eth *eth, struct flow_cls_offload *f) !is_valid_ether_addr(data.eth.h_dest)) return -EINVAL; - err = mtk_foe_entry_prepare(&foe, 
offload_type, l4proto, 0, - data.eth.h_source, - data.eth.h_dest); + err = mtk_foe_entry_prepare(eth, &foe, offload_type, l4proto, 0, + data.eth.h_source, data.eth.h_dest); if (err) return err; @@ -360,7 +360,7 @@ mtk_flow_offload_replace(struct mtk_eth *eth, struct flow_cls_offload *f) data.v4.src_addr = addrs.key->src; data.v4.dst_addr = addrs.key->dst; - mtk_flow_set_ipv4_addr(&foe, &data, false); + mtk_flow_set_ipv4_addr(eth, &foe, &data, false); } if (addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) { @@ -371,7 +371,7 @@ mtk_flow_offload_replace(struct mtk_eth *eth, struct flow_cls_offload *f) data.v6.src_addr = addrs.key->src; data.v6.dst_addr = addrs.key->dst; - mtk_flow_set_ipv6_addr(&foe, &data); + mtk_flow_set_ipv6_addr(eth, &foe, &data); } flow_action_for_each(i, act, &rule->action) { @@ -401,7 +401,7 @@ mtk_flow_offload_replace(struct mtk_eth *eth, struct flow_cls_offload *f) } if (addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) { - err = mtk_flow_set_ipv4_addr(&foe, &data, true); + err = mtk_flow_set_ipv4_addr(eth, &foe, &data, true); if (err) return err; } @@ -413,10 +413,10 @@ mtk_flow_offload_replace(struct mtk_eth *eth, struct flow_cls_offload *f) if (data.vlan.proto != htons(ETH_P_8021Q)) return -EOPNOTSUPP; - mtk_foe_entry_set_vlan(&foe, data.vlan.id); + mtk_foe_entry_set_vlan(eth, &foe, data.vlan.id); } if (data.pppoe.num == 1) - mtk_foe_entry_set_pppoe(&foe, data.pppoe.sid); + mtk_foe_entry_set_pppoe(eth, &foe, data.pppoe.sid); err = mtk_flow_set_output_device(eth, &foe, odev, data.eth.h_dest, &wed_index); From patchwork Thu Sep 8 19:33:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 12970562 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0B69EC6FA8B for ; Thu, 8 Sep 2022 19:35:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230387AbiIHTfx (ORCPT ); Thu, 8 Sep 2022 15:35:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56750 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230349AbiIHTfi (ORCPT ); Thu, 8 Sep 2022 15:35:38 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B90C8642F0; Thu, 8 Sep 2022 12:35:37 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 56E9C61DF3; Thu, 8 Sep 2022 19:35:37 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 635BEC433D6; Thu, 8 Sep 2022 19:35:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1662665736; bh=kUP4hHEn+uQm6cpFwdVQqzeJKjDXMYqTzZOyK7QWFdQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=eeidYItdeIS3tLC2ntPmsA0A2E9NyKAF3WBSbg2/GcKVVTvwsdR+9SOr4G10pIz36 spTw+9kliQypgSoVgKT7sQ1WJiAs2Om7bHmYy9+I7JBWa2jlzSiv5BYUgrb5lphnNJ Ktt55dUw7i+IgkqNy7k1J2FcrX+hdyvdPq7KQIJXVZzXAiY5YyOrdnz+5l/WBV7/+S PNm4azCbfRxbMR8ld55mc1T7NQnve4/VY3Rx+HUXNQgYGyYBCtaA3WGj8awjVvgAAH lntL2edOQwQ6knf0hfNf8rZVNRLPVhBNWqHYY1biykIsKC3Bpr/W1+FKtBfiQQNJZb sgQHdG50vpvWg== From: Lorenzo Bianconi To: 
netdev@vger.kernel.org Cc: nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, matthias.bgg@gmail.com, linux-mediatek@lists.infradead.org, lorenzo.bianconi@redhat.com, Bo.Jiao@mediatek.com, sujuan.chen@mediatek.com, ryder.Lee@mediatek.com, evelyn.tsai@mediatek.com, devicetree@vger.kernel.org, robh@kernel.org Subject: [PATCH net-next 09/12] net: ethernet: mtk_eth_wed: add mtk_wed_configure_irq and mtk_wed_dma_{enable/disable} Date: Thu, 8 Sep 2022 21:33:43 +0200 Message-Id: <2c7d84f061d2a3693b760163f810cec56d629d17.1662661555.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.37.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Introduce mtk_wed_configure_irq, mtk_wed_dma_enable and mtk_wed_dma_disable utility routines. This is a preliminary patch to introduce mt7986 wed support. Co-developed-by: Bo Jiao Signed-off-by: Bo Jiao Co-developed-by: Sujuan Chen Signed-off-by: Sujuan Chen Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_wed.c | 87 +++++++++++++------- drivers/net/ethernet/mediatek/mtk_wed_regs.h | 6 +- 2 files changed, 64 insertions(+), 29 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_wed.c b/drivers/net/ethernet/mediatek/mtk_wed.c index 29be2fcafea3..d1ef5b563ddf 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed.c +++ b/drivers/net/ethernet/mediatek/mtk_wed.c @@ -237,9 +237,30 @@ mtk_wed_set_ext_int(struct mtk_wed_device *dev, bool en) } static void -mtk_wed_stop(struct mtk_wed_device *dev) +mtk_wed_dma_disable(struct mtk_wed_device *dev) { + wed_clr(dev, MTK_WED_WPDMA_GLO_CFG, + MTK_WED_WPDMA_GLO_CFG_TX_DRV_EN | + MTK_WED_WPDMA_GLO_CFG_RX_DRV_EN); + + wed_clr(dev, MTK_WED_WDMA_GLO_CFG, MTK_WED_WDMA_GLO_CFG_RX_DRV_EN); + + wed_clr(dev, MTK_WED_GLO_CFG, + MTK_WED_GLO_CFG_TX_DMA_EN | + MTK_WED_GLO_CFG_RX_DMA_EN); + regmap_write(dev->hw->mirror, dev->hw->index * 4, 0); + wdma_m32(dev, MTK_WDMA_GLO_CFG, + MTK_WDMA_GLO_CFG_TX_DMA_EN | + MTK_WDMA_GLO_CFG_RX_INFO1_PRERES | + MTK_WDMA_GLO_CFG_RX_INFO2_PRERES | + MTK_WDMA_GLO_CFG_RX_INFO3_PRERES, 0); +} + +static void +mtk_wed_stop(struct mtk_wed_device *dev) +{ + mtk_wed_dma_disable(dev); mtk_wed_set_ext_int(dev, false); wed_clr(dev, MTK_WED_CTRL, @@ -252,15 +273,6 @@ mtk_wed_stop(struct mtk_wed_device *dev) wdma_w32(dev, MTK_WDMA_INT_MASK, 0); wdma_w32(dev, MTK_WDMA_INT_GRP2, 0); wed_w32(dev, MTK_WED_WPDMA_INT_MASK, 0); - - wed_clr(dev, MTK_WED_GLO_CFG, - MTK_WED_GLO_CFG_TX_DMA_EN | - MTK_WED_GLO_CFG_RX_DMA_EN); - wed_clr(dev, MTK_WED_WPDMA_GLO_CFG, - MTK_WED_WPDMA_GLO_CFG_TX_DRV_EN | - MTK_WED_WPDMA_GLO_CFG_RX_DRV_EN); - wed_clr(dev, MTK_WED_WDMA_GLO_CFG, - MTK_WED_WDMA_GLO_CFG_RX_DRV_EN); } static void @@ -313,7 +325,10 @@ mtk_wed_hw_init_early(struct mtk_wed_device *dev) MTK_WED_WDMA_GLO_CFG_IDLE_DMAD_SUPPLY; wed_m32(dev, MTK_WED_WDMA_GLO_CFG, mask, set); - wdma_set(dev, MTK_WDMA_GLO_CFG, MTK_WDMA_GLO_CFG_RX_INFO_PRERES); + wdma_set(dev, MTK_WDMA_GLO_CFG, + MTK_WDMA_GLO_CFG_RX_INFO1_PRERES | + MTK_WDMA_GLO_CFG_RX_INFO2_PRERES | + MTK_WDMA_GLO_CFG_RX_INFO3_PRERES); offset = dev->hw->index ? 
0x04000400 : 0; wed_w32(dev, MTK_WED_WDMA_OFFSET0, 0x2a042a20 + offset); @@ -520,43 +535,38 @@ mtk_wed_wdma_ring_setup(struct mtk_wed_device *dev, int idx, int size) } static void -mtk_wed_start(struct mtk_wed_device *dev, u32 irq_mask) +mtk_wed_configure_irq(struct mtk_wed_device *dev, u32 irq_mask) { - u32 wdma_mask; - u32 val; - int i; - - for (i = 0; i < ARRAY_SIZE(dev->tx_wdma); i++) - if (!dev->tx_wdma[i].desc) - mtk_wed_wdma_ring_setup(dev, i, 16); - - wdma_mask = FIELD_PREP(MTK_WDMA_INT_MASK_RX_DONE, GENMASK(1, 0)); - - mtk_wed_hw_init(dev); + u32 wdma_mask = FIELD_PREP(MTK_WDMA_INT_MASK_RX_DONE, GENMASK(1, 0)); + /* wed control cr set */ wed_set(dev, MTK_WED_CTRL, MTK_WED_CTRL_WDMA_INT_AGENT_EN | MTK_WED_CTRL_WPDMA_INT_AGENT_EN | MTK_WED_CTRL_WED_TX_BM_EN | MTK_WED_CTRL_WED_TX_FREE_AGENT_EN); - wed_w32(dev, MTK_WED_PCIE_INT_TRIGGER, MTK_WED_PCIE_INT_TRIGGER_STATUS); + wed_w32(dev, MTK_WED_PCIE_INT_TRIGGER, + MTK_WED_PCIE_INT_TRIGGER_STATUS); wed_w32(dev, MTK_WED_WPDMA_INT_TRIGGER, MTK_WED_WPDMA_INT_TRIGGER_RX_DONE | MTK_WED_WPDMA_INT_TRIGGER_TX_DONE); - wed_set(dev, MTK_WED_WPDMA_INT_CTRL, - MTK_WED_WPDMA_INT_CTRL_SUBRT_ADV); - + /* initail wdma interrupt agent */ wed_w32(dev, MTK_WED_WDMA_INT_TRIGGER, wdma_mask); wed_clr(dev, MTK_WED_WDMA_INT_CTRL, wdma_mask); wdma_w32(dev, MTK_WDMA_INT_MASK, wdma_mask); wdma_w32(dev, MTK_WDMA_INT_GRP2, wdma_mask); - wed_w32(dev, MTK_WED_WPDMA_INT_MASK, irq_mask); wed_w32(dev, MTK_WED_INT_MASK, irq_mask); +} + +static void +mtk_wed_dma_enable(struct mtk_wed_device *dev) +{ + wed_set(dev, MTK_WED_WPDMA_INT_CTRL, MTK_WED_WPDMA_INT_CTRL_SUBRT_ADV); wed_set(dev, MTK_WED_GLO_CFG, MTK_WED_GLO_CFG_TX_DMA_EN | @@ -567,6 +577,26 @@ mtk_wed_start(struct mtk_wed_device *dev, u32 irq_mask) wed_set(dev, MTK_WED_WDMA_GLO_CFG, MTK_WED_WDMA_GLO_CFG_RX_DRV_EN); + wdma_set(dev, MTK_WDMA_GLO_CFG, + MTK_WDMA_GLO_CFG_TX_DMA_EN | + MTK_WDMA_GLO_CFG_RX_INFO1_PRERES | + MTK_WDMA_GLO_CFG_RX_INFO2_PRERES | + MTK_WDMA_GLO_CFG_RX_INFO3_PRERES); +} + +static void +mtk_wed_start(struct mtk_wed_device *dev, u32 irq_mask) +{ + u32 val; + int i; + + for (i = 0; i < ARRAY_SIZE(dev->tx_wdma); i++) + if (!dev->tx_wdma[i].desc) + mtk_wed_wdma_ring_setup(dev, i, 16); + + mtk_wed_hw_init(dev); + mtk_wed_configure_irq(dev, irq_mask); + mtk_wed_set_ext_int(dev, true); val = dev->wlan.wpdma_phys | MTK_PCIE_MIRROR_MAP_EN | @@ -577,6 +607,7 @@ mtk_wed_start(struct mtk_wed_device *dev, u32 irq_mask) val |= BIT(0); regmap_write(dev->hw->mirror, dev->hw->index * 4, val); + mtk_wed_dma_enable(dev); dev->running = true; } diff --git a/drivers/net/ethernet/mediatek/mtk_wed_regs.h b/drivers/net/ethernet/mediatek/mtk_wed_regs.h index 0a0465ea58b4..eec22daebd30 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_regs.h +++ b/drivers/net/ethernet/mediatek/mtk_wed_regs.h @@ -224,7 +224,11 @@ struct mtk_wdma_desc { #define MTK_WDMA_RING_RX(_n) (0x100 + (_n) * 0x10) #define MTK_WDMA_GLO_CFG 0x204 -#define MTK_WDMA_GLO_CFG_RX_INFO_PRERES GENMASK(28, 26) +#define MTK_WDMA_GLO_CFG_TX_DMA_EN BIT(0) +#define MTK_WDMA_GLO_CFG_RX_DMA_EN BIT(2) +#define MTK_WDMA_GLO_CFG_RX_INFO3_PRERES BIT(26) +#define MTK_WDMA_GLO_CFG_RX_INFO2_PRERES BIT(27) +#define MTK_WDMA_GLO_CFG_RX_INFO1_PRERES BIT(28) #define MTK_WDMA_RESET_IDX 0x208 #define MTK_WDMA_RESET_IDX_TX GENMASK(3, 0) From patchwork Thu Sep 8 19:33:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 12970563 X-Patchwork-Delegate: kuba@kernel.org Return-Path: 
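(For illustration only, not part of the series: splitting MTK_WDMA_GLO_CFG_RX_INFO_PRERES, formerly GENMASK(28, 26), into three BIT() definitions does not change what the existing (v1) path programs; it only allows the mt7986 (v2) path added in the next patch to set INFO1/INFO2 while keeping INFO3 in the v1-only branch of mtk_wed_dma_enable()/mtk_wed_dma_disable(). A quick standalone check that the three bits cover the old mask; BIT()/GENMASK() are re-declared here only to keep the example self-contained.)

#include <stdint.h>
#include <stdio.h>

#define BIT(n)		(1u << (n))
#define GENMASK(h, l)	(((~0u) >> (31 - (h))) & ~((1u << (l)) - 1))

int main(void)
{
	uint32_t old_mask = GENMASK(28, 26);
	uint32_t split = BIT(26) | BIT(27) | BIT(28);	/* INFO3 | INFO2 | INFO1 */

	printf("old: 0x%08x, split: 0x%08x, %s\n", old_mask, split,
	       old_mask == split ? "identical" : "mismatch");
	return 0;
}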
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 758E1C6FA8D for ; Thu, 8 Sep 2022 19:35:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230090AbiIHTfy (ORCPT ); Thu, 8 Sep 2022 15:35:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57022 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229624AbiIHTfo (ORCPT ); Thu, 8 Sep 2022 15:35:44 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BAACC7F139; Thu, 8 Sep 2022 12:35:41 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 4990761C50; Thu, 8 Sep 2022 19:35:41 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2B1BBC433D7; Thu, 8 Sep 2022 19:35:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1662665740; bh=752XwboePlTgd5LMLUTiln4ykmNZ+AAr+03QFf+D4kQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=SKVkHNret/9b+jHZiFr1tMtIWgDLSvBx+UxcPtFCQTFCpN26c8YiyvZlUMLumzAbB 6fIZ7PQnPRFpXKbkNCRDsknnmDC6P7BH44sjzjvUfXl9Yl/cCfweGyJE5+PksMwLUA Keare6zQiKvzGk/ATVS0hUdcnQlZhcWOY1KAfMlzmBpoemuBrneLosvNMMu9NuoxU7 L8Iq7L2gP/o+VjLjuAxDcEgzFJkN8V6kq8tyi/N1Ne4z5qZCfgWjuR5vYs2Q3WoyRW S3AXg5x2GTiW3thSeNsYpvYYCJSsjQm3JF3LSZhnfzBIY7SAYMH01hqZnjk8637rtm ThEIIvLNx/jbg== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, matthias.bgg@gmail.com, linux-mediatek@lists.infradead.org, lorenzo.bianconi@redhat.com, Bo.Jiao@mediatek.com, sujuan.chen@mediatek.com, ryder.Lee@mediatek.com, evelyn.tsai@mediatek.com, devicetree@vger.kernel.org, robh@kernel.org Subject: [PATCH net-next 10/12] net: ethernet: mtk_eth_wed: add wed support for mt7986 chipset Date: Thu, 8 Sep 2022 21:33:44 +0200 Message-Id: <1600f04403768df24a1aa377af7ec342c889a3c2.1662661555.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.37.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Introduce Wireless Etherne Dispatcher support on transmission side for mt7986 chipset Co-developed-by: Bo Jiao Signed-off-by: Bo Jiao Co-developed-by: Sujuan Chen Signed-off-by: Sujuan Chen Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_eth_soc.c | 48 ++- drivers/net/ethernet/mediatek/mtk_eth_soc.h | 7 + drivers/net/ethernet/mediatek/mtk_wed.c | 362 ++++++++++++++---- drivers/net/ethernet/mediatek/mtk_wed.h | 8 +- .../net/ethernet/mediatek/mtk_wed_debugfs.c | 3 + drivers/net/ethernet/mediatek/mtk_wed_regs.h | 77 +++- include/linux/soc/mediatek/mtk_wed.h | 8 + 7 files changed, 418 insertions(+), 95 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index 8aa0a61b35fc..7dae650c4586 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -3944,6 +3944,7 @@ void mtk_eth_set_dma_device(struct mtk_eth *eth, struct device *dma_dev) static int 
mtk_probe(struct platform_device *pdev) { + struct resource *res = NULL; struct device_node *mac_np; struct mtk_eth *eth; int err, i; @@ -4024,16 +4025,31 @@ static int mtk_probe(struct platform_device *pdev) } } - for (i = 0;; i++) { - struct device_node *np = of_parse_phandle(pdev->dev.of_node, - "mediatek,wed", i); - void __iomem *wdma; - - if (!np || i >= ARRAY_SIZE(eth->soc->reg_map->wdma_base)) - break; + if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) { + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + if (!res) + return -EINVAL; + } - wdma = eth->base + eth->soc->reg_map->wdma_base[i]; - mtk_wed_add_hw(np, eth, wdma, i); + if (eth->soc->offload_version) { + for (i = 0;; i++) { + struct device_node *np; + phys_addr_t wdma_phy; + u32 wdma_base; + + if (i >= ARRAY_SIZE(eth->soc->reg_map->wdma_base)) + break; + + np = of_parse_phandle(pdev->dev.of_node, + "mediatek,wed", i); + if (!np) + break; + + wdma_base = eth->soc->reg_map->wdma_base[i]; + wdma_phy = res ? res->start + wdma_base : 0; + mtk_wed_add_hw(np, eth, eth->base + wdma_base, + wdma_phy, i); + } } for (i = 0; i < 3; i++) { @@ -4282,6 +4298,13 @@ static const struct mtk_soc_data mt7622_data = { .dst_port = GENMASK(7, 5), }, }, + .wed = { + .desc_ctrl_len1 = GENMASK(14, 0), + .desc_ctrl_last_seg1 = BIT(15), + .tx_bm_tkid_addr = 0x088, + .tx_bm_dyn_thr_lo = GENMASK(6, 0), + .tx_bm_dyn_thr_hi = GENMASK(22, 16), + }, }; static const struct mtk_soc_data mt7623_data = { @@ -4355,6 +4378,13 @@ static const struct mtk_soc_data mt7986_data = { .foe = { .hash_offset = 4, }, + .wed = { + .desc_ctrl_len1 = GENMASK(13, 0), + .desc_ctrl_last_seg1 = BIT(14), + .tx_bm_tkid_addr = 0x0c8, + .tx_bm_dyn_thr_lo = GENMASK(8, 0), + .tx_bm_dyn_thr_hi = GENMASK(24, 16), + }, }; static const struct mtk_soc_data rt5350_data = { diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h index 6d0b080c2048..e40ad480513d 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h @@ -1011,6 +1011,13 @@ struct mtk_soc_data { u16 dst_port; } ib2; } foe; + struct { + u32 desc_ctrl_len1; + u32 desc_ctrl_last_seg1; + u32 tx_bm_tkid_addr; + u32 tx_bm_dyn_thr_lo; + u32 tx_bm_dyn_thr_hi; + } wed; }; /* currently no SoC has more than 2 macs */ diff --git a/drivers/net/ethernet/mediatek/mtk_wed.c b/drivers/net/ethernet/mediatek/mtk_wed.c index d1ef5b563ddf..3ec891f05fa7 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed.c +++ b/drivers/net/ethernet/mediatek/mtk_wed.c @@ -25,6 +25,11 @@ #define MTK_WED_TX_RING_SIZE 2048 #define MTK_WED_WDMA_RING_SIZE 1024 +#define MTK_WED_MAX_GROUP_SIZE 0x100 +#define MTK_WED_VLD_GROUP_SIZE 0x40 +#define MTK_WED_PER_GROUP_PKT 128 + +#define MTK_WED_FBUF_SIZE 128 static struct mtk_wed_hw *hw_list[2]; static DEFINE_MUTEX(hw_lock); @@ -92,11 +97,13 @@ mtk_wed_assign(struct mtk_wed_device *dev) static int mtk_wed_buffer_alloc(struct mtk_wed_device *dev) { + struct mtk_eth *eth = dev->hw->eth; struct mtk_wdma_desc *desc; dma_addr_t desc_phys; void **page_list; int token = dev->wlan.token_start; int ring_size; + u32 last_seg; int n_pages; int i, page_idx; @@ -118,6 +125,9 @@ mtk_wed_buffer_alloc(struct mtk_wed_device *dev) dev->buf_ring.desc = desc; dev->buf_ring.desc_phys = desc_phys; + last_seg = dev->hw->version == 1 ? 
MTK_WDMA_DESC_CTRL_LAST_SEG1 + : MTK_WDMA_DESC_CTRL_LAST_SEG0; + for (i = 0, page_idx = 0; i < ring_size; i += MTK_WED_BUF_PER_PAGE) { dma_addr_t page_phys, buf_phys; struct page *page; @@ -150,11 +160,12 @@ mtk_wed_buffer_alloc(struct mtk_wed_device *dev) desc->buf0 = cpu_to_le32(buf_phys); desc->buf1 = cpu_to_le32(buf_phys + txd_size); + ctrl = FIELD_PREP(MTK_WDMA_DESC_CTRL_LEN0, txd_size) | - FIELD_PREP(MTK_WDMA_DESC_CTRL_LEN1, - MTK_WED_BUF_SIZE - txd_size) | - MTK_WDMA_DESC_CTRL_LAST_SEG1; - desc->ctrl = cpu_to_le32(ctrl); + MTK_FIELD_PREP(eth->soc->wed.desc_ctrl_len1, + MTK_WED_BUF_SIZE - txd_size); + + desc->ctrl = cpu_to_le32(ctrl | last_seg); desc->info = 0; desc++; @@ -209,7 +220,7 @@ mtk_wed_free_ring(struct mtk_wed_device *dev, struct mtk_wed_ring *ring) if (!ring->desc) return; - dma_free_coherent(dev->hw->dev, ring->size * sizeof(*ring->desc), + dma_free_coherent(dev->hw->dev, ring->size * ring->desc_size, ring->desc, ring->desc_phys); } @@ -229,6 +240,14 @@ mtk_wed_set_ext_int(struct mtk_wed_device *dev, bool en) { u32 mask = MTK_WED_EXT_INT_STATUS_ERROR_MASK; + if (dev->hw->version == 1) + mask |= MTK_WED_EXT_INT_STATUS_TX_DRV_R_RESP_ERR; + else + mask |= MTK_WED_EXT_INT_STATUS_RX_FBUF_LO_TH | + MTK_WED_EXT_INT_STATUS_RX_FBUF_HI_TH | + MTK_WED_EXT_INT_STATUS_RX_DRV_COHERENT | + MTK_WED_EXT_INT_STATUS_TX_DMA_W_RESP_ERR; + if (!dev->hw->num_flows) mask &= ~MTK_WED_EXT_INT_STATUS_TKID_WO_PYLD; @@ -236,6 +255,20 @@ mtk_wed_set_ext_int(struct mtk_wed_device *dev, bool en) wed_r32(dev, MTK_WED_EXT_INT_MASK); } +static void +mtk_wed_set_512_support(struct mtk_wed_device *dev, bool enable) +{ + if (enable) { + wed_w32(dev, MTK_WED_TXDP_CTRL, MTK_WED_TXDP_DW9_OVERWR); + wed_w32(dev, MTK_WED_TXP_DW1, + FIELD_PREP(MTK_WED_WPDMA_WRITE_TXP, 0x0103)); + } else { + wed_w32(dev, MTK_WED_TXP_DW1, + FIELD_PREP(MTK_WED_WPDMA_WRITE_TXP, 0x0100)); + wed_clr(dev, MTK_WED_TXDP_CTRL, MTK_WED_TXDP_DW9_OVERWR); + } +} + static void mtk_wed_dma_disable(struct mtk_wed_device *dev) { @@ -249,12 +282,22 @@ mtk_wed_dma_disable(struct mtk_wed_device *dev) MTK_WED_GLO_CFG_TX_DMA_EN | MTK_WED_GLO_CFG_RX_DMA_EN); - regmap_write(dev->hw->mirror, dev->hw->index * 4, 0); wdma_m32(dev, MTK_WDMA_GLO_CFG, MTK_WDMA_GLO_CFG_TX_DMA_EN | MTK_WDMA_GLO_CFG_RX_INFO1_PRERES | - MTK_WDMA_GLO_CFG_RX_INFO2_PRERES | - MTK_WDMA_GLO_CFG_RX_INFO3_PRERES, 0); + MTK_WDMA_GLO_CFG_RX_INFO2_PRERES, 0); + + if (dev->hw->version == 1) { + regmap_write(dev->hw->mirror, dev->hw->index * 4, 0); + wdma_m32(dev, MTK_WDMA_GLO_CFG, + MTK_WDMA_GLO_CFG_RX_INFO3_PRERES, 0); + } else { + wed_clr(dev, MTK_WED_WPDMA_GLO_CFG, + MTK_WED_WPDMA_GLO_CFG_RX_DRV_R0_PKT_PROC | + MTK_WED_WPDMA_GLO_CFG_RX_DRV_R0_CRX_SYNC); + + mtk_wed_set_512_support(dev, false); + } } static void @@ -293,7 +336,7 @@ mtk_wed_detach(struct mtk_wed_device *dev) mtk_wed_free_buffer(dev); mtk_wed_free_tx_rings(dev); - if (of_dma_is_coherent(wlan_node)) + if (of_dma_is_coherent(wlan_node) && hw->hifsys) regmap_update_bits(hw->hifsys, HIFSYS_DMA_AG_MAP, BIT(hw->index), BIT(hw->index)); @@ -308,14 +351,71 @@ mtk_wed_detach(struct mtk_wed_device *dev) mutex_unlock(&hw_lock); } +#define PCIE_BASE_ADDR0 0x11280000 +static void +mtk_wed_bus_init(struct mtk_wed_device *dev) +{ + struct device_node *node; + void __iomem *base_addr; + u32 val; + + node = of_parse_phandle(dev->hw->node, "mediatek,wed_pcie", 0); + if (!node) + return; + + base_addr = of_iomap(node, 0); + val = BIT(0) | readl(base_addr); + writel(val, base_addr); + + wed_w32(dev, MTK_WED_PCIE_INT_CTRL, + 
FIELD_PREP(MTK_WED_PCIE_INT_CTRL_POLL_EN, 2)); + + /* pcie interrupt control: pola/source selection */ + wed_set(dev, MTK_WED_PCIE_INT_CTRL, + MTK_WED_PCIE_INT_CTRL_MSK_EN_POLA | + FIELD_PREP(MTK_WED_PCIE_INT_CTRL_SRC_SEL, 1)); + wed_r32(dev, MTK_WED_PCIE_INT_CTRL); + + val = wed_r32(dev, MTK_WED_PCIE_CFG_INTM); + val = wed_r32(dev, MTK_WED_PCIE_CFG_BASE); + wed_w32(dev, MTK_WED_PCIE_CFG_INTM, PCIE_BASE_ADDR0 | 0x180); + wed_w32(dev, MTK_WED_PCIE_CFG_BASE, PCIE_BASE_ADDR0 | 0x184); + + val = wed_r32(dev, MTK_WED_PCIE_CFG_INTM); + val = wed_r32(dev, MTK_WED_PCIE_CFG_BASE); + + /* pcie interrupt status trigger register */ + wed_w32(dev, MTK_WED_PCIE_INT_TRIGGER, BIT(24)); + wed_r32(dev, MTK_WED_PCIE_INT_TRIGGER); + + /* pola setting */ + val = wed_r32(dev, MTK_WED_PCIE_INT_CTRL); + wed_set(dev, MTK_WED_PCIE_INT_CTRL, MTK_WED_PCIE_INT_CTRL_MSK_EN_POLA); +} + +static void +mtk_wed_set_wpdma(struct mtk_wed_device *dev) +{ + if (dev->hw->version == 1) { + wed_w32(dev, MTK_WED_WPDMA_CFG_BASE, dev->wlan.wpdma_phys); + } else { + mtk_wed_bus_init(dev); + + wed_w32(dev, MTK_WED_WPDMA_CFG_BASE, dev->wlan.wpdma_int); + wed_w32(dev, MTK_WED_WPDMA_CFG_INT_MASK, dev->wlan.wpdma_mask); + wed_w32(dev, MTK_WED_WPDMA_CFG_TX, dev->wlan.wpdma_tx); + wed_w32(dev, MTK_WED_WPDMA_CFG_TX_FREE, dev->wlan.wpdma_txfree); + } +} + static void mtk_wed_hw_init_early(struct mtk_wed_device *dev) { u32 mask, set; - u32 offset; mtk_wed_stop(dev); mtk_wed_reset(dev, MTK_WED_RESET_WED); + mtk_wed_set_wpdma(dev); mask = MTK_WED_WDMA_GLO_CFG_BT_SIZE | MTK_WED_WDMA_GLO_CFG_DYNAMIC_DMAD_RECYCLE | @@ -325,22 +425,40 @@ mtk_wed_hw_init_early(struct mtk_wed_device *dev) MTK_WED_WDMA_GLO_CFG_IDLE_DMAD_SUPPLY; wed_m32(dev, MTK_WED_WDMA_GLO_CFG, mask, set); - wdma_set(dev, MTK_WDMA_GLO_CFG, - MTK_WDMA_GLO_CFG_RX_INFO1_PRERES | - MTK_WDMA_GLO_CFG_RX_INFO2_PRERES | - MTK_WDMA_GLO_CFG_RX_INFO3_PRERES); + if (dev->hw->version == 1) { + u32 offset = dev->hw->index ? 0x04000400 : 0; - offset = dev->hw->index ? 
0x04000400 : 0; - wed_w32(dev, MTK_WED_WDMA_OFFSET0, 0x2a042a20 + offset); - wed_w32(dev, MTK_WED_WDMA_OFFSET1, 0x29002800 + offset); + wdma_set(dev, MTK_WDMA_GLO_CFG, + MTK_WDMA_GLO_CFG_RX_INFO1_PRERES | + MTK_WDMA_GLO_CFG_RX_INFO2_PRERES | + MTK_WDMA_GLO_CFG_RX_INFO3_PRERES); - wed_w32(dev, MTK_WED_PCIE_CFG_BASE, MTK_PCIE_BASE(dev->hw->index)); - wed_w32(dev, MTK_WED_WPDMA_CFG_BASE, dev->wlan.wpdma_phys); + wed_w32(dev, MTK_WED_WDMA_OFFSET0, 0x2a042a20 + offset); + wed_w32(dev, MTK_WED_WDMA_OFFSET1, 0x29002800 + offset); + wed_w32(dev, MTK_WED_PCIE_CFG_BASE, + MTK_PCIE_BASE(dev->hw->index)); + } else { + wed_w32(dev, MTK_WED_WDMA_CFG_BASE, dev->hw->wdma_phy); + wed_set(dev, MTK_WED_CTRL, MTK_WED_CTRL_ETH_DMAD_FMT); + wed_w32(dev, MTK_WED_WDMA_OFFSET0, + FIELD_PREP(MTK_WED_WDMA_OFST0_GLO_INTS, + MTK_WDMA_INT_STATUS) | + FIELD_PREP(MTK_WED_WDMA_OFST0_GLO_CFG, + MTK_WDMA_GLO_CFG)); + + wed_w32(dev, MTK_WED_WDMA_OFFSET1, + FIELD_PREP(MTK_WED_WDMA_OFST1_TX_CTRL, + MTK_WDMA_RING_TX(0)) | + FIELD_PREP(MTK_WED_WDMA_OFST1_RX_CTRL, + MTK_WDMA_RING_RX(0))); + } } static void mtk_wed_hw_init(struct mtk_wed_device *dev) { + struct mtk_eth *eth = dev->hw->eth; + if (dev->init_done) return; @@ -355,37 +473,57 @@ mtk_wed_hw_init(struct mtk_wed_device *dev) wed_w32(dev, MTK_WED_TX_BM_BASE, dev->buf_ring.desc_phys); - wed_w32(dev, MTK_WED_TX_BM_TKID, + wed_w32(dev, MTK_WED_TX_BM_BUF_LEN, MTK_WED_PKT_SIZE); + + wed_w32(dev, eth->soc->wed.tx_bm_tkid_addr, FIELD_PREP(MTK_WED_TX_BM_TKID_START, dev->wlan.token_start) | FIELD_PREP(MTK_WED_TX_BM_TKID_END, - dev->wlan.token_start + dev->wlan.nbuf - 1)); - - wed_w32(dev, MTK_WED_TX_BM_BUF_LEN, MTK_WED_PKT_SIZE); - + dev->wlan.token_start + + dev->wlan.nbuf - 1)); wed_w32(dev, MTK_WED_TX_BM_DYN_THR, - FIELD_PREP(MTK_WED_TX_BM_DYN_THR_LO, 1) | - MTK_WED_TX_BM_DYN_THR_HI); + MTK_FIELD_PREP(eth->soc->wed.tx_bm_dyn_thr_lo, + dev->hw->version == 1) | + eth->soc->wed.tx_bm_dyn_thr_hi); + + if (dev->hw->version > 1) { + wed_w32(dev, MTK_WED_TX_TKID_CTRL, + MTK_WED_TX_TKID_CTRL_PAUSE | + FIELD_PREP(MTK_WED_TX_TKID_CTRL_VLD_GRP_NUM, + dev->buf_ring.size / 128) | + FIELD_PREP(MTK_WED_TX_TKID_CTRL_RSV_GRP_NUM, + dev->buf_ring.size / 128)); + wed_w32(dev, MTK_WED_TX_TKID_DYN_THR, + FIELD_PREP(MTK_WED_TX_TKID_DYN_THR_LO, 0) | + MTK_WED_TX_TKID_DYN_THR_HI); + } mtk_wed_reset(dev, MTK_WED_RESET_TX_BM); - wed_set(dev, MTK_WED_CTRL, - MTK_WED_CTRL_WED_TX_BM_EN | - MTK_WED_CTRL_WED_TX_FREE_AGENT_EN); + if (dev->hw->version == 1) + wed_set(dev, MTK_WED_CTRL, + MTK_WED_CTRL_WED_TX_BM_EN | + MTK_WED_CTRL_WED_TX_FREE_AGENT_EN); + else + wed_clr(dev, MTK_WED_TX_TKID_CTRL, MTK_WED_TX_TKID_CTRL_PAUSE); wed_clr(dev, MTK_WED_TX_BM_CTRL, MTK_WED_TX_BM_CTRL_PAUSE); } static void -mtk_wed_ring_reset(struct mtk_wdma_desc *desc, int size) +mtk_wed_ring_reset(struct mtk_wed_ring *ring, int size) { + void *head = (void *)ring->desc; int i; for (i = 0; i < size; i++) { - desc[i].buf0 = 0; - desc[i].ctrl = cpu_to_le32(MTK_WDMA_DESC_CTRL_DMA_DONE); - desc[i].buf1 = 0; - desc[i].info = 0; + struct mtk_wdma_desc *desc; + + desc = (struct mtk_wdma_desc *)(head + i * ring->desc_size); + desc->buf0 = 0; + desc->ctrl = cpu_to_le32(MTK_WDMA_DESC_CTRL_DMA_DONE); + desc->buf1 = 0; + desc->info = 0; } } @@ -436,12 +574,10 @@ mtk_wed_reset_dma(struct mtk_wed_device *dev) int i; for (i = 0; i < ARRAY_SIZE(dev->tx_ring); i++) { - struct mtk_wdma_desc *desc = dev->tx_ring[i].desc; - - if (!desc) + if (!dev->tx_ring[i].desc) continue; - mtk_wed_ring_reset(desc, MTK_WED_TX_RING_SIZE); + 
mtk_wed_ring_reset(&dev->tx_ring[i], MTK_WED_TX_RING_SIZE); } if (mtk_wed_poll_busy(dev)) @@ -498,16 +634,16 @@ mtk_wed_reset_dma(struct mtk_wed_device *dev) static int mtk_wed_ring_alloc(struct mtk_wed_device *dev, struct mtk_wed_ring *ring, - int size) + int size, u32 desc_size) { - ring->desc = dma_alloc_coherent(dev->hw->dev, - size * sizeof(*ring->desc), + ring->desc = dma_alloc_coherent(dev->hw->dev, size * desc_size, &ring->desc_phys, GFP_KERNEL); if (!ring->desc) return -ENOMEM; + ring->desc_size = desc_size; ring->size = size; - mtk_wed_ring_reset(ring->desc, size); + mtk_wed_ring_reset(ring, size); return 0; } @@ -515,9 +651,10 @@ mtk_wed_ring_alloc(struct mtk_wed_device *dev, struct mtk_wed_ring *ring, static int mtk_wed_wdma_ring_setup(struct mtk_wed_device *dev, int idx, int size) { + u32 desc_size = sizeof(struct mtk_wdma_desc) * dev->hw->version; struct mtk_wed_ring *wdma = &dev->tx_wdma[idx]; - if (mtk_wed_ring_alloc(dev, wdma, MTK_WED_WDMA_RING_SIZE)) + if (mtk_wed_ring_alloc(dev, wdma, MTK_WED_WDMA_RING_SIZE, desc_size)) return -ENOMEM; wdma_w32(dev, MTK_WDMA_RING_RX(idx) + MTK_WED_RING_OFS_BASE, @@ -546,16 +683,41 @@ mtk_wed_configure_irq(struct mtk_wed_device *dev, u32 irq_mask) MTK_WED_CTRL_WED_TX_BM_EN | MTK_WED_CTRL_WED_TX_FREE_AGENT_EN); - wed_w32(dev, MTK_WED_PCIE_INT_TRIGGER, - MTK_WED_PCIE_INT_TRIGGER_STATUS); + if (dev->hw->version == 1) { + wed_w32(dev, MTK_WED_PCIE_INT_TRIGGER, + MTK_WED_PCIE_INT_TRIGGER_STATUS); - wed_w32(dev, MTK_WED_WPDMA_INT_TRIGGER, - MTK_WED_WPDMA_INT_TRIGGER_RX_DONE | - MTK_WED_WPDMA_INT_TRIGGER_TX_DONE); + wed_w32(dev, MTK_WED_WPDMA_INT_TRIGGER, + MTK_WED_WPDMA_INT_TRIGGER_RX_DONE | + MTK_WED_WPDMA_INT_TRIGGER_TX_DONE); + + wed_clr(dev, MTK_WED_WDMA_INT_CTRL, wdma_mask); + } else { + /* initail tx interrupt trigger */ + wed_w32(dev, MTK_WED_WPDMA_INT_CTRL_TX, + MTK_WED_WPDMA_INT_CTRL_TX0_DONE_EN | + MTK_WED_WPDMA_INT_CTRL_TX0_DONE_CLR | + MTK_WED_WPDMA_INT_CTRL_TX1_DONE_EN | + MTK_WED_WPDMA_INT_CTRL_TX1_DONE_CLR | + FIELD_PREP(MTK_WED_WPDMA_INT_CTRL_TX0_DONE_TRIG, + dev->wlan.tx_tbit[0]) | + FIELD_PREP(MTK_WED_WPDMA_INT_CTRL_TX1_DONE_TRIG, + dev->wlan.tx_tbit[1])); + + /* initail txfree interrupt trigger */ + wed_w32(dev, MTK_WED_WPDMA_INT_CTRL_TX_FREE, + MTK_WED_WPDMA_INT_CTRL_TX_FREE_DONE_EN | + MTK_WED_WPDMA_INT_CTRL_TX_FREE_DONE_CLR | + FIELD_PREP(MTK_WED_WPDMA_INT_CTRL_TX_FREE_DONE_TRIG, + dev->wlan.txfree_tbit)); + + wed_w32(dev, MTK_WED_WDMA_INT_CLR, wdma_mask); + wed_set(dev, MTK_WED_WDMA_INT_CTRL, + FIELD_PREP(MTK_WED_WDMA_INT_CTRL_POLL_SRC_SEL, + dev->wdma_idx)); + } - /* initail wdma interrupt agent */ wed_w32(dev, MTK_WED_WDMA_INT_TRIGGER, wdma_mask); - wed_clr(dev, MTK_WED_WDMA_INT_CTRL, wdma_mask); wdma_w32(dev, MTK_WDMA_INT_MASK, wdma_mask); wdma_w32(dev, MTK_WDMA_INT_GRP2, wdma_mask); @@ -580,14 +742,28 @@ mtk_wed_dma_enable(struct mtk_wed_device *dev) wdma_set(dev, MTK_WDMA_GLO_CFG, MTK_WDMA_GLO_CFG_TX_DMA_EN | MTK_WDMA_GLO_CFG_RX_INFO1_PRERES | - MTK_WDMA_GLO_CFG_RX_INFO2_PRERES | - MTK_WDMA_GLO_CFG_RX_INFO3_PRERES); + MTK_WDMA_GLO_CFG_RX_INFO2_PRERES); + + if (dev->hw->version == 1) { + wdma_set(dev, MTK_WDMA_GLO_CFG, + MTK_WDMA_GLO_CFG_RX_INFO3_PRERES); + } else { + wed_set(dev, MTK_WED_WPDMA_CTRL, + MTK_WED_WPDMA_CTRL_SDL1_FIXED); + + wed_set(dev, MTK_WED_WPDMA_GLO_CFG, + MTK_WED_WPDMA_GLO_CFG_RX_DRV_R0_PKT_PROC | + MTK_WED_WPDMA_GLO_CFG_RX_DRV_R0_CRX_SYNC); + + wed_clr(dev, MTK_WED_WPDMA_GLO_CFG, + MTK_WED_WPDMA_GLO_CFG_TX_TKID_KEEP | + MTK_WED_WPDMA_GLO_CFG_TX_DMAD_DW3_PREV); + } } static void 
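(For illustration only, not part of the series: mtk_wed_ring_reset() and mtk_wed_ring_alloc() now take the descriptor stride from ring->desc_size because on the v2 hardware a ring slot is two struct mtk_wdma_desc wide - mtk_wed_wdma_ring_setup() computes desc_size = sizeof(struct mtk_wdma_desc) * dev->hw->version - so indexing a plain descriptor array would only touch half of each slot. A simplified, self-contained sketch of that walk; the structure layout follows the driver, the ring size is arbitrary, and the DMA_DONE value is assumed to be BIT(31) as in mtk_wed_regs.h.)

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct mtk_wdma_desc {
	uint32_t buf0, ctrl, buf1, info;
};

#define MTK_WDMA_DESC_CTRL_DMA_DONE	(1u << 31)

static void ring_reset(void *head, int size, size_t desc_size)
{
	for (int i = 0; i < size; i++) {
		/* step by the per-version slot stride, not by sizeof(*desc) */
		struct mtk_wdma_desc *desc =
			(void *)((char *)head + i * desc_size);

		memset(desc, 0, sizeof(*desc));
		desc->ctrl = MTK_WDMA_DESC_CTRL_DMA_DONE;
	}
}

int main(void)
{
	int version = 2, size = 4;	/* hw->version == 2 on mt7986 */
	size_t desc_size = version * sizeof(struct mtk_wdma_desc);
	void *ring = calloc(size, desc_size);

	ring_reset(ring, size, desc_size);
	printf("slot stride: %zu bytes (%d descriptors per slot)\n",
	       desc_size, version);
	free(ring);
	return 0;
}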
mtk_wed_start(struct mtk_wed_device *dev, u32 irq_mask) { - u32 val; int i; for (i = 0; i < ARRAY_SIZE(dev->tx_wdma); i++) @@ -598,14 +774,17 @@ mtk_wed_start(struct mtk_wed_device *dev, u32 irq_mask) mtk_wed_configure_irq(dev, irq_mask); mtk_wed_set_ext_int(dev, true); - val = dev->wlan.wpdma_phys | - MTK_PCIE_MIRROR_MAP_EN | - FIELD_PREP(MTK_PCIE_MIRROR_MAP_WED_ID, dev->hw->index); - if (dev->hw->index) - val |= BIT(1); - val |= BIT(0); - regmap_write(dev->hw->mirror, dev->hw->index * 4, val); + if (dev->hw->version == 1) { + u32 val = dev->wlan.wpdma_phys | MTK_PCIE_MIRROR_MAP_EN | + FIELD_PREP(MTK_PCIE_MIRROR_MAP_WED_ID, + dev->hw->index); + + val |= BIT(0) | (BIT(1) * !!dev->hw->index); + regmap_write(dev->hw->mirror, dev->hw->index * 4, val); + } else { + mtk_wed_set_512_support(dev, true); + } mtk_wed_dma_enable(dev); dev->running = true; @@ -639,7 +818,9 @@ mtk_wed_attach(struct mtk_wed_device *dev) goto out; } - dev_info(&dev->wlan.pci_dev->dev, "attaching wed device %d\n", hw->index); + dev_info(&dev->wlan.pci_dev->dev, + "attaching wed device %d version %d\n", + hw->index, hw->version); dev->hw = hw; dev->dev = hw->dev; @@ -657,7 +838,9 @@ mtk_wed_attach(struct mtk_wed_device *dev) } mtk_wed_hw_init_early(dev); - regmap_update_bits(hw->hifsys, HIFSYS_DMA_AG_MAP, BIT(hw->index), 0); + if (hw->hifsys) + regmap_update_bits(hw->hifsys, HIFSYS_DMA_AG_MAP, + BIT(hw->index), 0); out: mutex_unlock(&hw_lock); @@ -684,7 +867,8 @@ mtk_wed_tx_ring_setup(struct mtk_wed_device *dev, int idx, void __iomem *regs) BUG_ON(idx >= ARRAY_SIZE(dev->tx_ring)); - if (mtk_wed_ring_alloc(dev, ring, MTK_WED_TX_RING_SIZE)) + if (mtk_wed_ring_alloc(dev, ring, MTK_WED_TX_RING_SIZE, + sizeof(*ring->desc))) return -ENOMEM; if (mtk_wed_wdma_ring_setup(dev, idx, MTK_WED_WDMA_RING_SIZE)) @@ -711,21 +895,21 @@ static int mtk_wed_txfree_ring_setup(struct mtk_wed_device *dev, void __iomem *regs) { struct mtk_wed_ring *ring = &dev->txfree_ring; - int i; + int i, index = dev->hw->version == 1; /* * For txfree event handling, the same DMA ring is shared between WED * and WLAN. 
The WLAN driver accesses the ring index registers through * WED */ - ring->reg_base = MTK_WED_RING_RX(1); + ring->reg_base = MTK_WED_RING_RX(index); ring->wpdma = regs; for (i = 0; i < 12; i += 4) { u32 val = readl(regs + i); - wed_w32(dev, MTK_WED_RING_RX(1) + i, val); - wed_w32(dev, MTK_WED_WPDMA_RING_RX(1) + i, val); + wed_w32(dev, MTK_WED_RING_RX(index) + i, val); + wed_w32(dev, MTK_WED_WPDMA_RING_RX(index) + i, val); } return 0; @@ -734,11 +918,19 @@ mtk_wed_txfree_ring_setup(struct mtk_wed_device *dev, void __iomem *regs) static u32 mtk_wed_irq_get(struct mtk_wed_device *dev, u32 mask) { - u32 val; + u32 val, ext_mask = MTK_WED_EXT_INT_STATUS_ERROR_MASK; + + if (dev->hw->version == 1) + ext_mask |= MTK_WED_EXT_INT_STATUS_TX_DRV_R_RESP_ERR; + else + ext_mask |= MTK_WED_EXT_INT_STATUS_RX_FBUF_LO_TH | + MTK_WED_EXT_INT_STATUS_RX_FBUF_HI_TH | + MTK_WED_EXT_INT_STATUS_RX_DRV_COHERENT | + MTK_WED_EXT_INT_STATUS_TX_DMA_W_RESP_ERR; val = wed_r32(dev, MTK_WED_EXT_INT_STATUS); wed_w32(dev, MTK_WED_EXT_INT_STATUS, val); - val &= MTK_WED_EXT_INT_STATUS_ERROR_MASK; + val &= ext_mask; if (!dev->hw->num_flows) val &= ~MTK_WED_EXT_INT_STATUS_TKID_WO_PYLD; if (val && net_ratelimit()) @@ -813,7 +1005,8 @@ void mtk_wed_flow_remove(int index) } void mtk_wed_add_hw(struct device_node *np, struct mtk_eth *eth, - void __iomem *wdma, int index) + void __iomem *wdma, phys_addr_t wdma_phy, + int index) { static const struct mtk_wed_ops wed_ops = { .attach = mtk_wed_attach, @@ -860,26 +1053,33 @@ void mtk_wed_add_hw(struct device_node *np, struct mtk_eth *eth, hw = kzalloc(sizeof(*hw), GFP_KERNEL); if (!hw) goto unlock; + hw->node = np; hw->regs = regs; hw->eth = eth; hw->dev = &pdev->dev; + hw->wdma_phy = wdma_phy; hw->wdma = wdma; hw->index = index; hw->irq = irq; - hw->mirror = syscon_regmap_lookup_by_phandle(eth_np, - "mediatek,pcie-mirror"); - hw->hifsys = syscon_regmap_lookup_by_phandle(eth_np, - "mediatek,hifsys"); - if (IS_ERR(hw->mirror) || IS_ERR(hw->hifsys)) { - kfree(hw); - goto unlock; - } + hw->version = MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2) ? 
2 : 1; + + if (hw->version == 1) { + hw->mirror = syscon_regmap_lookup_by_phandle(eth_np, + "mediatek,pcie-mirror"); + hw->hifsys = syscon_regmap_lookup_by_phandle(eth_np, + "mediatek,hifsys"); + if (IS_ERR(hw->mirror) || IS_ERR(hw->hifsys)) { + kfree(hw); + goto unlock; + } - if (!index) { - regmap_write(hw->mirror, 0, 0); - regmap_write(hw->mirror, 4, 0); + if (!index) { + regmap_write(hw->mirror, 0, 0); + regmap_write(hw->mirror, 4, 0); + } } + mtk_wed_hw_add_debugfs(hw); hw_list[index] = hw; diff --git a/drivers/net/ethernet/mediatek/mtk_wed.h b/drivers/net/ethernet/mediatek/mtk_wed.h index 981ec613f4b0..ae420ca01a48 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed.h +++ b/drivers/net/ethernet/mediatek/mtk_wed.h @@ -18,11 +18,13 @@ struct mtk_wed_hw { struct regmap *hifsys; struct device *dev; void __iomem *wdma; + phys_addr_t wdma_phy; struct regmap *mirror; struct dentry *debugfs_dir; struct mtk_wed_device *wed_dev; u32 debugfs_reg; u32 num_flows; + u8 version; char dirname[5]; int irq; int index; @@ -101,14 +103,16 @@ wpdma_txfree_w32(struct mtk_wed_device *dev, u32 reg, u32 val) } void mtk_wed_add_hw(struct device_node *np, struct mtk_eth *eth, - void __iomem *wdma, int index); + void __iomem *wdma, phys_addr_t wdma_phy, + int index); void mtk_wed_exit(void); int mtk_wed_flow_add(int index); void mtk_wed_flow_remove(int index); #else static inline void mtk_wed_add_hw(struct device_node *np, struct mtk_eth *eth, - void __iomem *wdma, int index) + void __iomem *wdma, phys_addr_t wdma_phy, + int index) { } static inline void diff --git a/drivers/net/ethernet/mediatek/mtk_wed_debugfs.c b/drivers/net/ethernet/mediatek/mtk_wed_debugfs.c index a81d3fd1a439..f420f187e837 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_debugfs.c +++ b/drivers/net/ethernet/mediatek/mtk_wed_debugfs.c @@ -116,6 +116,9 @@ wed_txinfo_show(struct seq_file *s, void *data) DUMP_WDMA(WDMA_GLO_CFG), DUMP_WDMA_RING(WDMA_RING_RX(0)), DUMP_WDMA_RING(WDMA_RING_RX(1)), + + DUMP_STR("TX FREE"), + DUMP_WED(WED_RX_MIB(0)), }; struct mtk_wed_hw *hw = s->private; struct mtk_wed_device *dev = hw->wed_dev; diff --git a/drivers/net/ethernet/mediatek/mtk_wed_regs.h b/drivers/net/ethernet/mediatek/mtk_wed_regs.h index eec22daebd30..ecd925fbd451 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_regs.h +++ b/drivers/net/ethernet/mediatek/mtk_wed_regs.h @@ -41,6 +41,7 @@ struct mtk_wdma_desc { #define MTK_WED_CTRL_RESERVE_EN BIT(12) #define MTK_WED_CTRL_RESERVE_BUSY BIT(13) #define MTK_WED_CTRL_FINAL_DIDX_READ BIT(24) +#define MTK_WED_CTRL_ETH_DMAD_FMT BIT(25) #define MTK_WED_CTRL_MIB_READ_CLEAR BIT(28) #define MTK_WED_EXT_INT_STATUS 0x020 @@ -57,7 +58,8 @@ struct mtk_wdma_desc { #define MTK_WED_EXT_INT_STATUS_RX_DRV_INIT_WDMA_EN BIT(19) #define MTK_WED_EXT_INT_STATUS_RX_DRV_BM_DMAD_COHERENT BIT(20) #define MTK_WED_EXT_INT_STATUS_TX_DRV_R_RESP_ERR BIT(21) -#define MTK_WED_EXT_INT_STATUS_TX_DRV_W_RESP_ERR BIT(22) +#define MTK_WED_EXT_INT_STATUS_TX_DMA_R_RESP_ERR BIT(22) +#define MTK_WED_EXT_INT_STATUS_TX_DMA_W_RESP_ERR BIT(23) #define MTK_WED_EXT_INT_STATUS_RX_DRV_DMA_RECYCLE BIT(24) #define MTK_WED_EXT_INT_STATUS_ERROR_MASK (MTK_WED_EXT_INT_STATUS_TF_LEN_ERR | \ MTK_WED_EXT_INT_STATUS_TKID_WO_PYLD | \ @@ -65,8 +67,7 @@ struct mtk_wdma_desc { MTK_WED_EXT_INT_STATUS_RX_DRV_R_RESP_ERR | \ MTK_WED_EXT_INT_STATUS_RX_DRV_W_RESP_ERR | \ MTK_WED_EXT_INT_STATUS_RX_DRV_INIT_WDMA_EN | \ - MTK_WED_EXT_INT_STATUS_TX_DRV_R_RESP_ERR | \ - MTK_WED_EXT_INT_STATUS_TX_DRV_W_RESP_ERR) + MTK_WED_EXT_INT_STATUS_TX_DMA_R_RESP_ERR) #define 
MTK_WED_EXT_INT_MASK 0x028 @@ -96,6 +97,22 @@ struct mtk_wdma_desc { #define MTK_WED_TX_BM_DYN_THR_LO GENMASK(6, 0) #define MTK_WED_TX_BM_DYN_THR_HI GENMASK(22, 16) +#define MTK_WED_TX_TKID_CTRL 0x0c0 +#define MTK_WED_TX_TKID_CTRL_VLD_GRP_NUM GENMASK(6, 0) +#define MTK_WED_TX_TKID_CTRL_RSV_GRP_NUM GENMASK(22, 16) +#define MTK_WED_TX_TKID_CTRL_PAUSE BIT(28) + +#define MTK_WED_TX_TKID_DYN_THR 0x0e0 +#define MTK_WED_TX_TKID_DYN_THR_LO GENMASK(6, 0) +#define MTK_WED_TX_TKID_DYN_THR_HI GENMASK(22, 16) + +#define MTK_WED_TXP_DW0 0x120 +#define MTK_WED_TXP_DW1 0x124 +#define MTK_WED_WPDMA_WRITE_TXP GENMASK(31, 16) +#define MTK_WED_TXDP_CTRL 0x130 +#define MTK_WED_TXDP_DW9_OVERWR BIT(9) +#define MTK_WED_RX_BM_TKID_MIB 0x1cc + #define MTK_WED_INT_STATUS 0x200 #define MTK_WED_INT_MASK 0x204 @@ -125,6 +142,7 @@ struct mtk_wdma_desc { #define MTK_WED_RESET_IDX_RX GENMASK(17, 16) #define MTK_WED_TX_MIB(_n) (0x2a0 + (_n) * 4) +#define MTK_WED_RX_MIB(_n) (0x2e0 + (_n) * 4) #define MTK_WED_RING_TX(_n) (0x300 + (_n) * 0x10) @@ -155,21 +173,62 @@ struct mtk_wdma_desc { #define MTK_WED_WPDMA_GLO_CFG_BYTE_SWAP BIT(29) #define MTK_WED_WPDMA_GLO_CFG_RX_2B_OFFSET BIT(31) +/* CONFIG_MEDIATEK_NETSYS_V2 */ +#define MTK_WED_WPDMA_GLO_CFG_RX_DRV_R0_PKT_PROC BIT(4) +#define MTK_WED_WPDMA_GLO_CFG_RX_DRV_R1_PKT_PROC BIT(5) +#define MTK_WED_WPDMA_GLO_CFG_RX_DRV_R0_CRX_SYNC BIT(6) +#define MTK_WED_WPDMA_GLO_CFG_RX_DRV_R1_CRX_SYNC BIT(7) +#define MTK_WED_WPDMA_GLO_CFG_RX_DRV_EVENT_PKT_FMT_VER GENMASK(18, 16) +#define MTK_WED_WPDMA_GLO_CFG_RX_DRV_UNSUPPORT_FMT BIT(19) +#define MTK_WED_WPDMA_GLO_CFG_RX_DRV_UEVENT_PKT_FMT_CHK BIT(20) +#define MTK_WED_WPDMA_GLO_CFG_RX_DDONE2_WR BIT(21) +#define MTK_WED_WPDMA_GLO_CFG_TX_TKID_KEEP BIT(24) +#define MTK_WED_WPDMA_GLO_CFG_TX_DMAD_DW3_PREV BIT(28) + #define MTK_WED_WPDMA_RESET_IDX 0x50c #define MTK_WED_WPDMA_RESET_IDX_TX GENMASK(3, 0) #define MTK_WED_WPDMA_RESET_IDX_RX GENMASK(17, 16) +#define MTK_WED_WPDMA_CTRL 0x518 +#define MTK_WED_WPDMA_CTRL_SDL1_FIXED BIT(31) + #define MTK_WED_WPDMA_INT_CTRL 0x520 #define MTK_WED_WPDMA_INT_CTRL_SUBRT_ADV BIT(21) #define MTK_WED_WPDMA_INT_MASK 0x524 +#define MTK_WED_WPDMA_INT_CTRL_TX 0x530 +#define MTK_WED_WPDMA_INT_CTRL_TX0_DONE_EN BIT(0) +#define MTK_WED_WPDMA_INT_CTRL_TX0_DONE_CLR BIT(1) +#define MTK_WED_WPDMA_INT_CTRL_TX0_DONE_TRIG GENMASK(6, 2) +#define MTK_WED_WPDMA_INT_CTRL_TX1_DONE_EN BIT(8) +#define MTK_WED_WPDMA_INT_CTRL_TX1_DONE_CLR BIT(9) +#define MTK_WED_WPDMA_INT_CTRL_TX1_DONE_TRIG GENMASK(14, 10) + +#define MTK_WED_WPDMA_INT_CTRL_RX 0x534 + +#define MTK_WED_WPDMA_INT_CTRL_TX_FREE 0x538 +#define MTK_WED_WPDMA_INT_CTRL_TX_FREE_DONE_EN BIT(0) +#define MTK_WED_WPDMA_INT_CTRL_TX_FREE_DONE_CLR BIT(1) +#define MTK_WED_WPDMA_INT_CTRL_TX_FREE_DONE_TRIG GENMASK(6, 2) + #define MTK_WED_PCIE_CFG_BASE 0x560 +#define MTK_WED_PCIE_CFG_BASE 0x560 +#define MTK_WED_PCIE_CFG_INTM 0x564 +#define MTK_WED_PCIE_CFG_MSIS 0x568 #define MTK_WED_PCIE_INT_TRIGGER 0x570 #define MTK_WED_PCIE_INT_TRIGGER_STATUS BIT(16) +#define MTK_WED_PCIE_INT_CTRL 0x57c +#define MTK_WED_PCIE_INT_CTRL_MSK_EN_POLA BIT(20) +#define MTK_WED_PCIE_INT_CTRL_SRC_SEL GENMASK(17, 16) +#define MTK_WED_PCIE_INT_CTRL_POLL_EN GENMASK(13, 12) + #define MTK_WED_WPDMA_CFG_BASE 0x580 +#define MTK_WED_WPDMA_CFG_INT_MASK 0x584 +#define MTK_WED_WPDMA_CFG_TX 0x588 +#define MTK_WED_WPDMA_CFG_TX_FREE 0x58c #define MTK_WED_WPDMA_TX_MIB(_n) (0x5a0 + (_n) * 4) #define MTK_WED_WPDMA_TX_COHERENT_MIB(_n) (0x5d0 + (_n) * 4) @@ -203,15 +262,24 @@ struct mtk_wdma_desc { #define MTK_WED_WDMA_RESET_IDX_RX 
GENMASK(17, 16) #define MTK_WED_WDMA_RESET_IDX_DRV GENMASK(25, 24) +#define MTK_WED_WDMA_INT_CLR 0xa24 +#define MTK_WED_WDMA_INT_CLR_RX_DONE GENMASK(17, 16) + #define MTK_WED_WDMA_INT_TRIGGER 0xa28 #define MTK_WED_WDMA_INT_TRIGGER_RX_DONE GENMASK(17, 16) #define MTK_WED_WDMA_INT_CTRL 0xa2c #define MTK_WED_WDMA_INT_CTRL_POLL_SRC_SEL GENMASK(17, 16) +#define MTK_WED_WDMA_CFG_BASE 0xaa0 #define MTK_WED_WDMA_OFFSET0 0xaa4 #define MTK_WED_WDMA_OFFSET1 0xaa8 +#define MTK_WED_WDMA_OFST0_GLO_INTS GENMASK(15, 0) +#define MTK_WED_WDMA_OFST0_GLO_CFG GENMASK(31, 16) +#define MTK_WED_WDMA_OFST1_TX_CTRL GENMASK(15, 0) +#define MTK_WED_WDMA_OFST1_RX_CTRL GENMASK(31, 16) + #define MTK_WED_WDMA_RX_MIB(_n) (0xae0 + (_n) * 4) #define MTK_WED_WDMA_RX_RECYCLE_MIB(_n) (0xae8 + (_n) * 4) #define MTK_WED_WDMA_RX_PROCESSED_MIB(_n) (0xaf0 + (_n) * 4) @@ -221,6 +289,7 @@ struct mtk_wdma_desc { #define MTK_WED_RING_OFS_CPU_IDX 0x08 #define MTK_WED_RING_OFS_DMA_IDX 0x0c +#define MTK_WDMA_RING_TX(_n) (0x000 + (_n) * 0x10) #define MTK_WDMA_RING_RX(_n) (0x100 + (_n) * 0x10) #define MTK_WDMA_GLO_CFG 0x204 @@ -234,6 +303,8 @@ struct mtk_wdma_desc { #define MTK_WDMA_RESET_IDX_TX GENMASK(3, 0) #define MTK_WDMA_RESET_IDX_RX GENMASK(17, 16) +#define MTK_WDMA_INT_STATUS 0x220 + #define MTK_WDMA_INT_MASK 0x228 #define MTK_WDMA_INT_MASK_TX_DONE GENMASK(3, 0) #define MTK_WDMA_INT_MASK_RX_DONE GENMASK(17, 16) diff --git a/include/linux/soc/mediatek/mtk_wed.h b/include/linux/soc/mediatek/mtk_wed.h index 7e00cca06709..592221a7149b 100644 --- a/include/linux/soc/mediatek/mtk_wed.h +++ b/include/linux/soc/mediatek/mtk_wed.h @@ -14,6 +14,7 @@ struct mtk_wdma_desc; struct mtk_wed_ring { struct mtk_wdma_desc *desc; dma_addr_t desc_phys; + u32 desc_size; int size; u32 reg_base; @@ -45,10 +46,17 @@ struct mtk_wed_device { struct pci_dev *pci_dev; u32 wpdma_phys; + u32 wpdma_int; + u32 wpdma_mask; + u32 wpdma_tx; + u32 wpdma_txfree; u16 token_start; unsigned int nbuf; + u8 tx_tbit[MTK_WED_TX_QUEUES]; + u8 txfree_tbit; + u32 (*init_buf)(void *ptr, dma_addr_t phys, int token_id); int (*offload_enable)(struct mtk_wed_device *wed); void (*offload_disable)(struct mtk_wed_device *wed); From patchwork Thu Sep 8 19:33:45 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 12970564 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 533D4C38145 for ; Thu, 8 Sep 2022 19:36:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230382AbiIHTgU (ORCPT ); Thu, 8 Sep 2022 15:36:20 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57246 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231725AbiIHTfv (ORCPT ); Thu, 8 Sep 2022 15:35:51 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 23AE2B3B03; Thu, 8 Sep 2022 12:35:47 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id C6487B8223E; Thu, 8 Sep 2022 19:35:45 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 30FC0C43140; Thu, 8 
Sep 2022 19:35:44 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1662665744; bh=+p1Jq55Ye3mbrWOIaQolx7j7TEqwSNIGToayfN0ifMs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=h5ImYnGwwbXMqENVbYphstevlvt4O2CahS8loj4Q1uQHX9H+AEsm/j88iSr+LebK6 5VQZ9qXDJaBPKgtgkWIi2DnKp+kShX6JNBHqkBfOZKNbuP86OAOKPewCg7/hvW+m32 p/Osbjdhwgcql7WqXCP5cqpFFtOd2i8v/aK42D33s9FNu+X6SfLsgYMSLkooE47ELC nBkvjGYxbjAUkcM9aPE/ThdkUHT5nqPAqBisuYTVCVyvzF7/rMwvXHnkh9PiQTMLUy /XseBdznhW4Fv2mxjELtHOkOouhx324sGNYFVxHtvgHypVEmMOSafdW8uNEZqbs6Oz yA/rpICiCqY6w== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, matthias.bgg@gmail.com, linux-mediatek@lists.infradead.org, lorenzo.bianconi@redhat.com, Bo.Jiao@mediatek.com, sujuan.chen@mediatek.com, ryder.Lee@mediatek.com, evelyn.tsai@mediatek.com, devicetree@vger.kernel.org, robh@kernel.org Subject: [PATCH net-next 11/12] net: ethernet: mtk_eth_wed: add axi bus support Date: Thu, 8 Sep 2022 21:33:45 +0200 Message-Id: <09011397a0f4d6ba017abc0dc396e8be990e72aa.1662661555.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.37.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Other than pcie bus, introduce support for axi bus to mtk wed driver. Axi bus is used to connect mt7986-wmac soc chip available on mt7986 device. Co-developed-by: Bo Jiao Signed-off-by: Bo Jiao Co-developed-by: Sujuan Chen Signed-off-by: Sujuan Chen Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_wed.c | 112 +++++++++++++------ drivers/net/ethernet/mediatek/mtk_wed_regs.h | 2 + include/linux/soc/mediatek/mtk_wed.h | 7 ++ 3 files changed, 84 insertions(+), 37 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_wed.c b/drivers/net/ethernet/mediatek/mtk_wed.c index 3ec891f05fa7..48c638c49355 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed.c +++ b/drivers/net/ethernet/mediatek/mtk_wed.c @@ -85,11 +85,31 @@ static struct mtk_wed_hw * mtk_wed_assign(struct mtk_wed_device *dev) { struct mtk_wed_hw *hw; + int i; + + if (dev->wlan.bus_type == MTK_WED_BUS_PCIE) { + hw = hw_list[pci_domain_nr(dev->wlan.pci_dev->bus)]; + if (!hw) + return NULL; + + if (!hw->wed_dev) + goto out; + + if (hw->version == 1) + return NULL; + + /* MT7986 WED devices do not have any pcie slot restrictions */ + } + /* MT7986 PCIE or AXI */ + for (i = 0; i < ARRAY_SIZE(hw_list); i++) { + hw = hw_list[i]; + if (hw && !hw->wed_dev) + goto out; + } - hw = hw_list[pci_domain_nr(dev->wlan.pci_dev->bus)]; - if (!hw || hw->wed_dev) - return NULL; + return NULL; +out: hw->wed_dev = dev; return hw; } @@ -321,7 +341,6 @@ mtk_wed_stop(struct mtk_wed_device *dev) static void mtk_wed_detach(struct mtk_wed_device *dev) { - struct device_node *wlan_node = dev->wlan.pci_dev->dev.of_node; struct mtk_wed_hw *hw = dev->hw; mutex_lock(&hw_lock); @@ -336,9 +355,14 @@ mtk_wed_detach(struct mtk_wed_device *dev) mtk_wed_free_buffer(dev); mtk_wed_free_tx_rings(dev); - if (of_dma_is_coherent(wlan_node) && hw->hifsys) - regmap_update_bits(hw->hifsys, HIFSYS_DMA_AG_MAP, - BIT(hw->index), BIT(hw->index)); + if (dev->wlan.bus_type == MTK_WED_BUS_PCIE) { + struct device_node *wlan_node; + + wlan_node = dev->wlan.pci_dev->dev.of_node; + if (of_dma_is_coherent(wlan_node) && hw->hifsys) + regmap_update_bits(hw->hifsys, HIFSYS_DMA_AG_MAP, + 
BIT(hw->index), BIT(hw->index)); + } if (!hw_list[!hw->index]->wed_dev && hw->eth->dma_dev != hw->eth->dev) @@ -355,42 +379,55 @@ mtk_wed_detach(struct mtk_wed_device *dev) static void mtk_wed_bus_init(struct mtk_wed_device *dev) { - struct device_node *node; - void __iomem *base_addr; - u32 val; - - node = of_parse_phandle(dev->hw->node, "mediatek,wed_pcie", 0); - if (!node) - return; + switch (dev->wlan.bus_type) { + case MTK_WED_BUS_PCIE: { + struct device_node *node; + void __iomem *base_addr; + u32 val; + + node = of_parse_phandle(dev->hw->node, "mediatek,wed_pcie", 0); + if (!node) + break; - base_addr = of_iomap(node, 0); - val = BIT(0) | readl(base_addr); - writel(val, base_addr); + base_addr = of_iomap(node, 0); + val = BIT(0) | readl(base_addr); + writel(val, base_addr); - wed_w32(dev, MTK_WED_PCIE_INT_CTRL, - FIELD_PREP(MTK_WED_PCIE_INT_CTRL_POLL_EN, 2)); + wed_w32(dev, MTK_WED_PCIE_INT_CTRL, + FIELD_PREP(MTK_WED_PCIE_INT_CTRL_POLL_EN, 2)); - /* pcie interrupt control: pola/source selection */ - wed_set(dev, MTK_WED_PCIE_INT_CTRL, - MTK_WED_PCIE_INT_CTRL_MSK_EN_POLA | - FIELD_PREP(MTK_WED_PCIE_INT_CTRL_SRC_SEL, 1)); - wed_r32(dev, MTK_WED_PCIE_INT_CTRL); + /* pcie interrupt control: pola/source selection */ + wed_set(dev, MTK_WED_PCIE_INT_CTRL, + MTK_WED_PCIE_INT_CTRL_MSK_EN_POLA | + FIELD_PREP(MTK_WED_PCIE_INT_CTRL_SRC_SEL, 1)); + wed_r32(dev, MTK_WED_PCIE_INT_CTRL); - val = wed_r32(dev, MTK_WED_PCIE_CFG_INTM); - val = wed_r32(dev, MTK_WED_PCIE_CFG_BASE); - wed_w32(dev, MTK_WED_PCIE_CFG_INTM, PCIE_BASE_ADDR0 | 0x180); - wed_w32(dev, MTK_WED_PCIE_CFG_BASE, PCIE_BASE_ADDR0 | 0x184); + val = wed_r32(dev, MTK_WED_PCIE_CFG_INTM); + val = wed_r32(dev, MTK_WED_PCIE_CFG_BASE); + wed_w32(dev, MTK_WED_PCIE_CFG_INTM, PCIE_BASE_ADDR0 | 0x180); + wed_w32(dev, MTK_WED_PCIE_CFG_BASE, PCIE_BASE_ADDR0 | 0x184); - val = wed_r32(dev, MTK_WED_PCIE_CFG_INTM); - val = wed_r32(dev, MTK_WED_PCIE_CFG_BASE); + val = wed_r32(dev, MTK_WED_PCIE_CFG_INTM); + val = wed_r32(dev, MTK_WED_PCIE_CFG_BASE); - /* pcie interrupt status trigger register */ - wed_w32(dev, MTK_WED_PCIE_INT_TRIGGER, BIT(24)); - wed_r32(dev, MTK_WED_PCIE_INT_TRIGGER); + /* pcie interrupt status trigger register */ + wed_w32(dev, MTK_WED_PCIE_INT_TRIGGER, BIT(24)); + wed_r32(dev, MTK_WED_PCIE_INT_TRIGGER); - /* pola setting */ - val = wed_r32(dev, MTK_WED_PCIE_INT_CTRL); - wed_set(dev, MTK_WED_PCIE_INT_CTRL, MTK_WED_PCIE_INT_CTRL_MSK_EN_POLA); + /* pola setting */ + val = wed_r32(dev, MTK_WED_PCIE_INT_CTRL); + wed_set(dev, MTK_WED_PCIE_INT_CTRL, + MTK_WED_PCIE_INT_CTRL_MSK_EN_POLA); + break; + } + case MTK_WED_BUS_AXI: + wed_set(dev, MTK_WED_WPDMA_INT_CTRL, + MTK_WED_WPDMA_INT_CTRL_SIG_SRC | + FIELD_PREP(MTK_WED_WPDMA_INT_CTRL_SRC_SEL, 0)); + break; + default: + break; + } } static void @@ -800,7 +837,8 @@ mtk_wed_attach(struct mtk_wed_device *dev) RCU_LOCKDEP_WARN(!rcu_read_lock_held(), "mtk_wed_attach without holding the RCU read lock"); - if (pci_domain_nr(dev->wlan.pci_dev->bus) > 1 || + if ((dev->wlan.bus_type == MTK_WED_BUS_PCIE && + pci_domain_nr(dev->wlan.pci_dev->bus) > 1) || !try_module_get(THIS_MODULE)) ret = -ENODEV; diff --git a/drivers/net/ethernet/mediatek/mtk_wed_regs.h b/drivers/net/ethernet/mediatek/mtk_wed_regs.h index ecd925fbd451..1e2aafc10f4c 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_regs.h +++ b/drivers/net/ethernet/mediatek/mtk_wed_regs.h @@ -194,6 +194,8 @@ struct mtk_wdma_desc { #define MTK_WED_WPDMA_INT_CTRL 0x520 #define MTK_WED_WPDMA_INT_CTRL_SUBRT_ADV BIT(21) +#define MTK_WED_WPDMA_INT_CTRL_SIG_SRC 
BIT(22) +#define MTK_WED_WPDMA_INT_CTRL_SRC_SEL GENMASK(17, 16) #define MTK_WED_WPDMA_INT_MASK 0x524 diff --git a/include/linux/soc/mediatek/mtk_wed.h b/include/linux/soc/mediatek/mtk_wed.h index 592221a7149b..f2ae5bd140a1 100644 --- a/include/linux/soc/mediatek/mtk_wed.h +++ b/include/linux/soc/mediatek/mtk_wed.h @@ -11,6 +11,11 @@ struct mtk_wed_hw; struct mtk_wdma_desc; +enum mtk_wed_bus_tye { + MTK_WED_BUS_PCIE, + MTK_WED_BUS_AXI, +}; + struct mtk_wed_ring { struct mtk_wdma_desc *desc; dma_addr_t desc_phys; @@ -45,6 +50,8 @@ struct mtk_wed_device { struct { struct pci_dev *pci_dev; + enum mtk_wed_bus_tye bus_type; + u32 wpdma_phys; u32 wpdma_int; u32 wpdma_mask; From patchwork Thu Sep 8 19:33:46 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 12970565 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6CF20C6FA8B for ; Thu, 8 Sep 2022 19:36:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231401AbiIHTgW (ORCPT ); Thu, 8 Sep 2022 15:36:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57250 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231952AbiIHTfv (ORCPT ); Thu, 8 Sep 2022 15:35:51 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8F4FCC6FF2; Thu, 8 Sep 2022 12:35:49 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 2017961DFA; Thu, 8 Sep 2022 19:35:49 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id F281BC433C1; Thu, 8 Sep 2022 19:35:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1662665748; bh=N3gGUtPQnoarYcxL8xMDNl4MjAyEvlH2AOujjg7IviY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=jtTgWZ21IgLgZN9BGhoFLOfOjFs1hCqGPu9QOxyUXs7n2Rg5OE9v1AA8/vcgIKlzT /PPYoEKYV7Rj9SNIKjMScLxi9U8nbSJUif8KYw3ouAfO4s93K4jlxD0x8UvUXzbdU/ rgdaPBzJNR3/WGwWVZmV40e+Zws3F2SbRWxClwzIFHrSMxuAnRhP5FahOxEE+GbOH1 m5KsDEt5IW/UCCKXCn98UWLvXesTiT27gD7a74ArPqHIzUBP41k/XEGano0N02PJp2 7B+OBIOV3nhBaVeZO1TH1x/SVJ3lmHSLeMpGbmWVpH41Z5bf3HLEswr7WNECbNNNCK AVaCfVAjybLag== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, matthias.bgg@gmail.com, linux-mediatek@lists.infradead.org, lorenzo.bianconi@redhat.com, Bo.Jiao@mediatek.com, sujuan.chen@mediatek.com, ryder.Lee@mediatek.com, evelyn.tsai@mediatek.com, devicetree@vger.kernel.org, robh@kernel.org Subject: [PATCH net-next 12/12] net: ethernet: mtk_eth_soc: introduce flow offloading support for mt7986 Date: Thu, 8 Sep 2022 21:33:46 +0200 Message-Id: <6775ed6546cacd174a165c7085f7e1f77db9f30c.1662661555.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.37.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Introduce hw flow offload support for mt7986 
chipset. PPE is not enabled yet in mt7986 since mt76 support is not available yet. Co-developed-by: Bo Jiao Signed-off-by: Bo Jiao Co-developed-by: Sujuan Chen Signed-off-by: Sujuan Chen Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_eth_soc.c | 27 +++++++++-- drivers/net/ethernet/mediatek/mtk_ppe.c | 45 ++++++++++++++----- drivers/net/ethernet/mediatek/mtk_ppe.h | 10 ++++- .../net/ethernet/mediatek/mtk_ppe_offload.c | 15 ++++++- drivers/net/ethernet/mediatek/mtk_ppe_regs.h | 8 ++++ 5 files changed, 87 insertions(+), 18 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index 7dae650c4586..cc790f12c9cc 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -1906,12 +1906,14 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget, bytes += skb->len; if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) { + reason = FIELD_GET(MTK_RXD5_PPE_CPU_REASON, trxd.rxd5); hash = trxd.rxd5 & MTK_RXD5_FOE_ENTRY; if (hash != MTK_RXD5_FOE_ENTRY) skb_set_hash(skb, jhash_1word(hash, 0), PKT_HASH_TYPE_L4); rxdcsum = &trxd.rxd3; } else { + reason = FIELD_GET(MTK_RXD4_PPE_CPU_REASON, trxd.rxd4); hash = trxd.rxd4 & MTK_RXD4_FOE_ENTRY; if (hash != MTK_RXD4_FOE_ENTRY) skb_set_hash(skb, jhash_1word(hash, 0), @@ -1925,7 +1927,6 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget, skb_checksum_none_assert(skb); skb->protocol = eth_type_trans(skb, netdev); - reason = FIELD_GET(MTK_RXD4_PPE_CPU_REASON, trxd.rxd4); if (reason == MTK_PPE_CPU_REASON_HIT_UNBIND_RATE_REACHED) mtk_ppe_check_skb(eth->ppe[0], skb, hash); @@ -4241,7 +4242,7 @@ static const struct mtk_soc_data mt7621_data = { .dma_len_offset = 16, }, .foe = { - .entry_size = sizeof(struct mtk_foe_entry), + .entry_size = sizeof(struct mtk_foe_entry) - 16, .hash_offset = 2, .ib1 = { .bind_ppoe = BIT(19), @@ -4279,7 +4280,7 @@ static const struct mtk_soc_data mt7622_data = { .dma_len_offset = 16, }, .foe = { - .entry_size = sizeof(struct mtk_foe_entry), + .entry_size = sizeof(struct mtk_foe_entry) - 16, .hash_offset = 2, .ib1 = { .bind_ppoe = BIT(19), @@ -4323,7 +4324,7 @@ static const struct mtk_soc_data mt7623_data = { .dma_len_offset = 16, }, .foe = { - .entry_size = sizeof(struct mtk_foe_entry), + .entry_size = sizeof(struct mtk_foe_entry) - 16, .hash_offset = 2, .ib1 = { .bind_ppoe = BIT(19), @@ -4365,6 +4366,7 @@ static const struct mtk_soc_data mt7986_data = { .reg_map = &mt7986_reg_map, .ana_rgc3 = 0x128, .caps = MT7986_CAPS, + .hw_features = MTK_HW_FEATURES, .required_clks = MT7986_CLKS_BITMAP, .required_pctl = false, .txrx = { @@ -4376,7 +4378,24 @@ static const struct mtk_soc_data mt7986_data = { .dma_len_offset = 8, }, .foe = { + .entry_size = sizeof(struct mtk_foe_entry), .hash_offset = 4, + .ib1 = { + .bind_ppoe = BIT(17), + .bind_vlan_tag = BIT(18), + .bind_cache = BIT(20), + .bind_ttl = BIT(22), + .bind_ts = GENMASK(7, 0), + .bind_vlan_layer = GENMASK(16, 14), + .pkt_type = GENMASK(27, 23), + }, + .ib2 = { + .multicast = BIT(13), + .wdma_winfo = BIT(19), + .port_ag = GENMASK(23, 20), + .port_mg = BIT(7), + .dst_port = GENMASK(12, 9), + }, }, .wed = { .desc_ctrl_len1 = GENMASK(13, 0), diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.c b/drivers/net/ethernet/mediatek/mtk_ppe.c index 4248a3b78aa6..4d495c2b19e3 100644 --- a/drivers/net/ethernet/mediatek/mtk_ppe.c +++ b/drivers/net/ethernet/mediatek/mtk_ppe.c @@ -172,9 +172,12 @@ int mtk_foe_entry_prepare(struct mtk_eth *eth, struct mtk_foe_entry 
*entry, eth->soc->foe.ib1.bind_cache; entry->ib1 = val; - val = MTK_FIELD_PREP(eth->soc->foe.ib2.port_mg, 0x3f) | - MTK_FIELD_PREP(eth->soc->foe.ib2.port_ag, 0x1f) | - MTK_FIELD_PREP(eth->soc->foe.ib2.dst_port, pse_port); + val = MTK_FIELD_PREP(eth->soc->foe.ib2.dst_port, pse_port); + if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) + val |= MTK_FIELD_PREP(eth->soc->foe.ib2.port_ag, 0xf); + else + val |= MTK_FIELD_PREP(eth->soc->foe.ib2.port_mg, 0x3f) | + MTK_FIELD_PREP(eth->soc->foe.ib2.port_ag, 0x1f); if (is_multicast_ether_addr(dest_mac)) val |= eth->soc->foe.ib2.multicast; @@ -370,12 +373,17 @@ int mtk_foe_entry_set_wdma(struct mtk_eth *eth, struct mtk_foe_entry *entry, *ib2 &= ~eth->soc->foe.ib2.port_mg; *ib2 |= eth->soc->foe.ib2.wdma_winfo; - if (wdma_idx) - *ib2 |= MTK_FOE_IB2_WDMA_DEVIDX; - - l2->vlan2 = FIELD_PREP(MTK_FOE_VLAN2_WINFO_BSS, bss) | - FIELD_PREP(MTK_FOE_VLAN2_WINFO_WCID, wcid) | - FIELD_PREP(MTK_FOE_VLAN2_WINFO_RING, txq); + if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) { + *ib2 |= FIELD_PREP(MTK_FOE_IB2_RX_IDX, txq); + l2->winfo = FIELD_PREP(MTK_FOE_WINFO_WCID, wcid) | + FIELD_PREP(MTK_FOE_WINFO_BSS, bss); + } else { + if (wdma_idx) + *ib2 |= MTK_FOE_IB2_WDMA_DEVIDX; + l2->vlan2 = FIELD_PREP(MTK_FOE_VLAN2_WINFO_BSS, bss) | + FIELD_PREP(MTK_FOE_VLAN2_WINFO_WCID, wcid) | + FIELD_PREP(MTK_FOE_VLAN2_WINFO_RING, txq); + } return 0; } @@ -784,6 +792,8 @@ void mtk_ppe_start(struct mtk_ppe *ppe) MTK_PPE_SCAN_MODE_KEEPALIVE_AGE) | FIELD_PREP(MTK_PPE_TB_CFG_ENTRY_NUM, MTK_PPE_ENTRIES_SHIFT); + if (MTK_HAS_CAPS(ppe->eth->soc->caps, MTK_NETSYS_V2)) + val |= MTK_PPE_TB_CFG_INFO_SEL; ppe_w32(ppe, MTK_PPE_TB_CFG, val); ppe_w32(ppe, MTK_PPE_IP_PROTO_CHK, @@ -791,15 +801,21 @@ void mtk_ppe_start(struct mtk_ppe *ppe) mtk_ppe_cache_enable(ppe, true); - val = MTK_PPE_FLOW_CFG_IP4_TCP_FRAG | - MTK_PPE_FLOW_CFG_IP4_UDP_FRAG | - MTK_PPE_FLOW_CFG_IP6_3T_ROUTE | + val = MTK_PPE_FLOW_CFG_IP6_3T_ROUTE | MTK_PPE_FLOW_CFG_IP6_5T_ROUTE | MTK_PPE_FLOW_CFG_IP6_6RD | MTK_PPE_FLOW_CFG_IP4_NAT | MTK_PPE_FLOW_CFG_IP4_NAPT | MTK_PPE_FLOW_CFG_IP4_DSLITE | MTK_PPE_FLOW_CFG_IP4_NAT_FRAG; + if (MTK_HAS_CAPS(ppe->eth->soc->caps, MTK_NETSYS_V2)) + val |= MTK_PPE_MD_TOAP_BYP_CRSN0 | + MTK_PPE_MD_TOAP_BYP_CRSN1 | + MTK_PPE_MD_TOAP_BYP_CRSN2 | + MTK_PPE_FLOW_CFG_IP4_HASH_GRE_KEY; + else + val |= MTK_PPE_FLOW_CFG_IP4_TCP_FRAG | + MTK_PPE_FLOW_CFG_IP4_UDP_FRAG; ppe_w32(ppe, MTK_PPE_FLOW_CFG, val); val = FIELD_PREP(MTK_PPE_UNBIND_AGE_MIN_PACKETS, 1000) | @@ -833,6 +849,11 @@ void mtk_ppe_start(struct mtk_ppe *ppe) ppe_w32(ppe, MTK_PPE_GLO_CFG, val); ppe_w32(ppe, MTK_PPE_DEFAULT_CPU_PORT, 0); + + if (MTK_HAS_CAPS(ppe->eth->soc->caps, MTK_NETSYS_V2)) { + ppe_w32(ppe, MTK_PPE_DEFAULT_CPU_PORT1, 0xcb777); + ppe_w32(ppe, MTK_PPE_SBW_CTRL, 0x7f); + } } int mtk_ppe_stop(struct mtk_ppe *ppe) diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.h b/drivers/net/ethernet/mediatek/mtk_ppe.h index a364f45edf38..1f584fd0632d 100644 --- a/drivers/net/ethernet/mediatek/mtk_ppe.h +++ b/drivers/net/ethernet/mediatek/mtk_ppe.h @@ -53,6 +53,7 @@ enum { #define MTK_FOE_IB2_PORT_MG GENMASK(17, 12) +#define MTK_FOE_IB2_RX_IDX GENMASK(18, 17) #define MTK_FOE_IB2_PORT_AG GENMASK(23, 18) #define MTK_FOE_IB2_DSCP GENMASK(31, 24) @@ -61,8 +62,12 @@ enum { #define MTK_FOE_VLAN2_WINFO_WCID GENMASK(13, 6) #define MTK_FOE_VLAN2_WINFO_RING GENMASK(15, 14) +#define MTK_FOE_WINFO_BSS GENMASK(5, 0) +#define MTK_FOE_WINFO_WCID GENMASK(15, 6) + #define MTK_FIELD_PREP(mask, val) (((typeof(mask))(val) << __bf_shf(mask)) & (mask)) #define 
MTK_FIELD_GET(mask, val) ((typeof(mask))(((val) & (mask)) >> __bf_shf(mask))) + enum { MTK_FOE_STATE_INVALID, MTK_FOE_STATE_UNBIND, @@ -83,6 +88,9 @@ struct mtk_foe_mac_info { u16 pppoe_id; u16 src_mac_lo; + + u16 minfo; + u16 winfo; }; /* software-only entry type */ @@ -200,7 +208,7 @@ struct mtk_foe_entry { struct mtk_foe_ipv4_dslite dslite; struct mtk_foe_ipv6 ipv6; struct mtk_foe_ipv6_6rd ipv6_6rd; - u32 data[19]; + u32 data[23]; }; }; diff --git a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c index 56c49ac712b9..c63eeee0ceba 100644 --- a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c +++ b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c @@ -193,7 +193,20 @@ mtk_flow_set_output_device(struct mtk_eth *eth, struct mtk_foe_entry *foe, if (mtk_flow_get_wdma_info(dev, dest_mac, &info) == 0) { mtk_foe_entry_set_wdma(eth, foe, info.wdma_idx, info.queue, info.bss, info.wcid); - pse_port = 3; + if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) { + switch (info.wdma_idx) { + case 0: + pse_port = 8; + break; + case 1: + pse_port = 9; + break; + default: + return -EINVAL; + } + } else { + pse_port = 3; + } *wed_index = info.wdma_idx; goto out; } diff --git a/drivers/net/ethernet/mediatek/mtk_ppe_regs.h b/drivers/net/ethernet/mediatek/mtk_ppe_regs.h index 0c45ea0900f1..59596d823d8b 100644 --- a/drivers/net/ethernet/mediatek/mtk_ppe_regs.h +++ b/drivers/net/ethernet/mediatek/mtk_ppe_regs.h @@ -21,6 +21,9 @@ #define MTK_PPE_GLO_CFG_BUSY BIT(31) #define MTK_PPE_FLOW_CFG 0x204 +#define MTK_PPE_MD_TOAP_BYP_CRSN0 BIT(1) +#define MTK_PPE_MD_TOAP_BYP_CRSN1 BIT(2) +#define MTK_PPE_MD_TOAP_BYP_CRSN2 BIT(3) #define MTK_PPE_FLOW_CFG_IP4_TCP_FRAG BIT(6) #define MTK_PPE_FLOW_CFG_IP4_UDP_FRAG BIT(7) #define MTK_PPE_FLOW_CFG_IP6_3T_ROUTE BIT(8) @@ -54,6 +57,7 @@ #define MTK_PPE_TB_CFG_HASH_MODE GENMASK(15, 14) #define MTK_PPE_TB_CFG_SCAN_MODE GENMASK(17, 16) #define MTK_PPE_TB_CFG_HASH_DEBUG GENMASK(19, 18) +#define MTK_PPE_TB_CFG_INFO_SEL BIT(20) enum { MTK_PPE_SCAN_MODE_DISABLED, @@ -112,6 +116,8 @@ enum { #define MTK_PPE_DEFAULT_CPU_PORT 0x248 #define MTK_PPE_DEFAULT_CPU_PORT_MASK(_n) (GENMASK(2, 0) << ((_n) * 4)) +#define MTK_PPE_DEFAULT_CPU_PORT1 0x24c + #define MTK_PPE_MTU_DROP 0x308 #define MTK_PPE_VLAN_MTU0 0x30c @@ -141,4 +147,6 @@ enum { #define MTK_PPE_MIB_CACHE_CTL_EN BIT(0) #define MTK_PPE_MIB_CACHE_CTL_FLUSH BIT(2) +#define MTK_PPE_SBW_CTRL 0x374 + #endif
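
The mtk_ppe_offload.c hunk in the last patch routes an offloaded flow to a different PSE egress port depending on which WED/WDMA instance handles it: on NETSYS v2 (mt7986) WDMA0 and WDMA1 sit behind PSE ports 8 and 9, while older NETSYS v1 hardware keeps the single hard-coded port 3. Below is a minimal standalone sketch of that dispatch, not the driver code itself; the helper name, the enum constant names and the main() harness are illustrative assumptions, while the numeric port values and the rejection of unknown indices follow the hunk above.

/*
 * Standalone sketch (illustrative, not driver code): map a WED/WDMA
 * index to the PSE egress port used when binding an offloaded flow.
 * Port numbers follow the mtk_flow_set_output_device() hunk above;
 * the identifier names here are made up for readability.
 */
#include <stdbool.h>
#include <stdio.h>

enum {
	PSE_PORT_WDMA_V1  = 3,	/* single WDMA port on NETSYS v1 */
	PSE_PORT_WDMA0_V2 = 8,	/* WED0 -> WDMA0 on mt7986 */
	PSE_PORT_WDMA1_V2 = 9,	/* WED1 -> WDMA1 on mt7986 */
};

static int wed_index_to_pse_port(bool netsys_v2, int wdma_idx)
{
	if (!netsys_v2)
		return PSE_PORT_WDMA_V1;

	switch (wdma_idx) {
	case 0:
		return PSE_PORT_WDMA0_V2;
	case 1:
		return PSE_PORT_WDMA1_V2;
	default:
		return -1;	/* the driver rejects this with -EINVAL */
	}
}

int main(void)
{
	for (int idx = 0; idx <= 2; idx++)
		printf("v2 wdma_idx=%d -> pse_port=%d\n",
		       idx, wed_index_to_pse_port(true, idx));
	printf("v1 wdma_idx=0 -> pse_port=%d\n",
	       wed_index_to_pse_port(false, 0));
	return 0;
}

Keeping the port choice behind a single helper mirrors the structure of the patch itself, where the switch on info.wdma_idx is the only NETSYS-v2-specific branch added to mtk_flow_set_output_device().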