From patchwork Wed Feb 5 18:21:20 2025
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 13961664
X-Patchwork-Delegate: kuba@kernel.org
From: Lorenzo Bianconi
Date: Wed, 05 Feb 2025 19:21:20 +0100
Subject: [PATCH net-next 01/13] net: airoha: Move airoha_eth driver in a dedicated folder
Message-Id: <20250205-airoha-en7581-flowtable-offload-v1-1-d362cfa97b01@kernel.org>
References: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org>
In-Reply-To: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org>
To: Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Felix Fietkau, Sean Wang, Matthias Brugger, AngeloGioacchino Del Regno,
 Philipp Zabel, Rob Herring, Krzysztof Kozlowski, Conor Dooley, Lorenzo Bianconi
Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com
X-Mailer: b4 0.14.2

The airoha_eth driver does not share any codebase with the mtk_eth_soc one.
Moreover, the upcoming features (flowtable hw offloading, PCS, ...) will not
reuse any code from the MediaTek driver. Move the Airoha driver into a
dedicated folder.
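With the move, the driver is no longer gated by NET_VENDOR_MEDIATEK but by the
new Kconfig symbols introduced below. As a rough illustration only (a
hypothetical .config fragment, not part of the patch), an ARCH_AIROHA (EN7581)
build would enable it roughly like this:

  # hypothetical config fragment, assuming an ARCH_AIROHA (EN7581) kernel
  CONFIG_ARCH_AIROHA=y
  # new vendor guard added by this patch
  CONFIG_NET_VENDOR_AIROHA=y
  # the ethernet driver itself, here built as a module
  CONFIG_NET_AIROHA=m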
Signed-off-by: Lorenzo Bianconi
---
 drivers/net/ethernet/Kconfig                           |  1 +
 drivers/net/ethernet/Makefile                          |  1 +
 drivers/net/ethernet/airoha/Kconfig                    | 18 ++++++++++++++++++
 drivers/net/ethernet/airoha/Makefile                   |  6 ++++++
 drivers/net/ethernet/{mediatek => airoha}/airoha_eth.c |  0
 drivers/net/ethernet/mediatek/Kconfig                  |  8 --------
 drivers/net/ethernet/mediatek/Makefile                 |  1 -
 7 files changed, 26 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/Kconfig b/drivers/net/ethernet/Kconfig
index 977b42bc1e8c1e8804eb7fafa9ed85252d956cad..7941983d21e9e84cbd78241d5c1d48c95e50a8e4 100644
--- a/drivers/net/ethernet/Kconfig
+++ b/drivers/net/ethernet/Kconfig
@@ -20,6 +20,7 @@ source "drivers/net/ethernet/actions/Kconfig"
 source "drivers/net/ethernet/adaptec/Kconfig"
 source "drivers/net/ethernet/aeroflex/Kconfig"
 source "drivers/net/ethernet/agere/Kconfig"
+source "drivers/net/ethernet/airoha/Kconfig"
 source "drivers/net/ethernet/alacritech/Kconfig"
 source "drivers/net/ethernet/allwinner/Kconfig"
 source "drivers/net/ethernet/alteon/Kconfig"
diff --git a/drivers/net/ethernet/Makefile b/drivers/net/ethernet/Makefile
index 99fa180dedb80555e64b0fbcd7767044262cf432..67182339469a0d8337cc4e92aa51e498c615156d 100644
--- a/drivers/net/ethernet/Makefile
+++ b/drivers/net/ethernet/Makefile
@@ -10,6 +10,7 @@ obj-$(CONFIG_NET_VENDOR_ADAPTEC) += adaptec/
 obj-$(CONFIG_GRETH) += aeroflex/
 obj-$(CONFIG_NET_VENDOR_ADI) += adi/
 obj-$(CONFIG_NET_VENDOR_AGERE) += agere/
+obj-$(CONFIG_NET_VENDOR_AIROHA) += airoha/
 obj-$(CONFIG_NET_VENDOR_ALACRITECH) += alacritech/
 obj-$(CONFIG_NET_VENDOR_ALLWINNER) += allwinner/
 obj-$(CONFIG_NET_VENDOR_ALTEON) += alteon/
diff --git a/drivers/net/ethernet/airoha/Kconfig b/drivers/net/ethernet/airoha/Kconfig
new file mode 100644
index 0000000000000000000000000000000000000000..b6a131845f13b23a12464cfc281e3abe5699389f
--- /dev/null
+++ b/drivers/net/ethernet/airoha/Kconfig
@@ -0,0 +1,18 @@
+# SPDX-License-Identifier: GPL-2.0-only
+config NET_VENDOR_AIROHA
+	bool "Airoha devices"
+	depends on ARCH_AIROHA || COMPILE_TEST
+	help
+	  If you have a Airoha SoC with ethernet, say Y.
+
+if NET_VENDOR_AIROHA
+
+config NET_AIROHA
+	tristate "Airoha SoC Gigabit Ethernet support"
+	depends on NET_DSA || !NET_DSA
+	select PAGE_POOL
+	help
+	  This driver supports the gigabit ethernet MACs in the
+	  Airoha SoC family.
+
+endif #NET_VENDOR_AIROHA
diff --git a/drivers/net/ethernet/airoha/Makefile b/drivers/net/ethernet/airoha/Makefile
new file mode 100644
index 0000000000000000000000000000000000000000..73a6f3680a4c4ce92ee785d83b905d76a63421df
--- /dev/null
+++ b/drivers/net/ethernet/airoha/Makefile
@@ -0,0 +1,6 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Airoha for the Mediatek SoCs built-in ethernet macs
+#
+
+obj-$(CONFIG_NET_AIROHA) += airoha_eth.o
diff --git a/drivers/net/ethernet/mediatek/airoha_eth.c b/drivers/net/ethernet/airoha/airoha_eth.c
similarity index 100%
rename from drivers/net/ethernet/mediatek/airoha_eth.c
rename to drivers/net/ethernet/airoha/airoha_eth.c
diff --git a/drivers/net/ethernet/mediatek/Kconfig b/drivers/net/ethernet/mediatek/Kconfig
index 95c4405b7d7bee53b964243480a0c173b555da56..7bfd3f230ff50739b3fc6103cd5d0e57ab8f70e1 100644
--- a/drivers/net/ethernet/mediatek/Kconfig
+++ b/drivers/net/ethernet/mediatek/Kconfig
@@ -7,14 +7,6 @@ config NET_VENDOR_MEDIATEK
 
 if NET_VENDOR_MEDIATEK
 
-config NET_AIROHA
-	tristate "Airoha SoC Gigabit Ethernet support"
-	depends on NET_DSA || !NET_DSA
-	select PAGE_POOL
-	help
-	  This driver supports the gigabit ethernet MACs in the
-	  Airoha SoC family.
-
 config NET_MEDIATEK_SOC_WED
 	depends on ARCH_MEDIATEK || COMPILE_TEST
 	def_bool NET_MEDIATEK_SOC != n
diff --git a/drivers/net/ethernet/mediatek/Makefile b/drivers/net/ethernet/mediatek/Makefile
index ddbb7f4a516caccf5eef7140de1872e9b35e3471..03e008fbc859b35067682f8640dab05ccce6caf7 100644
--- a/drivers/net/ethernet/mediatek/Makefile
+++ b/drivers/net/ethernet/mediatek/Makefile
@@ -11,4 +11,3 @@ mtk_eth-$(CONFIG_NET_MEDIATEK_SOC_WED) += mtk_wed_debugfs.o
 endif
 obj-$(CONFIG_NET_MEDIATEK_SOC_WED) += mtk_wed_ops.o
 obj-$(CONFIG_NET_MEDIATEK_STAR_EMAC) += mtk_star_emac.o
-obj-$(CONFIG_NET_AIROHA) += airoha_eth.o

From patchwork Wed Feb 5 18:21:21 2025
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 13961665
X-Patchwork-Delegate: kuba@kernel.org
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 10508C4CEE6; Wed, 5 Feb 2025
18:22:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1738779728; bh=E5CgnPuNJDJJvPpS9f1NTAdUYTnxtSuro4juipSapFA=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=SYighf5l8gU2C4TMn0PDu7uR2FwQbISaK1RcfdPkO0Cu/FHA+jCLm2Xnx13aSD2AG kRr+okcMrFbhQcelIYmLUEFWymUCWgOSFl+V1yYetedNK8rHhRzddjPlIGNnIk/6+6 6D9Fp7u9fL1T0sK8k9n4AtI0raskYJx10JjuottXl1/+FC1Vz9vpoUMaOJMicnItjj 2AL6CLlLbySF+ZJAiR+syQ3ULgpcQTlzZdkhEleMOUNqnvj06GZ9fd6wBSuIFgpy3L Lf1ba7MTm2TWAJhsZfkfMKyBhHFdLtus4837IX3xTry/xY4ZsEm4Gs+6xtImvJQb2Q Mt/vs6bvmADXw== From: Lorenzo Bianconi Date: Wed, 05 Feb 2025 19:21:21 +0100 Subject: [PATCH net-next 02/13] net: airoha: Move definitions in airoha_eth.h Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20250205-airoha-en7581-flowtable-offload-v1-2-d362cfa97b01@kernel.org> References: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org> In-Reply-To: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com X-Mailer: b4 0.14.2 X-Patchwork-Delegate: kuba@kernel.org Move common airoha_eth definitions in airoha_eth.h in order to reuse them for Packet Processor Engine (PPE) codebase. PPE module is used to enable support for flowtable hw offloading in airoha_eth driver. Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/airoha/airoha_eth.c | 240 +---------------------------- drivers/net/ethernet/airoha/airoha_eth.h | 251 +++++++++++++++++++++++++++++++ 2 files changed, 252 insertions(+), 239 deletions(-) diff --git a/drivers/net/ethernet/airoha/airoha_eth.c b/drivers/net/ethernet/airoha/airoha_eth.c index 09f448f291240257c5748725848ede231c502fbd..a9074c6b55efc5c2fc1f09f8f8a6fdb34508d5cd 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.c +++ b/drivers/net/ethernet/airoha/airoha_eth.c @@ -3,14 +3,9 @@ * Copyright (c) 2024 AIROHA Inc * Author: Lorenzo Bianconi */ -#include -#include -#include -#include #include #include #include -#include #include #include #include @@ -18,35 +13,7 @@ #include #include -#define AIROHA_MAX_NUM_GDM_PORTS 1 -#define AIROHA_MAX_NUM_QDMA 2 -#define AIROHA_MAX_NUM_RSTS 3 -#define AIROHA_MAX_NUM_XSI_RSTS 5 -#define AIROHA_MAX_MTU 2000 -#define AIROHA_MAX_PACKET_SIZE 2048 -#define AIROHA_NUM_QOS_CHANNELS 4 -#define AIROHA_NUM_QOS_QUEUES 8 -#define AIROHA_NUM_TX_RING 32 -#define AIROHA_NUM_RX_RING 32 -#define AIROHA_NUM_NETDEV_TX_RINGS (AIROHA_NUM_TX_RING + \ - AIROHA_NUM_QOS_CHANNELS) -#define AIROHA_FE_MC_MAX_VLAN_TABLE 64 -#define AIROHA_FE_MC_MAX_VLAN_PORT 16 -#define AIROHA_NUM_TX_IRQ 2 -#define HW_DSCP_NUM 2048 -#define IRQ_QUEUE_LEN(_n) ((_n) ? 1024 : 2048) -#define TX_DSCP_NUM 1024 -#define RX_DSCP_NUM(_n) \ - ((_n) == 2 ? 128 : \ - (_n) == 11 ? 128 : \ - (_n) == 15 ? 128 : \ - (_n) == 0 ? 
1024 : 16) - -#define PSE_RSV_PAGES 128 -#define PSE_QUEUE_RSV_PAGES 64 - -#define QDMA_METER_IDX(_n) ((_n) & 0xff) -#define QDMA_METER_GROUP(_n) (((_n) >> 8) & 0x3) +#include "airoha_eth.h" /* FE */ #define PSE_BASE 0x0100 @@ -706,211 +673,6 @@ struct airoha_qdma_fwd_desc { __le32 rsv1; }; -enum { - QDMA_INT_REG_IDX0, - QDMA_INT_REG_IDX1, - QDMA_INT_REG_IDX2, - QDMA_INT_REG_IDX3, - QDMA_INT_REG_IDX4, - QDMA_INT_REG_MAX -}; - -enum { - XSI_PCIE0_PORT, - XSI_PCIE1_PORT, - XSI_USB_PORT, - XSI_AE_PORT, - XSI_ETH_PORT, -}; - -enum { - XSI_PCIE0_VIP_PORT_MASK = BIT(22), - XSI_PCIE1_VIP_PORT_MASK = BIT(23), - XSI_USB_VIP_PORT_MASK = BIT(25), - XSI_ETH_VIP_PORT_MASK = BIT(24), -}; - -enum { - DEV_STATE_INITIALIZED, -}; - -enum { - CDM_CRSN_QSEL_Q1 = 1, - CDM_CRSN_QSEL_Q5 = 5, - CDM_CRSN_QSEL_Q6 = 6, - CDM_CRSN_QSEL_Q15 = 15, -}; - -enum { - CRSN_08 = 0x8, - CRSN_21 = 0x15, /* KA */ - CRSN_22 = 0x16, /* hit bind and force route to CPU */ - CRSN_24 = 0x18, - CRSN_25 = 0x19, -}; - -enum { - FE_PSE_PORT_CDM1, - FE_PSE_PORT_GDM1, - FE_PSE_PORT_GDM2, - FE_PSE_PORT_GDM3, - FE_PSE_PORT_PPE1, - FE_PSE_PORT_CDM2, - FE_PSE_PORT_CDM3, - FE_PSE_PORT_CDM4, - FE_PSE_PORT_PPE2, - FE_PSE_PORT_GDM4, - FE_PSE_PORT_CDM5, - FE_PSE_PORT_DROP = 0xf, -}; - -enum tx_sched_mode { - TC_SCH_WRR8, - TC_SCH_SP, - TC_SCH_WRR7, - TC_SCH_WRR6, - TC_SCH_WRR5, - TC_SCH_WRR4, - TC_SCH_WRR3, - TC_SCH_WRR2, -}; - -enum trtcm_param_type { - TRTCM_MISC_MODE, /* meter_en, pps_mode, tick_sel */ - TRTCM_TOKEN_RATE_MODE, - TRTCM_BUCKETSIZE_SHIFT_MODE, - TRTCM_BUCKET_COUNTER_MODE, -}; - -enum trtcm_mode_type { - TRTCM_COMMIT_MODE, - TRTCM_PEAK_MODE, -}; - -enum trtcm_param { - TRTCM_TICK_SEL = BIT(0), - TRTCM_PKT_MODE = BIT(1), - TRTCM_METER_MODE = BIT(2), -}; - -#define MIN_TOKEN_SIZE 4096 -#define MAX_TOKEN_SIZE_OFFSET 17 -#define TRTCM_TOKEN_RATE_MASK GENMASK(23, 6) -#define TRTCM_TOKEN_RATE_FRACTION_MASK GENMASK(5, 0) - -struct airoha_queue_entry { - union { - void *buf; - struct sk_buff *skb; - }; - dma_addr_t dma_addr; - u16 dma_len; -}; - -struct airoha_queue { - struct airoha_qdma *qdma; - - /* protect concurrent queue accesses */ - spinlock_t lock; - struct airoha_queue_entry *entry; - struct airoha_qdma_desc *desc; - u16 head; - u16 tail; - - int queued; - int ndesc; - int free_thr; - int buf_size; - - struct napi_struct napi; - struct page_pool *page_pool; -}; - -struct airoha_tx_irq_queue { - struct airoha_qdma *qdma; - - struct napi_struct napi; - - int size; - u32 *q; -}; - -struct airoha_hw_stats { - /* protect concurrent hw_stats accesses */ - spinlock_t lock; - struct u64_stats_sync syncp; - - /* get_stats64 */ - u64 rx_ok_pkts; - u64 tx_ok_pkts; - u64 rx_ok_bytes; - u64 tx_ok_bytes; - u64 rx_multicast; - u64 rx_errors; - u64 rx_drops; - u64 tx_drops; - u64 rx_crc_error; - u64 rx_over_errors; - /* ethtool stats */ - u64 tx_broadcast; - u64 tx_multicast; - u64 tx_len[7]; - u64 rx_broadcast; - u64 rx_fragment; - u64 rx_jabber; - u64 rx_len[7]; -}; - -struct airoha_qdma { - struct airoha_eth *eth; - void __iomem *regs; - - /* protect concurrent irqmask accesses */ - spinlock_t irq_lock; - u32 irqmask[QDMA_INT_REG_MAX]; - int irq; - - struct airoha_tx_irq_queue q_tx_irq[AIROHA_NUM_TX_IRQ]; - - struct airoha_queue q_tx[AIROHA_NUM_TX_RING]; - struct airoha_queue q_rx[AIROHA_NUM_RX_RING]; - - /* descriptor and packet buffers for qdma hw forward */ - struct { - void *desc; - void *q; - } hfwd; -}; - -struct airoha_gdm_port { - struct airoha_qdma *qdma; - struct net_device *dev; - int id; - - struct airoha_hw_stats stats; - - 
DECLARE_BITMAP(qos_sq_bmap, AIROHA_NUM_QOS_CHANNELS); - - /* qos stats counters */ - u64 cpu_tx_packets; - u64 fwd_tx_packets; -}; - -struct airoha_eth { - struct device *dev; - - unsigned long state; - void __iomem *fe_regs; - - struct reset_control_bulk_data rsts[AIROHA_MAX_NUM_RSTS]; - struct reset_control_bulk_data xsi_rsts[AIROHA_MAX_NUM_XSI_RSTS]; - - struct net_device *napi_dev; - - struct airoha_qdma qdma[AIROHA_MAX_NUM_QDMA]; - struct airoha_gdm_port *ports[AIROHA_MAX_NUM_GDM_PORTS]; -}; - static u32 airoha_rr(void __iomem *base, u32 offset) { return readl(base + offset); diff --git a/drivers/net/ethernet/airoha/airoha_eth.h b/drivers/net/ethernet/airoha/airoha_eth.h new file mode 100644 index 0000000000000000000000000000000000000000..3310e0cf85f1d240e95404a0c15703e5f6be26bc --- /dev/null +++ b/drivers/net/ethernet/airoha/airoha_eth.h @@ -0,0 +1,251 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2024 AIROHA Inc + * Author: Lorenzo Bianconi + */ + +#ifndef AIROHA_ETH_H +#define AIROHA_ETH_H + +#include +#include +#include +#include +#include + +#define AIROHA_MAX_NUM_GDM_PORTS 1 +#define AIROHA_MAX_NUM_QDMA 2 +#define AIROHA_MAX_NUM_RSTS 3 +#define AIROHA_MAX_NUM_XSI_RSTS 5 +#define AIROHA_MAX_MTU 2000 +#define AIROHA_MAX_PACKET_SIZE 2048 +#define AIROHA_NUM_QOS_CHANNELS 4 +#define AIROHA_NUM_QOS_QUEUES 8 +#define AIROHA_NUM_TX_RING 32 +#define AIROHA_NUM_RX_RING 32 +#define AIROHA_NUM_NETDEV_TX_RINGS (AIROHA_NUM_TX_RING + \ + AIROHA_NUM_QOS_CHANNELS) +#define AIROHA_FE_MC_MAX_VLAN_TABLE 64 +#define AIROHA_FE_MC_MAX_VLAN_PORT 16 +#define AIROHA_NUM_TX_IRQ 2 +#define HW_DSCP_NUM 2048 +#define IRQ_QUEUE_LEN(_n) ((_n) ? 1024 : 2048) +#define TX_DSCP_NUM 1024 +#define RX_DSCP_NUM(_n) \ + ((_n) == 2 ? 128 : \ + (_n) == 11 ? 128 : \ + (_n) == 15 ? 128 : \ + (_n) == 0 ? 
1024 : 16) + +#define PSE_RSV_PAGES 128 +#define PSE_QUEUE_RSV_PAGES 64 + +#define QDMA_METER_IDX(_n) ((_n) & 0xff) +#define QDMA_METER_GROUP(_n) (((_n) >> 8) & 0x3) + +enum { + QDMA_INT_REG_IDX0, + QDMA_INT_REG_IDX1, + QDMA_INT_REG_IDX2, + QDMA_INT_REG_IDX3, + QDMA_INT_REG_IDX4, + QDMA_INT_REG_MAX +}; + +enum { + XSI_PCIE0_PORT, + XSI_PCIE1_PORT, + XSI_USB_PORT, + XSI_AE_PORT, + XSI_ETH_PORT, +}; + +enum { + XSI_PCIE0_VIP_PORT_MASK = BIT(22), + XSI_PCIE1_VIP_PORT_MASK = BIT(23), + XSI_USB_VIP_PORT_MASK = BIT(25), + XSI_ETH_VIP_PORT_MASK = BIT(24), +}; + +enum { + DEV_STATE_INITIALIZED, +}; + +enum { + CDM_CRSN_QSEL_Q1 = 1, + CDM_CRSN_QSEL_Q5 = 5, + CDM_CRSN_QSEL_Q6 = 6, + CDM_CRSN_QSEL_Q15 = 15, +}; + +enum { + CRSN_08 = 0x8, + CRSN_21 = 0x15, /* KA */ + CRSN_22 = 0x16, /* hit bind and force route to CPU */ + CRSN_24 = 0x18, + CRSN_25 = 0x19, +}; + +enum { + FE_PSE_PORT_CDM1, + FE_PSE_PORT_GDM1, + FE_PSE_PORT_GDM2, + FE_PSE_PORT_GDM3, + FE_PSE_PORT_PPE1, + FE_PSE_PORT_CDM2, + FE_PSE_PORT_CDM3, + FE_PSE_PORT_CDM4, + FE_PSE_PORT_PPE2, + FE_PSE_PORT_GDM4, + FE_PSE_PORT_CDM5, + FE_PSE_PORT_DROP = 0xf, +}; + +enum tx_sched_mode { + TC_SCH_WRR8, + TC_SCH_SP, + TC_SCH_WRR7, + TC_SCH_WRR6, + TC_SCH_WRR5, + TC_SCH_WRR4, + TC_SCH_WRR3, + TC_SCH_WRR2, +}; + +enum trtcm_param_type { + TRTCM_MISC_MODE, /* meter_en, pps_mode, tick_sel */ + TRTCM_TOKEN_RATE_MODE, + TRTCM_BUCKETSIZE_SHIFT_MODE, + TRTCM_BUCKET_COUNTER_MODE, +}; + +enum trtcm_mode_type { + TRTCM_COMMIT_MODE, + TRTCM_PEAK_MODE, +}; + +enum trtcm_param { + TRTCM_TICK_SEL = BIT(0), + TRTCM_PKT_MODE = BIT(1), + TRTCM_METER_MODE = BIT(2), +}; + +#define MIN_TOKEN_SIZE 4096 +#define MAX_TOKEN_SIZE_OFFSET 17 +#define TRTCM_TOKEN_RATE_MASK GENMASK(23, 6) +#define TRTCM_TOKEN_RATE_FRACTION_MASK GENMASK(5, 0) + +struct airoha_queue_entry { + union { + void *buf; + struct sk_buff *skb; + }; + dma_addr_t dma_addr; + u16 dma_len; +}; + +struct airoha_queue { + struct airoha_qdma *qdma; + + /* protect concurrent queue accesses */ + spinlock_t lock; + struct airoha_queue_entry *entry; + struct airoha_qdma_desc *desc; + u16 head; + u16 tail; + + int queued; + int ndesc; + int free_thr; + int buf_size; + + struct napi_struct napi; + struct page_pool *page_pool; +}; + +struct airoha_tx_irq_queue { + struct airoha_qdma *qdma; + + struct napi_struct napi; + + int size; + u32 *q; +}; + +struct airoha_hw_stats { + /* protect concurrent hw_stats accesses */ + spinlock_t lock; + struct u64_stats_sync syncp; + + /* get_stats64 */ + u64 rx_ok_pkts; + u64 tx_ok_pkts; + u64 rx_ok_bytes; + u64 tx_ok_bytes; + u64 rx_multicast; + u64 rx_errors; + u64 rx_drops; + u64 tx_drops; + u64 rx_crc_error; + u64 rx_over_errors; + /* ethtool stats */ + u64 tx_broadcast; + u64 tx_multicast; + u64 tx_len[7]; + u64 rx_broadcast; + u64 rx_fragment; + u64 rx_jabber; + u64 rx_len[7]; +}; + +struct airoha_qdma { + struct airoha_eth *eth; + void __iomem *regs; + + /* protect concurrent irqmask accesses */ + spinlock_t irq_lock; + u32 irqmask[QDMA_INT_REG_MAX]; + int irq; + + struct airoha_tx_irq_queue q_tx_irq[AIROHA_NUM_TX_IRQ]; + + struct airoha_queue q_tx[AIROHA_NUM_TX_RING]; + struct airoha_queue q_rx[AIROHA_NUM_RX_RING]; + + /* descriptor and packet buffers for qdma hw forward */ + struct { + void *desc; + void *q; + } hfwd; +}; + +struct airoha_gdm_port { + struct airoha_qdma *qdma; + struct net_device *dev; + int id; + + struct airoha_hw_stats stats; + + DECLARE_BITMAP(qos_sq_bmap, AIROHA_NUM_QOS_CHANNELS); + + /* qos stats counters */ + u64 cpu_tx_packets; + u64 fwd_tx_packets; +}; 
+ +struct airoha_eth { + struct device *dev; + + unsigned long state; + void __iomem *fe_regs; + + struct reset_control_bulk_data rsts[AIROHA_MAX_NUM_RSTS]; + struct reset_control_bulk_data xsi_rsts[AIROHA_MAX_NUM_XSI_RSTS]; + + struct net_device *napi_dev; + + struct airoha_qdma qdma[AIROHA_MAX_NUM_QDMA]; + struct airoha_gdm_port *ports[AIROHA_MAX_NUM_GDM_PORTS]; +}; + +#endif /* AIROHA_ETH_H */ From patchwork Wed Feb 5 18:21:22 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13961666 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E2A3F1FCCE1; Wed, 5 Feb 2025 18:22:11 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738779732; cv=none; b=MsSoPV6D1N9ajUAJjCw3/A8md+xcMzih32vertEwGuutcKC1b1coq8kydbsFXZd/vnFgpjV6JbNTVFVSBiJg+btfiXi7qzkt8zUu6DDPpcMlNVUoLU7LV4sqhfn21wsEkcWjI+abUB6zUTK0uv9QJNxm6tCMzejl/tTWeTzYyYg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738779732; c=relaxed/simple; bh=obYgSC2EXfhhCIacH2bcpqqvIZQXINjXQ4LiWYYsW7w=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=cwdYD7xQVXcbXz08ZyuI2H/6jMLbT2ZzeDX5RfmSmJZvBsd3aO6dTCRMVwuzDAhRFrtRJqyI2f1hYbE+EV1jh9cCqPVngGclLRZ/7G1iagEQPGIKlZDUBVFYKyyrgKH9EdwVpfsqfAxHbpRN2B8z4aEVif6oBWzRsOw2CRKG0jM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=hd5iQwUK; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="hd5iQwUK" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 01272C4CED1; Wed, 5 Feb 2025 18:22:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1738779731; bh=obYgSC2EXfhhCIacH2bcpqqvIZQXINjXQ4LiWYYsW7w=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=hd5iQwUKE6nvALoN5XA68zGbUt8uYAca5y/duLO1SgaU9SIWE14WIqSH5jrbPnnOa buIJSaZtB5ciXrU4viaHUFSFki0YiJKQzDQeyYtbt55edayDAkUXL+MwDHm5IaDjFS HgqqZvJ81vKvZ0BbP3wtYC6hR5H67X6m1+1OxOdOsTQdkg4pGOzwXMr7sGXVC9THi6 cgEIm/BnxwIJYKz38DTMQY74H/kvjpa6ZRQQsXW71Jqa4RjP9JbgO/SxXsNWWHUDe+ yJVTaDMHA7Kb+/UOFRjHPDlNiFe2UxR50jfIlew/54v7E3qSC2q1euwu88jgwHsqh8 i7clNchBINFjA== From: Lorenzo Bianconi Date: Wed, 05 Feb 2025 19:21:22 +0100 Subject: [PATCH net-next 03/13] net: airoha: Move reg/write utility routines in airoha_eth.h Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20250205-airoha-en7581-flowtable-offload-v1-3-d362cfa97b01@kernel.org> References: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org> In-Reply-To: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org> To: Andrew Lunn , "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com X-Mailer: b4 0.14.2 X-Patchwork-Delegate: kuba@kernel.org This is a preliminary patch to introduce flowtable hw offloading support for airoha_eth driver. Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/airoha/airoha_eth.c | 28 +++------------------------- drivers/net/ethernet/airoha/airoha_eth.h | 26 ++++++++++++++++++++++++++ 2 files changed, 29 insertions(+), 25 deletions(-) diff --git a/drivers/net/ethernet/airoha/airoha_eth.c b/drivers/net/ethernet/airoha/airoha_eth.c index a9074c6b55efc5c2fc1f09f8f8a6fdb34508d5cd..5916600555d6094a86c4bd3c17fa7bf8750059bd 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.c +++ b/drivers/net/ethernet/airoha/airoha_eth.c @@ -673,17 +673,17 @@ struct airoha_qdma_fwd_desc { __le32 rsv1; }; -static u32 airoha_rr(void __iomem *base, u32 offset) +u32 airoha_rr(void __iomem *base, u32 offset) { return readl(base + offset); } -static void airoha_wr(void __iomem *base, u32 offset, u32 val) +void airoha_wr(void __iomem *base, u32 offset, u32 val) { writel(val, base + offset); } -static u32 airoha_rmw(void __iomem *base, u32 offset, u32 mask, u32 val) +u32 airoha_rmw(void __iomem *base, u32 offset, u32 mask, u32 val) { val |= (airoha_rr(base, offset) & ~mask); airoha_wr(base, offset, val); @@ -691,28 +691,6 @@ static u32 airoha_rmw(void __iomem *base, u32 offset, u32 mask, u32 val) return val; } -#define airoha_fe_rr(eth, offset) \ - airoha_rr((eth)->fe_regs, (offset)) -#define airoha_fe_wr(eth, offset, val) \ - airoha_wr((eth)->fe_regs, (offset), (val)) -#define airoha_fe_rmw(eth, offset, mask, val) \ - airoha_rmw((eth)->fe_regs, (offset), (mask), (val)) -#define airoha_fe_set(eth, offset, val) \ - airoha_rmw((eth)->fe_regs, (offset), 0, (val)) -#define airoha_fe_clear(eth, offset, val) \ - airoha_rmw((eth)->fe_regs, (offset), (val), 0) - -#define airoha_qdma_rr(qdma, offset) \ - airoha_rr((qdma)->regs, (offset)) -#define airoha_qdma_wr(qdma, offset, val) \ - airoha_wr((qdma)->regs, (offset), (val)) -#define airoha_qdma_rmw(qdma, offset, mask, val) \ - airoha_rmw((qdma)->regs, (offset), (mask), (val)) -#define airoha_qdma_set(qdma, offset, val) \ - airoha_rmw((qdma)->regs, (offset), 0, (val)) -#define airoha_qdma_clear(qdma, offset, val) \ - airoha_rmw((qdma)->regs, (offset), (val), 0) - static void airoha_qdma_set_irqmask(struct airoha_qdma *qdma, int index, u32 clear, u32 set) { diff --git a/drivers/net/ethernet/airoha/airoha_eth.h b/drivers/net/ethernet/airoha/airoha_eth.h index 3310e0cf85f1d240e95404a0c15703e5f6be26bc..743aaf10235fe09fb2a91b491f4b25064ed8319b 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.h +++ b/drivers/net/ethernet/airoha/airoha_eth.h @@ -248,4 +248,30 @@ struct airoha_eth { struct airoha_gdm_port *ports[AIROHA_MAX_NUM_GDM_PORTS]; }; +u32 airoha_rr(void __iomem *base, u32 offset); +void airoha_wr(void __iomem *base, u32 offset, u32 val); +u32 airoha_rmw(void __iomem *base, u32 offset, u32 mask, u32 val); + +#define airoha_fe_rr(eth, offset) \ + airoha_rr((eth)->fe_regs, (offset)) +#define airoha_fe_wr(eth, offset, val) \ + airoha_wr((eth)->fe_regs, (offset), (val)) +#define airoha_fe_rmw(eth, offset, mask, val) \ + airoha_rmw((eth)->fe_regs, (offset), (mask), (val)) 
+#define airoha_fe_set(eth, offset, val) \ + airoha_rmw((eth)->fe_regs, (offset), 0, (val)) +#define airoha_fe_clear(eth, offset, val) \ + airoha_rmw((eth)->fe_regs, (offset), (val), 0) + +#define airoha_qdma_rr(qdma, offset) \ + airoha_rr((qdma)->regs, (offset)) +#define airoha_qdma_wr(qdma, offset, val) \ + airoha_wr((qdma)->regs, (offset), (val)) +#define airoha_qdma_rmw(qdma, offset, mask, val) \ + airoha_rmw((qdma)->regs, (offset), (mask), (val)) +#define airoha_qdma_set(qdma, offset, val) \ + airoha_rmw((qdma)->regs, (offset), 0, (val)) +#define airoha_qdma_clear(qdma, offset, val) \ + airoha_rmw((qdma)->regs, (offset), (val), 0) + #endif /* AIROHA_ETH_H */ From patchwork Wed Feb 5 18:21:23 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13961667 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 04FAD1FCCE1; Wed, 5 Feb 2025 18:22:14 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738779735; cv=none; b=ALo6t8yYwWdchZ921gF7DyVKSh5gttw+kFDzQPgzmh+jl8nDC9jE8Msai76hGFz2ATjx4kPl3gXiSHETLRZz6Q3LpCbgnEJYBvPD6tJo1vNcTSRb0J61Udmp3mMegsK0pABjdkcw/YTbab+0wtvKBnBRdZLdUG5ZEzi8Ly97m+o= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738779735; c=relaxed/simple; bh=EQ3ttSQnZMB0/LzpWUcH1wT0I5I3W7zu9r/usZJZ+Uc=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=K7u2kUzalSq9CJed90uNRnSzrHvSLO3OrMJvf5HNqrA7zkipzu6POYm9YOuXbx4xuT/5QhUhU+6af3YbB9IM3BI7zq2X7tgwmqQ+1uUMk+p6EjqY/Ka4YUCfL/2sk5LE0uOQ7GdYeW2scUc+AEF5UkPEX+iQyy7GxGNAQnufmnU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=jf5L54Q9; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="jf5L54Q9" Received: by smtp.kernel.org (Postfix) with ESMTPSA id CCD70C4CED1; Wed, 5 Feb 2025 18:22:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1738779734; bh=EQ3ttSQnZMB0/LzpWUcH1wT0I5I3W7zu9r/usZJZ+Uc=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=jf5L54Q9rJ8RacLCWeqkjVu/USOB9/jYqybP46QcdufGbEB9EvU+JpSYg3+YZOOzl /yt961NZWHhVYBR0it/44XPNlV5zY8uANAe/qBmrdhjUuqg7EzfTTN0rSEupf/Obys 0G4uTBfZx6KnCC4AUbd0f1DqocSaBZ0cuLb6ObiPBVNgoFl0/lJgT403K6MXYLcFkM 8nfmM4bsULl8e451w4ts4UlUQOVVf0s4aZdC6viCYHT5PwbbfpFxAyRZdxWOKFSbZR PqEAVo0CpF6Xw9SmsdwTl1/FPWHSGY4WiA9WVMkb95GjDYXm1cuL1z6EdKjcxdmwvP yJuR8oGSpN9EQ== From: Lorenzo Bianconi Date: Wed, 05 Feb 2025 19:21:23 +0100 Subject: [PATCH net-next 04/13] net: airoha: Move register definitions in airoha_regs.h Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20250205-airoha-en7581-flowtable-offload-v1-4-d362cfa97b01@kernel.org> References: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org> In-Reply-To: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org> To: Andrew Lunn , "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com X-Mailer: b4 0.14.2 X-Patchwork-Delegate: kuba@kernel.org Move common airoha_eth register definitions in airoha_regs.h in order to reuse them for Packet Processor Engine (PPE) codebase. PPE module is used to enable support for flowtable hw offloading in airoha_eth driver. Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/airoha/airoha_eth.c | 659 +---------------------------- drivers/net/ethernet/airoha/airoha_regs.h | 670 ++++++++++++++++++++++++++++++ 2 files changed, 671 insertions(+), 658 deletions(-) diff --git a/drivers/net/ethernet/airoha/airoha_eth.c b/drivers/net/ethernet/airoha/airoha_eth.c index 5916600555d6094a86c4bd3c17fa7bf8750059bd..a99d98be86a00f7bd000e2b8218afcde53b1946f 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.c +++ b/drivers/net/ethernet/airoha/airoha_eth.c @@ -13,666 +13,9 @@ #include #include +#include "airoha_regs.h" #include "airoha_eth.h" -/* FE */ -#define PSE_BASE 0x0100 -#define CSR_IFC_BASE 0x0200 -#define CDM1_BASE 0x0400 -#define GDM1_BASE 0x0500 -#define PPE1_BASE 0x0c00 - -#define CDM2_BASE 0x1400 -#define GDM2_BASE 0x1500 - -#define GDM3_BASE 0x1100 -#define GDM4_BASE 0x2500 - -#define GDM_BASE(_n) \ - ((_n) == 4 ? GDM4_BASE : \ - (_n) == 3 ? GDM3_BASE : \ - (_n) == 2 ? GDM2_BASE : GDM1_BASE) - -#define REG_FE_DMA_GLO_CFG 0x0000 -#define FE_DMA_GLO_L2_SPACE_MASK GENMASK(7, 4) -#define FE_DMA_GLO_PG_SZ_MASK BIT(3) - -#define REG_FE_RST_GLO_CFG 0x0004 -#define FE_RST_GDM4_MBI_ARB_MASK BIT(3) -#define FE_RST_GDM3_MBI_ARB_MASK BIT(2) -#define FE_RST_CORE_MASK BIT(0) - -#define REG_FE_WAN_MAC_H 0x0030 -#define REG_FE_LAN_MAC_H 0x0040 - -#define REG_FE_MAC_LMIN(_n) ((_n) + 0x04) -#define REG_FE_MAC_LMAX(_n) ((_n) + 0x08) - -#define REG_FE_CDM1_OQ_MAP0 0x0050 -#define REG_FE_CDM1_OQ_MAP1 0x0054 -#define REG_FE_CDM1_OQ_MAP2 0x0058 -#define REG_FE_CDM1_OQ_MAP3 0x005c - -#define REG_FE_PCE_CFG 0x0070 -#define PCE_DPI_EN_MASK BIT(2) -#define PCE_KA_EN_MASK BIT(1) -#define PCE_MC_EN_MASK BIT(0) - -#define REG_FE_PSE_QUEUE_CFG_WR 0x0080 -#define PSE_CFG_PORT_ID_MASK GENMASK(27, 24) -#define PSE_CFG_QUEUE_ID_MASK GENMASK(20, 16) -#define PSE_CFG_WR_EN_MASK BIT(8) -#define PSE_CFG_OQRSV_SEL_MASK BIT(0) - -#define REG_FE_PSE_QUEUE_CFG_VAL 0x0084 -#define PSE_CFG_OQ_RSV_MASK GENMASK(13, 0) - -#define PSE_FQ_CFG 0x008c -#define PSE_FQ_LIMIT_MASK GENMASK(14, 0) - -#define REG_FE_PSE_BUF_SET 0x0090 -#define PSE_SHARE_USED_LTHD_MASK GENMASK(31, 16) -#define PSE_ALLRSV_MASK GENMASK(14, 0) - -#define REG_PSE_SHARE_USED_THD 0x0094 -#define PSE_SHARE_USED_MTHD_MASK GENMASK(31, 16) -#define PSE_SHARE_USED_HTHD_MASK GENMASK(15, 0) - -#define REG_GDM_MISC_CFG 0x0148 -#define GDM2_RDM_ACK_WAIT_PREF_MASK BIT(9) -#define GDM2_CHN_VLD_MODE_MASK BIT(5) - -#define REG_FE_CSR_IFC_CFG CSR_IFC_BASE -#define FE_IFC_EN_MASK BIT(0) - -#define REG_FE_VIP_PORT_EN 0x01f0 -#define REG_FE_IFC_PORT_EN 0x01f4 - -#define REG_PSE_IQ_REV1 (PSE_BASE + 0x08) -#define PSE_IQ_RES1_P2_MASK GENMASK(23, 16) - -#define REG_PSE_IQ_REV2 (PSE_BASE + 0x0c) -#define PSE_IQ_RES2_P5_MASK GENMASK(15, 8) -#define PSE_IQ_RES2_P4_MASK GENMASK(7, 0) - -#define REG_FE_VIP_EN(_n) (0x0300 + ((_n) << 3)) -#define PATN_FCPU_EN_MASK BIT(7) -#define 
PATN_SWP_EN_MASK BIT(6) -#define PATN_DP_EN_MASK BIT(5) -#define PATN_SP_EN_MASK BIT(4) -#define PATN_TYPE_MASK GENMASK(3, 1) -#define PATN_EN_MASK BIT(0) - -#define REG_FE_VIP_PATN(_n) (0x0304 + ((_n) << 3)) -#define PATN_DP_MASK GENMASK(31, 16) -#define PATN_SP_MASK GENMASK(15, 0) - -#define REG_CDM1_VLAN_CTRL CDM1_BASE -#define CDM1_VLAN_MASK GENMASK(31, 16) - -#define REG_CDM1_FWD_CFG (CDM1_BASE + 0x08) -#define CDM1_VIP_QSEL_MASK GENMASK(24, 20) - -#define REG_CDM1_CRSN_QSEL(_n) (CDM1_BASE + 0x10 + ((_n) << 2)) -#define CDM1_CRSN_QSEL_REASON_MASK(_n) \ - GENMASK(4 + (((_n) % 4) << 3), (((_n) % 4) << 3)) - -#define REG_CDM2_FWD_CFG (CDM2_BASE + 0x08) -#define CDM2_OAM_QSEL_MASK GENMASK(31, 27) -#define CDM2_VIP_QSEL_MASK GENMASK(24, 20) - -#define REG_CDM2_CRSN_QSEL(_n) (CDM2_BASE + 0x10 + ((_n) << 2)) -#define CDM2_CRSN_QSEL_REASON_MASK(_n) \ - GENMASK(4 + (((_n) % 4) << 3), (((_n) % 4) << 3)) - -#define REG_GDM_FWD_CFG(_n) GDM_BASE(_n) -#define GDM_DROP_CRC_ERR BIT(23) -#define GDM_IP4_CKSUM BIT(22) -#define GDM_TCP_CKSUM BIT(21) -#define GDM_UDP_CKSUM BIT(20) -#define GDM_UCFQ_MASK GENMASK(15, 12) -#define GDM_BCFQ_MASK GENMASK(11, 8) -#define GDM_MCFQ_MASK GENMASK(7, 4) -#define GDM_OCFQ_MASK GENMASK(3, 0) - -#define REG_GDM_INGRESS_CFG(_n) (GDM_BASE(_n) + 0x10) -#define GDM_INGRESS_FC_EN_MASK BIT(1) -#define GDM_STAG_EN_MASK BIT(0) - -#define REG_GDM_LEN_CFG(_n) (GDM_BASE(_n) + 0x14) -#define GDM_SHORT_LEN_MASK GENMASK(13, 0) -#define GDM_LONG_LEN_MASK GENMASK(29, 16) - -#define REG_FE_CPORT_CFG (GDM1_BASE + 0x40) -#define FE_CPORT_PAD BIT(26) -#define FE_CPORT_PORT_XFC_MASK BIT(25) -#define FE_CPORT_QUEUE_XFC_MASK BIT(24) - -#define REG_FE_GDM_MIB_CLEAR(_n) (GDM_BASE(_n) + 0xf0) -#define FE_GDM_MIB_RX_CLEAR_MASK BIT(1) -#define FE_GDM_MIB_TX_CLEAR_MASK BIT(0) - -#define REG_FE_GDM1_MIB_CFG (GDM1_BASE + 0xf4) -#define FE_STRICT_RFC2819_MODE_MASK BIT(31) -#define FE_GDM1_TX_MIB_SPLIT_EN_MASK BIT(17) -#define FE_GDM1_RX_MIB_SPLIT_EN_MASK BIT(16) -#define FE_TX_MIB_ID_MASK GENMASK(15, 8) -#define FE_RX_MIB_ID_MASK GENMASK(7, 0) - -#define REG_FE_GDM_TX_OK_PKT_CNT_L(_n) (GDM_BASE(_n) + 0x104) -#define REG_FE_GDM_TX_OK_BYTE_CNT_L(_n) (GDM_BASE(_n) + 0x10c) -#define REG_FE_GDM_TX_ETH_PKT_CNT_L(_n) (GDM_BASE(_n) + 0x110) -#define REG_FE_GDM_TX_ETH_BYTE_CNT_L(_n) (GDM_BASE(_n) + 0x114) -#define REG_FE_GDM_TX_ETH_DROP_CNT(_n) (GDM_BASE(_n) + 0x118) -#define REG_FE_GDM_TX_ETH_BC_CNT(_n) (GDM_BASE(_n) + 0x11c) -#define REG_FE_GDM_TX_ETH_MC_CNT(_n) (GDM_BASE(_n) + 0x120) -#define REG_FE_GDM_TX_ETH_RUNT_CNT(_n) (GDM_BASE(_n) + 0x124) -#define REG_FE_GDM_TX_ETH_LONG_CNT(_n) (GDM_BASE(_n) + 0x128) -#define REG_FE_GDM_TX_ETH_E64_CNT_L(_n) (GDM_BASE(_n) + 0x12c) -#define REG_FE_GDM_TX_ETH_L64_CNT_L(_n) (GDM_BASE(_n) + 0x130) -#define REG_FE_GDM_TX_ETH_L127_CNT_L(_n) (GDM_BASE(_n) + 0x134) -#define REG_FE_GDM_TX_ETH_L255_CNT_L(_n) (GDM_BASE(_n) + 0x138) -#define REG_FE_GDM_TX_ETH_L511_CNT_L(_n) (GDM_BASE(_n) + 0x13c) -#define REG_FE_GDM_TX_ETH_L1023_CNT_L(_n) (GDM_BASE(_n) + 0x140) - -#define REG_FE_GDM_RX_OK_PKT_CNT_L(_n) (GDM_BASE(_n) + 0x148) -#define REG_FE_GDM_RX_FC_DROP_CNT(_n) (GDM_BASE(_n) + 0x14c) -#define REG_FE_GDM_RX_RC_DROP_CNT(_n) (GDM_BASE(_n) + 0x150) -#define REG_FE_GDM_RX_OVERFLOW_DROP_CNT(_n) (GDM_BASE(_n) + 0x154) -#define REG_FE_GDM_RX_ERROR_DROP_CNT(_n) (GDM_BASE(_n) + 0x158) -#define REG_FE_GDM_RX_OK_BYTE_CNT_L(_n) (GDM_BASE(_n) + 0x15c) -#define REG_FE_GDM_RX_ETH_PKT_CNT_L(_n) (GDM_BASE(_n) + 0x160) -#define REG_FE_GDM_RX_ETH_BYTE_CNT_L(_n) (GDM_BASE(_n) + 0x164) -#define 
REG_FE_GDM_RX_ETH_DROP_CNT(_n) (GDM_BASE(_n) + 0x168) -#define REG_FE_GDM_RX_ETH_BC_CNT(_n) (GDM_BASE(_n) + 0x16c) -#define REG_FE_GDM_RX_ETH_MC_CNT(_n) (GDM_BASE(_n) + 0x170) -#define REG_FE_GDM_RX_ETH_CRC_ERR_CNT(_n) (GDM_BASE(_n) + 0x174) -#define REG_FE_GDM_RX_ETH_FRAG_CNT(_n) (GDM_BASE(_n) + 0x178) -#define REG_FE_GDM_RX_ETH_JABBER_CNT(_n) (GDM_BASE(_n) + 0x17c) -#define REG_FE_GDM_RX_ETH_RUNT_CNT(_n) (GDM_BASE(_n) + 0x180) -#define REG_FE_GDM_RX_ETH_LONG_CNT(_n) (GDM_BASE(_n) + 0x184) -#define REG_FE_GDM_RX_ETH_E64_CNT_L(_n) (GDM_BASE(_n) + 0x188) -#define REG_FE_GDM_RX_ETH_L64_CNT_L(_n) (GDM_BASE(_n) + 0x18c) -#define REG_FE_GDM_RX_ETH_L127_CNT_L(_n) (GDM_BASE(_n) + 0x190) -#define REG_FE_GDM_RX_ETH_L255_CNT_L(_n) (GDM_BASE(_n) + 0x194) -#define REG_FE_GDM_RX_ETH_L511_CNT_L(_n) (GDM_BASE(_n) + 0x198) -#define REG_FE_GDM_RX_ETH_L1023_CNT_L(_n) (GDM_BASE(_n) + 0x19c) - -#define REG_PPE1_TB_HASH_CFG (PPE1_BASE + 0x250) -#define PPE1_SRAM_TABLE_EN_MASK BIT(0) -#define PPE1_SRAM_HASH1_EN_MASK BIT(8) -#define PPE1_DRAM_TABLE_EN_MASK BIT(16) -#define PPE1_DRAM_HASH1_EN_MASK BIT(24) - -#define REG_FE_GDM_TX_OK_PKT_CNT_H(_n) (GDM_BASE(_n) + 0x280) -#define REG_FE_GDM_TX_OK_BYTE_CNT_H(_n) (GDM_BASE(_n) + 0x284) -#define REG_FE_GDM_TX_ETH_PKT_CNT_H(_n) (GDM_BASE(_n) + 0x288) -#define REG_FE_GDM_TX_ETH_BYTE_CNT_H(_n) (GDM_BASE(_n) + 0x28c) - -#define REG_FE_GDM_RX_OK_PKT_CNT_H(_n) (GDM_BASE(_n) + 0x290) -#define REG_FE_GDM_RX_OK_BYTE_CNT_H(_n) (GDM_BASE(_n) + 0x294) -#define REG_FE_GDM_RX_ETH_PKT_CNT_H(_n) (GDM_BASE(_n) + 0x298) -#define REG_FE_GDM_RX_ETH_BYTE_CNT_H(_n) (GDM_BASE(_n) + 0x29c) -#define REG_FE_GDM_TX_ETH_E64_CNT_H(_n) (GDM_BASE(_n) + 0x2b8) -#define REG_FE_GDM_TX_ETH_L64_CNT_H(_n) (GDM_BASE(_n) + 0x2bc) -#define REG_FE_GDM_TX_ETH_L127_CNT_H(_n) (GDM_BASE(_n) + 0x2c0) -#define REG_FE_GDM_TX_ETH_L255_CNT_H(_n) (GDM_BASE(_n) + 0x2c4) -#define REG_FE_GDM_TX_ETH_L511_CNT_H(_n) (GDM_BASE(_n) + 0x2c8) -#define REG_FE_GDM_TX_ETH_L1023_CNT_H(_n) (GDM_BASE(_n) + 0x2cc) -#define REG_FE_GDM_RX_ETH_E64_CNT_H(_n) (GDM_BASE(_n) + 0x2e8) -#define REG_FE_GDM_RX_ETH_L64_CNT_H(_n) (GDM_BASE(_n) + 0x2ec) -#define REG_FE_GDM_RX_ETH_L127_CNT_H(_n) (GDM_BASE(_n) + 0x2f0) -#define REG_FE_GDM_RX_ETH_L255_CNT_H(_n) (GDM_BASE(_n) + 0x2f4) -#define REG_FE_GDM_RX_ETH_L511_CNT_H(_n) (GDM_BASE(_n) + 0x2f8) -#define REG_FE_GDM_RX_ETH_L1023_CNT_H(_n) (GDM_BASE(_n) + 0x2fc) - -#define REG_GDM2_CHN_RLS (GDM2_BASE + 0x20) -#define MBI_RX_AGE_SEL_MASK GENMASK(26, 25) -#define MBI_TX_AGE_SEL_MASK GENMASK(18, 17) - -#define REG_GDM3_FWD_CFG GDM3_BASE -#define GDM3_PAD_EN_MASK BIT(28) - -#define REG_GDM4_FWD_CFG GDM4_BASE -#define GDM4_PAD_EN_MASK BIT(28) -#define GDM4_SPORT_OFFSET0_MASK GENMASK(11, 8) - -#define REG_GDM4_SRC_PORT_SET (GDM4_BASE + 0x23c) -#define GDM4_SPORT_OFF2_MASK GENMASK(19, 16) -#define GDM4_SPORT_OFF1_MASK GENMASK(15, 12) -#define GDM4_SPORT_OFF0_MASK GENMASK(11, 8) - -#define REG_IP_FRAG_FP 0x2010 -#define IP_ASSEMBLE_PORT_MASK GENMASK(24, 21) -#define IP_ASSEMBLE_NBQ_MASK GENMASK(20, 16) -#define IP_FRAGMENT_PORT_MASK GENMASK(8, 5) -#define IP_FRAGMENT_NBQ_MASK GENMASK(4, 0) - -#define REG_MC_VLAN_EN 0x2100 -#define MC_VLAN_EN_MASK BIT(0) - -#define REG_MC_VLAN_CFG 0x2104 -#define MC_VLAN_CFG_CMD_DONE_MASK BIT(31) -#define MC_VLAN_CFG_TABLE_ID_MASK GENMASK(21, 16) -#define MC_VLAN_CFG_PORT_ID_MASK GENMASK(11, 8) -#define MC_VLAN_CFG_TABLE_SEL_MASK BIT(4) -#define MC_VLAN_CFG_RW_MASK BIT(0) - -#define REG_MC_VLAN_DATA 0x2108 - -#define REG_CDM5_RX_OQ1_DROP_CNT 0x29d4 - -/* QDMA */ -#define 
REG_QDMA_GLOBAL_CFG 0x0004 -#define GLOBAL_CFG_RX_2B_OFFSET_MASK BIT(31) -#define GLOBAL_CFG_DMA_PREFERENCE_MASK GENMASK(30, 29) -#define GLOBAL_CFG_CPU_TXR_RR_MASK BIT(28) -#define GLOBAL_CFG_DSCP_BYTE_SWAP_MASK BIT(27) -#define GLOBAL_CFG_PAYLOAD_BYTE_SWAP_MASK BIT(26) -#define GLOBAL_CFG_MULTICAST_MODIFY_FP_MASK BIT(25) -#define GLOBAL_CFG_OAM_MODIFY_MASK BIT(24) -#define GLOBAL_CFG_RESET_MASK BIT(23) -#define GLOBAL_CFG_RESET_DONE_MASK BIT(22) -#define GLOBAL_CFG_MULTICAST_EN_MASK BIT(21) -#define GLOBAL_CFG_IRQ1_EN_MASK BIT(20) -#define GLOBAL_CFG_IRQ0_EN_MASK BIT(19) -#define GLOBAL_CFG_LOOPCNT_EN_MASK BIT(18) -#define GLOBAL_CFG_RD_BYPASS_WR_MASK BIT(17) -#define GLOBAL_CFG_QDMA_LOOPBACK_MASK BIT(16) -#define GLOBAL_CFG_LPBK_RXQ_SEL_MASK GENMASK(13, 8) -#define GLOBAL_CFG_CHECK_DONE_MASK BIT(7) -#define GLOBAL_CFG_TX_WB_DONE_MASK BIT(6) -#define GLOBAL_CFG_MAX_ISSUE_NUM_MASK GENMASK(5, 4) -#define GLOBAL_CFG_RX_DMA_BUSY_MASK BIT(3) -#define GLOBAL_CFG_RX_DMA_EN_MASK BIT(2) -#define GLOBAL_CFG_TX_DMA_BUSY_MASK BIT(1) -#define GLOBAL_CFG_TX_DMA_EN_MASK BIT(0) - -#define REG_FWD_DSCP_BASE 0x0010 -#define REG_FWD_BUF_BASE 0x0014 - -#define REG_HW_FWD_DSCP_CFG 0x0018 -#define HW_FWD_DSCP_PAYLOAD_SIZE_MASK GENMASK(29, 28) -#define HW_FWD_DSCP_SCATTER_LEN_MASK GENMASK(17, 16) -#define HW_FWD_DSCP_MIN_SCATTER_LEN_MASK GENMASK(15, 0) - -#define REG_INT_STATUS(_n) \ - (((_n) == 4) ? 0x0730 : \ - ((_n) == 3) ? 0x0724 : \ - ((_n) == 2) ? 0x0720 : \ - ((_n) == 1) ? 0x0024 : 0x0020) - -#define REG_INT_ENABLE(_n) \ - (((_n) == 4) ? 0x0750 : \ - ((_n) == 3) ? 0x0744 : \ - ((_n) == 2) ? 0x0740 : \ - ((_n) == 1) ? 0x002c : 0x0028) - -/* QDMA_CSR_INT_ENABLE1 */ -#define RX15_COHERENT_INT_MASK BIT(31) -#define RX14_COHERENT_INT_MASK BIT(30) -#define RX13_COHERENT_INT_MASK BIT(29) -#define RX12_COHERENT_INT_MASK BIT(28) -#define RX11_COHERENT_INT_MASK BIT(27) -#define RX10_COHERENT_INT_MASK BIT(26) -#define RX9_COHERENT_INT_MASK BIT(25) -#define RX8_COHERENT_INT_MASK BIT(24) -#define RX7_COHERENT_INT_MASK BIT(23) -#define RX6_COHERENT_INT_MASK BIT(22) -#define RX5_COHERENT_INT_MASK BIT(21) -#define RX4_COHERENT_INT_MASK BIT(20) -#define RX3_COHERENT_INT_MASK BIT(19) -#define RX2_COHERENT_INT_MASK BIT(18) -#define RX1_COHERENT_INT_MASK BIT(17) -#define RX0_COHERENT_INT_MASK BIT(16) -#define TX7_COHERENT_INT_MASK BIT(15) -#define TX6_COHERENT_INT_MASK BIT(14) -#define TX5_COHERENT_INT_MASK BIT(13) -#define TX4_COHERENT_INT_MASK BIT(12) -#define TX3_COHERENT_INT_MASK BIT(11) -#define TX2_COHERENT_INT_MASK BIT(10) -#define TX1_COHERENT_INT_MASK BIT(9) -#define TX0_COHERENT_INT_MASK BIT(8) -#define CNT_OVER_FLOW_INT_MASK BIT(7) -#define IRQ1_FULL_INT_MASK BIT(5) -#define IRQ1_INT_MASK BIT(4) -#define HWFWD_DSCP_LOW_INT_MASK BIT(3) -#define HWFWD_DSCP_EMPTY_INT_MASK BIT(2) -#define IRQ0_FULL_INT_MASK BIT(1) -#define IRQ0_INT_MASK BIT(0) - -#define TX_DONE_INT_MASK(_n) \ - ((_n) ? 
IRQ1_INT_MASK | IRQ1_FULL_INT_MASK \ - : IRQ0_INT_MASK | IRQ0_FULL_INT_MASK) - -#define INT_TX_MASK \ - (IRQ1_INT_MASK | IRQ1_FULL_INT_MASK | \ - IRQ0_INT_MASK | IRQ0_FULL_INT_MASK) - -#define INT_IDX0_MASK \ - (TX0_COHERENT_INT_MASK | TX1_COHERENT_INT_MASK | \ - TX2_COHERENT_INT_MASK | TX3_COHERENT_INT_MASK | \ - TX4_COHERENT_INT_MASK | TX5_COHERENT_INT_MASK | \ - TX6_COHERENT_INT_MASK | TX7_COHERENT_INT_MASK | \ - RX0_COHERENT_INT_MASK | RX1_COHERENT_INT_MASK | \ - RX2_COHERENT_INT_MASK | RX3_COHERENT_INT_MASK | \ - RX4_COHERENT_INT_MASK | RX7_COHERENT_INT_MASK | \ - RX8_COHERENT_INT_MASK | RX9_COHERENT_INT_MASK | \ - RX15_COHERENT_INT_MASK | INT_TX_MASK) - -/* QDMA_CSR_INT_ENABLE2 */ -#define RX15_NO_CPU_DSCP_INT_MASK BIT(31) -#define RX14_NO_CPU_DSCP_INT_MASK BIT(30) -#define RX13_NO_CPU_DSCP_INT_MASK BIT(29) -#define RX12_NO_CPU_DSCP_INT_MASK BIT(28) -#define RX11_NO_CPU_DSCP_INT_MASK BIT(27) -#define RX10_NO_CPU_DSCP_INT_MASK BIT(26) -#define RX9_NO_CPU_DSCP_INT_MASK BIT(25) -#define RX8_NO_CPU_DSCP_INT_MASK BIT(24) -#define RX7_NO_CPU_DSCP_INT_MASK BIT(23) -#define RX6_NO_CPU_DSCP_INT_MASK BIT(22) -#define RX5_NO_CPU_DSCP_INT_MASK BIT(21) -#define RX4_NO_CPU_DSCP_INT_MASK BIT(20) -#define RX3_NO_CPU_DSCP_INT_MASK BIT(19) -#define RX2_NO_CPU_DSCP_INT_MASK BIT(18) -#define RX1_NO_CPU_DSCP_INT_MASK BIT(17) -#define RX0_NO_CPU_DSCP_INT_MASK BIT(16) -#define RX15_DONE_INT_MASK BIT(15) -#define RX14_DONE_INT_MASK BIT(14) -#define RX13_DONE_INT_MASK BIT(13) -#define RX12_DONE_INT_MASK BIT(12) -#define RX11_DONE_INT_MASK BIT(11) -#define RX10_DONE_INT_MASK BIT(10) -#define RX9_DONE_INT_MASK BIT(9) -#define RX8_DONE_INT_MASK BIT(8) -#define RX7_DONE_INT_MASK BIT(7) -#define RX6_DONE_INT_MASK BIT(6) -#define RX5_DONE_INT_MASK BIT(5) -#define RX4_DONE_INT_MASK BIT(4) -#define RX3_DONE_INT_MASK BIT(3) -#define RX2_DONE_INT_MASK BIT(2) -#define RX1_DONE_INT_MASK BIT(1) -#define RX0_DONE_INT_MASK BIT(0) - -#define RX_DONE_INT_MASK \ - (RX0_DONE_INT_MASK | RX1_DONE_INT_MASK | \ - RX2_DONE_INT_MASK | RX3_DONE_INT_MASK | \ - RX4_DONE_INT_MASK | RX7_DONE_INT_MASK | \ - RX8_DONE_INT_MASK | RX9_DONE_INT_MASK | \ - RX15_DONE_INT_MASK) -#define INT_IDX1_MASK \ - (RX_DONE_INT_MASK | \ - RX0_NO_CPU_DSCP_INT_MASK | RX1_NO_CPU_DSCP_INT_MASK | \ - RX2_NO_CPU_DSCP_INT_MASK | RX3_NO_CPU_DSCP_INT_MASK | \ - RX4_NO_CPU_DSCP_INT_MASK | RX7_NO_CPU_DSCP_INT_MASK | \ - RX8_NO_CPU_DSCP_INT_MASK | RX9_NO_CPU_DSCP_INT_MASK | \ - RX15_NO_CPU_DSCP_INT_MASK) - -/* QDMA_CSR_INT_ENABLE5 */ -#define TX31_COHERENT_INT_MASK BIT(31) -#define TX30_COHERENT_INT_MASK BIT(30) -#define TX29_COHERENT_INT_MASK BIT(29) -#define TX28_COHERENT_INT_MASK BIT(28) -#define TX27_COHERENT_INT_MASK BIT(27) -#define TX26_COHERENT_INT_MASK BIT(26) -#define TX25_COHERENT_INT_MASK BIT(25) -#define TX24_COHERENT_INT_MASK BIT(24) -#define TX23_COHERENT_INT_MASK BIT(23) -#define TX22_COHERENT_INT_MASK BIT(22) -#define TX21_COHERENT_INT_MASK BIT(21) -#define TX20_COHERENT_INT_MASK BIT(20) -#define TX19_COHERENT_INT_MASK BIT(19) -#define TX18_COHERENT_INT_MASK BIT(18) -#define TX17_COHERENT_INT_MASK BIT(17) -#define TX16_COHERENT_INT_MASK BIT(16) -#define TX15_COHERENT_INT_MASK BIT(15) -#define TX14_COHERENT_INT_MASK BIT(14) -#define TX13_COHERENT_INT_MASK BIT(13) -#define TX12_COHERENT_INT_MASK BIT(12) -#define TX11_COHERENT_INT_MASK BIT(11) -#define TX10_COHERENT_INT_MASK BIT(10) -#define TX9_COHERENT_INT_MASK BIT(9) -#define TX8_COHERENT_INT_MASK BIT(8) - -#define INT_IDX4_MASK \ - (TX8_COHERENT_INT_MASK | TX9_COHERENT_INT_MASK | \ - 
TX10_COHERENT_INT_MASK | TX11_COHERENT_INT_MASK | \ - TX12_COHERENT_INT_MASK | TX13_COHERENT_INT_MASK | \ - TX14_COHERENT_INT_MASK | TX15_COHERENT_INT_MASK | \ - TX16_COHERENT_INT_MASK | TX17_COHERENT_INT_MASK | \ - TX18_COHERENT_INT_MASK | TX19_COHERENT_INT_MASK | \ - TX20_COHERENT_INT_MASK | TX21_COHERENT_INT_MASK | \ - TX22_COHERENT_INT_MASK | TX23_COHERENT_INT_MASK | \ - TX24_COHERENT_INT_MASK | TX25_COHERENT_INT_MASK | \ - TX26_COHERENT_INT_MASK | TX27_COHERENT_INT_MASK | \ - TX28_COHERENT_INT_MASK | TX29_COHERENT_INT_MASK | \ - TX30_COHERENT_INT_MASK | TX31_COHERENT_INT_MASK) - -#define REG_TX_IRQ_BASE(_n) ((_n) ? 0x0048 : 0x0050) - -#define REG_TX_IRQ_CFG(_n) ((_n) ? 0x004c : 0x0054) -#define TX_IRQ_THR_MASK GENMASK(27, 16) -#define TX_IRQ_DEPTH_MASK GENMASK(11, 0) - -#define REG_IRQ_CLEAR_LEN(_n) ((_n) ? 0x0064 : 0x0058) -#define IRQ_CLEAR_LEN_MASK GENMASK(7, 0) - -#define REG_IRQ_STATUS(_n) ((_n) ? 0x0068 : 0x005c) -#define IRQ_ENTRY_LEN_MASK GENMASK(27, 16) -#define IRQ_HEAD_IDX_MASK GENMASK(11, 0) - -#define REG_TX_RING_BASE(_n) \ - (((_n) < 8) ? 0x0100 + ((_n) << 5) : 0x0b00 + (((_n) - 8) << 5)) - -#define REG_TX_RING_BLOCKING(_n) \ - (((_n) < 8) ? 0x0104 + ((_n) << 5) : 0x0b04 + (((_n) - 8) << 5)) - -#define TX_RING_IRQ_BLOCKING_MAP_MASK BIT(6) -#define TX_RING_IRQ_BLOCKING_CFG_MASK BIT(4) -#define TX_RING_IRQ_BLOCKING_TX_DROP_EN_MASK BIT(2) -#define TX_RING_IRQ_BLOCKING_MAX_TH_TXRING_EN_MASK BIT(1) -#define TX_RING_IRQ_BLOCKING_MIN_TH_TXRING_EN_MASK BIT(0) - -#define REG_TX_CPU_IDX(_n) \ - (((_n) < 8) ? 0x0108 + ((_n) << 5) : 0x0b08 + (((_n) - 8) << 5)) - -#define TX_RING_CPU_IDX_MASK GENMASK(15, 0) - -#define REG_TX_DMA_IDX(_n) \ - (((_n) < 8) ? 0x010c + ((_n) << 5) : 0x0b0c + (((_n) - 8) << 5)) - -#define TX_RING_DMA_IDX_MASK GENMASK(15, 0) - -#define IRQ_RING_IDX_MASK GENMASK(20, 16) -#define IRQ_DESC_IDX_MASK GENMASK(15, 0) - -#define REG_RX_RING_BASE(_n) \ - (((_n) < 16) ? 0x0200 + ((_n) << 5) : 0x0e00 + (((_n) - 16) << 5)) - -#define REG_RX_RING_SIZE(_n) \ - (((_n) < 16) ? 0x0204 + ((_n) << 5) : 0x0e04 + (((_n) - 16) << 5)) - -#define RX_RING_THR_MASK GENMASK(31, 16) -#define RX_RING_SIZE_MASK GENMASK(15, 0) - -#define REG_RX_CPU_IDX(_n) \ - (((_n) < 16) ? 0x0208 + ((_n) << 5) : 0x0e08 + (((_n) - 16) << 5)) - -#define RX_RING_CPU_IDX_MASK GENMASK(15, 0) - -#define REG_RX_DMA_IDX(_n) \ - (((_n) < 16) ? 0x020c + ((_n) << 5) : 0x0e0c + (((_n) - 16) << 5)) - -#define REG_RX_DELAY_INT_IDX(_n) \ - (((_n) < 16) ? 0x0210 + ((_n) << 5) : 0x0e10 + (((_n) - 16) << 5)) - -#define RX_DELAY_INT_MASK GENMASK(15, 0) - -#define RX_RING_DMA_IDX_MASK GENMASK(15, 0) - -#define REG_INGRESS_TRTCM_CFG 0x0070 -#define INGRESS_TRTCM_EN_MASK BIT(31) -#define INGRESS_TRTCM_MODE_MASK BIT(30) -#define INGRESS_SLOW_TICK_RATIO_MASK GENMASK(29, 16) -#define INGRESS_FAST_TICK_MASK GENMASK(15, 0) - -#define REG_QUEUE_CLOSE_CFG(_n) (0x00a0 + ((_n) & 0xfc)) -#define TXQ_DISABLE_CHAN_QUEUE_MASK(_n, _m) BIT((_m) + (((_n) & 0x3) << 3)) - -#define REG_TXQ_DIS_CFG_BASE(_n) ((_n) ? 
0x20a0 : 0x00a0) -#define REG_TXQ_DIS_CFG(_n, _m) (REG_TXQ_DIS_CFG_BASE((_n)) + (_m) << 2) - -#define REG_CNTR_CFG(_n) (0x0400 + ((_n) << 3)) -#define CNTR_EN_MASK BIT(31) -#define CNTR_ALL_CHAN_EN_MASK BIT(30) -#define CNTR_ALL_QUEUE_EN_MASK BIT(29) -#define CNTR_ALL_DSCP_RING_EN_MASK BIT(28) -#define CNTR_SRC_MASK GENMASK(27, 24) -#define CNTR_DSCP_RING_MASK GENMASK(20, 16) -#define CNTR_CHAN_MASK GENMASK(7, 3) -#define CNTR_QUEUE_MASK GENMASK(2, 0) - -#define REG_CNTR_VAL(_n) (0x0404 + ((_n) << 3)) - -#define REG_LMGR_INIT_CFG 0x1000 -#define LMGR_INIT_START BIT(31) -#define LMGR_SRAM_MODE_MASK BIT(30) -#define HW_FWD_PKTSIZE_OVERHEAD_MASK GENMASK(27, 20) -#define HW_FWD_DESC_NUM_MASK GENMASK(16, 0) - -#define REG_FWD_DSCP_LOW_THR 0x1004 -#define FWD_DSCP_LOW_THR_MASK GENMASK(17, 0) - -#define REG_EGRESS_RATE_METER_CFG 0x100c -#define EGRESS_RATE_METER_EN_MASK BIT(31) -#define EGRESS_RATE_METER_EQ_RATE_EN_MASK BIT(17) -#define EGRESS_RATE_METER_WINDOW_SZ_MASK GENMASK(16, 12) -#define EGRESS_RATE_METER_TIMESLICE_MASK GENMASK(10, 0) - -#define REG_EGRESS_TRTCM_CFG 0x1010 -#define EGRESS_TRTCM_EN_MASK BIT(31) -#define EGRESS_TRTCM_MODE_MASK BIT(30) -#define EGRESS_SLOW_TICK_RATIO_MASK GENMASK(29, 16) -#define EGRESS_FAST_TICK_MASK GENMASK(15, 0) - -#define TRTCM_PARAM_RW_MASK BIT(31) -#define TRTCM_PARAM_RW_DONE_MASK BIT(30) -#define TRTCM_PARAM_TYPE_MASK GENMASK(29, 28) -#define TRTCM_METER_GROUP_MASK GENMASK(27, 26) -#define TRTCM_PARAM_INDEX_MASK GENMASK(23, 17) -#define TRTCM_PARAM_RATE_TYPE_MASK BIT(16) - -#define REG_TRTCM_CFG_PARAM(_n) ((_n) + 0x4) -#define REG_TRTCM_DATA_LOW(_n) ((_n) + 0x8) -#define REG_TRTCM_DATA_HIGH(_n) ((_n) + 0xc) - -#define REG_TXWRR_MODE_CFG 0x1020 -#define TWRR_WEIGHT_SCALE_MASK BIT(31) -#define TWRR_WEIGHT_BASE_MASK BIT(3) - -#define REG_TXWRR_WEIGHT_CFG 0x1024 -#define TWRR_RW_CMD_MASK BIT(31) -#define TWRR_RW_CMD_DONE BIT(30) -#define TWRR_CHAN_IDX_MASK GENMASK(23, 19) -#define TWRR_QUEUE_IDX_MASK GENMASK(18, 16) -#define TWRR_VALUE_MASK GENMASK(15, 0) - -#define REG_PSE_BUF_USAGE_CFG 0x1028 -#define PSE_BUF_ESTIMATE_EN_MASK BIT(29) - -#define REG_CHAN_QOS_MODE(_n) (0x1040 + ((_n) << 2)) -#define CHAN_QOS_MODE_MASK(_n) GENMASK(2 + ((_n) << 2), (_n) << 2) - -#define REG_GLB_TRTCM_CFG 0x1080 -#define GLB_TRTCM_EN_MASK BIT(31) -#define GLB_TRTCM_MODE_MASK BIT(30) -#define GLB_SLOW_TICK_RATIO_MASK GENMASK(29, 16) -#define GLB_FAST_TICK_MASK GENMASK(15, 0) - -#define REG_TXQ_CNGST_CFG 0x10a0 -#define TXQ_CNGST_DROP_EN BIT(31) -#define TXQ_CNGST_DEI_DROP_EN BIT(30) - -#define REG_SLA_TRTCM_CFG 0x1150 -#define SLA_TRTCM_EN_MASK BIT(31) -#define SLA_TRTCM_MODE_MASK BIT(30) -#define SLA_SLOW_TICK_RATIO_MASK GENMASK(29, 16) -#define SLA_FAST_TICK_MASK GENMASK(15, 0) - -/* CTRL */ -#define QDMA_DESC_DONE_MASK BIT(31) -#define QDMA_DESC_DROP_MASK BIT(30) /* tx: drop - rx: overflow */ -#define QDMA_DESC_MORE_MASK BIT(29) /* more SG elements */ -#define QDMA_DESC_DEI_MASK BIT(25) -#define QDMA_DESC_NO_DROP_MASK BIT(24) -#define QDMA_DESC_LEN_MASK GENMASK(15, 0) -/* DATA */ -#define QDMA_DESC_NEXT_ID_MASK GENMASK(15, 0) -/* TX MSG0 */ -#define QDMA_ETH_TXMSG_MIC_IDX_MASK BIT(30) -#define QDMA_ETH_TXMSG_SP_TAG_MASK GENMASK(29, 14) -#define QDMA_ETH_TXMSG_ICO_MASK BIT(13) -#define QDMA_ETH_TXMSG_UCO_MASK BIT(12) -#define QDMA_ETH_TXMSG_TCO_MASK BIT(11) -#define QDMA_ETH_TXMSG_TSO_MASK BIT(10) -#define QDMA_ETH_TXMSG_FAST_MASK BIT(9) -#define QDMA_ETH_TXMSG_OAM_MASK BIT(8) -#define QDMA_ETH_TXMSG_CHAN_MASK GENMASK(7, 3) -#define QDMA_ETH_TXMSG_QUEUE_MASK GENMASK(2, 0) 
-/* TX MSG1 */ -#define QDMA_ETH_TXMSG_NO_DROP BIT(31) -#define QDMA_ETH_TXMSG_METER_MASK GENMASK(30, 24) /* 0x7f no meters */ -#define QDMA_ETH_TXMSG_FPORT_MASK GENMASK(23, 20) -#define QDMA_ETH_TXMSG_NBOQ_MASK GENMASK(19, 15) -#define QDMA_ETH_TXMSG_HWF_MASK BIT(14) -#define QDMA_ETH_TXMSG_HOP_MASK BIT(13) -#define QDMA_ETH_TXMSG_PTP_MASK BIT(12) -#define QDMA_ETH_TXMSG_ACNT_G1_MASK GENMASK(10, 6) /* 0x1f do not count */ -#define QDMA_ETH_TXMSG_ACNT_G0_MASK GENMASK(5, 0) /* 0x3f do not count */ - -/* RX MSG1 */ -#define QDMA_ETH_RXMSG_DEI_MASK BIT(31) -#define QDMA_ETH_RXMSG_IP6_MASK BIT(30) -#define QDMA_ETH_RXMSG_IP4_MASK BIT(29) -#define QDMA_ETH_RXMSG_IP4F_MASK BIT(28) -#define QDMA_ETH_RXMSG_L4_VALID_MASK BIT(27) -#define QDMA_ETH_RXMSG_L4F_MASK BIT(26) -#define QDMA_ETH_RXMSG_SPORT_MASK GENMASK(25, 21) -#define QDMA_ETH_RXMSG_CRSN_MASK GENMASK(20, 16) -#define QDMA_ETH_RXMSG_PPE_ENTRY_MASK GENMASK(15, 0) - -struct airoha_qdma_desc { - __le32 rsv; - __le32 ctrl; - __le32 addr; - __le32 data; - __le32 msg0; - __le32 msg1; - __le32 msg2; - __le32 msg3; -}; - -/* CTRL0 */ -#define QDMA_FWD_DESC_CTX_MASK BIT(31) -#define QDMA_FWD_DESC_RING_MASK GENMASK(30, 28) -#define QDMA_FWD_DESC_IDX_MASK GENMASK(27, 16) -#define QDMA_FWD_DESC_LEN_MASK GENMASK(15, 0) -/* CTRL1 */ -#define QDMA_FWD_DESC_FIRST_IDX_MASK GENMASK(15, 0) -/* CTRL2 */ -#define QDMA_FWD_DESC_MORE_PKT_NUM_MASK GENMASK(2, 0) - -struct airoha_qdma_fwd_desc { - __le32 addr; - __le32 ctrl0; - __le32 ctrl1; - __le32 ctrl2; - __le32 msg0; - __le32 msg1; - __le32 rsv0; - __le32 rsv1; -}; - u32 airoha_rr(void __iomem *base, u32 offset) { return readl(base + offset); diff --git a/drivers/net/ethernet/airoha/airoha_regs.h b/drivers/net/ethernet/airoha/airoha_regs.h new file mode 100644 index 0000000000000000000000000000000000000000..7c9dadb348834cb5a856760abe45e8221d6fd700 --- /dev/null +++ b/drivers/net/ethernet/airoha/airoha_regs.h @@ -0,0 +1,670 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2024 AIROHA Inc + * Author: Lorenzo Bianconi + */ + +#ifndef AIROHA_REGS_H +#define AIROHA_REGS_H + +#include + +/* FE */ +#define PSE_BASE 0x0100 +#define CSR_IFC_BASE 0x0200 +#define CDM1_BASE 0x0400 +#define GDM1_BASE 0x0500 +#define PPE1_BASE 0x0c00 + +#define CDM2_BASE 0x1400 +#define GDM2_BASE 0x1500 + +#define GDM3_BASE 0x1100 +#define GDM4_BASE 0x2500 + +#define GDM_BASE(_n) \ + ((_n) == 4 ? GDM4_BASE : \ + (_n) == 3 ? GDM3_BASE : \ + (_n) == 2 ? 
GDM2_BASE : GDM1_BASE) + +#define REG_FE_DMA_GLO_CFG 0x0000 +#define FE_DMA_GLO_L2_SPACE_MASK GENMASK(7, 4) +#define FE_DMA_GLO_PG_SZ_MASK BIT(3) + +#define REG_FE_RST_GLO_CFG 0x0004 +#define FE_RST_GDM4_MBI_ARB_MASK BIT(3) +#define FE_RST_GDM3_MBI_ARB_MASK BIT(2) +#define FE_RST_CORE_MASK BIT(0) + +#define REG_FE_WAN_MAC_H 0x0030 +#define REG_FE_LAN_MAC_H 0x0040 + +#define REG_FE_MAC_LMIN(_n) ((_n) + 0x04) +#define REG_FE_MAC_LMAX(_n) ((_n) + 0x08) + +#define REG_FE_CDM1_OQ_MAP0 0x0050 +#define REG_FE_CDM1_OQ_MAP1 0x0054 +#define REG_FE_CDM1_OQ_MAP2 0x0058 +#define REG_FE_CDM1_OQ_MAP3 0x005c + +#define REG_FE_PCE_CFG 0x0070 +#define PCE_DPI_EN_MASK BIT(2) +#define PCE_KA_EN_MASK BIT(1) +#define PCE_MC_EN_MASK BIT(0) + +#define REG_FE_PSE_QUEUE_CFG_WR 0x0080 +#define PSE_CFG_PORT_ID_MASK GENMASK(27, 24) +#define PSE_CFG_QUEUE_ID_MASK GENMASK(20, 16) +#define PSE_CFG_WR_EN_MASK BIT(8) +#define PSE_CFG_OQRSV_SEL_MASK BIT(0) + +#define REG_FE_PSE_QUEUE_CFG_VAL 0x0084 +#define PSE_CFG_OQ_RSV_MASK GENMASK(13, 0) + +#define PSE_FQ_CFG 0x008c +#define PSE_FQ_LIMIT_MASK GENMASK(14, 0) + +#define REG_FE_PSE_BUF_SET 0x0090 +#define PSE_SHARE_USED_LTHD_MASK GENMASK(31, 16) +#define PSE_ALLRSV_MASK GENMASK(14, 0) + +#define REG_PSE_SHARE_USED_THD 0x0094 +#define PSE_SHARE_USED_MTHD_MASK GENMASK(31, 16) +#define PSE_SHARE_USED_HTHD_MASK GENMASK(15, 0) + +#define REG_GDM_MISC_CFG 0x0148 +#define GDM2_RDM_ACK_WAIT_PREF_MASK BIT(9) +#define GDM2_CHN_VLD_MODE_MASK BIT(5) + +#define REG_FE_CSR_IFC_CFG CSR_IFC_BASE +#define FE_IFC_EN_MASK BIT(0) + +#define REG_FE_VIP_PORT_EN 0x01f0 +#define REG_FE_IFC_PORT_EN 0x01f4 + +#define REG_PSE_IQ_REV1 (PSE_BASE + 0x08) +#define PSE_IQ_RES1_P2_MASK GENMASK(23, 16) + +#define REG_PSE_IQ_REV2 (PSE_BASE + 0x0c) +#define PSE_IQ_RES2_P5_MASK GENMASK(15, 8) +#define PSE_IQ_RES2_P4_MASK GENMASK(7, 0) + +#define REG_FE_VIP_EN(_n) (0x0300 + ((_n) << 3)) +#define PATN_FCPU_EN_MASK BIT(7) +#define PATN_SWP_EN_MASK BIT(6) +#define PATN_DP_EN_MASK BIT(5) +#define PATN_SP_EN_MASK BIT(4) +#define PATN_TYPE_MASK GENMASK(3, 1) +#define PATN_EN_MASK BIT(0) + +#define REG_FE_VIP_PATN(_n) (0x0304 + ((_n) << 3)) +#define PATN_DP_MASK GENMASK(31, 16) +#define PATN_SP_MASK GENMASK(15, 0) + +#define REG_CDM1_VLAN_CTRL CDM1_BASE +#define CDM1_VLAN_MASK GENMASK(31, 16) + +#define REG_CDM1_FWD_CFG (CDM1_BASE + 0x08) +#define CDM1_VIP_QSEL_MASK GENMASK(24, 20) + +#define REG_CDM1_CRSN_QSEL(_n) (CDM1_BASE + 0x10 + ((_n) << 2)) +#define CDM1_CRSN_QSEL_REASON_MASK(_n) \ + GENMASK(4 + (((_n) % 4) << 3), (((_n) % 4) << 3)) + +#define REG_CDM2_FWD_CFG (CDM2_BASE + 0x08) +#define CDM2_OAM_QSEL_MASK GENMASK(31, 27) +#define CDM2_VIP_QSEL_MASK GENMASK(24, 20) + +#define REG_CDM2_CRSN_QSEL(_n) (CDM2_BASE + 0x10 + ((_n) << 2)) +#define CDM2_CRSN_QSEL_REASON_MASK(_n) \ + GENMASK(4 + (((_n) % 4) << 3), (((_n) % 4) << 3)) + +#define REG_GDM_FWD_CFG(_n) GDM_BASE(_n) +#define GDM_DROP_CRC_ERR BIT(23) +#define GDM_IP4_CKSUM BIT(22) +#define GDM_TCP_CKSUM BIT(21) +#define GDM_UDP_CKSUM BIT(20) +#define GDM_UCFQ_MASK GENMASK(15, 12) +#define GDM_BCFQ_MASK GENMASK(11, 8) +#define GDM_MCFQ_MASK GENMASK(7, 4) +#define GDM_OCFQ_MASK GENMASK(3, 0) + +#define REG_GDM_INGRESS_CFG(_n) (GDM_BASE(_n) + 0x10) +#define GDM_INGRESS_FC_EN_MASK BIT(1) +#define GDM_STAG_EN_MASK BIT(0) + +#define REG_GDM_LEN_CFG(_n) (GDM_BASE(_n) + 0x14) +#define GDM_SHORT_LEN_MASK GENMASK(13, 0) +#define GDM_LONG_LEN_MASK GENMASK(29, 16) + +#define REG_FE_CPORT_CFG (GDM1_BASE + 0x40) +#define FE_CPORT_PAD BIT(26) +#define FE_CPORT_PORT_XFC_MASK 
BIT(25) +#define FE_CPORT_QUEUE_XFC_MASK BIT(24) + +#define REG_FE_GDM_MIB_CLEAR(_n) (GDM_BASE(_n) + 0xf0) +#define FE_GDM_MIB_RX_CLEAR_MASK BIT(1) +#define FE_GDM_MIB_TX_CLEAR_MASK BIT(0) + +#define REG_FE_GDM1_MIB_CFG (GDM1_BASE + 0xf4) +#define FE_STRICT_RFC2819_MODE_MASK BIT(31) +#define FE_GDM1_TX_MIB_SPLIT_EN_MASK BIT(17) +#define FE_GDM1_RX_MIB_SPLIT_EN_MASK BIT(16) +#define FE_TX_MIB_ID_MASK GENMASK(15, 8) +#define FE_RX_MIB_ID_MASK GENMASK(7, 0) + +#define REG_FE_GDM_TX_OK_PKT_CNT_L(_n) (GDM_BASE(_n) + 0x104) +#define REG_FE_GDM_TX_OK_BYTE_CNT_L(_n) (GDM_BASE(_n) + 0x10c) +#define REG_FE_GDM_TX_ETH_PKT_CNT_L(_n) (GDM_BASE(_n) + 0x110) +#define REG_FE_GDM_TX_ETH_BYTE_CNT_L(_n) (GDM_BASE(_n) + 0x114) +#define REG_FE_GDM_TX_ETH_DROP_CNT(_n) (GDM_BASE(_n) + 0x118) +#define REG_FE_GDM_TX_ETH_BC_CNT(_n) (GDM_BASE(_n) + 0x11c) +#define REG_FE_GDM_TX_ETH_MC_CNT(_n) (GDM_BASE(_n) + 0x120) +#define REG_FE_GDM_TX_ETH_RUNT_CNT(_n) (GDM_BASE(_n) + 0x124) +#define REG_FE_GDM_TX_ETH_LONG_CNT(_n) (GDM_BASE(_n) + 0x128) +#define REG_FE_GDM_TX_ETH_E64_CNT_L(_n) (GDM_BASE(_n) + 0x12c) +#define REG_FE_GDM_TX_ETH_L64_CNT_L(_n) (GDM_BASE(_n) + 0x130) +#define REG_FE_GDM_TX_ETH_L127_CNT_L(_n) (GDM_BASE(_n) + 0x134) +#define REG_FE_GDM_TX_ETH_L255_CNT_L(_n) (GDM_BASE(_n) + 0x138) +#define REG_FE_GDM_TX_ETH_L511_CNT_L(_n) (GDM_BASE(_n) + 0x13c) +#define REG_FE_GDM_TX_ETH_L1023_CNT_L(_n) (GDM_BASE(_n) + 0x140) + +#define REG_FE_GDM_RX_OK_PKT_CNT_L(_n) (GDM_BASE(_n) + 0x148) +#define REG_FE_GDM_RX_FC_DROP_CNT(_n) (GDM_BASE(_n) + 0x14c) +#define REG_FE_GDM_RX_RC_DROP_CNT(_n) (GDM_BASE(_n) + 0x150) +#define REG_FE_GDM_RX_OVERFLOW_DROP_CNT(_n) (GDM_BASE(_n) + 0x154) +#define REG_FE_GDM_RX_ERROR_DROP_CNT(_n) (GDM_BASE(_n) + 0x158) +#define REG_FE_GDM_RX_OK_BYTE_CNT_L(_n) (GDM_BASE(_n) + 0x15c) +#define REG_FE_GDM_RX_ETH_PKT_CNT_L(_n) (GDM_BASE(_n) + 0x160) +#define REG_FE_GDM_RX_ETH_BYTE_CNT_L(_n) (GDM_BASE(_n) + 0x164) +#define REG_FE_GDM_RX_ETH_DROP_CNT(_n) (GDM_BASE(_n) + 0x168) +#define REG_FE_GDM_RX_ETH_BC_CNT(_n) (GDM_BASE(_n) + 0x16c) +#define REG_FE_GDM_RX_ETH_MC_CNT(_n) (GDM_BASE(_n) + 0x170) +#define REG_FE_GDM_RX_ETH_CRC_ERR_CNT(_n) (GDM_BASE(_n) + 0x174) +#define REG_FE_GDM_RX_ETH_FRAG_CNT(_n) (GDM_BASE(_n) + 0x178) +#define REG_FE_GDM_RX_ETH_JABBER_CNT(_n) (GDM_BASE(_n) + 0x17c) +#define REG_FE_GDM_RX_ETH_RUNT_CNT(_n) (GDM_BASE(_n) + 0x180) +#define REG_FE_GDM_RX_ETH_LONG_CNT(_n) (GDM_BASE(_n) + 0x184) +#define REG_FE_GDM_RX_ETH_E64_CNT_L(_n) (GDM_BASE(_n) + 0x188) +#define REG_FE_GDM_RX_ETH_L64_CNT_L(_n) (GDM_BASE(_n) + 0x18c) +#define REG_FE_GDM_RX_ETH_L127_CNT_L(_n) (GDM_BASE(_n) + 0x190) +#define REG_FE_GDM_RX_ETH_L255_CNT_L(_n) (GDM_BASE(_n) + 0x194) +#define REG_FE_GDM_RX_ETH_L511_CNT_L(_n) (GDM_BASE(_n) + 0x198) +#define REG_FE_GDM_RX_ETH_L1023_CNT_L(_n) (GDM_BASE(_n) + 0x19c) + +#define REG_PPE1_TB_HASH_CFG (PPE1_BASE + 0x250) +#define PPE1_SRAM_TABLE_EN_MASK BIT(0) +#define PPE1_SRAM_HASH1_EN_MASK BIT(8) +#define PPE1_DRAM_TABLE_EN_MASK BIT(16) +#define PPE1_DRAM_HASH1_EN_MASK BIT(24) + +#define REG_FE_GDM_TX_OK_PKT_CNT_H(_n) (GDM_BASE(_n) + 0x280) +#define REG_FE_GDM_TX_OK_BYTE_CNT_H(_n) (GDM_BASE(_n) + 0x284) +#define REG_FE_GDM_TX_ETH_PKT_CNT_H(_n) (GDM_BASE(_n) + 0x288) +#define REG_FE_GDM_TX_ETH_BYTE_CNT_H(_n) (GDM_BASE(_n) + 0x28c) + +#define REG_FE_GDM_RX_OK_PKT_CNT_H(_n) (GDM_BASE(_n) + 0x290) +#define REG_FE_GDM_RX_OK_BYTE_CNT_H(_n) (GDM_BASE(_n) + 0x294) +#define REG_FE_GDM_RX_ETH_PKT_CNT_H(_n) (GDM_BASE(_n) + 0x298) +#define REG_FE_GDM_RX_ETH_BYTE_CNT_H(_n) (GDM_BASE(_n) + 
0x29c) +#define REG_FE_GDM_TX_ETH_E64_CNT_H(_n) (GDM_BASE(_n) + 0x2b8) +#define REG_FE_GDM_TX_ETH_L64_CNT_H(_n) (GDM_BASE(_n) + 0x2bc) +#define REG_FE_GDM_TX_ETH_L127_CNT_H(_n) (GDM_BASE(_n) + 0x2c0) +#define REG_FE_GDM_TX_ETH_L255_CNT_H(_n) (GDM_BASE(_n) + 0x2c4) +#define REG_FE_GDM_TX_ETH_L511_CNT_H(_n) (GDM_BASE(_n) + 0x2c8) +#define REG_FE_GDM_TX_ETH_L1023_CNT_H(_n) (GDM_BASE(_n) + 0x2cc) +#define REG_FE_GDM_RX_ETH_E64_CNT_H(_n) (GDM_BASE(_n) + 0x2e8) +#define REG_FE_GDM_RX_ETH_L64_CNT_H(_n) (GDM_BASE(_n) + 0x2ec) +#define REG_FE_GDM_RX_ETH_L127_CNT_H(_n) (GDM_BASE(_n) + 0x2f0) +#define REG_FE_GDM_RX_ETH_L255_CNT_H(_n) (GDM_BASE(_n) + 0x2f4) +#define REG_FE_GDM_RX_ETH_L511_CNT_H(_n) (GDM_BASE(_n) + 0x2f8) +#define REG_FE_GDM_RX_ETH_L1023_CNT_H(_n) (GDM_BASE(_n) + 0x2fc) + +#define REG_GDM2_CHN_RLS (GDM2_BASE + 0x20) +#define MBI_RX_AGE_SEL_MASK GENMASK(26, 25) +#define MBI_TX_AGE_SEL_MASK GENMASK(18, 17) + +#define REG_GDM3_FWD_CFG GDM3_BASE +#define GDM3_PAD_EN_MASK BIT(28) + +#define REG_GDM4_FWD_CFG GDM4_BASE +#define GDM4_PAD_EN_MASK BIT(28) +#define GDM4_SPORT_OFFSET0_MASK GENMASK(11, 8) + +#define REG_GDM4_SRC_PORT_SET (GDM4_BASE + 0x23c) +#define GDM4_SPORT_OFF2_MASK GENMASK(19, 16) +#define GDM4_SPORT_OFF1_MASK GENMASK(15, 12) +#define GDM4_SPORT_OFF0_MASK GENMASK(11, 8) + +#define REG_IP_FRAG_FP 0x2010 +#define IP_ASSEMBLE_PORT_MASK GENMASK(24, 21) +#define IP_ASSEMBLE_NBQ_MASK GENMASK(20, 16) +#define IP_FRAGMENT_PORT_MASK GENMASK(8, 5) +#define IP_FRAGMENT_NBQ_MASK GENMASK(4, 0) + +#define REG_MC_VLAN_EN 0x2100 +#define MC_VLAN_EN_MASK BIT(0) + +#define REG_MC_VLAN_CFG 0x2104 +#define MC_VLAN_CFG_CMD_DONE_MASK BIT(31) +#define MC_VLAN_CFG_TABLE_ID_MASK GENMASK(21, 16) +#define MC_VLAN_CFG_PORT_ID_MASK GENMASK(11, 8) +#define MC_VLAN_CFG_TABLE_SEL_MASK BIT(4) +#define MC_VLAN_CFG_RW_MASK BIT(0) + +#define REG_MC_VLAN_DATA 0x2108 + +#define REG_CDM5_RX_OQ1_DROP_CNT 0x29d4 + +/* QDMA */ +#define REG_QDMA_GLOBAL_CFG 0x0004 +#define GLOBAL_CFG_RX_2B_OFFSET_MASK BIT(31) +#define GLOBAL_CFG_DMA_PREFERENCE_MASK GENMASK(30, 29) +#define GLOBAL_CFG_CPU_TXR_RR_MASK BIT(28) +#define GLOBAL_CFG_DSCP_BYTE_SWAP_MASK BIT(27) +#define GLOBAL_CFG_PAYLOAD_BYTE_SWAP_MASK BIT(26) +#define GLOBAL_CFG_MULTICAST_MODIFY_FP_MASK BIT(25) +#define GLOBAL_CFG_OAM_MODIFY_MASK BIT(24) +#define GLOBAL_CFG_RESET_MASK BIT(23) +#define GLOBAL_CFG_RESET_DONE_MASK BIT(22) +#define GLOBAL_CFG_MULTICAST_EN_MASK BIT(21) +#define GLOBAL_CFG_IRQ1_EN_MASK BIT(20) +#define GLOBAL_CFG_IRQ0_EN_MASK BIT(19) +#define GLOBAL_CFG_LOOPCNT_EN_MASK BIT(18) +#define GLOBAL_CFG_RD_BYPASS_WR_MASK BIT(17) +#define GLOBAL_CFG_QDMA_LOOPBACK_MASK BIT(16) +#define GLOBAL_CFG_LPBK_RXQ_SEL_MASK GENMASK(13, 8) +#define GLOBAL_CFG_CHECK_DONE_MASK BIT(7) +#define GLOBAL_CFG_TX_WB_DONE_MASK BIT(6) +#define GLOBAL_CFG_MAX_ISSUE_NUM_MASK GENMASK(5, 4) +#define GLOBAL_CFG_RX_DMA_BUSY_MASK BIT(3) +#define GLOBAL_CFG_RX_DMA_EN_MASK BIT(2) +#define GLOBAL_CFG_TX_DMA_BUSY_MASK BIT(1) +#define GLOBAL_CFG_TX_DMA_EN_MASK BIT(0) + +#define REG_FWD_DSCP_BASE 0x0010 +#define REG_FWD_BUF_BASE 0x0014 + +#define REG_HW_FWD_DSCP_CFG 0x0018 +#define HW_FWD_DSCP_PAYLOAD_SIZE_MASK GENMASK(29, 28) +#define HW_FWD_DSCP_SCATTER_LEN_MASK GENMASK(17, 16) +#define HW_FWD_DSCP_MIN_SCATTER_LEN_MASK GENMASK(15, 0) + +#define REG_INT_STATUS(_n) \ + (((_n) == 4) ? 0x0730 : \ + ((_n) == 3) ? 0x0724 : \ + ((_n) == 2) ? 0x0720 : \ + ((_n) == 1) ? 0x0024 : 0x0020) + +#define REG_INT_ENABLE(_n) \ + (((_n) == 4) ? 0x0750 : \ + ((_n) == 3) ? 0x0744 : \ + ((_n) == 2) ? 
0x0740 : \ + ((_n) == 1) ? 0x002c : 0x0028) + +/* QDMA_CSR_INT_ENABLE1 */ +#define RX15_COHERENT_INT_MASK BIT(31) +#define RX14_COHERENT_INT_MASK BIT(30) +#define RX13_COHERENT_INT_MASK BIT(29) +#define RX12_COHERENT_INT_MASK BIT(28) +#define RX11_COHERENT_INT_MASK BIT(27) +#define RX10_COHERENT_INT_MASK BIT(26) +#define RX9_COHERENT_INT_MASK BIT(25) +#define RX8_COHERENT_INT_MASK BIT(24) +#define RX7_COHERENT_INT_MASK BIT(23) +#define RX6_COHERENT_INT_MASK BIT(22) +#define RX5_COHERENT_INT_MASK BIT(21) +#define RX4_COHERENT_INT_MASK BIT(20) +#define RX3_COHERENT_INT_MASK BIT(19) +#define RX2_COHERENT_INT_MASK BIT(18) +#define RX1_COHERENT_INT_MASK BIT(17) +#define RX0_COHERENT_INT_MASK BIT(16) +#define TX7_COHERENT_INT_MASK BIT(15) +#define TX6_COHERENT_INT_MASK BIT(14) +#define TX5_COHERENT_INT_MASK BIT(13) +#define TX4_COHERENT_INT_MASK BIT(12) +#define TX3_COHERENT_INT_MASK BIT(11) +#define TX2_COHERENT_INT_MASK BIT(10) +#define TX1_COHERENT_INT_MASK BIT(9) +#define TX0_COHERENT_INT_MASK BIT(8) +#define CNT_OVER_FLOW_INT_MASK BIT(7) +#define IRQ1_FULL_INT_MASK BIT(5) +#define IRQ1_INT_MASK BIT(4) +#define HWFWD_DSCP_LOW_INT_MASK BIT(3) +#define HWFWD_DSCP_EMPTY_INT_MASK BIT(2) +#define IRQ0_FULL_INT_MASK BIT(1) +#define IRQ0_INT_MASK BIT(0) + +#define TX_DONE_INT_MASK(_n) \ + ((_n) ? IRQ1_INT_MASK | IRQ1_FULL_INT_MASK \ + : IRQ0_INT_MASK | IRQ0_FULL_INT_MASK) + +#define INT_TX_MASK \ + (IRQ1_INT_MASK | IRQ1_FULL_INT_MASK | \ + IRQ0_INT_MASK | IRQ0_FULL_INT_MASK) + +#define INT_IDX0_MASK \ + (TX0_COHERENT_INT_MASK | TX1_COHERENT_INT_MASK | \ + TX2_COHERENT_INT_MASK | TX3_COHERENT_INT_MASK | \ + TX4_COHERENT_INT_MASK | TX5_COHERENT_INT_MASK | \ + TX6_COHERENT_INT_MASK | TX7_COHERENT_INT_MASK | \ + RX0_COHERENT_INT_MASK | RX1_COHERENT_INT_MASK | \ + RX2_COHERENT_INT_MASK | RX3_COHERENT_INT_MASK | \ + RX4_COHERENT_INT_MASK | RX7_COHERENT_INT_MASK | \ + RX8_COHERENT_INT_MASK | RX9_COHERENT_INT_MASK | \ + RX15_COHERENT_INT_MASK | INT_TX_MASK) + +/* QDMA_CSR_INT_ENABLE2 */ +#define RX15_NO_CPU_DSCP_INT_MASK BIT(31) +#define RX14_NO_CPU_DSCP_INT_MASK BIT(30) +#define RX13_NO_CPU_DSCP_INT_MASK BIT(29) +#define RX12_NO_CPU_DSCP_INT_MASK BIT(28) +#define RX11_NO_CPU_DSCP_INT_MASK BIT(27) +#define RX10_NO_CPU_DSCP_INT_MASK BIT(26) +#define RX9_NO_CPU_DSCP_INT_MASK BIT(25) +#define RX8_NO_CPU_DSCP_INT_MASK BIT(24) +#define RX7_NO_CPU_DSCP_INT_MASK BIT(23) +#define RX6_NO_CPU_DSCP_INT_MASK BIT(22) +#define RX5_NO_CPU_DSCP_INT_MASK BIT(21) +#define RX4_NO_CPU_DSCP_INT_MASK BIT(20) +#define RX3_NO_CPU_DSCP_INT_MASK BIT(19) +#define RX2_NO_CPU_DSCP_INT_MASK BIT(18) +#define RX1_NO_CPU_DSCP_INT_MASK BIT(17) +#define RX0_NO_CPU_DSCP_INT_MASK BIT(16) +#define RX15_DONE_INT_MASK BIT(15) +#define RX14_DONE_INT_MASK BIT(14) +#define RX13_DONE_INT_MASK BIT(13) +#define RX12_DONE_INT_MASK BIT(12) +#define RX11_DONE_INT_MASK BIT(11) +#define RX10_DONE_INT_MASK BIT(10) +#define RX9_DONE_INT_MASK BIT(9) +#define RX8_DONE_INT_MASK BIT(8) +#define RX7_DONE_INT_MASK BIT(7) +#define RX6_DONE_INT_MASK BIT(6) +#define RX5_DONE_INT_MASK BIT(5) +#define RX4_DONE_INT_MASK BIT(4) +#define RX3_DONE_INT_MASK BIT(3) +#define RX2_DONE_INT_MASK BIT(2) +#define RX1_DONE_INT_MASK BIT(1) +#define RX0_DONE_INT_MASK BIT(0) + +#define RX_DONE_INT_MASK \ + (RX0_DONE_INT_MASK | RX1_DONE_INT_MASK | \ + RX2_DONE_INT_MASK | RX3_DONE_INT_MASK | \ + RX4_DONE_INT_MASK | RX7_DONE_INT_MASK | \ + RX8_DONE_INT_MASK | RX9_DONE_INT_MASK | \ + RX15_DONE_INT_MASK) +#define INT_IDX1_MASK \ + (RX_DONE_INT_MASK | \ + RX0_NO_CPU_DSCP_INT_MASK | 
RX1_NO_CPU_DSCP_INT_MASK | \ + RX2_NO_CPU_DSCP_INT_MASK | RX3_NO_CPU_DSCP_INT_MASK | \ + RX4_NO_CPU_DSCP_INT_MASK | RX7_NO_CPU_DSCP_INT_MASK | \ + RX8_NO_CPU_DSCP_INT_MASK | RX9_NO_CPU_DSCP_INT_MASK | \ + RX15_NO_CPU_DSCP_INT_MASK) + +/* QDMA_CSR_INT_ENABLE5 */ +#define TX31_COHERENT_INT_MASK BIT(31) +#define TX30_COHERENT_INT_MASK BIT(30) +#define TX29_COHERENT_INT_MASK BIT(29) +#define TX28_COHERENT_INT_MASK BIT(28) +#define TX27_COHERENT_INT_MASK BIT(27) +#define TX26_COHERENT_INT_MASK BIT(26) +#define TX25_COHERENT_INT_MASK BIT(25) +#define TX24_COHERENT_INT_MASK BIT(24) +#define TX23_COHERENT_INT_MASK BIT(23) +#define TX22_COHERENT_INT_MASK BIT(22) +#define TX21_COHERENT_INT_MASK BIT(21) +#define TX20_COHERENT_INT_MASK BIT(20) +#define TX19_COHERENT_INT_MASK BIT(19) +#define TX18_COHERENT_INT_MASK BIT(18) +#define TX17_COHERENT_INT_MASK BIT(17) +#define TX16_COHERENT_INT_MASK BIT(16) +#define TX15_COHERENT_INT_MASK BIT(15) +#define TX14_COHERENT_INT_MASK BIT(14) +#define TX13_COHERENT_INT_MASK BIT(13) +#define TX12_COHERENT_INT_MASK BIT(12) +#define TX11_COHERENT_INT_MASK BIT(11) +#define TX10_COHERENT_INT_MASK BIT(10) +#define TX9_COHERENT_INT_MASK BIT(9) +#define TX8_COHERENT_INT_MASK BIT(8) + +#define INT_IDX4_MASK \ + (TX8_COHERENT_INT_MASK | TX9_COHERENT_INT_MASK | \ + TX10_COHERENT_INT_MASK | TX11_COHERENT_INT_MASK | \ + TX12_COHERENT_INT_MASK | TX13_COHERENT_INT_MASK | \ + TX14_COHERENT_INT_MASK | TX15_COHERENT_INT_MASK | \ + TX16_COHERENT_INT_MASK | TX17_COHERENT_INT_MASK | \ + TX18_COHERENT_INT_MASK | TX19_COHERENT_INT_MASK | \ + TX20_COHERENT_INT_MASK | TX21_COHERENT_INT_MASK | \ + TX22_COHERENT_INT_MASK | TX23_COHERENT_INT_MASK | \ + TX24_COHERENT_INT_MASK | TX25_COHERENT_INT_MASK | \ + TX26_COHERENT_INT_MASK | TX27_COHERENT_INT_MASK | \ + TX28_COHERENT_INT_MASK | TX29_COHERENT_INT_MASK | \ + TX30_COHERENT_INT_MASK | TX31_COHERENT_INT_MASK) + +#define REG_TX_IRQ_BASE(_n) ((_n) ? 0x0048 : 0x0050) + +#define REG_TX_IRQ_CFG(_n) ((_n) ? 0x004c : 0x0054) +#define TX_IRQ_THR_MASK GENMASK(27, 16) +#define TX_IRQ_DEPTH_MASK GENMASK(11, 0) + +#define REG_IRQ_CLEAR_LEN(_n) ((_n) ? 0x0064 : 0x0058) +#define IRQ_CLEAR_LEN_MASK GENMASK(7, 0) + +#define REG_IRQ_STATUS(_n) ((_n) ? 0x0068 : 0x005c) +#define IRQ_ENTRY_LEN_MASK GENMASK(27, 16) +#define IRQ_HEAD_IDX_MASK GENMASK(11, 0) + +#define REG_TX_RING_BASE(_n) \ + (((_n) < 8) ? 0x0100 + ((_n) << 5) : 0x0b00 + (((_n) - 8) << 5)) + +#define REG_TX_RING_BLOCKING(_n) \ + (((_n) < 8) ? 0x0104 + ((_n) << 5) : 0x0b04 + (((_n) - 8) << 5)) + +#define TX_RING_IRQ_BLOCKING_MAP_MASK BIT(6) +#define TX_RING_IRQ_BLOCKING_CFG_MASK BIT(4) +#define TX_RING_IRQ_BLOCKING_TX_DROP_EN_MASK BIT(2) +#define TX_RING_IRQ_BLOCKING_MAX_TH_TXRING_EN_MASK BIT(1) +#define TX_RING_IRQ_BLOCKING_MIN_TH_TXRING_EN_MASK BIT(0) + +#define REG_TX_CPU_IDX(_n) \ + (((_n) < 8) ? 0x0108 + ((_n) << 5) : 0x0b08 + (((_n) - 8) << 5)) + +#define TX_RING_CPU_IDX_MASK GENMASK(15, 0) + +#define REG_TX_DMA_IDX(_n) \ + (((_n) < 8) ? 0x010c + ((_n) << 5) : 0x0b0c + (((_n) - 8) << 5)) + +#define TX_RING_DMA_IDX_MASK GENMASK(15, 0) + +#define IRQ_RING_IDX_MASK GENMASK(20, 16) +#define IRQ_DESC_IDX_MASK GENMASK(15, 0) + +#define REG_RX_RING_BASE(_n) \ + (((_n) < 16) ? 0x0200 + ((_n) << 5) : 0x0e00 + (((_n) - 16) << 5)) + +#define REG_RX_RING_SIZE(_n) \ + (((_n) < 16) ? 0x0204 + ((_n) << 5) : 0x0e04 + (((_n) - 16) << 5)) + +#define RX_RING_THR_MASK GENMASK(31, 16) +#define RX_RING_SIZE_MASK GENMASK(15, 0) + +#define REG_RX_CPU_IDX(_n) \ + (((_n) < 16) ? 
0x0208 + ((_n) << 5) : 0x0e08 + (((_n) - 16) << 5)) + +#define RX_RING_CPU_IDX_MASK GENMASK(15, 0) + +#define REG_RX_DMA_IDX(_n) \ + (((_n) < 16) ? 0x020c + ((_n) << 5) : 0x0e0c + (((_n) - 16) << 5)) + +#define REG_RX_DELAY_INT_IDX(_n) \ + (((_n) < 16) ? 0x0210 + ((_n) << 5) : 0x0e10 + (((_n) - 16) << 5)) + +#define RX_DELAY_INT_MASK GENMASK(15, 0) + +#define RX_RING_DMA_IDX_MASK GENMASK(15, 0) + +#define REG_INGRESS_TRTCM_CFG 0x0070 +#define INGRESS_TRTCM_EN_MASK BIT(31) +#define INGRESS_TRTCM_MODE_MASK BIT(30) +#define INGRESS_SLOW_TICK_RATIO_MASK GENMASK(29, 16) +#define INGRESS_FAST_TICK_MASK GENMASK(15, 0) + +#define REG_QUEUE_CLOSE_CFG(_n) (0x00a0 + ((_n) & 0xfc)) +#define TXQ_DISABLE_CHAN_QUEUE_MASK(_n, _m) BIT((_m) + (((_n) & 0x3) << 3)) + +#define REG_TXQ_DIS_CFG_BASE(_n) ((_n) ? 0x20a0 : 0x00a0) +#define REG_TXQ_DIS_CFG(_n, _m) (REG_TXQ_DIS_CFG_BASE((_n)) + (_m) << 2) + +#define REG_CNTR_CFG(_n) (0x0400 + ((_n) << 3)) +#define CNTR_EN_MASK BIT(31) +#define CNTR_ALL_CHAN_EN_MASK BIT(30) +#define CNTR_ALL_QUEUE_EN_MASK BIT(29) +#define CNTR_ALL_DSCP_RING_EN_MASK BIT(28) +#define CNTR_SRC_MASK GENMASK(27, 24) +#define CNTR_DSCP_RING_MASK GENMASK(20, 16) +#define CNTR_CHAN_MASK GENMASK(7, 3) +#define CNTR_QUEUE_MASK GENMASK(2, 0) + +#define REG_CNTR_VAL(_n) (0x0404 + ((_n) << 3)) + +#define REG_LMGR_INIT_CFG 0x1000 +#define LMGR_INIT_START BIT(31) +#define LMGR_SRAM_MODE_MASK BIT(30) +#define HW_FWD_PKTSIZE_OVERHEAD_MASK GENMASK(27, 20) +#define HW_FWD_DESC_NUM_MASK GENMASK(16, 0) + +#define REG_FWD_DSCP_LOW_THR 0x1004 +#define FWD_DSCP_LOW_THR_MASK GENMASK(17, 0) + +#define REG_EGRESS_RATE_METER_CFG 0x100c +#define EGRESS_RATE_METER_EN_MASK BIT(31) +#define EGRESS_RATE_METER_EQ_RATE_EN_MASK BIT(17) +#define EGRESS_RATE_METER_WINDOW_SZ_MASK GENMASK(16, 12) +#define EGRESS_RATE_METER_TIMESLICE_MASK GENMASK(10, 0) + +#define REG_EGRESS_TRTCM_CFG 0x1010 +#define EGRESS_TRTCM_EN_MASK BIT(31) +#define EGRESS_TRTCM_MODE_MASK BIT(30) +#define EGRESS_SLOW_TICK_RATIO_MASK GENMASK(29, 16) +#define EGRESS_FAST_TICK_MASK GENMASK(15, 0) + +#define TRTCM_PARAM_RW_MASK BIT(31) +#define TRTCM_PARAM_RW_DONE_MASK BIT(30) +#define TRTCM_PARAM_TYPE_MASK GENMASK(29, 28) +#define TRTCM_METER_GROUP_MASK GENMASK(27, 26) +#define TRTCM_PARAM_INDEX_MASK GENMASK(23, 17) +#define TRTCM_PARAM_RATE_TYPE_MASK BIT(16) + +#define REG_TRTCM_CFG_PARAM(_n) ((_n) + 0x4) +#define REG_TRTCM_DATA_LOW(_n) ((_n) + 0x8) +#define REG_TRTCM_DATA_HIGH(_n) ((_n) + 0xc) + +#define REG_TXWRR_MODE_CFG 0x1020 +#define TWRR_WEIGHT_SCALE_MASK BIT(31) +#define TWRR_WEIGHT_BASE_MASK BIT(3) + +#define REG_TXWRR_WEIGHT_CFG 0x1024 +#define TWRR_RW_CMD_MASK BIT(31) +#define TWRR_RW_CMD_DONE BIT(30) +#define TWRR_CHAN_IDX_MASK GENMASK(23, 19) +#define TWRR_QUEUE_IDX_MASK GENMASK(18, 16) +#define TWRR_VALUE_MASK GENMASK(15, 0) + +#define REG_PSE_BUF_USAGE_CFG 0x1028 +#define PSE_BUF_ESTIMATE_EN_MASK BIT(29) + +#define REG_CHAN_QOS_MODE(_n) (0x1040 + ((_n) << 2)) +#define CHAN_QOS_MODE_MASK(_n) GENMASK(2 + ((_n) << 2), (_n) << 2) + +#define REG_GLB_TRTCM_CFG 0x1080 +#define GLB_TRTCM_EN_MASK BIT(31) +#define GLB_TRTCM_MODE_MASK BIT(30) +#define GLB_SLOW_TICK_RATIO_MASK GENMASK(29, 16) +#define GLB_FAST_TICK_MASK GENMASK(15, 0) + +#define REG_TXQ_CNGST_CFG 0x10a0 +#define TXQ_CNGST_DROP_EN BIT(31) +#define TXQ_CNGST_DEI_DROP_EN BIT(30) + +#define REG_SLA_TRTCM_CFG 0x1150 +#define SLA_TRTCM_EN_MASK BIT(31) +#define SLA_TRTCM_MODE_MASK BIT(30) +#define SLA_SLOW_TICK_RATIO_MASK GENMASK(29, 16) +#define SLA_FAST_TICK_MASK GENMASK(15, 0) + +/* CTRL 
*/ +#define QDMA_DESC_DONE_MASK BIT(31) +#define QDMA_DESC_DROP_MASK BIT(30) /* tx: drop - rx: overflow */ +#define QDMA_DESC_MORE_MASK BIT(29) /* more SG elements */ +#define QDMA_DESC_DEI_MASK BIT(25) +#define QDMA_DESC_NO_DROP_MASK BIT(24) +#define QDMA_DESC_LEN_MASK GENMASK(15, 0) +/* DATA */ +#define QDMA_DESC_NEXT_ID_MASK GENMASK(15, 0) +/* TX MSG0 */ +#define QDMA_ETH_TXMSG_MIC_IDX_MASK BIT(30) +#define QDMA_ETH_TXMSG_SP_TAG_MASK GENMASK(29, 14) +#define QDMA_ETH_TXMSG_ICO_MASK BIT(13) +#define QDMA_ETH_TXMSG_UCO_MASK BIT(12) +#define QDMA_ETH_TXMSG_TCO_MASK BIT(11) +#define QDMA_ETH_TXMSG_TSO_MASK BIT(10) +#define QDMA_ETH_TXMSG_FAST_MASK BIT(9) +#define QDMA_ETH_TXMSG_OAM_MASK BIT(8) +#define QDMA_ETH_TXMSG_CHAN_MASK GENMASK(7, 3) +#define QDMA_ETH_TXMSG_QUEUE_MASK GENMASK(2, 0) +/* TX MSG1 */ +#define QDMA_ETH_TXMSG_NO_DROP BIT(31) +#define QDMA_ETH_TXMSG_METER_MASK GENMASK(30, 24) /* 0x7f no meters */ +#define QDMA_ETH_TXMSG_FPORT_MASK GENMASK(23, 20) +#define QDMA_ETH_TXMSG_NBOQ_MASK GENMASK(19, 15) +#define QDMA_ETH_TXMSG_HWF_MASK BIT(14) +#define QDMA_ETH_TXMSG_HOP_MASK BIT(13) +#define QDMA_ETH_TXMSG_PTP_MASK BIT(12) +#define QDMA_ETH_TXMSG_ACNT_G1_MASK GENMASK(10, 6) /* 0x1f do not count */ +#define QDMA_ETH_TXMSG_ACNT_G0_MASK GENMASK(5, 0) /* 0x3f do not count */ + +/* RX MSG1 */ +#define QDMA_ETH_RXMSG_DEI_MASK BIT(31) +#define QDMA_ETH_RXMSG_IP6_MASK BIT(30) +#define QDMA_ETH_RXMSG_IP4_MASK BIT(29) +#define QDMA_ETH_RXMSG_IP4F_MASK BIT(28) +#define QDMA_ETH_RXMSG_L4_VALID_MASK BIT(27) +#define QDMA_ETH_RXMSG_L4F_MASK BIT(26) +#define QDMA_ETH_RXMSG_SPORT_MASK GENMASK(25, 21) +#define QDMA_ETH_RXMSG_CRSN_MASK GENMASK(20, 16) +#define QDMA_ETH_RXMSG_PPE_ENTRY_MASK GENMASK(15, 0) + +struct airoha_qdma_desc { + __le32 rsv; + __le32 ctrl; + __le32 addr; + __le32 data; + __le32 msg0; + __le32 msg1; + __le32 msg2; + __le32 msg3; +}; + +/* CTRL0 */ +#define QDMA_FWD_DESC_CTX_MASK BIT(31) +#define QDMA_FWD_DESC_RING_MASK GENMASK(30, 28) +#define QDMA_FWD_DESC_IDX_MASK GENMASK(27, 16) +#define QDMA_FWD_DESC_LEN_MASK GENMASK(15, 0) +/* CTRL1 */ +#define QDMA_FWD_DESC_FIRST_IDX_MASK GENMASK(15, 0) +/* CTRL2 */ +#define QDMA_FWD_DESC_MORE_PKT_NUM_MASK GENMASK(2, 0) + +struct airoha_qdma_fwd_desc { + __le32 addr; + __le32 ctrl0; + __le32 ctrl1; + __le32 ctrl2; + __le32 msg0; + __le32 msg1; + __le32 rsv0; + __le32 rsv1; +}; + +#endif /* AIROHA_REGS_H */ From patchwork Wed Feb 5 18:21:24 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13961668 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 34CC11FCFDA; Wed, 5 Feb 2025 18:22:17 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738779737; cv=none; b=uM3Sw+Q7UOn1SDGAQvOsOOV6kl9vgGrsumJ0SZbjCVEvJL3LE0c2mCkN5YnaQuDxztSCOjOGGcq3jPhj6D823HDhSzKYM/X/c7M9tD6z3Vs51eq/a4RFL5p04RkCz8R5ByAcUh0sjFVicwJbPXRLXGT2fEjpdjkcBOnLEeyTPGI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738779737; c=relaxed/simple; bh=s5r+AI5CoWy7fB3orb1C3bppD7WLV8ax2lMNTdU5FZk=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; 
b=nnKSY9rzi7D50vAd9F056DzjQ4MqSSHUxhGRs1CEi1KEd0ARFbVGcbicqxO8lS44JFiGGyjRtGOdMsGq/hSt02WH11FItjRGtdemZeK9zbNpgNS0CfCIA+VEQguO1lw78cBdv7XXslsrgCv2ytHRuaThZzfO1WCPgANx/M3azws= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=JsVIwOet; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="JsVIwOet" Received: by smtp.kernel.org (Postfix) with ESMTPSA id B33DFC4CEEA; Wed, 5 Feb 2025 18:22:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1738779737; bh=s5r+AI5CoWy7fB3orb1C3bppD7WLV8ax2lMNTdU5FZk=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=JsVIwOet9cIVFOpiR4GfmzxB1X7Tzz9TgJrWCcQwI445ymBhSY7IluCQohy0YiQWA MYXpR6hXo6sW26FcAGzQ9nzbWmgUHGPJHn4AJiBB2I78r5g+iZv1Zj43MMB3VHlzxA yLvtHyT8odOr1YnxRMT0cg9kI1+DK1wcP9q+7ka/OuV7tuIImidV0d+o5GFBfM0LiS Y15wqPQTNaMXGp2egLBTFs2bLNxLcMFFd7ga9XTx+k+B3wZL+j/lckHVyYfNouZw5I W1EzpFPUkSyiVVz8C5J+e9QVIv7wNlb3u7vtK2I9Y9XmKwURG5gBgr4Kn6M6CNM1Dt ESLPYvkFq+M6A== From: Lorenzo Bianconi Date: Wed, 05 Feb 2025 19:21:24 +0100 Subject: [PATCH net-next 05/13] net: airoha: Move DSA tag in DMA descriptor Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20250205-airoha-en7581-flowtable-offload-v1-5-d362cfa97b01@kernel.org> References: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org> In-Reply-To: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com X-Mailer: b4 0.14.2 X-Patchwork-Delegate: kuba@kernel.org Packet Processor Engine (PPE) module reads DSA tags from the DMA descriptor and requires untagged DSA packets to properly parse them. Move DSA tag in the DMA descriptor on TX side and read DSA tag from DMA descriptor on RX side. In order to avoid skb reallocation, store tag in skb_dst on RX side. This is a preliminary patch to enable netfilter flowtable hw offloading on EN7581 SoC. 
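As a minimal, standalone sketch of the descriptor layout this patch relies on (helper names here are illustrative only and not part of the patch; the two masks are the ones introduced by this series in airoha_regs.h), the DSA source-port tag is packed into the TX descriptor word and recovered from the RX descriptor word with the <linux/bitfield.h> helpers:

#include <linux/bitfield.h>
#include <linux/kernel.h>

/* masks taken from this series (TX MSG0 / RX MSG0 descriptor words) */
#define QDMA_ETH_TXMSG_SP_TAG_MASK	GENMASK(29, 14)
#define QDMA_ETH_RXMSG_SPTAG		GENMASK(21, 14)

/* TX: fold the 16-bit MTK DSA tag into the msg0 descriptor word */
static inline u32 sptag_tx_msg0(u32 msg0, u16 dsa_tag)
{
	return msg0 | FIELD_PREP(QDMA_ETH_TXMSG_SP_TAG_MASK, dsa_tag);
}

/* RX: recover the switch source-port index reported via msg0 */
static inline u32 sptag_rx_port(__le32 msg0)
{
	return FIELD_GET(QDMA_ETH_RXMSG_SPTAG, le32_to_cpu(msg0));
}

On the RX side the recovered port index is only used to select one of the pre-allocated metadata_dst entries (METADATA_HW_PORT_MUX) attached via skb_dst_set_noref(), which is what avoids the skb reallocation mentioned above.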
Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/airoha/airoha_eth.c | 112 ++++++++++++++++++++++++++++-- drivers/net/ethernet/airoha/airoha_eth.h | 7 ++ drivers/net/ethernet/airoha/airoha_regs.h | 2 + 3 files changed, 116 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/airoha/airoha_eth.c b/drivers/net/ethernet/airoha/airoha_eth.c index a99d98be86a00f7bd000e2b8218afcde53b1946f..ab95f3da21d90bd42282a9f3f5474ce42a69c9d4 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.c +++ b/drivers/net/ethernet/airoha/airoha_eth.c @@ -9,6 +9,7 @@ #include #include #include +#include #include #include #include @@ -656,6 +657,7 @@ static int airoha_qdma_rx_process(struct airoha_queue *q, int budget) struct airoha_qdma_desc *desc = &q->desc[q->tail]; dma_addr_t dma_addr = le32_to_cpu(desc->addr); u32 desc_ctrl = le32_to_cpu(desc->ctrl); + struct airoha_gdm_port *port; struct sk_buff *skb; int len, p; @@ -683,6 +685,7 @@ static int airoha_qdma_rx_process(struct airoha_queue *q, int budget) continue; } + port = eth->ports[p]; skb = napi_build_skb(e->buf, q->buf_size); if (!skb) { page_pool_put_full_page(q->page_pool, @@ -694,10 +697,26 @@ static int airoha_qdma_rx_process(struct airoha_queue *q, int budget) skb_reserve(skb, 2); __skb_put(skb, len); skb_mark_for_recycle(skb); - skb->dev = eth->ports[p]->dev; + skb->dev = port->dev; skb->protocol = eth_type_trans(skb, skb->dev); skb->ip_summed = CHECKSUM_UNNECESSARY; skb_record_rx_queue(skb, qid); + + if (netdev_uses_dsa(port->dev)) { + /* PPE module requires untagged packets to work + * properly and it provides DSA port index via the + * DMA descriptor. Report DSA tag to the DSA stack + * via skb dst info. + */ + u32 sptag = FIELD_GET(QDMA_ETH_RXMSG_SPTAG, + le32_to_cpu(desc->msg0)); + + if (sptag < ARRAY_SIZE(port->dsa_meta) && + port->dsa_meta[sptag]) + skb_dst_set_noref(skb, + &port->dsa_meta[sptag]->dst); + } + napi_gro_receive(&q->napi, skb); done++; @@ -1636,26 +1655,69 @@ static u16 airoha_dev_select_queue(struct net_device *dev, struct sk_buff *skb, return queue < dev->num_tx_queues ? queue : 0; } +static u32 airoha_get_dsa_tag(struct sk_buff *skb, struct net_device *dev) +{ +#if IS_ENABLED(CONFIG_NET_DSA) + __be16 *phdr = (__be16 *)(skb->data + 2 * ETH_ALEN); + u16 tag = be16_to_cpu(*phdr); + u8 xmit_tpid = tag >> 8; + struct dsa_port *dp; + + if (!netdev_uses_dsa(dev)) + return 0; + + dp = dev->dsa_ptr; + if (IS_ERR(dp)) + return 0; + + if (dp->tag_ops->proto != DSA_TAG_PROTO_MTK) + return 0; + + switch (xmit_tpid) { + case MTK_HDR_XMIT_TAGGED_TPID_8100: + *phdr = cpu_to_be16(ETH_P_8021Q); + break; + case MTK_HDR_XMIT_TAGGED_TPID_88A8: + *phdr = cpu_to_be16(ETH_P_8021AD); + break; + default: + /* PPE module requires untagged DSA packets to work properly, + * so move DSA tag to DMA descriptor. 
+ */ + memmove(skb->data + MTK_HDR_LEN, skb->data, 2 * ETH_ALEN); + skb_pull_rcsum(skb, MTK_HDR_LEN); + break; + } + + return tag; +#else + return 0; +#endif +} + static netdev_tx_t airoha_dev_xmit(struct sk_buff *skb, struct net_device *dev) { struct skb_shared_info *sinfo = skb_shinfo(skb); struct airoha_gdm_port *port = netdev_priv(dev); - u32 msg0, msg1, len = skb_headlen(skb); + u32 tag, msg0, msg1, len = skb_headlen(skb); struct airoha_qdma *qdma = port->qdma; u32 nr_frags = 1 + sinfo->nr_frags; struct netdev_queue *txq; struct airoha_queue *q; - void *data = skb->data; + void *data; int i, qid; u16 index; u8 fport; qid = skb_get_queue_mapping(skb) % ARRAY_SIZE(qdma->q_tx); + tag = airoha_get_dsa_tag(skb, dev); + msg0 = FIELD_PREP(QDMA_ETH_TXMSG_CHAN_MASK, qid / AIROHA_NUM_QOS_QUEUES) | FIELD_PREP(QDMA_ETH_TXMSG_QUEUE_MASK, - qid % AIROHA_NUM_QOS_QUEUES); + qid % AIROHA_NUM_QOS_QUEUES) | + FIELD_PREP(QDMA_ETH_TXMSG_SP_TAG_MASK, tag); if (skb->ip_summed == CHECKSUM_PARTIAL) msg0 |= FIELD_PREP(QDMA_ETH_TXMSG_TCO_MASK, 1) | FIELD_PREP(QDMA_ETH_TXMSG_UCO_MASK, 1) | @@ -1692,7 +1754,9 @@ static netdev_tx_t airoha_dev_xmit(struct sk_buff *skb, return NETDEV_TX_BUSY; } + data = skb->data; index = q->head; + for (i = 0; i < nr_frags; i++) { struct airoha_qdma_desc *desc = &q->desc[index]; struct airoha_queue_entry *e = &q->entry[index]; @@ -2226,6 +2290,37 @@ static const struct ethtool_ops airoha_ethtool_ops = { .get_rmon_stats = airoha_ethtool_get_rmon_stats, }; +static int airoha_metadata_dst_alloc(struct airoha_gdm_port *port) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(port->dsa_meta); i++) { + struct metadata_dst *md_dst; + + md_dst = metadata_dst_alloc(0, METADATA_HW_PORT_MUX, + GFP_KERNEL); + if (!md_dst) + return -ENOMEM; + + md_dst->u.port_info.port_id = i; + port->dsa_meta[i] = md_dst; + } + + return 0; +} + +static void airoha_metadata_dst_free(struct airoha_gdm_port *port) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(port->dsa_meta); i++) { + if (!port->dsa_meta[i]) + continue; + + metadata_dst_free(port->dsa_meta[i]); + } +} + static int airoha_alloc_gdm_port(struct airoha_eth *eth, struct device_node *np) { const __be32 *id_ptr = of_get_property(np, "reg", NULL); @@ -2298,6 +2393,10 @@ static int airoha_alloc_gdm_port(struct airoha_eth *eth, struct device_node *np) port->id = id; eth->ports[index] = port; + err = airoha_metadata_dst_alloc(port); + if (err) + return err; + return register_netdev(dev); } @@ -2390,8 +2489,10 @@ static int airoha_probe(struct platform_device *pdev) for (i = 0; i < ARRAY_SIZE(eth->ports); i++) { struct airoha_gdm_port *port = eth->ports[i]; - if (port && port->dev->reg_state == NETREG_REGISTERED) + if (port && port->dev->reg_state == NETREG_REGISTERED) { + airoha_metadata_dst_free(port); unregister_netdev(port->dev); + } } free_netdev(eth->napi_dev); platform_set_drvdata(pdev, NULL); @@ -2416,6 +2517,7 @@ static void airoha_remove(struct platform_device *pdev) continue; airoha_dev_stop(port->dev); + airoha_metadata_dst_free(port); unregister_netdev(port->dev); } free_netdev(eth->napi_dev); diff --git a/drivers/net/ethernet/airoha/airoha_eth.h b/drivers/net/ethernet/airoha/airoha_eth.h index 743aaf10235fe09fb2a91b491f4b25064ed8319b..fee6c10eaedfd30207205b6557e856091fd45d7e 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.h +++ b/drivers/net/ethernet/airoha/airoha_eth.h @@ -15,6 +15,7 @@ #define AIROHA_MAX_NUM_GDM_PORTS 1 #define AIROHA_MAX_NUM_QDMA 2 +#define AIROHA_MAX_DSA_PORTS 7 #define AIROHA_MAX_NUM_RSTS 3 #define AIROHA_MAX_NUM_XSI_RSTS 5 
#define AIROHA_MAX_MTU 2000 @@ -43,6 +44,10 @@ #define QDMA_METER_IDX(_n) ((_n) & 0xff) #define QDMA_METER_GROUP(_n) (((_n) >> 8) & 0x3) +#define MTK_HDR_LEN 4 +#define MTK_HDR_XMIT_TAGGED_TPID_8100 1 +#define MTK_HDR_XMIT_TAGGED_TPID_88A8 2 + enum { QDMA_INT_REG_IDX0, QDMA_INT_REG_IDX1, @@ -231,6 +236,8 @@ struct airoha_gdm_port { /* qos stats counters */ u64 cpu_tx_packets; u64 fwd_tx_packets; + + struct metadata_dst *dsa_meta[AIROHA_MAX_DSA_PORTS]; }; struct airoha_eth { diff --git a/drivers/net/ethernet/airoha/airoha_regs.h b/drivers/net/ethernet/airoha/airoha_regs.h index 7c9dadb348834cb5a856760abe45e8221d6fd700..e467dd81ff44a9ad560226cab42b7431812f5fb9 100644 --- a/drivers/net/ethernet/airoha/airoha_regs.h +++ b/drivers/net/ethernet/airoha/airoha_regs.h @@ -624,6 +624,8 @@ #define QDMA_ETH_TXMSG_ACNT_G1_MASK GENMASK(10, 6) /* 0x1f do not count */ #define QDMA_ETH_TXMSG_ACNT_G0_MASK GENMASK(5, 0) /* 0x3f do not count */ +/* RX MSG0 */ +#define QDMA_ETH_RXMSG_SPTAG GENMASK(21, 14) /* RX MSG1 */ #define QDMA_ETH_RXMSG_DEI_MASK BIT(31) #define QDMA_ETH_RXMSG_IP6_MASK BIT(30) From patchwork Wed Feb 5 18:21:25 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13961669 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 35BE91FCFEF; Wed, 5 Feb 2025 18:22:19 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738779740; cv=none; b=cCDOIE7RiXWvgX4lJ2GFnVfYsGCR6zP7g0LhkAe0mr50AKaew8NMFj1zOkwngIgtKMvfnTgAq7hsEyA8Eeq8MqZ5DKBp8c36q1lQDymNpP6oUWspxyibqn6F28ChUg9xchqxfD9YqRdajWVsfEnPR6clSdbz0NXMWTvo0r7VSt4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738779740; c=relaxed/simple; bh=b8KVn1GzHBRsxxYc2agv9jSTthBrazkDNNMMiDPN+yQ=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=ANJUzgxwf3cVyD4RTazXbkubJDnYWwK9WBi/7GHKFnupWoKm5B99JnZ2tahQGWuI5oaFcr+S15SQEUwHpA1n4W7TnrugjhYFB+jn7yAn+rh3M6YIYBznlWa2MZX/Sc9aKDtzE9yekLYF+EkFQeqFPYXoutkfBnXZbmLAyZ6Tsg8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=eW3/ADf6; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="eW3/ADf6" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 44CDAC4CED1; Wed, 5 Feb 2025 18:22:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1738779739; bh=b8KVn1GzHBRsxxYc2agv9jSTthBrazkDNNMMiDPN+yQ=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=eW3/ADf64TRG/83KCaYwsIQN2xeD3dzV1PE7IXXUL9xUmUz50ZjNTJ9pHZfz9YrVn 7iYcvSCnP+7QMzCboG+4JlOQHie6z/iJd5qFkp36vVgVLAIHLW6mjbGIMAdMX3vg0u NLKltVa/9JR3dKcDOTxi6Mx5l0n8L9yePKUQIb0Q+Rgc1+m9SjZsoIZbK+1JJxi0Vt geMGoaUTTcfdHgNWv6lFusht+r7EIc8L5MzsnLet7+LYFRR3RcsUuzPUtlmiEMb0vV kAzp+CUndFSvbZI1/BC9wzCVsg17/4e8qGLcanqCvB5xxaWmZnN/5yIc02/MU6V/cR /Uzqd8WWnJSYw== From: Lorenzo Bianconi Date: Wed, 05 Feb 2025 19:21:25 +0100 Subject: [PATCH net-next 06/13] net: airoha: Enable support for multiple net_devices Precedence: bulk 
X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20250205-airoha-en7581-flowtable-offload-v1-6-d362cfa97b01@kernel.org> References: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org> In-Reply-To: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com, Christian Marangi X-Mailer: b4 0.14.2 X-Patchwork-Delegate: kuba@kernel.org In the current codebase airoha_eth driver supports just a single net_device connected to the Packet Switch Engine (PSE) lan port (GDM1). As shown in commit 23020f049327 ("net: airoha: Introduce ethernet support for EN7581 SoC"), PSE can switch packets between four GDM ports. Enable the capability to create a net_device for each GDM port of the PSE module. Moreover, since the QDMA blocks can be shared between net_devices, do not stop TX/RX DMA in airoha_dev_stop() if there are active net_devices for this QDMA block. This is a preliminary patch to enable flowtable hw offloading for EN7581 SoC. Co-developed-by: Christian Marangi Signed-off-by: Christian Marangi Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/airoha/airoha_eth.c | 35 +++++++++++++++++++------------- drivers/net/ethernet/airoha/airoha_eth.h | 4 +++- 2 files changed, 24 insertions(+), 15 deletions(-) diff --git a/drivers/net/ethernet/airoha/airoha_eth.c b/drivers/net/ethernet/airoha/airoha_eth.c index ab95f3da21d90bd42282a9f3f5474ce42a69c9d4..42b5e0945fb8738f70d0b77bf597a7a04e5f0716 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.c +++ b/drivers/net/ethernet/airoha/airoha_eth.c @@ -1562,6 +1562,7 @@ static int airoha_dev_open(struct net_device *dev) airoha_qdma_set(qdma, REG_QDMA_GLOBAL_CFG, GLOBAL_CFG_TX_DMA_EN_MASK | GLOBAL_CFG_RX_DMA_EN_MASK); + atomic_inc(&qdma->users); return 0; } @@ -1577,16 +1578,20 @@ static int airoha_dev_stop(struct net_device *dev) if (err) return err; - airoha_qdma_clear(qdma, REG_QDMA_GLOBAL_CFG, - GLOBAL_CFG_TX_DMA_EN_MASK | - GLOBAL_CFG_RX_DMA_EN_MASK); + for (i = 0; i < ARRAY_SIZE(qdma->q_tx); i++) + netdev_tx_reset_subqueue(dev, i); - for (i = 0; i < ARRAY_SIZE(qdma->q_tx); i++) { - if (!qdma->q_tx[i].ndesc) - continue; + if (atomic_dec_and_test(&qdma->users)) { + airoha_qdma_clear(qdma, REG_QDMA_GLOBAL_CFG, + GLOBAL_CFG_TX_DMA_EN_MASK | + GLOBAL_CFG_RX_DMA_EN_MASK); - airoha_qdma_cleanup_tx_queue(&qdma->q_tx[i]); - netdev_tx_reset_subqueue(dev, i); + for (i = 0; i < ARRAY_SIZE(qdma->q_tx); i++) { + if (!qdma->q_tx[i].ndesc) + continue; + + airoha_qdma_cleanup_tx_queue(&qdma->q_tx[i]); + } } return 0; @@ -2321,13 +2326,14 @@ static void airoha_metadata_dst_free(struct airoha_gdm_port *port) } } -static int airoha_alloc_gdm_port(struct airoha_eth *eth, struct device_node *np) +static int airoha_alloc_gdm_port(struct airoha_eth *eth, + struct device_node *np, int index) { const __be32 *id_ptr = of_get_property(np, "reg", NULL); struct airoha_gdm_port *port; struct airoha_qdma *qdma; struct net_device *dev; - int err, index; + int err, p; u32 id; if (!id_ptr) { @@ -2336,14 +2342,14 @@ static int airoha_alloc_gdm_port(struct airoha_eth *eth, struct device_node *np) } id = 
be32_to_cpup(id_ptr); - index = id - 1; + p = id - 1; if (!id || id > ARRAY_SIZE(eth->ports)) { dev_err(eth->dev, "invalid gdm port id: %d\n", id); return -EINVAL; } - if (eth->ports[index]) { + if (eth->ports[p]) { dev_err(eth->dev, "duplicate gdm port id: %d\n", id); return -EINVAL; } @@ -2391,7 +2397,7 @@ static int airoha_alloc_gdm_port(struct airoha_eth *eth, struct device_node *np) port->qdma = qdma; port->dev = dev; port->id = id; - eth->ports[index] = port; + eth->ports[p] = port; err = airoha_metadata_dst_alloc(port); if (err) @@ -2463,6 +2469,7 @@ static int airoha_probe(struct platform_device *pdev) for (i = 0; i < ARRAY_SIZE(eth->qdma); i++) airoha_qdma_start_napi(ð->qdma[i]); + i = 0; for_each_child_of_node(pdev->dev.of_node, np) { if (!of_device_is_compatible(np, "airoha,eth-mac")) continue; @@ -2470,7 +2477,7 @@ static int airoha_probe(struct platform_device *pdev) if (!of_device_is_available(np)) continue; - err = airoha_alloc_gdm_port(eth, np); + err = airoha_alloc_gdm_port(eth, np, i++); if (err) { of_node_put(np); goto error_napi_stop; diff --git a/drivers/net/ethernet/airoha/airoha_eth.h b/drivers/net/ethernet/airoha/airoha_eth.h index fee6c10eaedfd30207205b6557e856091fd45d7e..74bdfd9e8d2fb3706f5ec6a4e17fe07fbcb38c3d 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.h +++ b/drivers/net/ethernet/airoha/airoha_eth.h @@ -13,7 +13,7 @@ #include #include -#define AIROHA_MAX_NUM_GDM_PORTS 1 +#define AIROHA_MAX_NUM_GDM_PORTS 4 #define AIROHA_MAX_NUM_QDMA 2 #define AIROHA_MAX_DSA_PORTS 7 #define AIROHA_MAX_NUM_RSTS 3 @@ -212,6 +212,8 @@ struct airoha_qdma { u32 irqmask[QDMA_INT_REG_MAX]; int irq; + atomic_t users; + struct airoha_tx_irq_queue q_tx_irq[AIROHA_NUM_TX_IRQ]; struct airoha_queue q_tx[AIROHA_NUM_TX_RING]; From patchwork Wed Feb 5 18:21:26 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13961670 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6F5C01FCFE1; Wed, 5 Feb 2025 18:22:22 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738779742; cv=none; b=Izj44dKzyH31R8inHqLoQtw0y9FRS3qMwQuKqJ2Sk997wt7rVxZK8bkyeRcu1EDBhg4qjBnxsO5/l86/shggu7vxrUezNaJOwGcj3VmT9iMcJ9aPx8rfm3kujCyVhyGzQ2IxBAGlOfS3mgioaHf9X2rtjAr4MaKJG8i5rR/7VGE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738779742; c=relaxed/simple; bh=WFtXkOHzpsVl/lSmvchFASlqyP6fybupy9T9sIM0y/o=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=PUBYHjzzqGgJgmcGVExe48+UZreQFEVMSOWn2e8jNiN4gNTeUilLtlIknmY0EV79844x3CsadNrOjUM0Jlyz+MoEtPastOHSAw10WgjiKUeEPTzAOpxKpECmMsVPqIqGYpTHWqCQwK2uk8uFPLGfwYb/iQ34pOtoxbZKdf8RAbs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=jvzI/B4/; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="jvzI/B4/" Received: by smtp.kernel.org (Postfix) with ESMTPSA id F04AFC4CED1; Wed, 5 Feb 2025 18:22:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; 
d=kernel.org; s=k20201202; t=1738779742; bh=WFtXkOHzpsVl/lSmvchFASlqyP6fybupy9T9sIM0y/o=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=jvzI/B4/o3rR664OqP1QZ3awrF+0S2EZAl9U3ziymClJlMINKU/BpIblp4f5UlZoD TTw7lgTd1aam/asxspB7ABtVABVko0WWFCNxOfwvjMBHaPSFy6sjBSUnkLNvUoOMG/ ATwq+tW9bOnfl6Lp2eoDCkg2iIMWSus6hEXmiaH0JlB+g8Ict2QvlU9jH1Dr0VSmn1 KTVJulY88M1OAmEfZKTHGwKGIiM1meV/g9ae+WFcevtDPfYOCD1yRF0ZQDfFrroW7x lP3W4ZFDyM1zhGzHmP5lqlUofQBf4ujEh3czrQT/nLoLQ2APLaz11vrv8bFlIBqwn7 ryZ6tjUSpusIw== From: Lorenzo Bianconi Date: Wed, 05 Feb 2025 19:21:26 +0100 Subject: [PATCH net-next 07/13] net: airoha: Move REG_GDM_FWD_CFG() initialization in airoha_dev_init() Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20250205-airoha-en7581-flowtable-offload-v1-7-d362cfa97b01@kernel.org> References: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org> In-Reply-To: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com X-Mailer: b4 0.14.2 X-Patchwork-Delegate: kuba@kernel.org Move REG_GDM_FWD_CFG() register initialization in airoha_dev_init routine. Moreover, always send traffic PPE module in order to be processed by hw accelerator. This is a preliminary patch to enable netfilter flowtable hw offloading on EN7581 SoC. Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/airoha/airoha_eth.c | 14 ++++---------- 1 file changed, 4 insertions(+), 10 deletions(-) diff --git a/drivers/net/ethernet/airoha/airoha_eth.c b/drivers/net/ethernet/airoha/airoha_eth.c index 42b5e0945fb8738f70d0b77bf597a7a04e5f0716..db75a9b1d5766bfaaa92312d625c395e19c88368 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.c +++ b/drivers/net/ethernet/airoha/airoha_eth.c @@ -107,25 +107,20 @@ static void airoha_set_gdm_port_fwd_cfg(struct airoha_eth *eth, u32 addr, static int airoha_set_gdm_port(struct airoha_eth *eth, int port, bool enable) { - u32 val = enable ? 
FE_PSE_PORT_PPE1 : FE_PSE_PORT_DROP; - u32 vip_port, cfg_addr; + u32 vip_port; switch (port) { case XSI_PCIE0_PORT: vip_port = XSI_PCIE0_VIP_PORT_MASK; - cfg_addr = REG_GDM_FWD_CFG(3); break; case XSI_PCIE1_PORT: vip_port = XSI_PCIE1_VIP_PORT_MASK; - cfg_addr = REG_GDM_FWD_CFG(3); break; case XSI_USB_PORT: vip_port = XSI_USB_VIP_PORT_MASK; - cfg_addr = REG_GDM_FWD_CFG(4); break; case XSI_ETH_PORT: vip_port = XSI_ETH_VIP_PORT_MASK; - cfg_addr = REG_GDM_FWD_CFG(4); break; default: return -EINVAL; @@ -139,8 +134,6 @@ static int airoha_set_gdm_port(struct airoha_eth *eth, int port, bool enable) airoha_fe_clear(eth, REG_FE_IFC_PORT_EN, vip_port); } - airoha_set_gdm_port_fwd_cfg(eth, cfg_addr, val); - return 0; } @@ -177,8 +170,6 @@ static void airoha_fe_maccr_init(struct airoha_eth *eth) airoha_fe_set(eth, REG_GDM_FWD_CFG(p), GDM_TCP_CKSUM | GDM_UDP_CKSUM | GDM_IP4_CKSUM | GDM_DROP_CRC_ERR); - airoha_set_gdm_port_fwd_cfg(eth, REG_GDM_FWD_CFG(p), - FE_PSE_PORT_CDM1); airoha_fe_rmw(eth, REG_GDM_LEN_CFG(p), GDM_SHORT_LEN_MASK | GDM_LONG_LEN_MASK, FIELD_PREP(GDM_SHORT_LEN_MASK, 60) | @@ -1614,8 +1605,11 @@ static int airoha_dev_set_macaddr(struct net_device *dev, void *p) static int airoha_dev_init(struct net_device *dev) { struct airoha_gdm_port *port = netdev_priv(dev); + struct airoha_eth *eth = port->qdma->eth; airoha_set_macaddr(port, dev->dev_addr); + airoha_set_gdm_port_fwd_cfg(eth, REG_GDM_FWD_CFG(port->id), + FE_PSE_PORT_PPE1); return 0; } From patchwork Wed Feb 5 18:21:27 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13961671 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6FCDB1FDA66; Wed, 5 Feb 2025 18:22:25 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738779745; cv=none; b=jYeRqcJVchLvp7prJr43dI3c6c7mBwdDqlQGVnDdwAYXfFevLhp4SKC1zKN1rplF5pOTc9gWb/NglfIU97GFMK6Plq9F0RqwNsiEbztXdefBqDt4/CBX5zwAHLDp8TWXCxwFmxjUDKE476QEieBUanaemgXZrXA+iBwScj28NYo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738779745; c=relaxed/simple; bh=4Qk8X7qpIVCbN/KBSqhoxxEZY1Ovomp+1fX44eKyUmk=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=dpc/5E8Uepr77UshG0sijTe5sCEqzOYBseFvt1cM/WtLmVqNxwsrJccreLa2JxrHdEYzoEPokfQxb6bzXGieDF6hsQYMU/EPfuiO+jkgKAEgCoNO7zJm1m5PC66Gp/+vlGt6tmkbj+ECwGiSJwq52vVwsiJEc/I6M1tZq2zLP0w= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=DaYOLG/M; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="DaYOLG/M" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8D1EBC4CED1; Wed, 5 Feb 2025 18:22:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1738779744; bh=4Qk8X7qpIVCbN/KBSqhoxxEZY1Ovomp+1fX44eKyUmk=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=DaYOLG/MV4STzERM8xg+mcrZxf/nAxiqYkKWawOIQry9TYbXSp5Z65otLz0JLLc6e kqUDJHuVpO+LTt3QIp+akH6Rc2skVCsyYaiSiQ4XA+dh5hACYDRBi2xgIEsqtETFz1 
WmzolBEYO5dF0PtWwfu7NKiJvDk3k8EOhxEvDD1mXcFZqMJi+ZCoJqsH3rdoKBA80H Xh6FM4y0vVKCeufy41SsKhe3FYaiV/3t7WYXo+1WyGRJ5UQGKDroJO9WNJohBbZ5SY VF5mXnfwZYbOvTxs/k4P9ABrBwhisWVGlThWOTuyf5l6imGzvdN8k96pmvSRAFwnDs /7SU9UQGRCAYg== From: Lorenzo Bianconi Date: Wed, 05 Feb 2025 19:21:27 +0100 Subject: [PATCH net-next 08/13] net: airoha: Rename airoha_set_gdm_port_fwd_cfg() in airoha_set_vip_for_gdm_port() Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20250205-airoha-en7581-flowtable-offload-v1-8-d362cfa97b01@kernel.org> References: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org> In-Reply-To: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com X-Mailer: b4 0.14.2 X-Patchwork-Delegate: kuba@kernel.org Rename airoha_set_gdm_port() in airoha_set_vip_for_gdm_port(). Get rid of airoha_set_gdm_ports routine. Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/airoha/airoha_eth.c | 49 +++++++------------------------- drivers/net/ethernet/airoha/airoha_eth.h | 8 ------ 2 files changed, 11 insertions(+), 46 deletions(-) diff --git a/drivers/net/ethernet/airoha/airoha_eth.c b/drivers/net/ethernet/airoha/airoha_eth.c index db75a9b1d5766bfaaa92312d625c395e19c88368..52be276039eff7772b3c604cf7db5d456ec155ab 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.c +++ b/drivers/net/ethernet/airoha/airoha_eth.c @@ -105,25 +105,23 @@ static void airoha_set_gdm_port_fwd_cfg(struct airoha_eth *eth, u32 addr, FIELD_PREP(GDM_UCFQ_MASK, val)); } -static int airoha_set_gdm_port(struct airoha_eth *eth, int port, bool enable) +static int airoha_set_vip_for_gdm_port(struct airoha_gdm_port *port, + bool enable) { + struct airoha_eth *eth = port->qdma->eth; u32 vip_port; - switch (port) { - case XSI_PCIE0_PORT: + switch (port->id) { + case 3: + /* FIXME: handle XSI_PCIE1_PORT */ vip_port = XSI_PCIE0_VIP_PORT_MASK; break; - case XSI_PCIE1_PORT: - vip_port = XSI_PCIE1_VIP_PORT_MASK; - break; - case XSI_USB_PORT: - vip_port = XSI_USB_VIP_PORT_MASK; - break; - case XSI_ETH_PORT: + case 4: + /* FIXME: handle XSI_USB_PORT */ vip_port = XSI_ETH_VIP_PORT_MASK; break; default: - return -EINVAL; + return 0; } if (enable) { @@ -137,31 +135,6 @@ static int airoha_set_gdm_port(struct airoha_eth *eth, int port, bool enable) return 0; } -static int airoha_set_gdm_ports(struct airoha_eth *eth, bool enable) -{ - const int port_list[] = { - XSI_PCIE0_PORT, - XSI_PCIE1_PORT, - XSI_USB_PORT, - XSI_ETH_PORT - }; - int i, err; - - for (i = 0; i < ARRAY_SIZE(port_list); i++) { - err = airoha_set_gdm_port(eth, port_list[i], enable); - if (err) - goto error; - } - - return 0; - -error: - for (i--; i >= 0; i--) - airoha_set_gdm_port(eth, port_list[i], false); - - return err; -} - static void airoha_fe_maccr_init(struct airoha_eth *eth) { int p; @@ -1539,7 +1512,7 @@ static int airoha_dev_open(struct net_device *dev) int err; netif_tx_start_all_queues(dev); - err = airoha_set_gdm_ports(qdma->eth, true); + err = airoha_set_vip_for_gdm_port(port, true); if (err) return err; @@ -1565,7 +1538,7 @@ static int airoha_dev_stop(struct net_device *dev) 
int i, err; netif_tx_disable(dev); - err = airoha_set_gdm_ports(qdma->eth, false); + err = airoha_set_vip_for_gdm_port(port, false); if (err) return err; diff --git a/drivers/net/ethernet/airoha/airoha_eth.h b/drivers/net/ethernet/airoha/airoha_eth.h index 74bdfd9e8d2fb3706f5ec6a4e17fe07fbcb38c3d..44834227a58982d4491f3d8174b9e0bea542f785 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.h +++ b/drivers/net/ethernet/airoha/airoha_eth.h @@ -57,14 +57,6 @@ enum { QDMA_INT_REG_MAX }; -enum { - XSI_PCIE0_PORT, - XSI_PCIE1_PORT, - XSI_USB_PORT, - XSI_AE_PORT, - XSI_ETH_PORT, -}; - enum { XSI_PCIE0_VIP_PORT_MASK = BIT(22), XSI_PCIE1_VIP_PORT_MASK = BIT(23), From patchwork Wed Feb 5 18:21:28 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13961672 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B8E601FCFE1; Wed, 5 Feb 2025 18:22:27 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738779747; cv=none; b=LvStN8tUwdzpQ3OIymJZPiL9M1jf1n3P+P6WMkmVDqNsqWTAC4B0un4MZ1DETMVr7hN8Y/s5xih+x10HUWy/iECSF14xVkdc3d4suDzvalv8AeZAKaLahSBv8f/byGPftyjD/cn8si2Pqq7Tz2XnuT12Cyw352fCCLbv7QnKJeY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738779747; c=relaxed/simple; bh=zTdTUW0dwrafTX4itj2ohGtbrJbqJHnQvGtwFC7cl2g=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=Sk+XzjO/KzCBzHIhONlXq/LQQEJTd2e3e5ldh/zO/FO1iIc/NUrdMokOQwvlyDi+Nk/c8DHZEnonHJOloBF96DbxtaXHH3WWZYRylbZ0cKFRFQWyaTCLoIuKqvQPQg6PUwPTglM8sb/RZuH89I8jm37KNVdFiZcbOU2ydIlQnFs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=DLGbUfrj; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="DLGbUfrj" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 413D5C4CED1; Wed, 5 Feb 2025 18:22:27 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1738779747; bh=zTdTUW0dwrafTX4itj2ohGtbrJbqJHnQvGtwFC7cl2g=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=DLGbUfrjnmGg9dZdE579scDaj5f+/erqucRWaAd9TkBgkj7b6EU4T7nslZ5vlL6Hj wQNjU7RWK8v55s4XzTvK+SAx4IQuU0qGHPNayuwibRmmF8HDZZqqRkIrek+mvwtM8n luFYDhKOBCBAAHqxJAVX+WN1ArgSocG5wYkG0aIzv/4LX1bTTbBJHClpJwK1FsVJDk vkYv0Y9VNVySkUN9NFl4Js6A7VLO5GKK47C637yQcYGmEPkQdZM81ihfdiQxIpSkHE ocm0iHXdYWcMlzBUZgNk68ImdLfosItzmUv+4u8MN0Goy3ci2rMclkAyHAi+gOyX2C bnj6SB4kdxK2A== From: Lorenzo Bianconi Date: Wed, 05 Feb 2025 19:21:28 +0100 Subject: [PATCH net-next 09/13] dt-bindings: net: airoha: Add airoha,npu phandle property Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20250205-airoha-en7581-flowtable-offload-v1-9-d362cfa97b01@kernel.org> References: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org> In-Reply-To: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org> To: Andrew Lunn , "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com X-Mailer: b4 0.14.2 X-Patchwork-Delegate: kuba@kernel.org Introduce the airoha,npu property for the npu syscon node available on EN7581 SoC. The airoha Network Processor Unit (NPU) is used to offload network traffic forwarded between Packet Switch Engine (PSE) ports. Signed-off-by: Lorenzo Bianconi --- Documentation/devicetree/bindings/net/airoha,en7581-eth.yaml | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/Documentation/devicetree/bindings/net/airoha,en7581-eth.yaml b/Documentation/devicetree/bindings/net/airoha,en7581-eth.yaml index c578637c5826db4bf470a4d01ac6f3133976ae1a..6388afff64e990a20230b0c4e58534aa61f984da 100644 --- a/Documentation/devicetree/bindings/net/airoha,en7581-eth.yaml +++ b/Documentation/devicetree/bindings/net/airoha,en7581-eth.yaml @@ -63,6 +63,12 @@ properties: "#size-cells": const: 0 + airoha,npu: + $ref: /schemas/types.yaml#/definitions/phandle + description: + Phandle to the syscon node used to configure the NPU module + used for traffic offloading. + patternProperties: "^ethernet@[1-4]$": type: object @@ -132,6 +138,8 @@ examples: , ; + airoha,npu = <&npu>; + #address-cells = <1>; #size-cells = <0>; From patchwork Wed Feb 5 18:21:29 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13961673 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 66DCD1FDA96; Wed, 5 Feb 2025 18:22:30 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738779750; cv=none; b=kiSS7ohEaCWUx7cKTcd0ov22l6/xg4yqnkIc+9HVXkT8tzuW68h47AsGN/nXP7N4AD4OqZmgRBb3zqhrI7VzWqMI0iUUlvsw7wXB3TaVwWACMPazZ1fTWS0iEBdMBgkETyRoYXZS9YxC2b+cucuXLIO8vzrrjGhjSDPnvZO/cnY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738779750; c=relaxed/simple; bh=KP4OR6k8kygdu98W0AQGegWNCkTprYuv54jo7Hq0P0I=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=XI1Zq7E3v2bcCF+fYqE3isnRUFd6ntbUNEB8n3gXcONZdruIb35Oz34v44Mn0CfO8Vm+RsGPK8lhS1x5iYBM3CrvL7v2P6vp0xFn4/CVnnbguEXDEmLrqkmLMhYyO/j6Ybp0EtNGDNaPkkvwHe1Jq6Omw6PQwykb4tmMRjJLXr4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=o+2EpQuG; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="o+2EpQuG" Received: by smtp.kernel.org (Postfix) with ESMTPSA id BC9F9C4CED1; Wed, 5 Feb 2025 18:22:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1738779750; bh=KP4OR6k8kygdu98W0AQGegWNCkTprYuv54jo7Hq0P0I=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=o+2EpQuG0i416zMsI7cZ4RLDxwjEVCWUT2AX1voYUvgtPlg5lQUynyogHxZwJt59i 
Zeh8tGJ2ffK6J/OQEsqZc8lXGEI8NWDbkquRFemUyK9ly19Pskyf0TXubpyreVB2xv StZD2lxNGyCUDDklcOIhE4t8D3jUBDyc4KUT7QPp/uX3+1yrZRlzJTPZ7eYIAOIKtp dLeR5KvlBB6+13YTg8pA0PyWdoBAGbFDoLyliq1PEUmzpvzdYlaPoR7hxpIXHWeW2r lXtVGn7yMV0CxHTfEypeWTsoSqBtaBS4WX3zZROtIVrH9bn2eV3+XYnrNzZWUzYYoa B3bsVkUQx5NvQ== From: Lorenzo Bianconi Date: Wed, 05 Feb 2025 19:21:29 +0100 Subject: [PATCH net-next 10/13] net: airoha: Introduce PPE initialization via NPU Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20250205-airoha-en7581-flowtable-offload-v1-10-d362cfa97b01@kernel.org> References: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org> In-Reply-To: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com X-Mailer: b4 0.14.2 X-Patchwork-Delegate: kuba@kernel.org Packet Processor Engine (PPE) module available on EN7581 SoC populates the PPE table with 5-tuples flower rules learned from traffic forwarded between the GDM ports connected to the Packet Switch Engine (PSE) module. The airoha_eth driver can enable hw acceleration of learned 5-tuples rules if the user configure them in netfilter flowtable (netfilter flowtable support will be added with subsequent patches). airoha_eth driver configures and collects data from the PPE module via a Network Processor Unit (NPU) RISC-V module available on the EN7581 SoC. Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/airoha/Kconfig | 5 + drivers/net/ethernet/airoha/Makefile | 4 +- drivers/net/ethernet/airoha/airoha_eth.c | 7 +- drivers/net/ethernet/airoha/airoha_eth.h | 88 ++++++ drivers/net/ethernet/airoha/airoha_npu.c | 450 ++++++++++++++++++++++++++++++ drivers/net/ethernet/airoha/airoha_ppe.c | 128 +++++++++ drivers/net/ethernet/airoha/airoha_regs.h | 102 ++++++- 7 files changed, 775 insertions(+), 9 deletions(-) diff --git a/drivers/net/ethernet/airoha/Kconfig b/drivers/net/ethernet/airoha/Kconfig index b6a131845f13b23a12464cfc281e3abe5699389f..c5445431595f9d2098cee9be8683b71ca1da1ea4 100644 --- a/drivers/net/ethernet/airoha/Kconfig +++ b/drivers/net/ethernet/airoha/Kconfig @@ -7,10 +7,15 @@ config NET_VENDOR_AIROHA if NET_VENDOR_AIROHA +config NET_AIROHA_NPU + depends on ARCH_AIROHA || COMPILE_TEST + def_bool NET_AIROHA != n + config NET_AIROHA tristate "Airoha SoC Gigabit Ethernet support" depends on NET_DSA || !NET_DSA select PAGE_POOL + select WANT_DEV_COREDUMP help This driver supports the gigabit ethernet MACs in the Airoha SoC family. 
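Since NET_AIROHA_NPU is a def_bool that simply tracks NET_AIROHA (there is no user-visible prompt), the NPU glue is built whenever the main driver is enabled and ARCH_AIROHA (or COMPILE_TEST) is set. As a purely illustrative sketch, not part of the patch, a configuration with the offload path enabled would end up with a fragment along these lines:

  # Illustrative .config fragment (assumes ARCH_AIROHA=y); not part of the patch.
  CONFIG_NET_VENDOR_AIROHA=y
  # The usual "depends on NET_DSA || !NET_DSA" idiom limits NET_AIROHA to =m when NET_DSA=m.
  CONFIG_NET_AIROHA=m
  # def_bool: follows NET_AIROHA != n, so it is enabled automatically with the driver.
  CONFIG_NET_AIROHA_NPU=y
  # Selected by NET_AIROHA so the NPU watchdog handler can emit device coredumps.
  CONFIG_WANT_DEV_COREDUMP=y
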
diff --git a/drivers/net/ethernet/airoha/Makefile b/drivers/net/ethernet/airoha/Makefile index 73a6f3680a4c4ce92ee785d83b905d76a63421df..50028cfc3e3e04efbdd353b1bd65f46b488637d8 100644 --- a/drivers/net/ethernet/airoha/Makefile +++ b/drivers/net/ethernet/airoha/Makefile @@ -3,4 +3,6 @@ # Airoha for the Mediatek SoCs built-in ethernet macs # -obj-$(CONFIG_NET_AIROHA) += airoha_eth.o +obj-$(CONFIG_NET_AIROHA) += airoha-eth.o +airoha-eth-y := airoha_eth.o airoha_ppe.o +airoha-eth-$(CONFIG_NET_AIROHA_NPU) += airoha_npu.o diff --git a/drivers/net/ethernet/airoha/airoha_eth.c b/drivers/net/ethernet/airoha/airoha_eth.c index 52be276039eff7772b3c604cf7db5d456ec155ab..6c0271899de05f99df32d6ee891f305957abfdc8 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.c +++ b/drivers/net/ethernet/airoha/airoha_eth.c @@ -3,9 +3,6 @@ * Copyright (c) 2024 AIROHA Inc * Author: Lorenzo Bianconi */ -#include -#include -#include #include #include #include @@ -1301,6 +1298,9 @@ static int airoha_hw_init(struct platform_device *pdev, return err; } + if (airoha_ppe_init(eth)) + dev_err(eth->dev, "ppe initialization failed\n"); + set_bit(DEV_STATE_INITIALIZED, ð->state); return 0; @@ -2496,6 +2496,7 @@ static void airoha_remove(struct platform_device *pdev) } free_netdev(eth->napi_dev); + airoha_ppe_deinit(eth); platform_set_drvdata(pdev, NULL); } diff --git a/drivers/net/ethernet/airoha/airoha_eth.h b/drivers/net/ethernet/airoha/airoha_eth.h index 44834227a58982d4491f3d8174b9e0bea542f785..e07c999ed49eca630537bccf37c98762e9e7f585 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.h +++ b/drivers/net/ethernet/airoha/airoha_eth.h @@ -11,8 +11,13 @@ #include #include #include +#include +#include +#include +#include #include +#define AIROHA_NPU_NUM_CORES 8 #define AIROHA_MAX_NUM_GDM_PORTS 4 #define AIROHA_MAX_NUM_QDMA 2 #define AIROHA_MAX_DSA_PORTS 7 @@ -44,6 +49,14 @@ #define QDMA_METER_IDX(_n) ((_n) & 0xff) #define QDMA_METER_GROUP(_n) (((_n) >> 8) & 0x3) +#define PPE_NUM 2 +#define PPE_SRAM_NUM_ENTRIES (16 * 1024) +#define PPE1_SRAM_NUM_ENTRIES (8 * 1024) +#define PPE_DRAM_NUM_ENTRIES (16 * 1024) +#define PPE_NUM_ENTRIES (PPE_SRAM_NUM_ENTRIES + PPE_DRAM_NUM_ENTRIES) +#define PPE_HASH_MASK (PPE_NUM_ENTRIES - 1) +#define PPE_ENTRY_SIZE 80 + #define MTK_HDR_LEN 4 #define MTK_HDR_XMIT_TAGGED_TPID_8100 1 #define MTK_HDR_XMIT_TAGGED_TPID_88A8 2 @@ -195,6 +208,10 @@ struct airoha_hw_stats { u64 rx_len[7]; }; +struct airoha_foe_entry { + u8 data[PPE_ENTRY_SIZE]; +}; + struct airoha_qdma { struct airoha_eth *eth; void __iomem *regs; @@ -234,12 +251,36 @@ struct airoha_gdm_port { struct metadata_dst *dsa_meta[AIROHA_MAX_DSA_PORTS]; }; +struct airoha_npu { + struct platform_device *pdev; + struct device_node *np; + + void __iomem *base; + + struct airoha_npu_core { + struct airoha_npu *npu; + /* protect concurrent npu memory accesses */ + spinlock_t lock; + struct work_struct wdt_work; + } cores[AIROHA_NPU_NUM_CORES]; +}; + +struct airoha_ppe { + struct airoha_eth *eth; + + void *foe; + dma_addr_t foe_dma; +}; + struct airoha_eth { struct device *dev; unsigned long state; void __iomem *fe_regs; + struct airoha_npu *npu; + struct airoha_ppe *ppe; + struct reset_control_bulk_data rsts[AIROHA_MAX_NUM_RSTS]; struct reset_control_bulk_data xsi_rsts[AIROHA_MAX_NUM_XSI_RSTS]; @@ -275,4 +316,51 @@ u32 airoha_rmw(void __iomem *base, u32 offset, u32 mask, u32 val); #define airoha_qdma_clear(qdma, offset, val) \ airoha_rmw((qdma)->regs, (offset), (val), 0) +bool airoha_ppe2_is_enabled(struct airoha_eth *eth); + +#ifdef 
CONFIG_NET_AIROHA_NPU +int airoha_ppe_init(struct airoha_eth *eth); +void airoha_ppe_deinit(struct airoha_eth *eth); +struct airoha_npu *airoha_npu_init(struct airoha_eth *eth); +void airoha_npu_deinit(struct airoha_npu *npu); +int airoha_npu_ppe_init(struct airoha_npu *npu); +int airoha_npu_ppe_deinit(struct airoha_npu *npu); +int airoha_npu_flush_ppe_sram_entries(struct airoha_npu *npu, + struct airoha_ppe *ppe); +#else +static inline int airoha_ppe_init(struct airoha_eth *eth) +{ + return -EOPNOTSUPP; +} + +static inline void airoha_ppe_deinit(struct airoha_eth *eth) +{ +} + +static inline struct airoha_npu *airoha_npu_init(struct airoha_eth *eth) +{ + return NULL; +} + +static inline void airoha_npu_deinit(struct airoha_npu *npu) +{ +} + +static inline int airoha_npu_ppe_init(struct airoha_npu *npu) +{ + return -EOPNOTSUPP; +} + +static inline int airoha_npu_ppe_deinit(struct airoha_npu *npu) +{ + return -EOPNOTSUPP; +} + +static inline int airoha_npu_flush_ppe_sram_entries(struct airoha_npu *npu, + struct airoha_ppe *ppe) +{ + return -EOPNOTSUPP; +} +#endif /* CONFIG_NET_AIROHA_NPU */ + #endif /* AIROHA_ETH_H */ diff --git a/drivers/net/ethernet/airoha/airoha_npu.c b/drivers/net/ethernet/airoha/airoha_npu.c new file mode 100644 index 0000000000000000000000000000000000000000..23a3c9c410e193f2694b6a4bc974b5d985c7e004 --- /dev/null +++ b/drivers/net/ethernet/airoha/airoha_npu.c @@ -0,0 +1,450 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2025 AIROHA Inc + * Author: Lorenzo Bianconi + */ + +#include +#include +#include + +#include "airoha_eth.h" + +#define NPU_EN7581_FIRMWARE_DATA "airoha/en7581_npu_data.bin" +#define NPU_EN7581_FIRMWARE_RV32 "airoha/en7581_npu_rv32.bin" +#define NPU_EN7581_FIRMWARE_RV32_MAX_SIZE 0x200000 +#define NPU_EN7581_FIRMWARE_DATA_MAX_SIZE 0x10000 +#define NPU_DUMP_SIZE 512 + +#define REG_NPU_LOCAL_SRAM 0x0 + +#define NPU_PC_BASE_ADDR 0x305000 +#define REG_PC_DBG(_n) (0x305000 + ((_n) * 0x100)) + +#define NPU_CLUSTER_BASE_ADDR 0x306000 + +#define REG_CR_BOOT_TRIGGER (NPU_CLUSTER_BASE_ADDR + 0x000) +#define REG_CR_BOOT_CONFIG (NPU_CLUSTER_BASE_ADDR + 0x004) +#define REG_CR_BOOT_BASE(_n) (NPU_CLUSTER_BASE_ADDR + 0x020 + ((_n) << 2)) + +#define NPU_MBOX_BASE_ADDR 0x30c000 + +#define REG_CR_MBOX_INT_STATUS (NPU_MBOX_BASE_ADDR + 0x000) +#define MBOX_INT_STATUS_MASK BIT(8) + +#define REG_CR_MBOX_INT_MASK(_n) (NPU_MBOX_BASE_ADDR + 0x004 + ((_n) << 2)) +#define REG_CR_MBQ0_CTRL(_n) (NPU_MBOX_BASE_ADDR + 0x030 + ((_n) << 2)) +#define REG_CR_MBQ8_CTRL(_n) (NPU_MBOX_BASE_ADDR + 0x0b0 + ((_n) << 2)) +#define REG_CR_NPU_MIB(_n) (NPU_MBOX_BASE_ADDR + 0x140 + ((_n) << 2)) + +#define NPU_TIMER_BASE_ADDR 0x310100 +#define REG_WDT_TIMER_CTRL(_n) (NPU_TIMER_BASE_ADDR + ((_n) * 0x100)) +#define WDT_EN_MASK BIT(25) +#define WDT_INTR_MASK BIT(21) + +enum { + NPU_OP_SET = 1, + NPU_OP_SET_NO_WAIT, + NPU_OP_GET, + NPU_OP_GET_NO_WAIT, +}; + +enum { + NPU_FUNC_WIFI, + NPU_FUNC_TUNNEL, + NPU_FUNC_NOTIFY, + NPU_FUNC_DBA, + NPU_FUNC_TR471, + NPU_FUNC_PPE, +}; + +enum { + NPU_MBOX_ERROR, + NPU_MBOX_SUCCESS, +}; + +enum { + PPE_FUNC_SET_WAIT, + PPE_FUNC_SET_WAIT_HWNAT_INIT, + PPE_FUNC_SET_WAIT_HWNAT_DEINIT, + PPE_FUNC_SET_WAIT_API, +}; + +enum { + PPE2_SRAM_SET_ENTRY, + PPE_SRAM_SET_ENTRY, + PPE_SRAM_SET_VAL, + PPE_SRAM_RESET_VAL, +}; + +enum { + QDMA_WAN_ETHER = 1, + QDMA_WAN_PON_XDSL, +}; + +struct npu_mbox_metadata { + union { + struct { + u16 wait_rsp:1; + u16 done:1; + u16 status:3; + u16 static_buf:1; + u16 rsv:5; + u16 func_id:4; + }; + u16 data; + }; +}; + +#define 
PPE_TYPE_L2B_IPV4 2 +#define PPE_TYPE_L2B_IPV4_IPV6 3 + +struct ppe_mbox_data { + u32 func_type; + u32 func_id; + union { + struct { + u8 cds; + u8 xpon_hal_api; + u8 wan_xsi; + u8 ct_joyme4; + int ppe_type; + int wan_mode; + int wan_sel; + } init_info; + struct { + int func_id; + u32 size; + u32 data; + } set_info; + }; +}; + +static u32 airoha_npu_rr(struct airoha_npu *npu, u32 reg) +{ + return readl(npu->base + reg); +} + +static void airoha_npu_wr(struct airoha_npu *npu, u32 reg, u32 val) +{ + writel(val, npu->base + reg); +} + +static u32 airoha_npu_rmw(struct airoha_npu *npu, u32 reg, u32 mask, u32 val) +{ + val |= airoha_npu_rr(npu, reg) & ~mask; + airoha_npu_wr(npu, reg, val); + + return val; +} + +static int airoha_npu_send_msg(struct airoha_npu *npu, int func_id, + void *p, int size) +{ + struct device *dev = &npu->pdev->dev; + struct npu_mbox_metadata meta = { + .wait_rsp = 1, + .func_id = func_id, + }; + u16 core = 0; /* FIXME */ + u32 val, offset = core << 4; + dma_addr_t dma_addr; + void *addr; + int ret; + + addr = kzalloc(size, GFP_ATOMIC | GFP_DMA); + if (!addr) + return -ENOMEM; + + memcpy(addr, p, size); + dma_addr = dma_map_single(dev, addr, size, DMA_TO_DEVICE); + ret = dma_mapping_error(dev, dma_addr); + if (ret) + goto out; + + spin_lock_bh(&npu->cores[core].lock); + + airoha_npu_wr(npu, REG_CR_MBQ0_CTRL(0) + offset, dma_addr); + airoha_npu_wr(npu, REG_CR_MBQ0_CTRL(1) + offset, size); + val = airoha_npu_rr(npu, REG_CR_MBQ0_CTRL(2) + offset); + airoha_npu_wr(npu, REG_CR_MBQ0_CTRL(2) + offset, val + 1); + airoha_npu_wr(npu, REG_CR_MBQ0_CTRL(3) + offset, meta.data); + + ret = read_poll_timeout_atomic(airoha_npu_rr, meta.data, meta.done, + 100, 100 * MSEC_PER_SEC, false, npu, + REG_CR_MBQ0_CTRL(3) + offset); + if (!ret) + ret = meta.status == NPU_MBOX_SUCCESS ? 
0 : -EINVAL; + + spin_unlock_bh(&npu->cores[core].lock); + + dma_unmap_single(dev, dma_addr, size, DMA_TO_DEVICE); +out: + kfree(addr); + + return ret; +} + +static int airoha_npu_run_firmware(struct airoha_npu *npu, struct reserved_mem *rmem) +{ + struct device *dev = &npu->pdev->dev; + const struct firmware *fw; + void __iomem *addr; + int ret; + + ret = request_firmware(&fw, NPU_EN7581_FIRMWARE_RV32, dev); + if (ret) + return ret; + + if (fw->size > NPU_EN7581_FIRMWARE_RV32_MAX_SIZE) { + dev_err(dev, "%s: fw size too overlimit (%ld)\n", + NPU_EN7581_FIRMWARE_RV32, fw->size); + ret = -E2BIG; + goto out; + } + + addr = devm_ioremap(dev, rmem->base, rmem->size); + if (!addr) { + ret = -ENOMEM; + goto out; + } + + memcpy_toio(addr, fw->data, fw->size); + release_firmware(fw); + + ret = request_firmware(&fw, NPU_EN7581_FIRMWARE_DATA, dev); + if (ret) + return ret; + + if (fw->size > NPU_EN7581_FIRMWARE_DATA_MAX_SIZE) { + dev_err(dev, "%s: fw size too overlimit (%ld)\n", + NPU_EN7581_FIRMWARE_DATA, fw->size); + ret = -E2BIG; + goto out; + } + + memcpy_toio(npu->base + REG_NPU_LOCAL_SRAM, fw->data, fw->size); +out: + release_firmware(fw); + + return ret; +} + +static irqreturn_t airoha_npu_mbox_handler(int irq, void *npu_instance) +{ + struct airoha_npu *npu = npu_instance; + struct npu_mbox_metadata meta; + + /* clear mbox interrupt status */ + airoha_npu_wr(npu, REG_CR_MBOX_INT_STATUS, MBOX_INT_STATUS_MASK); + + /* acknowledge npu */ + meta.data = airoha_npu_rr(npu, REG_CR_MBQ8_CTRL(3)); + meta.status = 0; + meta.done = 1; + airoha_npu_wr(npu, REG_CR_MBQ8_CTRL(3), meta.data); + + return IRQ_HANDLED; +} + +int airoha_npu_ppe_init(struct airoha_npu *npu) +{ + struct ppe_mbox_data ppe_data = { + .func_type = NPU_OP_SET, + .func_id = PPE_FUNC_SET_WAIT_HWNAT_INIT, + .init_info = { + .ppe_type = PPE_TYPE_L2B_IPV4_IPV6, + .wan_mode = QDMA_WAN_ETHER, + }, + }; + + return airoha_npu_send_msg(npu, NPU_FUNC_PPE, &ppe_data, + sizeof(struct ppe_mbox_data)); +} + +int airoha_npu_ppe_deinit(struct airoha_npu *npu) +{ + struct ppe_mbox_data ppe_data = { + .func_type = NPU_OP_SET, + .func_id = PPE_FUNC_SET_WAIT_HWNAT_DEINIT, + }; + + return airoha_npu_send_msg(npu, NPU_FUNC_PPE, &ppe_data, + sizeof(struct ppe_mbox_data)); +} + +int airoha_npu_flush_ppe_sram_entries(struct airoha_npu *npu, + struct airoha_ppe *ppe) +{ + struct ppe_mbox_data ppe_data = { + .func_type = NPU_OP_SET, + .func_id = PPE_FUNC_SET_WAIT_API, + .set_info = { + .func_id = PPE_SRAM_RESET_VAL, + .data = ppe->foe_dma, + .size = PPE_SRAM_NUM_ENTRIES, + }, + }; + u32 sram_tb_size; + + sram_tb_size = PPE_SRAM_NUM_ENTRIES * sizeof(struct airoha_foe_entry); + if (airoha_ppe2_is_enabled(ppe->eth)) + memset(ppe->foe, 0, sram_tb_size / 2); + else + memset(ppe->foe, 0, sram_tb_size); + + return airoha_npu_send_msg(npu, NPU_FUNC_PPE, &ppe_data, + sizeof(struct ppe_mbox_data)); +} + +static void airoha_npu_wdt_work(struct work_struct *work) +{ + struct airoha_npu_core *core; + struct airoha_npu *npu; + void *dump; + int c; + + core = container_of(work, struct airoha_npu_core, wdt_work); + npu = core->npu; + + dump = vzalloc(NPU_DUMP_SIZE); + if (!dump) + return; + + c = core - &npu->cores[0]; + snprintf(dump, NPU_DUMP_SIZE, "PC: %08x SP: %08x LR: %08x\n", + airoha_npu_rr(npu, REG_PC_DBG(c)), + airoha_npu_rr(npu, REG_PC_DBG(c) + 0x4), + airoha_npu_rr(npu, REG_PC_DBG(c) + 0x8)); + + dev_coredumpv(&npu->pdev->dev, dump, NPU_DUMP_SIZE, GFP_KERNEL); +} + +static irqreturn_t airoha_npu_wdt_handler(int irq, void *core_instance) +{ + struct airoha_npu_core 
*core = core_instance; + struct airoha_npu *npu = core->npu; + int c = core - &npu->cores[0]; + u32 val; + + airoha_npu_rmw(npu, REG_WDT_TIMER_CTRL(c), 0, WDT_INTR_MASK); + val = airoha_npu_rr(npu, REG_WDT_TIMER_CTRL(c)); + if (FIELD_GET(WDT_EN_MASK, val)) + schedule_work(&core->wdt_work); + + return IRQ_HANDLED; +} + +struct airoha_npu *airoha_npu_init(struct airoha_eth *eth) +{ + struct reserved_mem *rmem; + int i, irq, err = -ENODEV; + struct airoha_npu *npu; + struct device_node *np; + + npu = devm_kzalloc(eth->dev, sizeof(*npu), GFP_KERNEL); + if (!npu) + return ERR_PTR(-ENOMEM); + + npu->np = of_parse_phandle(eth->dev->of_node, "airoha,npu", 0); + if (!npu->np) + return ERR_PTR(-ENODEV); + + npu->pdev = of_find_device_by_node(npu->np); + if (!npu->pdev) + goto error_of_node_put; + + get_device(&npu->pdev->dev); + + npu->base = devm_platform_ioremap_resource(npu->pdev, 0); + if (IS_ERR(npu->base)) + goto error_put_dev; + + np = of_parse_phandle(npu->np, "memory-region", 0); + if (!np) + goto error_put_dev; + + rmem = of_reserved_mem_lookup(np); + of_node_put(np); + + if (!rmem) + goto error_put_dev; + + irq = platform_get_irq(npu->pdev, 0); + if (irq < 0) { + err = irq; + goto error_put_dev; + } + + err = devm_request_irq(&npu->pdev->dev, irq, airoha_npu_mbox_handler, + IRQF_SHARED, "airoha-npu-mbox", npu); + if (err) + goto error_put_dev; + + for (i = 0; i < ARRAY_SIZE(npu->cores); i++) { + struct airoha_npu_core *core = &npu->cores[i]; + + spin_lock_init(&core->lock); + core->npu = npu; + + irq = platform_get_irq(npu->pdev, i + 1); + if (irq < 0) { + err = irq; + goto error_put_dev; + } + + err = devm_request_irq(&npu->pdev->dev, irq, + airoha_npu_wdt_handler, IRQF_SHARED, + "airoha-npu-wdt", core); + if (err) + goto error_put_dev; + + INIT_WORK(&core->wdt_work, airoha_npu_wdt_work); + } + + if (dma_set_coherent_mask(&npu->pdev->dev, 0xbfffffff)) + dev_err(&npu->pdev->dev, + "failed coherent DMA configuration\n"); + + err = airoha_npu_run_firmware(npu, rmem); + if (err) + goto error_put_dev; + + airoha_npu_wr(npu, REG_CR_NPU_MIB(10), + rmem->base + NPU_EN7581_FIRMWARE_RV32_MAX_SIZE); + airoha_npu_wr(npu, REG_CR_NPU_MIB(11), 0x40000); /* SRAM 256K */ + airoha_npu_wr(npu, REG_CR_NPU_MIB(12), 0); + airoha_npu_wr(npu, REG_CR_NPU_MIB(21), 1); + msleep(100); + + /* setting booting address */ + for (i = 0; i < AIROHA_NPU_NUM_CORES; i++) + airoha_npu_wr(npu, REG_CR_BOOT_BASE(i), rmem->base); + usleep_range(1000, 2000); + + /* enable NPU cores */ + /* do not start core3 since it is used for WiFi offloading */ + airoha_npu_wr(npu, REG_CR_BOOT_CONFIG, 0xf7); + airoha_npu_wr(npu, REG_CR_BOOT_TRIGGER, 0x1); + msleep(100); + + return npu; + +error_put_dev: + put_device(&npu->pdev->dev); +error_of_node_put: + of_node_put(npu->np); + + return ERR_PTR(err); +} + +void airoha_npu_deinit(struct airoha_npu *npu) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(npu->cores); i++) + cancel_work_sync(&npu->cores[i].wdt_work); + + put_device(&npu->pdev->dev); + of_node_put(npu->np); +} diff --git a/drivers/net/ethernet/airoha/airoha_ppe.c b/drivers/net/ethernet/airoha/airoha_ppe.c new file mode 100644 index 0000000000000000000000000000000000000000..bb64c3c09fa37b797bd028ad7012d81662d82ba6 --- /dev/null +++ b/drivers/net/ethernet/airoha/airoha_ppe.c @@ -0,0 +1,128 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2025 AIROHA Inc + * Author: Lorenzo Bianconi + */ + +#include "airoha_regs.h" +#include "airoha_eth.h" + +bool airoha_ppe2_is_enabled(struct airoha_eth *eth) +{ + return airoha_fe_rr(eth, 
REG_PPE_GLO_CFG(1)) & PPE_GLO_CFG_EN_MASK; +} + +static void airoha_ppe_hw_init(struct airoha_ppe *ppe) +{ + struct airoha_eth *eth = ppe->eth; + u32 sram_tb_size; + int i; + + sram_tb_size = PPE_SRAM_NUM_ENTRIES * sizeof(struct airoha_foe_entry); + for (i = 0; i < PPE_NUM; i++) { + airoha_fe_wr(eth, REG_PPE_TB_BASE(i), + ppe->foe_dma + sram_tb_size); + + airoha_fe_rmw(eth, REG_PPE_BND_AGE0(i), + PPE_BIND_AGE0_DELTA_NON_L4 | + PPE_BIND_AGE0_DELTA_UDP, + FIELD_PREP(PPE_BIND_AGE0_DELTA_NON_L4, 1) | + FIELD_PREP(PPE_BIND_AGE0_DELTA_UDP, 12)); + airoha_fe_rmw(eth, REG_PPE_BND_AGE1(i), + PPE_BIND_AGE1_DELTA_TCP_FIN | + PPE_BIND_AGE1_DELTA_TCP, + FIELD_PREP(PPE_BIND_AGE1_DELTA_TCP_FIN, 1) | + FIELD_PREP(PPE_BIND_AGE1_DELTA_TCP, 7)); + + airoha_fe_rmw(eth, REG_PPE_TB_HASH_CFG(i), + PPE_SRAM_TABLE_EN_MASK | + PPE_SRAM_HASH1_EN_MASK | + PPE_DRAM_TABLE_EN_MASK | + PPE_SRAM_HASH0_MODE_MASK | + PPE_SRAM_HASH1_MODE_MASK | + PPE_DRAM_HASH0_MODE_MASK | + PPE_DRAM_HASH1_MODE_MASK, + FIELD_PREP(PPE_SRAM_TABLE_EN_MASK, 1) | + FIELD_PREP(PPE_SRAM_HASH1_EN_MASK, 1) | + FIELD_PREP(PPE_SRAM_HASH1_MODE_MASK, 1) | + FIELD_PREP(PPE_DRAM_HASH1_MODE_MASK, 3)); + + airoha_fe_rmw(eth, REG_PPE_TB_CFG(i), + PPE_TB_CFG_SEARCH_MISS_MASK | + PPE_TB_ENTRY_SIZE_MASK, + FIELD_PREP(PPE_TB_CFG_SEARCH_MISS_MASK, 3) | + FIELD_PREP(PPE_TB_ENTRY_SIZE_MASK, 0)); + + airoha_fe_wr(eth, REG_PPE_HASH_SEED(i), PPE_HASH_SEED); + } + + if (airoha_ppe2_is_enabled(eth)) { + airoha_fe_rmw(eth, REG_PPE_TB_CFG(0), + PPE_SRAM_TB_NUM_ENTRY_MASK | + PPE_DRAM_TB_NUM_ENTRY_MASK, + FIELD_PREP(PPE_SRAM_TB_NUM_ENTRY_MASK, 3) | + FIELD_PREP(PPE_DRAM_TB_NUM_ENTRY_MASK, 4)); + airoha_fe_rmw(eth, REG_PPE_TB_CFG(1), + PPE_SRAM_TB_NUM_ENTRY_MASK | + PPE_DRAM_TB_NUM_ENTRY_MASK, + FIELD_PREP(PPE_SRAM_TB_NUM_ENTRY_MASK, 3) | + FIELD_PREP(PPE_DRAM_TB_NUM_ENTRY_MASK, 4)); + } else { + airoha_fe_rmw(eth, REG_PPE_TB_CFG(0), + PPE_SRAM_TB_NUM_ENTRY_MASK | + PPE_DRAM_TB_NUM_ENTRY_MASK, + FIELD_PREP(PPE_SRAM_TB_NUM_ENTRY_MASK, 4) | + FIELD_PREP(PPE_DRAM_TB_NUM_ENTRY_MASK, 4)); + } +} + +int airoha_ppe_init(struct airoha_eth *eth) +{ + struct airoha_npu *npu; + struct airoha_ppe *ppe; + int foe_size, err; + + ppe = devm_kzalloc(eth->dev, sizeof(*ppe), GFP_KERNEL); + if (!ppe) + return -ENOMEM; + + foe_size = PPE_NUM_ENTRIES * sizeof(struct airoha_foe_entry); + ppe->foe = dmam_alloc_coherent(eth->dev, foe_size, &ppe->foe_dma, + GFP_KERNEL); + if (!ppe->foe) + return -ENOMEM; + + memset(ppe->foe, 0, foe_size); + ppe->eth = eth; + eth->ppe = ppe; + + npu = airoha_npu_init(eth); + if (IS_ERR(npu)) + return PTR_ERR(npu); + + eth->npu = npu; + err = airoha_npu_ppe_init(npu); + if (err) + goto error; + + airoha_ppe_hw_init(ppe); + err = airoha_npu_flush_ppe_sram_entries(npu, ppe); + if (err) + goto error; + + return 0; + +error: + airoha_npu_deinit(npu); + eth->npu = NULL; + + return err; +} + +void airoha_ppe_deinit(struct airoha_eth *eth) +{ + if (eth->npu) { + airoha_npu_ppe_deinit(eth->npu); + airoha_npu_deinit(eth->npu); + } +} diff --git a/drivers/net/ethernet/airoha/airoha_regs.h b/drivers/net/ethernet/airoha/airoha_regs.h index e467dd81ff44a9ad560226cab42b7431812f5fb9..3ad2c5d11c2f9b0d00744cc43394d02d47347f61 100644 --- a/drivers/net/ethernet/airoha/airoha_regs.h +++ b/drivers/net/ethernet/airoha/airoha_regs.h @@ -15,6 +15,7 @@ #define CDM1_BASE 0x0400 #define GDM1_BASE 0x0500 #define PPE1_BASE 0x0c00 +#define PPE2_BASE 0x1c00 #define CDM2_BASE 0x1400 #define GDM2_BASE 0x1500 @@ -36,6 +37,7 @@ #define FE_RST_GDM3_MBI_ARB_MASK BIT(2) #define FE_RST_CORE_MASK BIT(0) 
+#define REG_FE_FOE_TS 0x0010 #define REG_FE_WAN_MAC_H 0x0030 #define REG_FE_LAN_MAC_H 0x0040 @@ -192,11 +194,101 @@ #define REG_FE_GDM_RX_ETH_L511_CNT_L(_n) (GDM_BASE(_n) + 0x198) #define REG_FE_GDM_RX_ETH_L1023_CNT_L(_n) (GDM_BASE(_n) + 0x19c) -#define REG_PPE1_TB_HASH_CFG (PPE1_BASE + 0x250) -#define PPE1_SRAM_TABLE_EN_MASK BIT(0) -#define PPE1_SRAM_HASH1_EN_MASK BIT(8) -#define PPE1_DRAM_TABLE_EN_MASK BIT(16) -#define PPE1_DRAM_HASH1_EN_MASK BIT(24) +#define REG_PPE_GLO_CFG(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x200) +#define PPE_GLO_CFG_BUSY_MASK BIT(31) +#define PPE_GLO_CFG_FLOW_DROP_UPDATE_MASK BIT(9) +#define PPE_GLO_CFG_PSE_HASH_OFS_MASK BIT(6) +#define PPE_GLO_CFG_PPE_BSWAP_MASK BIT(5) +#define PPE_GLO_CFG_TTL_DROP_MASK BIT(4) +#define PPE_GLO_CFG_IP4_CS_DROP_MASK BIT(3) +#define PPE_GLO_CFG_IP4_L4_CS_DROP_MASK BIT(2) +#define PPE_GLO_CFG_EN_MASK BIT(0) + +#define REG_PPE_PPE_FLOW_CFG(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x204) +#define PPE_FLOW_CFG_IP6_HASH_GRE_KEY_MASK BIT(20) +#define PPE_FLOW_CFG_IP4_HASH_GRE_KEY_MASK BIT(19) +#define PPE_FLOW_CFG_IP4_HASH_FLOW_LABEL_MASK BIT(18) +#define PPE_FLOW_CFG_IP4_NAT_FRAG_MASK BIT(17) +#define PPE_FLOW_CFG_IP_PROTO_BLACKLIST_MASK BIT(16) +#define PPE_FLOW_CFG_IP4_DSLITE_MASK BIT(14) +#define PPE_FLOW_CFG_IP4_NAPT_MASK BIT(13) +#define PPE_FLOW_CFG_IP4_NAT_MASK BIT(12) +#define PPE_FLOW_CFG_IP6_6RD_MASK BIT(10) +#define PPE_FLOW_CFG_IP6_5T_ROUTE_MASK BIT(9) +#define PPE_FLOW_CFG_IP6_3T_ROUTE_MASK BIT(8) +#define PPE_FLOW_CFG_IP4_UDP_FRAG_MASK BIT(7) +#define PPE_FLOW_CFG_IP4_TCP_FRAG_MASK BIT(6) + +#define REG_PPE_IP_PROTO_CHK(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x208) +#define PPE_IP_PROTO_CHK_IPV4_MASK GENMASK(15, 0) +#define PPE_IP_PROTO_CHK_IPV6_MASK GENMASK(31, 16) + +#define REG_PPE_TB_CFG(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x21c) +#define PPE_SRAM_TB_NUM_ENTRY_MASK GENMASK(26, 24) +#define PPE_TB_CFG_KEEPALIVE_MASK GENMASK(13, 12) +#define PPE_TB_CFG_AGE_TCP_FIN_MASK BIT(11) +#define PPE_TB_CFG_AGE_UDP_MASK BIT(10) +#define PPE_TB_CFG_AGE_TCP_MASK BIT(9) +#define PPE_TB_CFG_AGE_UNBIND_MASK BIT(8) +#define PPE_TB_CFG_AGE_NON_L4_MASK BIT(7) +#define PPE_TB_CFG_AGE_PREBIND_MASK BIT(6) +#define PPE_TB_CFG_SEARCH_MISS_MASK GENMASK(5, 4) +#define PPE_TB_ENTRY_SIZE_MASK BIT(3) +#define PPE_DRAM_TB_NUM_ENTRY_MASK GENMASK(2, 0) + +#define REG_PPE_TB_BASE(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x220) + +#define REG_PPE_BIND_RATE(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x228) +#define PPE_BIND_RATE_L2B_BIND_MASK GENMASK(31, 16) +#define PPE_BIND_RATE_BIND_MASK GENMASK(15, 0) + +#define REG_PPE_BIND_LIMIT0(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x22c) +#define PPE_BIND_LIMIT0_HALF_MASK GENMASK(29, 16) +#define PPE_BIND_LIMIT0_QUARTER_MASK GENMASK(13, 0) + +#define REG_PPE_BIND_LIMIT1(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x230) +#define PPE_BIND_LIMIT1_NON_L4_MASK GENMASK(23, 16) +#define PPE_BIND_LIMIT1_FULL_MASK GENMASK(13, 0) + +#define REG_PPE_BND_AGE0(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x23c) +#define PPE_BIND_AGE0_DELTA_NON_L4 GENMASK(30, 16) +#define PPE_BIND_AGE0_DELTA_UDP GENMASK(14, 0) + +#define REG_PPE_UNBIND_AGE(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x238) +#define PPE_UNBIND_AGE_MIN_PACKETS_MASK GENMASK(31, 16) +#define PPE_UNBIND_AGE_DELTA_MASK GENMASK(7, 0) + +#define REG_PPE_BND_AGE1(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x240) +#define PPE_BIND_AGE1_DELTA_TCP_FIN GENMASK(30, 16) +#define PPE_BIND_AGE1_DELTA_TCP GENMASK(14, 0) + +#define REG_PPE_HASH_SEED(_n) (((_n) ? 
PPE2_BASE : PPE1_BASE) + 0x244) +#define PPE_HASH_SEED 0x12345678 + +#define REG_PPE_DFT_CPORT0(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x248) + +#define REG_PPE_DFT_CPORT1(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x24c) + +#define REG_PPE_TB_HASH_CFG(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x250) +#define PPE_DRAM_HASH1_MODE_MASK GENMASK(31, 28) +#define PPE_DRAM_HASH1_EN_MASK BIT(24) +#define PPE_DRAM_HASH0_MODE_MASK GENMASK(23, 20) +#define PPE_DRAM_TABLE_EN_MASK BIT(16) +#define PPE_SRAM_HASH1_MODE_MASK GENMASK(15, 12) +#define PPE_SRAM_HASH1_EN_MASK BIT(8) +#define PPE_SRAM_HASH0_MODE_MASK GENMASK(7, 4) +#define PPE_SRAM_TABLE_EN_MASK BIT(0) + +#define REG_PPE_RAM_CTRL(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x31c) +#define PPE_SRAM_CTRL_ACK_MASK BIT(31) +#define PPE_SRAM_CTRL_DUAL_SUCESS_MASK BIT(30) +#define PPE_SRAM_CTRL_ENTRY_MASK GENMASK(23, 8) +#define PPE_SRAM_WR_DUAL_DIRECTION_MASK BIT(2) +#define PPE_SRAM_CTRL_WR_MASK BIT(1) +#define PPE_SRAM_CTRL_REQ_MASK BIT(0) + +#define REG_PPE_RAM_BASE(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x320) +#define REG_PPE_RAM_ENTRY(_m, _n) (REG_PPE_RAM_BASE(_m) + ((_n) << 2)) #define REG_FE_GDM_TX_OK_PKT_CNT_H(_n) (GDM_BASE(_n) + 0x280) #define REG_FE_GDM_TX_OK_BYTE_CNT_H(_n) (GDM_BASE(_n) + 0x284) From patchwork Wed Feb 5 18:21:30 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13961674 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4FEFF1FCFCB; Wed, 5 Feb 2025 18:22:33 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738779753; cv=none; b=IQJkChRtml6MIA+Q6sgEzcQqA9ECeHRni+ZWe0T9PbAHszBJycABiH+nBs095UKfo0f8U4RTvEVAtSGh5FilbqJPpO8buyQ2twVBSrEmeblaz+WTYC45e8l/7UbxWQYkv3SF496wktCtmhogElRwZvJ+F+/Ba2T5gw77x2G+FcU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738779753; c=relaxed/simple; bh=tpRDx9zhLPv4QdtzfNsizx67Y3q8RleylqkMbwBIJuE=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=i2/3raT83eQQzrekkljRFEaNaKi61n1GR2AzbJed571ppSOKktQvCqDzwjL0zXDkrJpGBte1pS9f8+/G2rd2drt3Bpv2fXuPYh4gs5g/Kg7W7kAaWP0TcmwPy7A7jorHL1ncFnJQmrdnUSWYr+pooktn7PsaxjVESMrymQ95mJs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=jEnEraIP; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="jEnEraIP" Received: by smtp.kernel.org (Postfix) with ESMTPSA id A38AAC4CEDD; Wed, 5 Feb 2025 18:22:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1738779753; bh=tpRDx9zhLPv4QdtzfNsizx67Y3q8RleylqkMbwBIJuE=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=jEnEraIP2Alp/j2gFDCgN4FFAlqum4m3IMcnIWfqZjXMGCahZufBWylNgRVeeB0nu axErUdHwlMkRAgqWeNentGf0Czvx8lYa8vimXBW0jo9OVebSKcuUDcNO1ZG60S4dYy AIx1+w2wma1hIRoBNRlID89tZ3Kv97ZqMt7FSPZd0cWFQxaQv3r/nOLuM5JWT3RqVK /a/8ED7GVgPHfn7Qkuv7KqC2cG/0s77ILg4YhZpb1m22W0a7l/liV8U65t2ZhRXhWL PEeldnDKW3wMH4VRi6SE2cUvcJI1HFYaBj+KsVW4ZNkaPjKGYgAONc8/uP7QFzmyND Gxl0Dq2s+s4zA== From: Lorenzo 
Bianconi Date: Wed, 05 Feb 2025 19:21:30 +0100 Subject: [PATCH net-next 11/13] net: airoha: Introduce flowtable offload support Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20250205-airoha-en7581-flowtable-offload-v1-11-d362cfa97b01@kernel.org> References: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org> In-Reply-To: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com X-Mailer: b4 0.14.2 X-Patchwork-Delegate: kuba@kernel.org Introduce netfilter flowtable integration in order to allow airoha_eth driver to offload 5-tuple flower rules learned by the PPE module if the user accelerates them using a nft configuration similar to the one reported below: table inet filter { flowtable ft { hook ingress priority filter devices = { lan1, lan2, lan3, lan4, eth1 } flags offload; } chain forward { type filter hook forward priority filter; policy accept; meta l4proto { tcp, udp } flow add @ft } } Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/airoha/airoha_eth.c | 56 ++- drivers/net/ethernet/airoha/airoha_eth.h | 242 ++++++++++- drivers/net/ethernet/airoha/airoha_npu.c | 50 +++ drivers/net/ethernet/airoha/airoha_ppe.c | 690 ++++++++++++++++++++++++++++++- 4 files changed, 1029 insertions(+), 9 deletions(-) diff --git a/drivers/net/ethernet/airoha/airoha_eth.c b/drivers/net/ethernet/airoha/airoha_eth.c index 6c0271899de05f99df32d6ee891f305957abfdc8..138e33845d5c7b1ef421d672a18d312d77b625c9 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.c +++ b/drivers/net/ethernet/airoha/airoha_eth.c @@ -5,10 +5,8 @@ */ #include #include -#include #include #include -#include #include #include "airoha_regs.h" @@ -616,6 +614,7 @@ static int airoha_qdma_rx_process(struct airoha_queue *q, int budget) while (done < budget) { struct airoha_queue_entry *e = &q->entry[q->tail]; struct airoha_qdma_desc *desc = &q->desc[q->tail]; + u32 hash, reason, msg1 = le32_to_cpu(desc->msg1); dma_addr_t dma_addr = le32_to_cpu(desc->addr); u32 desc_ctrl = le32_to_cpu(desc->ctrl); struct airoha_gdm_port *port; @@ -678,6 +677,15 @@ static int airoha_qdma_rx_process(struct airoha_queue *q, int budget) &port->dsa_meta[sptag]->dst); } + hash = FIELD_GET(AIROHA_RXD4_FOE_ENTRY, msg1); + if (hash != AIROHA_RXD4_FOE_ENTRY) + skb_set_hash(skb, jhash_1word(hash, 0), + PKT_HASH_TYPE_L4); + + reason = FIELD_GET(AIROHA_RXD4_PPE_CPU_REASON, msg1); + if (reason == PPE_CPU_REASON_HIT_UNBIND_RATE_REACHED) + airoha_ppe_check_skb(eth->ppe, hash); + napi_gro_receive(&q->napi, skb); done++; @@ -2157,6 +2165,47 @@ static int airoha_tc_htb_alloc_leaf_queue(struct airoha_gdm_port *port, return 0; } +static int airoha_dev_setup_tc_block(struct airoha_gdm_port *port, + struct flow_block_offload *f) +{ + flow_setup_cb_t *cb = airoha_ppe_setup_tc_block_cb; + static LIST_HEAD(block_cb_list); + struct flow_block_cb *block_cb; + + if (f->binder_type != FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS) + return -EOPNOTSUPP; + + f->driver_block_list = &block_cb_list; + switch (f->command) { + case FLOW_BLOCK_BIND: + block_cb = flow_block_cb_lookup(f->block, cb, 
port->dev); + if (block_cb) { + flow_block_cb_incref(block_cb); + return 0; + } + block_cb = flow_block_cb_alloc(cb, port->dev, port->dev, NULL); + if (IS_ERR(block_cb)) + return PTR_ERR(block_cb); + + flow_block_cb_incref(block_cb); + flow_block_cb_add(block_cb, f); + list_add_tail(&block_cb->driver_list, &block_cb_list); + return 0; + case FLOW_BLOCK_UNBIND: + block_cb = flow_block_cb_lookup(f->block, cb, port->dev); + if (!block_cb) + return -ENOENT; + + if (!flow_block_cb_decref(block_cb)) { + flow_block_cb_remove(block_cb, f); + list_del(&block_cb->driver_list); + } + return 0; + default: + return -EOPNOTSUPP; + } +} + static void airoha_tc_remove_htb_queue(struct airoha_gdm_port *port, int queue) { struct net_device *dev = port->dev; @@ -2240,6 +2289,9 @@ static int airoha_dev_tc_setup(struct net_device *dev, enum tc_setup_type type, return airoha_tc_setup_qdisc_ets(port, type_data); case TC_SETUP_QDISC_HTB: return airoha_tc_setup_qdisc_htb(port, type_data); + case TC_SETUP_BLOCK: + case TC_SETUP_FT: + return airoha_dev_setup_tc_block(port, type_data); default: return -EOPNOTSUPP; } diff --git a/drivers/net/ethernet/airoha/airoha_eth.h b/drivers/net/ethernet/airoha/airoha_eth.h index e07c999ed49eca630537bccf37c98762e9e7f585..14f2d863fd1cc3f087f3f168714d0993db9e1ba7 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.h +++ b/drivers/net/ethernet/airoha/airoha_eth.h @@ -16,6 +16,8 @@ #include #include #include +#include +#include #define AIROHA_NPU_NUM_CORES 8 #define AIROHA_MAX_NUM_GDM_PORTS 4 @@ -208,8 +210,224 @@ struct airoha_hw_stats { u64 rx_len[7]; }; +enum { + PPE_CPU_REASON_HIT_UNBIND_RATE_REACHED = 0x0f, +}; + +enum { + AIROHA_FOE_STATE_INVALID, + AIROHA_FOE_STATE_UNBIND, + AIROHA_FOE_STATE_BIND, + AIROHA_FOE_STATE_FIN +}; + +enum { + PPE_PKT_TYPE_IPV4_HNAPT = 0, + PPE_PKT_TYPE_IPV4_ROUTE = 1, + PPE_PKT_TYPE_BRIDGE = 2, + PPE_PKT_TYPE_IPV4_DSLITE = 3, + PPE_PKT_TYPE_IPV6_ROUTE_3T = 4, + PPE_PKT_TYPE_IPV6_ROUTE_5T = 5, + PPE_PKT_TYPE_IPV6_6RD = 7, +}; + +#define AIROHA_FOE_MAC_PPPOE_ID GENMASK(15, 0) +#define AIROHA_FOE_MAC_SMAC_ID GENMASK(20, 16) + +struct airoha_foe_mac_info_common { + u16 vlan1; + u16 etype; + + u32 dest_mac_hi; + + u16 vlan2; + u16 dest_mac_lo; + + u32 src_mac_hi; +}; + +struct airoha_foe_mac_info { + struct airoha_foe_mac_info_common common; + + u16 pppoe_id; + u16 src_mac_lo; +}; + +#define AIROHA_FOE_IB1_UNBIND_TIMESTAMP GENMASK(7, 0) +#define AIROHA_FOE_IB1_UNBIND_PACKETS GENMASK(23, 8) +#define AIROHA_FOE_IB1_UNBIND_PREBIND BIT(24) + +#define AIROHA_FOE_IB1_BIND_TIMESTAMP GENMASK(14, 0) +#define AIROHA_FOE_IB1_BIND_KEEPALIVE BIT(15) +#define AIROHA_FOE_IB1_BIND_VLAN_LAYER GENMASK(18, 16) +#define AIROHA_FOE_IB1_BIND_PPPOE BIT(19) +#define AIROHA_FOE_IB1_BIND_VLAN_TAG BIT(20) +#define AIROHA_FOE_IB1_BIND_PKT_SAMPLE BIT(21) +#define AIROHA_FOE_IB1_BIND_CACHE BIT(22) +#define AIROHA_FOE_IB1_BIND_TUNNEL_DECAP BIT(23) +#define AIROHA_FOE_IB1_BIND_TTL BIT(24) +#define AIROHA_FOE_IB1_PACKET_TYPE GENMASK(27, 25) +#define AIROHA_FOE_IB1_STATE GENMASK(29, 28) +#define AIROHA_FOE_IB1_UDP BIT(30) +#define AIROHA_FOE_IB1_STATIC BIT(31) + +#define AIROHA_FOE_IB2_NBQ GENMASK(4, 0) +#define AIROHA_FOE_IB2_PSE_PORT GENMASK(8, 5) +#define AIROHA_FOE_IB2_PSE_QOS BIT(9) +#define AIROHA_FOE_IB2_FAST_PATH BIT(10) +#define AIROHA_FOE_IB2_MULTICAST BIT(11) +#define AIROHA_FOE_IB2_PCP BIT(12) +#define AIROHA_FOE_IB2_PORT_AG GENMASK(23, 13) +#define AIROHA_FOE_IB2_DSCP GENMASK(31, 24) + +#define AIROHA_FOE_TUNNEL_ID GENMASK(5, 0) +#define AIROHA_FOE_TUNNEL BIT(6) +#define 
AIROHA_FOE_DPI BIT(7) +#define AIROHA_FOE_QID GENMASK(10, 8) +#define AIROHA_FOE_CHANNEL GENMASK(15, 11) +#define AIROHA_FOE_SHAPER_ID GENMASK(23, 16) +#define AIROHA_FOE_ACTDP GENMASK(31, 24) + +struct airoha_foe_bridge { + u32 dest_mac_hi; + + u16 src_mac_hi; + u16 dest_mac_lo; + + u32 src_mac_lo; + + u32 ib2; + + u32 rsv[5]; + + u32 data; + + struct airoha_foe_mac_info l2; +}; + +struct airoha_foe_ipv4_tuple { + u32 src_ip; + u32 dest_ip; + union { + struct { + u16 dest_port; + u16 src_port; + }; + struct { + u8 protocol; + u8 _pad[3]; /* fill with 0xa5a5a5 */ + }; + u32 ports; + }; +}; + +struct airoha_foe_ipv4 { + struct airoha_foe_ipv4_tuple orig_tuple; + + u32 ib2; + + struct airoha_foe_ipv4_tuple new_tuple; + + u32 rsv[2]; + + u32 data; + + struct airoha_foe_mac_info l2; +}; + +struct airoha_foe_ipv4_dslite { + struct airoha_foe_ipv4_tuple ip4; + + u32 ib2; + + u8 flow_label[3]; + u8 priority; + + u32 rsv[4]; + + u32 data; + + struct airoha_foe_mac_info l2; +}; + +struct airoha_foe_ipv6 { + u32 src_ip[4]; + u32 dest_ip[4]; + + union { + struct { + u16 dest_port; + u16 src_port; + }; + struct { + u8 protocol; + u8 pad[3]; + }; + u32 ports; + }; + + u32 data; + + u32 ib2; + + struct airoha_foe_mac_info_common l2; +}; + struct airoha_foe_entry { - u8 data[PPE_ENTRY_SIZE]; + union { + struct { + u32 ib1; + union { + struct airoha_foe_bridge bridge; + struct airoha_foe_ipv4 ipv4; + struct airoha_foe_ipv4_dslite dslite; + struct airoha_foe_ipv6 ipv6; + DECLARE_FLEX_ARRAY(u32, d); + }; + }; + u8 data[PPE_ENTRY_SIZE]; + }; +}; + +struct airoha_flow_data { + struct ethhdr eth; + + union { + struct { + __be32 src_addr; + __be32 dst_addr; + } v4; + + struct { + struct in6_addr src_addr; + struct in6_addr dst_addr; + } v6; + }; + + __be16 src_port; + __be16 dst_port; + + u16 vlan_in; + + struct { + u16 id; + __be16 proto; + u8 num; + } vlan; + struct { + u16 sid; + u8 num; + } pppoe; +}; + +struct airoha_flow_table_entry { + struct hlist_node list; + + struct airoha_foe_entry data; + u32 hash; + + struct rhash_head node; + unsigned long cookie; }; struct airoha_qdma { @@ -265,11 +483,17 @@ struct airoha_npu { } cores[AIROHA_NPU_NUM_CORES]; }; +#define AIROHA_RXD4_FOE_ENTRY GENMASK(15, 0) +#define AIROHA_RXD4_PPE_CPU_REASON GENMASK(20, 16) + struct airoha_ppe { struct airoha_eth *eth; void *foe; dma_addr_t foe_dma; + + struct hlist_head *foe_flow; + u16 foe_check_time[PPE_NUM_ENTRIES]; }; struct airoha_eth { @@ -280,6 +504,7 @@ struct airoha_eth { struct airoha_npu *npu; struct airoha_ppe *ppe; + struct rhashtable flow_table; struct reset_control_bulk_data rsts[AIROHA_MAX_NUM_RSTS]; struct reset_control_bulk_data xsi_rsts[AIROHA_MAX_NUM_XSI_RSTS]; @@ -317,6 +542,12 @@ u32 airoha_rmw(void __iomem *base, u32 offset, u32 mask, u32 val); airoha_rmw((qdma)->regs, (offset), (val), 0) bool airoha_ppe2_is_enabled(struct airoha_eth *eth); +void airoha_ppe_check_skb(struct airoha_ppe *ppe, u16 hash); +int airoha_ppe_setup_tc_block_cb(enum tc_setup_type type, void *type_data, + void *cb_priv); +u32 airoha_ppe_get_timestamp(struct airoha_ppe *ppe); +struct airoha_foe_entry *airoha_ppe_foe_get_entry(struct airoha_ppe *ppe, + u32 hash); #ifdef CONFIG_NET_AIROHA_NPU int airoha_ppe_init(struct airoha_eth *eth); @@ -327,6 +558,8 @@ int airoha_npu_ppe_init(struct airoha_npu *npu); int airoha_npu_ppe_deinit(struct airoha_npu *npu); int airoha_npu_flush_ppe_sram_entries(struct airoha_npu *npu, struct airoha_ppe *ppe); +int airoha_npu_foe_commit_entry(struct airoha_ppe *ppe, + struct airoha_foe_entry *e, u32 hash); 
#else static inline int airoha_ppe_init(struct airoha_eth *eth) { @@ -361,6 +594,13 @@ static inline int airoha_npu_flush_ppe_sram_entries(struct airoha_npu *npu, { return -EOPNOTSUPP; } + +static inline int airoha_npu_foe_commit_entry(struct airoha_ppe *ppe, + struct airoha_foe_entry *e, + u32 hash) +{ + return -EOPNOTSUPP; +} #endif /* CONFIG_NET_AIROHA_NPU */ #endif /* AIROHA_ETH_H */ diff --git a/drivers/net/ethernet/airoha/airoha_npu.c b/drivers/net/ethernet/airoha/airoha_npu.c index 23a3c9c410e193f2694b6a4bc974b5d985c7e004..52e0fb7ddda42eb3be24e6bfe1acfc319d941334 100644 --- a/drivers/net/ethernet/airoha/airoha_npu.c +++ b/drivers/net/ethernet/airoha/airoha_npu.c @@ -335,6 +335,56 @@ static irqreturn_t airoha_npu_wdt_handler(int irq, void *core_instance) return IRQ_HANDLED; } +int airoha_npu_foe_commit_entry(struct airoha_ppe *ppe, + struct airoha_foe_entry *e, u32 hash) +{ + struct airoha_foe_entry *hwe = ppe->foe + hash * sizeof(*hwe); + u16 ts = airoha_ppe_get_timestamp(ppe); + + memcpy(&hwe->d, &e->d, sizeof(*hwe) - sizeof(hwe->ib1)); + wmb(); + + e->ib1 &= ~AIROHA_FOE_IB1_BIND_TIMESTAMP; + e->ib1 |= FIELD_PREP(AIROHA_FOE_IB1_BIND_TIMESTAMP, ts); + hwe->ib1 = e->ib1; + + if (hash < PPE_SRAM_NUM_ENTRIES) { + dma_addr_t addr = ppe->foe_dma + hash * sizeof(*hwe); + struct ppe_mbox_data ppe_data = { + .func_type = NPU_OP_SET, + .func_id = PPE_FUNC_SET_WAIT_API, + .set_info = { + .data = addr, + .size = sizeof(*hwe), + }, + }; + struct airoha_eth *eth = ppe->eth; + bool ppe2; + int err; + + ppe2 = airoha_ppe2_is_enabled(ppe->eth) && + hash >= PPE1_SRAM_NUM_ENTRIES; + ppe_data.set_info.func_id = ppe2 ? PPE2_SRAM_SET_ENTRY + : PPE_SRAM_SET_ENTRY; + + err = airoha_npu_send_msg(eth->npu, NPU_FUNC_PPE, &ppe_data, + sizeof(struct ppe_mbox_data)); + if (err) + return err; + + ppe_data.set_info.func_id = PPE_SRAM_SET_VAL; + ppe_data.set_info.data = hash; + ppe_data.set_info.size = sizeof(u32); + + err = airoha_npu_send_msg(eth->npu, NPU_FUNC_PPE, &ppe_data, + sizeof(struct ppe_mbox_data)); + if (err) + return err; + } + + return 0; +} + struct airoha_npu *airoha_npu_init(struct airoha_eth *eth) { struct reserved_mem *rmem; diff --git a/drivers/net/ethernet/airoha/airoha_ppe.c b/drivers/net/ethernet/airoha/airoha_ppe.c index bb64c3c09fa37b797bd028ad7012d81662d82ba6..e9d67a6d1fbcc2a565d7de043e705b6b4ab26059 100644 --- a/drivers/net/ethernet/airoha/airoha_ppe.c +++ b/drivers/net/ethernet/airoha/airoha_ppe.c @@ -4,14 +4,678 @@ * Author: Lorenzo Bianconi */ +#include +#include +#include +#include + #include "airoha_regs.h" #include "airoha_eth.h" +static DEFINE_MUTEX(flow_offload_mutex); +static DEFINE_SPINLOCK(ppe_lock); + bool airoha_ppe2_is_enabled(struct airoha_eth *eth) { return airoha_fe_rr(eth, REG_PPE_GLO_CFG(1)) & PPE_GLO_CFG_EN_MASK; } +static const struct rhashtable_params airoha_flow_table_params = { + .head_offset = offsetof(struct airoha_flow_table_entry, node), + .key_offset = offsetof(struct airoha_flow_table_entry, cookie), + .key_len = sizeof(unsigned long), + .automatic_shrinking = true, +}; + +u32 airoha_ppe_get_timestamp(struct airoha_ppe *ppe) +{ + u16 timestamp = airoha_fe_rr(ppe->eth, REG_FE_FOE_TS); + + return FIELD_GET(AIROHA_FOE_IB1_BIND_TIMESTAMP, timestamp); +} + +static void airoha_ppe_flow_mangle_eth(const struct flow_action_entry *act, void *eth) +{ + void *dest = eth + act->mangle.offset; + const void *src = &act->mangle.val; + + if (act->mangle.offset > 8) + return; + + if (act->mangle.mask == 0xffff) { + src += 2; + dest += 2; + } + + memcpy(dest, src, 
act->mangle.mask ? 2 : 4); +} + +static int airoha_ppe_flow_mangle_ports(const struct flow_action_entry *act, + struct airoha_flow_data *data) +{ + u32 val = be32_to_cpu(act->mangle.val); + + switch (act->mangle.offset) { + case 0: + if (act->mangle.mask == ~cpu_to_be32(0xffff)) + data->dst_port = cpu_to_be16(val); + else + data->src_port = cpu_to_be16(val >> 16); + break; + case 2: + data->dst_port = cpu_to_be16(val); + break; + default: + return -EINVAL; + } + + return 0; +} + +static int airoha_ppe_flow_mangle_ipv4(const struct flow_action_entry *act, + struct airoha_flow_data *data) +{ + __be32 *dest; + + switch (act->mangle.offset) { + case offsetof(struct iphdr, saddr): + dest = &data->v4.src_addr; + break; + case offsetof(struct iphdr, daddr): + dest = &data->v4.dst_addr; + break; + default: + return -EINVAL; + } + + memcpy(dest, &act->mangle.val, sizeof(u32)); + + return 0; +} + +static int airoha_get_dsa_port(struct net_device **dev) +{ +#if IS_ENABLED(CONFIG_NET_DSA) + struct dsa_port *dp = dsa_port_from_netdev(*dev); + + if (IS_ERR(dp)) + return -ENODEV; + + *dev = dsa_port_to_conduit(dp); + return dp->index; +#else + return -ENODEV; +#endif +} + +static int airoha_ppe_foe_entry_prepare(struct airoha_foe_entry *hwe, + struct net_device *dev, int type, + int l4proto, u8 *src_mac, u8 *dest_mac) +{ + int dsa_port = airoha_get_dsa_port(&dev); + struct airoha_foe_mac_info_common *l2; + u32 data, ports_pad, val; + + memset(hwe, 0, sizeof(*hwe)); + + val = FIELD_PREP(AIROHA_FOE_IB1_STATE, AIROHA_FOE_STATE_BIND) | + FIELD_PREP(AIROHA_FOE_IB1_PACKET_TYPE, type) | + FIELD_PREP(AIROHA_FOE_IB1_UDP, l4proto == IPPROTO_UDP) | + AIROHA_FOE_IB1_BIND_TTL; + hwe->ib1 = val; + + val = FIELD_PREP(AIROHA_FOE_IB2_PORT_AG, 0x1f); + if (dsa_port >= 0) + val |= FIELD_PREP(AIROHA_FOE_IB2_NBQ, dsa_port); + if (dev) { + struct airoha_gdm_port *port = netdev_priv(dev); + u8 pse_port; + + pse_port = port->id == 4 ? 
FE_PSE_PORT_GDM4 : port->id; + val |= FIELD_PREP(AIROHA_FOE_IB2_PSE_PORT, pse_port); + } + + /* FIXME: implement QoS support setting pse_port to 2 (loopback) + * for uplink and setting qos bit in ib2 + */ + + if (is_multicast_ether_addr(dest_mac)) + val |= AIROHA_FOE_IB2_MULTICAST; + + ports_pad = 0xa5a5a500 | (l4proto & 0xff); + if (type == PPE_PKT_TYPE_IPV4_ROUTE) + hwe->ipv4.orig_tuple.ports = ports_pad; + if (type == PPE_PKT_TYPE_IPV6_ROUTE_3T) + hwe->ipv6.ports = ports_pad; + + data = FIELD_PREP(AIROHA_FOE_SHAPER_ID, 0x7f); + if (type == PPE_PKT_TYPE_BRIDGE) { + hwe->bridge.dest_mac_hi = get_unaligned_be32(dest_mac); + hwe->bridge.dest_mac_lo = get_unaligned_be16(dest_mac + 4); + hwe->bridge.src_mac_hi = get_unaligned_be16(src_mac); + hwe->bridge.src_mac_lo = get_unaligned_be32(src_mac + 2); + hwe->bridge.data = data; + hwe->bridge.ib2 = val; + l2 = &hwe->bridge.l2.common; + } else if (type >= PPE_PKT_TYPE_IPV6_ROUTE_3T) { + hwe->ipv6.data = data; + hwe->ipv6.ib2 = val; + l2 = &hwe->ipv6.l2; + } else { + hwe->ipv4.data = data; + hwe->ipv4.ib2 = val; + l2 = &hwe->ipv4.l2.common; + } + + l2->dest_mac_hi = get_unaligned_be32(dest_mac); + l2->dest_mac_lo = get_unaligned_be16(dest_mac + 4); + if (type <= PPE_PKT_TYPE_IPV4_DSLITE) { + l2->src_mac_hi = get_unaligned_be32(src_mac); + hwe->ipv4.l2.src_mac_lo = get_unaligned_be16(src_mac + 4); + } else { + l2->src_mac_hi = FIELD_PREP(AIROHA_FOE_MAC_SMAC_ID, 0xf); + } + + if (dsa_port >= 0) + l2->etype = BIT(15) | BIT(dsa_port); + else if (type >= PPE_PKT_TYPE_IPV6_ROUTE_3T) + l2->etype = ETH_P_IPV6; + else + l2->etype = ETH_P_IP; + + return 0; +} + +static int airoha_ppe_foe_entry_set_ipv4_tuple(struct airoha_foe_entry *hwe, + struct airoha_flow_data *data, + bool egress) +{ + int type = FIELD_GET(AIROHA_FOE_IB1_PACKET_TYPE, hwe->ib1); + struct airoha_foe_ipv4_tuple *t; + + switch (type) { + case PPE_PKT_TYPE_IPV4_HNAPT: + if (egress) { + t = &hwe->ipv4.new_tuple; + break; + } + fallthrough; + case PPE_PKT_TYPE_IPV4_DSLITE: + case PPE_PKT_TYPE_IPV4_ROUTE: + t = &hwe->ipv4.orig_tuple; + break; + default: + WARN_ON_ONCE(1); + return -EINVAL; + } + + t->src_ip = be32_to_cpu(data->v4.src_addr); + t->dest_ip = be32_to_cpu(data->v4.dst_addr); + + if (type != PPE_PKT_TYPE_IPV4_ROUTE) { + t->src_port = be16_to_cpu(data->src_port); + t->dest_port = be16_to_cpu(data->dst_port); + } + + return 0; +} + +static int airoha_ppe_foe_entry_set_ipv6_tuple(struct airoha_foe_entry *hwe, + struct airoha_flow_data *data) + +{ + int type = FIELD_GET(AIROHA_FOE_IB1_PACKET_TYPE, hwe->ib1); + u32 *src, *dest; + + switch (type) { + case PPE_PKT_TYPE_IPV6_ROUTE_5T: + case PPE_PKT_TYPE_IPV6_6RD: + hwe->ipv6.src_port = be16_to_cpu(data->src_port); + hwe->ipv6.dest_port = be16_to_cpu(data->dst_port); + fallthrough; + case PPE_PKT_TYPE_IPV6_ROUTE_3T: + src = hwe->ipv6.src_ip; + dest = hwe->ipv6.dest_ip; + break; + default: + WARN_ON_ONCE(1); + return -EINVAL; + } + + ipv6_addr_be32_to_cpu(src, data->v6.src_addr.s6_addr32); + ipv6_addr_be32_to_cpu(dest, data->v6.dst_addr.s6_addr32); + + return 0; +} + +static u32 airoha_ppe_foe_get_entry_hash(struct airoha_foe_entry *hwe) +{ + int type = FIELD_GET(AIROHA_FOE_IB1_PACKET_TYPE, hwe->ib1); + u32 hash, hv1, hv2, hv3; + + switch (type) { + case PPE_PKT_TYPE_IPV4_ROUTE: + case PPE_PKT_TYPE_IPV4_HNAPT: + hv1 = hwe->ipv4.orig_tuple.ports; + hv2 = hwe->ipv4.orig_tuple.dest_ip; + hv3 = hwe->ipv4.orig_tuple.src_ip; + break; + case PPE_PKT_TYPE_IPV6_ROUTE_3T: + case PPE_PKT_TYPE_IPV6_ROUTE_5T: + hv1 = hwe->ipv6.src_ip[3] ^ 
hwe->ipv6.dest_ip[3]; + hv1 ^= hwe->ipv6.ports; + + hv2 = hwe->ipv6.src_ip[2] ^ hwe->ipv6.dest_ip[2]; + hv2 ^= hwe->ipv6.dest_ip[0]; + + hv3 = hwe->ipv6.src_ip[1] ^ hwe->ipv6.dest_ip[1]; + hv3 ^= hwe->ipv6.src_ip[0]; + break; + case PPE_PKT_TYPE_IPV4_DSLITE: + case PPE_PKT_TYPE_IPV6_6RD: + default: + WARN_ON_ONCE(1); + return PPE_HASH_MASK; + } + + hash = (hv1 & hv2) | ((~hv1) & hv3); + hash = (hash >> 24) | ((hash & 0xffffff) << 8); + hash ^= hv1 ^ hv2 ^ hv3; + hash ^= hash >> 16; + hash &= PPE_NUM_ENTRIES - 1; + + return hash; +} + +struct airoha_foe_entry *airoha_ppe_foe_get_entry(struct airoha_ppe *ppe, + u32 hash) +{ + if (hash < PPE_SRAM_NUM_ENTRIES) { + u32 *hwe = ppe->foe + hash * sizeof(struct airoha_foe_entry); + struct airoha_eth *eth = ppe->eth; + bool ppe2; + u32 val; + int i; + + ppe2 = airoha_ppe2_is_enabled(ppe->eth) && + hash >= PPE1_SRAM_NUM_ENTRIES; + airoha_fe_wr(ppe->eth, REG_PPE_RAM_CTRL(ppe2), + FIELD_PREP(PPE_SRAM_CTRL_ENTRY_MASK, hash) | + PPE_SRAM_CTRL_REQ_MASK); + if (read_poll_timeout_atomic(airoha_fe_rr, val, + val & PPE_SRAM_CTRL_ACK_MASK, + 10, 100, false, eth, + REG_PPE_RAM_CTRL(ppe2))) + return NULL; + + for (i = 0; i < sizeof(struct airoha_foe_entry) / 4; i++) + hwe[i] = airoha_fe_rr(eth, + REG_PPE_RAM_ENTRY(ppe2, i)); + } + + return ppe->foe + hash * sizeof(struct airoha_foe_entry); +} + +static bool airoha_ppe_foe_compare_entry(struct airoha_flow_table_entry *e, + struct airoha_foe_entry *hwe) +{ + int type = FIELD_GET(AIROHA_FOE_IB1_PACKET_TYPE, e->data.ib1), len; + + if ((hwe->ib1 ^ e->data.ib1) & AIROHA_FOE_IB1_UDP) + return false; + + if (type > PPE_PKT_TYPE_IPV4_DSLITE) + len = offsetof(struct airoha_foe_entry, ipv6.data); + else + len = offsetof(struct airoha_foe_entry, ipv4.ib2); + + return !memcmp(&e->data.d, &hwe->d, len - sizeof(hwe->ib1)); +} + +static void airoha_ppe_foe_insert_entry(struct airoha_ppe *ppe, u32 hash) +{ + struct airoha_flow_table_entry *e; + struct airoha_foe_entry *hwe; + struct hlist_node *n; + u32 index; + + spin_lock_bh(&ppe_lock); + + hwe = airoha_ppe_foe_get_entry(ppe, hash); + if (!hwe) + goto unlock; + + if (FIELD_GET(AIROHA_FOE_IB1_STATE, hwe->ib1) == AIROHA_FOE_STATE_BIND) + goto unlock; + + index = airoha_ppe_foe_get_entry_hash(hwe); + hlist_for_each_entry_safe(e, n, &ppe->foe_flow[index], list) { + if (airoha_ppe_foe_compare_entry(e, hwe)) { + airoha_npu_foe_commit_entry(ppe, &e->data, hash); + e->hash = hash; + break; + } + } +unlock: + spin_unlock_bh(&ppe_lock); +} + +static int airoha_ppe_foe_flow_commit_entry(struct airoha_ppe *ppe, + struct airoha_flow_table_entry *e) +{ + u32 hash = airoha_ppe_foe_get_entry_hash(&e->data); + + e->hash = 0xffff; + + spin_lock_bh(&ppe_lock); + hlist_add_head(&e->list, &ppe->foe_flow[hash]); + spin_unlock_bh(&ppe_lock); + + return 0; +} + +static void airoha_ppe_foe_flow_remove_entry(struct airoha_ppe *ppe, + struct airoha_flow_table_entry *e) +{ + spin_lock_bh(&ppe_lock); + + hlist_del_init(&e->list); + if (e->hash != 0xffff) { + e->data.ib1 &= ~AIROHA_FOE_IB1_STATE; + e->data.ib1 |= FIELD_PREP(AIROHA_FOE_IB1_STATE, + AIROHA_FOE_STATE_INVALID); + airoha_npu_foe_commit_entry(ppe, &e->data, e->hash); + e->hash = 0xffff; + } + + spin_unlock_bh(&ppe_lock); +} + +static int airoha_ppe_flow_offload_replace(struct airoha_gdm_port *port, + struct flow_cls_offload *f) +{ + struct flow_rule *rule = flow_cls_offload_flow_rule(f); + struct airoha_eth *eth = port->qdma->eth; + struct airoha_flow_table_entry *e; + struct airoha_flow_data data = {}; + struct net_device *odev = NULL; + struct 
flow_action_entry *act; + struct airoha_foe_entry hwe; + int err, i, offload_type; + u16 addr_type = 0; + u8 l4proto = 0; + + if (rhashtable_lookup(ð->flow_table, &f->cookie, + airoha_flow_table_params)) + return -EEXIST; + + if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_META)) + return -EOPNOTSUPP; + + if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_CONTROL)) { + struct flow_match_control match; + + flow_rule_match_control(rule, &match); + addr_type = match.key->addr_type; + if (flow_rule_has_control_flags(match.mask->flags, + f->common.extack)) + return -EOPNOTSUPP; + } else { + return -EOPNOTSUPP; + } + + if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC)) { + struct flow_match_basic match; + + flow_rule_match_basic(rule, &match); + l4proto = match.key->ip_proto; + } else { + return -EOPNOTSUPP; + } + + switch (addr_type) { + case 0: + offload_type = PPE_PKT_TYPE_BRIDGE; + if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ETH_ADDRS)) { + struct flow_match_eth_addrs match; + + flow_rule_match_eth_addrs(rule, &match); + memcpy(data.eth.h_dest, match.key->dst, ETH_ALEN); + memcpy(data.eth.h_source, match.key->src, ETH_ALEN); + } else { + return -EOPNOTSUPP; + } + + if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_VLAN)) { + struct flow_match_vlan match; + + flow_rule_match_vlan(rule, &match); + if (match.key->vlan_tpid != cpu_to_be16(ETH_P_8021Q)) + return -EOPNOTSUPP; + + data.vlan_in = match.key->vlan_id; + } + break; + case FLOW_DISSECTOR_KEY_IPV4_ADDRS: + offload_type = PPE_PKT_TYPE_IPV4_HNAPT; + break; + case FLOW_DISSECTOR_KEY_IPV6_ADDRS: + offload_type = PPE_PKT_TYPE_IPV6_ROUTE_5T; + break; + default: + return -EOPNOTSUPP; + } + + flow_action_for_each(i, act, &rule->action) { + switch (act->id) { + case FLOW_ACTION_MANGLE: + if (offload_type == PPE_PKT_TYPE_BRIDGE) + return -EOPNOTSUPP; + + if (act->mangle.htype == FLOW_ACT_MANGLE_HDR_TYPE_ETH) + airoha_ppe_flow_mangle_eth(act, &data.eth); + break; + case FLOW_ACTION_REDIRECT: + odev = act->dev; + break; + case FLOW_ACTION_CSUM: + break; + case FLOW_ACTION_VLAN_PUSH: + if (data.vlan.num == 1 || + act->vlan.proto != htons(ETH_P_8021Q)) + return -EOPNOTSUPP; + + data.vlan.id = act->vlan.vid; + data.vlan.proto = act->vlan.proto; + data.vlan.num++; + break; + case FLOW_ACTION_VLAN_POP: + break; + case FLOW_ACTION_PPPOE_PUSH: + if (data.pppoe.num == 1) + return -EOPNOTSUPP; + + data.pppoe.sid = act->pppoe.sid; + data.pppoe.num++; + break; + default: + return -EOPNOTSUPP; + } + } + + if (!is_valid_ether_addr(data.eth.h_source) || + !is_valid_ether_addr(data.eth.h_dest)) + return -EINVAL; + + err = airoha_ppe_foe_entry_prepare(&hwe, odev, offload_type, l4proto, + data.eth.h_source, data.eth.h_dest); + if (err) + return err; + + if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_PORTS)) { + struct flow_match_ports ports; + + if (offload_type == PPE_PKT_TYPE_BRIDGE) + return -EOPNOTSUPP; + + flow_rule_match_ports(rule, &ports); + data.src_port = ports.key->src; + data.dst_port = ports.key->dst; + } else if (offload_type != PPE_PKT_TYPE_BRIDGE) { + return -EOPNOTSUPP; + } + + if (addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) { + struct flow_match_ipv4_addrs addrs; + + flow_rule_match_ipv4_addrs(rule, &addrs); + data.v4.src_addr = addrs.key->src; + data.v4.dst_addr = addrs.key->dst; + airoha_ppe_foe_entry_set_ipv4_tuple(&hwe, &data, false); + } + + if (addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) { + struct flow_match_ipv6_addrs addrs; + + flow_rule_match_ipv6_addrs(rule, &addrs); + + data.v6.src_addr = addrs.key->src; + data.v6.dst_addr = 
addrs.key->dst; + airoha_ppe_foe_entry_set_ipv6_tuple(&hwe, &data); + } + + flow_action_for_each(i, act, &rule->action) { + if (act->id != FLOW_ACTION_MANGLE) + continue; + + if (offload_type == PPE_PKT_TYPE_BRIDGE) + return -EOPNOTSUPP; + + switch (act->mangle.htype) { + case FLOW_ACT_MANGLE_HDR_TYPE_TCP: + case FLOW_ACT_MANGLE_HDR_TYPE_UDP: + err = airoha_ppe_flow_mangle_ports(act, &data); + break; + case FLOW_ACT_MANGLE_HDR_TYPE_IP4: + err = airoha_ppe_flow_mangle_ipv4(act, &data); + break; + case FLOW_ACT_MANGLE_HDR_TYPE_ETH: + /* handled earlier */ + break; + default: + return -EOPNOTSUPP; + } + + if (err) + return err; + } + + if (addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) { + err = airoha_ppe_foe_entry_set_ipv4_tuple(&hwe, &data, true); + if (err) + return err; + } + + e = kzalloc(sizeof(*e), GFP_KERNEL); + if (!e) + return -ENOMEM; + + e->cookie = f->cookie; + memcpy(&e->data, &hwe, sizeof(e->data)); + + err = airoha_ppe_foe_flow_commit_entry(eth->ppe, e); + if (err) + goto free_entry; + + err = rhashtable_insert_fast(ð->flow_table, &e->node, + airoha_flow_table_params); + if (err < 0) + goto remove_foe_entry; + + return 0; + +remove_foe_entry: + airoha_ppe_foe_flow_remove_entry(eth->ppe, e); +free_entry: + kfree(e); + + return err; +} + +static int airoha_ppe_flow_offload_destroy(struct airoha_gdm_port *port, + struct flow_cls_offload *f) +{ + struct airoha_eth *eth = port->qdma->eth; + struct airoha_flow_table_entry *e; + + e = rhashtable_lookup(ð->flow_table, &f->cookie, + airoha_flow_table_params); + if (!e) + return -ENOENT; + + airoha_ppe_foe_flow_remove_entry(eth->ppe, e); + rhashtable_remove_fast(ð->flow_table, &e->node, + airoha_flow_table_params); + kfree(e); + + return 0; +} + +static int airoha_ppe_flow_offload_cmd(struct airoha_gdm_port *port, + struct flow_cls_offload *f) +{ + int err = -EOPNOTSUPP; + + mutex_lock(&flow_offload_mutex); + + switch (f->command) { + case FLOW_CLS_REPLACE: + err = airoha_ppe_flow_offload_replace(port, f); + break; + case FLOW_CLS_DESTROY: + err = airoha_ppe_flow_offload_destroy(port, f); + break; + default: + break; + } + + mutex_unlock(&flow_offload_mutex); + + return err; +} + +int airoha_ppe_setup_tc_block_cb(enum tc_setup_type type, void *type_data, + void *cb_priv) +{ + struct flow_cls_offload *cls = type_data; + struct net_device *dev = cb_priv; + struct airoha_gdm_port *port = netdev_priv(dev); + + if (!tc_can_offload(dev) || type != TC_SETUP_CLSFLOWER) + return -EOPNOTSUPP; + + if (!port->qdma->eth->npu) + return -EOPNOTSUPP; + + return airoha_ppe_flow_offload_cmd(port, cls); +} + +void airoha_ppe_check_skb(struct airoha_ppe *ppe, u16 hash) +{ + u16 now, diff; + + if (hash > PPE_HASH_MASK) + return; + + now = (u16)jiffies; + diff = now - ppe->foe_check_time[hash]; + if (diff < HZ / 10) + return; + + ppe->foe_check_time[hash] = now; + airoha_ppe_foe_insert_entry(ppe, hash); +} + static void airoha_ppe_hw_init(struct airoha_ppe *ppe) { struct airoha_eth *eth = ppe->eth; @@ -96,33 +760,47 @@ int airoha_ppe_init(struct airoha_eth *eth) ppe->eth = eth; eth->ppe = ppe; + ppe->foe_flow = devm_kzalloc(eth->dev, + PPE_NUM_ENTRIES * sizeof(*ppe->foe_flow), + GFP_KERNEL); + if (!ppe->foe_flow) + return -ENOMEM; + + err = rhashtable_init(ð->flow_table, &airoha_flow_table_params); + if (err) + return err; + npu = airoha_npu_init(eth); - if (IS_ERR(npu)) - return PTR_ERR(npu); + if (IS_ERR(npu)) { + err = PTR_ERR(npu); + goto error_destroy_flow_table; + } eth->npu = npu; err = airoha_npu_ppe_init(npu); if (err) - goto error; + goto 
error_npu_deinit; airoha_ppe_hw_init(ppe); err = airoha_npu_flush_ppe_sram_entries(npu, ppe); if (err) - goto error; + goto error_npu_deinit; return 0; -error: +error_npu_deinit: airoha_npu_deinit(npu); eth->npu = NULL; +error_destroy_flow_table: + rhashtable_destroy(&eth->flow_table); return err; } - void airoha_ppe_deinit(struct airoha_eth *eth) { if (eth->npu) { airoha_npu_ppe_deinit(eth->npu); airoha_npu_deinit(eth->npu); } + rhashtable_destroy(&eth->flow_table); }
From patchwork Wed Feb 5 18:21:31 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13961675 X-Patchwork-Delegate: kuba@kernel.org From: Lorenzo Bianconi Date: Wed, 05 Feb 2025 19:21:31 +0100 Subject: [PATCH net-next 12/13] net: airoha: Add loopback support for GDM2 Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20250205-airoha-en7581-flowtable-offload-v1-12-d362cfa97b01@kernel.org> References: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org> In-Reply-To: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org> To: Andrew Lunn , "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com X-Mailer: b4 0.14.2 X-Patchwork-Delegate: kuba@kernel.org Enable hw redirection for traffic received on GDM2 port to GDM{3,4}. This is required to apply Qdisc offloading (HTB or ETS) for traffic to and from GDM{3,4} port. Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/airoha/airoha_eth.c | 67 ++++++++++++++++++++++++++++++- drivers/net/ethernet/airoha/airoha_eth.h | 7 ++++ drivers/net/ethernet/airoha/airoha_ppe.c | 13 +++--- drivers/net/ethernet/airoha/airoha_regs.h | 29 +++++++++++++ 4 files changed, 108 insertions(+), 8 deletions(-) diff --git a/drivers/net/ethernet/airoha/airoha_eth.c b/drivers/net/ethernet/airoha/airoha_eth.c index 138e33845d5c7b1ef421d672a18d312d77b625c9..b17476b446466f9bb5a0be6a55fa2ad208443cbb 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.c +++ b/drivers/net/ethernet/airoha/airoha_eth.c @@ -1583,14 +1583,77 @@ static int airoha_dev_set_macaddr(struct net_device *dev, void *p) return 0; } +static void airhoha_set_gdm2_loopback(struct airoha_gdm_port *port) +{ + u32 pse_port = port->id == 3 ? FE_PSE_PORT_GDM3 : FE_PSE_PORT_GDM4; + struct airoha_eth *eth = port->qdma->eth; + u32 chan = port->id == 3 ? 4 : 0; + + /* Forward the traffic to the proper GDM port */ + airoha_set_gdm_port_fwd_cfg(eth, REG_GDM_FWD_CFG(2), pse_port); + airoha_fe_clear(eth, REG_GDM_FWD_CFG(2), GDM_STRIP_CRC); + airoha_fe_rmw(eth, REG_GDM_LEN_CFG(2), GDM_LONG_LEN_MASK, + FIELD_PREP(GDM_LONG_LEN_MASK, 2000)); + + /* Enable GDM2 loopback */ + airoha_fe_wr(eth, REG_GDM_TXCHN_EN(2), 0xffffffff); + airoha_fe_wr(eth, REG_GDM_RXCHN_EN(2), 0xffff); + airoha_fe_rmw(eth, REG_GDM_LPBK_CFG(2), + LPBK_CHAN_MASK | LPBK_MODE_MASK | LPBK_EN_MASK, + FIELD_PREP(LPBK_CHAN_MASK, chan) | LPBK_EN_MASK); + + /* Disable VIP and IFC for GDM2 */ + airoha_fe_clear(eth, REG_FE_VIP_PORT_EN, BIT(2)); + airoha_fe_clear(eth, REG_FE_IFC_PORT_EN, BIT(2)); + + if (port->id == 3) { + /* FIXME: handle XSI_PCE1_PORT */ + airoha_fe_wr(eth, REG_PPE_DFT_CPORT0(0), 0x5500); + airoha_fe_rmw(eth, REG_FE_WAN_PORT, + WAN1_EN_MASK | WAN1_MASK | WAN0_MASK, + FIELD_PREP(WAN0_MASK, HSGMII_LAN_PCIE0_SRCPORT)); + airoha_fe_rmw(eth, + REG_SP_DFT_CPORT(HSGMII_LAN_PCIE0_SRCPORT >> 3), + SP_CPORT_PCIE0_MASK, + FIELD_PREP(SP_CPORT_PCIE0_MASK, + FE_PSE_PORT_CDM2)); + } else { + /* FIXME: handle XSI_USB_PORT */ + airoha_fe_rmw(eth, REG_SRC_PORT_FC_MAP6, + FC_ID_OF_SRC_PORT24_MASK, + FIELD_PREP(FC_ID_OF_SRC_PORT24_MASK, 2)); + airoha_fe_rmw(eth, REG_FE_WAN_PORT, + WAN1_EN_MASK | WAN1_MASK | WAN0_MASK, + FIELD_PREP(WAN0_MASK, HSGMII_LAN_ETH_SRCPORT)); + airoha_fe_rmw(eth, + REG_SP_DFT_CPORT(HSGMII_LAN_ETH_SRCPORT >> 3), + SP_CPORT_ETH_MASK, + FIELD_PREP(SP_CPORT_ETH_MASK, FE_PSE_PORT_CDM2)); + } +} + static int airoha_dev_init(struct net_device *dev) { struct airoha_gdm_port *port = netdev_priv(dev); struct airoha_eth *eth = port->qdma->eth; + u32 pse_port; airoha_set_macaddr(port, dev->dev_addr); - airoha_set_gdm_port_fwd_cfg(eth, REG_GDM_FWD_CFG(port->id), - FE_PSE_PORT_PPE1); + + switch (port->id) { + case 3: + case 4: + airhoha_set_gdm2_loopback(port); + fallthrough; + case 2: + pse_port = FE_PSE_PORT_PPE2; + break; + default: + pse_port = FE_PSE_PORT_PPE1; + break; + } + + 
airoha_set_gdm_port_fwd_cfg(eth, REG_GDM_FWD_CFG(port->id), pse_port); return 0; } diff --git a/drivers/net/ethernet/airoha/airoha_eth.h b/drivers/net/ethernet/airoha/airoha_eth.h index 14f2d863fd1cc3f087f3f168714d0993db9e1ba7..20e870a5c20091f0bb572f87f111c660b2075c32 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.h +++ b/drivers/net/ethernet/airoha/airoha_eth.h @@ -72,6 +72,13 @@ enum { QDMA_INT_REG_MAX }; +enum { + HSGMII_LAN_PCIE0_SRCPORT = 0x16, + HSGMII_LAN_PCIE1_SRCPORT, + HSGMII_LAN_ETH_SRCPORT, + HSGMII_LAN_USB_SRCPORT, +}; + enum { XSI_PCIE0_VIP_PORT_MASK = BIT(22), XSI_PCIE1_VIP_PORT_MASK = BIT(23), diff --git a/drivers/net/ethernet/airoha/airoha_ppe.c b/drivers/net/ethernet/airoha/airoha_ppe.c index e9d67a6d1fbcc2a565d7de043e705b6b4ab26059..00bf30d102ee868ab51d72e09a013adf8d6e1fad 100644 --- a/drivers/net/ethernet/airoha/airoha_ppe.c +++ b/drivers/net/ethernet/airoha/airoha_ppe.c @@ -124,21 +124,22 @@ static int airoha_ppe_foe_entry_prepare(struct airoha_foe_entry *hwe, AIROHA_FOE_IB1_BIND_TTL; hwe->ib1 = val; - val = FIELD_PREP(AIROHA_FOE_IB2_PORT_AG, 0x1f); + val = FIELD_PREP(AIROHA_FOE_IB2_PORT_AG, 0x1f) | + AIROHA_FOE_IB2_PSE_QOS; if (dsa_port >= 0) val |= FIELD_PREP(AIROHA_FOE_IB2_NBQ, dsa_port); + if (dev) { struct airoha_gdm_port *port = netdev_priv(dev); u8 pse_port; - pse_port = port->id == 4 ? FE_PSE_PORT_GDM4 : port->id; + if (dsa_port >= 0) + pse_port = port->id == 4 ? FE_PSE_PORT_GDM4 : port->id; + else + pse_port = 2; /* uplink relies on GDM2 loopback */ val |= FIELD_PREP(AIROHA_FOE_IB2_PSE_PORT, pse_port); } - /* FIXME: implement QoS support setting pse_port to 2 (loopback) - * for uplink and setting qos bit in ib2 - */ - if (is_multicast_ether_addr(dest_mac)) val |= AIROHA_FOE_IB2_MULTICAST; diff --git a/drivers/net/ethernet/airoha/airoha_regs.h b/drivers/net/ethernet/airoha/airoha_regs.h index 3ad2c5d11c2f9b0d00744cc43394d02d47347f61..9ec2476af24101c7789c8816a236fec977fcf437 100644 --- a/drivers/net/ethernet/airoha/airoha_regs.h +++ b/drivers/net/ethernet/airoha/airoha_regs.h @@ -38,6 +38,12 @@ #define FE_RST_CORE_MASK BIT(0) #define REG_FE_FOE_TS 0x0010 + +#define REG_FE_WAN_PORT 0x0024 +#define WAN1_EN_MASK BIT(16) +#define WAN1_MASK GENMASK(12, 8) +#define WAN0_MASK GENMASK(4, 0) + #define REG_FE_WAN_MAC_H 0x0030 #define REG_FE_LAN_MAC_H 0x0040 @@ -126,6 +132,7 @@ #define GDM_IP4_CKSUM BIT(22) #define GDM_TCP_CKSUM BIT(21) #define GDM_UDP_CKSUM BIT(20) +#define GDM_STRIP_CRC BIT(16) #define GDM_UCFQ_MASK GENMASK(15, 12) #define GDM_BCFQ_MASK GENMASK(11, 8) #define GDM_MCFQ_MASK GENMASK(7, 4) @@ -139,6 +146,16 @@ #define GDM_SHORT_LEN_MASK GENMASK(13, 0) #define GDM_LONG_LEN_MASK GENMASK(29, 16) +#define REG_GDM_LPBK_CFG(_n) (GDM_BASE(_n) + 0x1c) +#define LPBK_GAP_MASK GENMASK(31, 24) +#define LPBK_LEN_MASK GENMASK(23, 10) +#define LPBK_CHAN_MASK GENMASK(8, 4) +#define LPBK_MODE_MASK GENMASK(3, 1) +#define LPBK_EN_MASK BIT(0) + +#define REG_GDM_TXCHN_EN(_n) (GDM_BASE(_n) + 0x24) +#define REG_GDM_RXCHN_EN(_n) (GDM_BASE(_n) + 0x28) + #define REG_FE_CPORT_CFG (GDM1_BASE + 0x40) #define FE_CPORT_PAD BIT(26) #define FE_CPORT_PORT_XFC_MASK BIT(25) @@ -346,6 +363,18 @@ #define REG_MC_VLAN_DATA 0x2108 +#define REG_SP_DFT_CPORT(_n) (0x20e0 + ((_n) << 2)) +#define SP_CPORT_PCIE1_MASK GENMASK(31, 28) +#define SP_CPORT_PCIE0_MASK GENMASK(27, 24) +#define SP_CPORT_USB_MASK GENMASK(7, 4) +#define SP_CPORT_ETH_MASK GENMASK(7, 4) + +#define REG_SRC_PORT_FC_MAP6 0x2298 +#define FC_ID_OF_SRC_PORT27_MASK GENMASK(28, 24) +#define FC_ID_OF_SRC_PORT26_MASK GENMASK(20, 16) 
+#define FC_ID_OF_SRC_PORT25_MASK GENMASK(12, 8) +#define FC_ID_OF_SRC_PORT24_MASK GENMASK(4, 0) + #define REG_CDM5_RX_OQ1_DROP_CNT 0x29d4 /* QDMA */
From patchwork Wed Feb 5 18:21:32 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13961676 X-Patchwork-Delegate: kuba@kernel.org From: Lorenzo Bianconi Date: Wed, 05 Feb 2025 19:21:32 +0100 Subject: [PATCH net-next 13/13] net: airoha: Introduce PPE debugfs support Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20250205-airoha-en7581-flowtable-offload-v1-13-d362cfa97b01@kernel.org> References: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org> In-Reply-To: <20250205-airoha-en7581-flowtable-offload-v1-0-d362cfa97b01@kernel.org> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com X-Mailer: b4 0.14.2 X-Patchwork-Delegate: kuba@kernel.org
Similar to the PPE support for MediaTek devices, introduce PPE debugfs support in order to dump bound and unbound flows.
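For readers less familiar with the interfaces involved, the files added below follow the standard seq_file plus DEFINE_SHOW_ATTRIBUTE() debugfs pattern. A minimal, self-contained sketch of that pattern (illustrative names only, not part of this series):

#include <linux/debugfs.h>
#include <linux/seq_file.h>

/* "show" callback: emits the file contents on every read */
static int example_flows_show(struct seq_file *m, void *private)
{
	/* m->private carries the data pointer passed to debugfs_create_file() */
	seq_puts(m, "no flows\n");
	return 0;
}
/* Generates example_flows_fops wrapping example_flows_show() */
DEFINE_SHOW_ATTRIBUTE(example_flows);

static struct dentry *example_dir;

static int example_debugfs_init(void)
{
	/* Read-only file "flows" under <debugfs>/example/ */
	example_dir = debugfs_create_dir("example", NULL);
	debugfs_create_file("flows", 0444, example_dir, NULL,
			    &example_flows_fops);
	return 0;
}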
Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/airoha/Makefile | 1 + drivers/net/ethernet/airoha/airoha_eth.h | 12 ++ drivers/net/ethernet/airoha/airoha_ppe.c | 5 + drivers/net/ethernet/airoha/airoha_ppe_debugfs.c | 175 +++++++++++++++++++++++ 4 files changed, 193 insertions(+) diff --git a/drivers/net/ethernet/airoha/Makefile b/drivers/net/ethernet/airoha/Makefile index 50028cfc3e3e04efbdd353b1bd65f46b488637d8..e5321effdbeaeaf7c4ad3c6df9d0261fe12fdd12 100644 --- a/drivers/net/ethernet/airoha/Makefile +++ b/drivers/net/ethernet/airoha/Makefile @@ -5,4 +5,5 @@ obj-$(CONFIG_NET_AIROHA) += airoha-eth.o airoha-eth-y := airoha_eth.o airoha_ppe.o +airoha-eth-$(CONFIG_DEBUG_FS) += airoha_ppe_debugfs.o airoha-eth-$(CONFIG_NET_AIROHA_NPU) += airoha_npu.o diff --git a/drivers/net/ethernet/airoha/airoha_eth.h b/drivers/net/ethernet/airoha/airoha_eth.h index 20e870a5c20091f0bb572f87f111c660b2075c32..b5002b333f9c907e298c5e95c530ba8318a292db 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.h +++ b/drivers/net/ethernet/airoha/airoha_eth.h @@ -7,6 +7,7 @@ #ifndef AIROHA_ETH_H #define AIROHA_ETH_H +#include #include #include #include @@ -501,6 +502,8 @@ struct airoha_ppe { struct hlist_head *foe_flow; u16 foe_check_time[PPE_NUM_ENTRIES]; + + struct dentry *debugfs_dir; }; struct airoha_eth { @@ -610,4 +613,13 @@ static inline int airoha_npu_foe_commit_entry(struct airoha_ppe *ppe, } #endif /* CONFIG_NET_AIROHA_NPU */ +#if CONFIG_DEBUG_FS +int airoha_ppe_debugfs_init(struct airoha_ppe *ppe); +#else +static inline int airoha_ppe_debugfs_init(struct airoha_ppe *ppe) +{ + return 0; +} +#endif + #endif /* AIROHA_ETH_H */ diff --git a/drivers/net/ethernet/airoha/airoha_ppe.c b/drivers/net/ethernet/airoha/airoha_ppe.c index 00bf30d102ee868ab51d72e09a013adf8d6e1fad..4a0e7e7a7d8cd9a1e7e43bc60588eace31659b14 100644 --- a/drivers/net/ethernet/airoha/airoha_ppe.c +++ b/drivers/net/ethernet/airoha/airoha_ppe.c @@ -787,6 +787,10 @@ int airoha_ppe_init(struct airoha_eth *eth) if (err) goto error_npu_deinit; + err = airoha_ppe_debugfs_init(ppe); + if (err) + goto error_npu_deinit; + return 0; error_npu_deinit: @@ -804,4 +808,5 @@ void airoha_ppe_deinit(struct airoha_eth *eth) airoha_npu_deinit(eth->npu); } rhashtable_destroy(ð->flow_table); + debugfs_remove(eth->ppe->debugfs_dir); } diff --git a/drivers/net/ethernet/airoha/airoha_ppe_debugfs.c b/drivers/net/ethernet/airoha/airoha_ppe_debugfs.c new file mode 100644 index 0000000000000000000000000000000000000000..f79788458903c93ca73d774edaca97f1e08f92a7 --- /dev/null +++ b/drivers/net/ethernet/airoha/airoha_ppe_debugfs.c @@ -0,0 +1,175 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2025 AIROHA Inc + * Author: Lorenzo Bianconi + */ + +#include "airoha_eth.h" + +static void airoha_debugfs_ppe_print_tuple(struct seq_file *m, + void *src_addr, void *dest_addr, + u16 *src_port, u16 *dest_port, + bool ipv6) +{ + __be32 n_addr[IPV6_ADDR_WORDS]; + + if (ipv6) { + ipv6_addr_cpu_to_be32(n_addr, src_addr); + seq_printf(m, "%pI6", n_addr); + } else { + seq_printf(m, "%pI4h", src_addr); + } + if (src_port) + seq_printf(m, ":%d", *src_port); + + seq_puts(m, "->"); + + if (ipv6) { + ipv6_addr_cpu_to_be32(n_addr, dest_addr); + seq_printf(m, "%pI6", n_addr); + } else { + seq_printf(m, "%pI4h", dest_addr); + } + if (dest_port) + seq_printf(m, ":%d", *dest_port); +} + +static int airoha_ppe_debugfs_foe_show(struct seq_file *m, void *private, + bool bind) +{ + static const char *const ppe_type_str[] = { + [PPE_PKT_TYPE_IPV4_HNAPT] = "IPv4 5T", + 
[PPE_PKT_TYPE_IPV4_ROUTE] = "IPv4 3T", + [PPE_PKT_TYPE_BRIDGE] = "L2B", + [PPE_PKT_TYPE_IPV4_DSLITE] = "DS-LITE", + [PPE_PKT_TYPE_IPV6_ROUTE_3T] = "IPv6 3T", + [PPE_PKT_TYPE_IPV6_ROUTE_5T] = "IPv6 5T", + [PPE_PKT_TYPE_IPV6_6RD] = "6RD", + }; + static const char *const ppe_state_str[] = { + [AIROHA_FOE_STATE_INVALID] = "INV", + [AIROHA_FOE_STATE_UNBIND] = "UNB", + [AIROHA_FOE_STATE_BIND] = "BND", + [AIROHA_FOE_STATE_FIN] = "FIN", + }; + struct airoha_ppe *ppe = m->private; + int i; + + for (i = 0; i < PPE_NUM_ENTRIES; i++) { + const char *state_str, *type_str = "UNKNOWN"; + u16 *src_port = NULL, *dest_port = NULL; + struct airoha_foe_mac_info_common *l2; + unsigned char h_source[ETH_ALEN] = {}; + unsigned char h_dest[ETH_ALEN]; + struct airoha_foe_entry *hwe; + u32 type, state, ib2, data; + void *src_addr, *dest_addr; + bool ipv6 = false; + + hwe = airoha_ppe_foe_get_entry(ppe, i); + if (!hwe) + continue; + + state = FIELD_GET(AIROHA_FOE_IB1_STATE, hwe->ib1); + if (!state) + continue; + + if (bind && state != AIROHA_FOE_STATE_BIND) + continue; + + state_str = ppe_state_str[state % ARRAY_SIZE(ppe_state_str)]; + type = FIELD_GET(AIROHA_FOE_IB1_PACKET_TYPE, hwe->ib1); + if (type < ARRAY_SIZE(ppe_type_str) && ppe_type_str[type]) + type_str = ppe_type_str[type]; + + seq_printf(m, "%05x %s %7s", i, state_str, type_str); + + switch (type) { + case PPE_PKT_TYPE_IPV4_HNAPT: + case PPE_PKT_TYPE_IPV4_DSLITE: + src_port = &hwe->ipv4.orig_tuple.src_port; + dest_port = &hwe->ipv4.orig_tuple.dest_port; + fallthrough; + case PPE_PKT_TYPE_IPV4_ROUTE: + src_addr = &hwe->ipv4.orig_tuple.src_ip; + dest_addr = &hwe->ipv4.orig_tuple.dest_ip; + break; + case PPE_PKT_TYPE_IPV6_ROUTE_5T: + src_port = &hwe->ipv6.src_port; + dest_port = &hwe->ipv6.dest_port; + fallthrough; + case PPE_PKT_TYPE_IPV6_ROUTE_3T: + case PPE_PKT_TYPE_IPV6_6RD: + src_addr = &hwe->ipv6.src_ip; + dest_addr = &hwe->ipv6.dest_ip; + ipv6 = true; + break; + } + + seq_puts(m, " orig="); + airoha_debugfs_ppe_print_tuple(m, src_addr, dest_addr, + src_port, dest_port, ipv6); + + switch (type) { + case PPE_PKT_TYPE_IPV4_HNAPT: + case PPE_PKT_TYPE_IPV4_DSLITE: + src_port = &hwe->ipv4.new_tuple.src_port; + dest_port = &hwe->ipv4.new_tuple.dest_port; + fallthrough; + case PPE_PKT_TYPE_IPV4_ROUTE: + src_addr = &hwe->ipv4.new_tuple.src_ip; + dest_addr = &hwe->ipv4.new_tuple.dest_ip; + seq_puts(m, " new="); + airoha_debugfs_ppe_print_tuple(m, src_addr, dest_addr, + src_port, dest_port, + ipv6); + break; + } + + if (type >= PPE_PKT_TYPE_IPV6_ROUTE_3T) { + data = hwe->ipv6.data; + ib2 = hwe->ipv6.ib2; + l2 = &hwe->ipv6.l2; + } else { + data = hwe->ipv4.data; + ib2 = hwe->ipv4.ib2; + l2 = &hwe->ipv4.l2.common; + *((__be16 *)&h_source[4]) = + cpu_to_be16(hwe->ipv4.l2.src_mac_lo); + } + + *((__be32 *)h_dest) = cpu_to_be32(l2->dest_mac_hi); + *((__be16 *)&h_dest[4]) = cpu_to_be16(l2->dest_mac_lo); + *((__be32 *)h_source) = cpu_to_be32(l2->src_mac_hi); + + seq_printf(m, " eth=%pM->%pM etype=%04x data=%08x" + " vlan=%d,%d ib1=%08x ib2=%08x\n", + h_source, h_dest, l2->etype, data, + l2->vlan1, l2->vlan2, hwe->ib1, ib2); + } + + return 0; +} + +static int airoha_ppe_debugfs_foe_all_show(struct seq_file *m, void *private) +{ + return airoha_ppe_debugfs_foe_show(m, private, false); +} +DEFINE_SHOW_ATTRIBUTE(airoha_ppe_debugfs_foe_all); + +static int airoha_ppe_debugfs_foe_bind_show(struct seq_file *m, void *private) +{ + return airoha_ppe_debugfs_foe_show(m, private, true); +} +DEFINE_SHOW_ATTRIBUTE(airoha_ppe_debugfs_foe_bind); + +int airoha_ppe_debugfs_init(struct 
airoha_ppe *ppe) +{ + ppe->debugfs_dir = debugfs_create_dir("ppe", NULL); + debugfs_create_file("entries", 0444, ppe->debugfs_dir, ppe, + &airoha_ppe_debugfs_foe_all_fops); + debugfs_create_file("bind", 0444, ppe->debugfs_dir, ppe, + &airoha_ppe_debugfs_foe_bind_fops); + + return 0; +}
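As a usage illustration (not part of the series): since the "ppe" directory is created at the debugfs root, the two files end up under /sys/kernel/debug/ppe/ on a system with debugfs mounted at the default location, and can be dumped from userspace with something as simple as the hypothetical reader below. "entries" prints every valid FOE entry, while "bind" prints only the ones currently in BND state.

#include <stdio.h>

/* Hypothetical userspace helper: copy a PPE debugfs file to stdout */
static int dump(const char *path)
{
	char line[512];
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return -1;
	}
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);
	fclose(f);
	return 0;
}

int main(void)
{
	dump("/sys/kernel/debug/ppe/entries");
	return dump("/sys/kernel/debug/ppe/bind");
}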