From patchwork Thu Sep 14 14:38:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13385508 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 87FC610A02; Thu, 14 Sep 2023 14:39:11 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 970ACC433C8; Thu, 14 Sep 2023 14:39:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1694702351; bh=Ruv2x+AxMFAY05kafZgsuUzxSPVlmNqIubkGcvMlyRM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=l2mIIobFDMOV9kqkWSqIuMkSForVh87qtJy+dmi4OvVWHItJzjLVrslEw5lR6OA6+ vLtwuG3P+4DzjoOPSy4a8TuifLGVT6vWK+AWRG/Ze8tMSx+QsSZ/LIJOStrAzLRBJL yBZ4reqmEMV+s9q9gTdNOBMUYgKw6WZKgc8SO/opE7SJxnStc4TXMnmmMqULgFKT7G GqjuqTvM07lfr/QBqOwq7XkSQZLqZ723dzzt9wevmznKkPj/Q/7WRgM4wMvkfr/YYo 1tDw2jWsLZPLLmfPqJcDt1UcH+2E1NRsBLmoS5mpUShgWDWmMjRTNgejERYwi2CtXf zLOSvB1seaB6g== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: lorenzo.bianconi@redhat.com, nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, daniel@makrotopia.org, linux-mediatek@lists.infradead.org, sujuan.chen@mediatek.com, robh+dt@kernel.org, krzysztof.kozlowski+dt@linaro.org, devicetree@vger.kernel.org Subject: [PATCH net-next 01/15] dt-bindings: soc: mediatek: mt7986-wo-ccif: add binding for MT7988 SoC Date: Thu, 14 Sep 2023 16:38:06 +0200 Message-ID: <148f4f9ff2ec891955f9e9292aff9595f07beded.1694701767.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Introduce MT7988 SoC compatibility string in mt7986-wo-ccif binding. 
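The new compatible still pairs with "syscon", so consumers reach the CCIF registers through a regmap rather than a dedicated driver. A minimal sketch of that lookup pattern follows; the "mediatek,wo-ccif" phandle property name and the doorbell offset are illustrative assumptions, not taken from this series.

#include <linux/err.h>
#include <linux/mfd/syscon.h>
#include <linux/of.h>
#include <linux/regmap.h>

/* Illustrative only: the phandle property name and the register offset
 * below are assumptions, not values defined by this patch series.
 */
#define WO_CCIF_DOORBELL_REG	0x0

static int wo_ccif_kick(struct device_node *np)
{
	struct regmap *ccif;

	/* Resolve the syscon node referenced by a "mediatek,wo-ccif"
	 * phandle and write a doorbell register through its regmap.
	 */
	ccif = syscon_regmap_lookup_by_phandle(np, "mediatek,wo-ccif");
	if (IS_ERR(ccif))
		return PTR_ERR(ccif);

	return regmap_write(ccif, WO_CCIF_DOORBELL_REG, 1);
}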
Signed-off-by: Lorenzo Bianconi Acked-by: Rob Herring --- .../bindings/soc/mediatek/mediatek,mt7986-wo-ccif.yaml | 1 + 1 file changed, 1 insertion(+) diff --git a/Documentation/devicetree/bindings/soc/mediatek/mediatek,mt7986-wo-ccif.yaml b/Documentation/devicetree/bindings/soc/mediatek/mediatek,mt7986-wo-ccif.yaml index f0fa92b04b32..3b212f26abc5 100644 --- a/Documentation/devicetree/bindings/soc/mediatek/mediatek,mt7986-wo-ccif.yaml +++ b/Documentation/devicetree/bindings/soc/mediatek/mediatek,mt7986-wo-ccif.yaml @@ -20,6 +20,7 @@ properties: items: - enum: - mediatek,mt7986-wo-ccif + - mediatek,mt7988-wo-ccif - const: syscon reg: From patchwork Thu Sep 14 14:38:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13385509 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1DF0E18E03; Thu, 14 Sep 2023 14:39:15 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9260BC433CB; Thu, 14 Sep 2023 14:39:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1694702354; bh=EsVzeIyCHOXl6PgzNuGZeoWesA2vqyz58aq8GZxby6s=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=h6EL81IoUl36SYq+3PUnwJ8BDPeUGtghxPazqu3ODXRMQzzxs76wGqeEqdU/AjZz4 E9OE3ESf6j9F13Ke14qmcyFMAzAvhPgNfp2CDhNLMqqby+sc2MXmUOpabwDt3AUwco yqehV3docffvWnldk82A4RD2vLMbAFjwdFH98jnnsByzVYWW7uKi1sadmU+Syx/sZA Tlby2fqiIw5qMp6fmsQI+WYZQ91qrgiFGEdCdd73FvWjbgEEYufT4qci+Frljb4HVh iF5GbKJoqCEf9MLk/OKqnLvrO/FeT+AZHboHKotYaWulrIGgE0sQ57cxzVpZTmowUa VekFCZi3eyJLw== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: lorenzo.bianconi@redhat.com, nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, daniel@makrotopia.org, linux-mediatek@lists.infradead.org, sujuan.chen@mediatek.com, robh+dt@kernel.org, krzysztof.kozlowski+dt@linaro.org, devicetree@vger.kernel.org Subject: [PATCH net-next 02/15] dt-bindings: arm: mediatek: mt7622-wed: add WED binding for MT7988 SoC Date: Thu, 14 Sep 2023 16:38:07 +0200 Message-ID: <9b84b6b9641a2eebc91e763e2ba9a341e3de1071.1694701767.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Introduce MT7988 SoC compatibility string in mtk_wed binding. 
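As with the other SoCs already listed in the enum, the WED node is a syscon that the ethernet driver discovers and hands to the WED core. A rough sketch of how such instances can be walked from the ethernet node follows; the "mediatek,wed" phandle property and the probe loop are assumptions for illustration only.

#include <linux/of.h>

/* Sketch: iterate the WED instances referenced from the ethernet node.
 * The property name and the loop shape are illustrative assumptions.
 */
static void probe_wed_instances(struct device_node *eth_np)
{
	int i;

	for (i = 0; ; i++) {
		struct device_node *np;

		np = of_parse_phandle(eth_np, "mediatek,wed", i);
		if (!np)
			break;

		/* hand the node over to the WED core here */
		of_node_put(np);
	}
}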
Signed-off-by: Lorenzo Bianconi Acked-by: Rob Herring --- .../devicetree/bindings/arm/mediatek/mediatek,mt7622-wed.yaml | 1 + 1 file changed, 1 insertion(+) diff --git a/Documentation/devicetree/bindings/arm/mediatek/mediatek,mt7622-wed.yaml b/Documentation/devicetree/bindings/arm/mediatek/mediatek,mt7622-wed.yaml index 28ded09d72e3..e7720caf31b3 100644 --- a/Documentation/devicetree/bindings/arm/mediatek/mediatek,mt7622-wed.yaml +++ b/Documentation/devicetree/bindings/arm/mediatek/mediatek,mt7622-wed.yaml @@ -22,6 +22,7 @@ properties: - mediatek,mt7622-wed - mediatek,mt7981-wed - mediatek,mt7986-wed + - mediatek,mt7988-wed - const: syscon reg: From patchwork Thu Sep 14 14:38:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13385510 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4F45F18E03; Thu, 14 Sep 2023 14:39:18 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 71A8EC43395; Thu, 14 Sep 2023 14:39:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1694702358; bh=yAXZDfthDM+JXclNb0yFwzC4eXIf9CoNOD2FqTErOCY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Q1TOPEPkaXOeudzlYYtKMtpinNYpsrXo5btDc5we6Zx+gWi6C0l/nPY7ePNawU0Ri 50VrHVXkdxhwPCuebTdWzkAaB1JnWqhen5HRGl6y1cRkBL6vSjuzU65tztPm8NEZIR mkypxKYoTkSJeULC2JxGNQmjKynfVSzajTNCw6SJKugnlFALsy7it2v6CTAyx4myDM nCcyMRWJN/aFmOqTOMOcF4KA6/q5ArxrB5oXXh4A7cb3pbTAS88ThzH+PNzJ8nDrFc PeRoAo+8AYIvIbfpwGvr0CgZBqkUypNyydTukfAuHF8mFglNTcOdmteG1sC8fOSi3y atRXVyS8zlM7Q== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: lorenzo.bianconi@redhat.com, nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, daniel@makrotopia.org, linux-mediatek@lists.infradead.org, sujuan.chen@mediatek.com, robh+dt@kernel.org, krzysztof.kozlowski+dt@linaro.org, devicetree@vger.kernel.org Subject: [PATCH net-next 03/15] net: ethernet: mtk_wed: introduce versioning utility routines Date: Thu, 14 Sep 2023 16:38:08 +0200 Message-ID: <3f1924490483d9b460cccbefb2a16ce1c89f6e74.1694701767.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Similar to mtk_eth_soc, introduce the following wed versioning utility routines: - mtk_wed_is_v1 - mtk_wed_is_v2 This is a preliminary patch to introduce WED support for MT7988 SoC Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_wed.c | 40 +++++++++---------- drivers/net/ethernet/mediatek/mtk_wed.h | 10 +++++ .../net/ethernet/mediatek/mtk_wed_debugfs.c | 2 +- drivers/net/ethernet/mediatek/mtk_wed_mcu.c | 2 +- 4 files changed, 32 insertions(+), 22 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_wed.c b/drivers/net/ethernet/mediatek/mtk_wed.c index e7d3525d2e30..ce1ca98ea1d6 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed.c +++ b/drivers/net/ethernet/mediatek/mtk_wed.c @@ -278,7 +278,7 @@ mtk_wed_assign(struct mtk_wed_device *dev) if (!hw->wed_dev) goto out; - if (hw->version == 1) + if (mtk_wed_is_v1(hw)) return NULL; 
/* MT7986 WED devices do not have any pcie slot restrictions */ @@ -359,7 +359,7 @@ mtk_wed_tx_buffer_alloc(struct mtk_wed_device *dev) desc->buf0 = cpu_to_le32(buf_phys); desc->buf1 = cpu_to_le32(buf_phys + txd_size); - if (dev->hw->version == 1) + if (mtk_wed_is_v1(dev->hw)) ctrl = FIELD_PREP(MTK_WDMA_DESC_CTRL_LEN0, txd_size) | FIELD_PREP(MTK_WDMA_DESC_CTRL_LEN1, MTK_WED_BUF_SIZE - txd_size) | @@ -498,7 +498,7 @@ mtk_wed_set_ext_int(struct mtk_wed_device *dev, bool en) { u32 mask = MTK_WED_EXT_INT_STATUS_ERROR_MASK; - if (dev->hw->version == 1) + if (mtk_wed_is_v1(dev->hw)) mask |= MTK_WED_EXT_INT_STATUS_TX_DRV_R_RESP_ERR; else mask |= MTK_WED_EXT_INT_STATUS_RX_FBUF_LO_TH | @@ -577,7 +577,7 @@ mtk_wed_dma_disable(struct mtk_wed_device *dev) MTK_WDMA_GLO_CFG_RX_INFO1_PRERES | MTK_WDMA_GLO_CFG_RX_INFO2_PRERES); - if (dev->hw->version == 1) { + if (mtk_wed_is_v1(dev->hw)) { regmap_write(dev->hw->mirror, dev->hw->index * 4, 0); wdma_clr(dev, MTK_WDMA_GLO_CFG, MTK_WDMA_GLO_CFG_RX_INFO3_PRERES); @@ -606,7 +606,7 @@ mtk_wed_stop(struct mtk_wed_device *dev) wdma_w32(dev, MTK_WDMA_INT_GRP2, 0); wed_w32(dev, MTK_WED_WPDMA_INT_MASK, 0); - if (dev->hw->version == 1) + if (mtk_wed_is_v1(dev->hw)) return; wed_w32(dev, MTK_WED_EXT_INT_MASK1, 0); @@ -625,7 +625,7 @@ mtk_wed_deinit(struct mtk_wed_device *dev) MTK_WED_CTRL_WED_TX_BM_EN | MTK_WED_CTRL_WED_TX_FREE_AGENT_EN); - if (dev->hw->version == 1) + if (mtk_wed_is_v1(dev->hw)) return; wed_clr(dev, MTK_WED_CTRL, @@ -731,7 +731,7 @@ mtk_wed_bus_init(struct mtk_wed_device *dev) static void mtk_wed_set_wpdma(struct mtk_wed_device *dev) { - if (dev->hw->version == 1) { + if (mtk_wed_is_v1(dev->hw)) { wed_w32(dev, MTK_WED_WPDMA_CFG_BASE, dev->wlan.wpdma_phys); } else { mtk_wed_bus_init(dev); @@ -762,7 +762,7 @@ mtk_wed_hw_init_early(struct mtk_wed_device *dev) MTK_WED_WDMA_GLO_CFG_IDLE_DMAD_SUPPLY; wed_m32(dev, MTK_WED_WDMA_GLO_CFG, mask, set); - if (dev->hw->version == 1) { + if (mtk_wed_is_v1(dev->hw)) { u32 offset = dev->hw->index ? 
0x04000400 : 0; wdma_set(dev, MTK_WDMA_GLO_CFG, @@ -935,7 +935,7 @@ mtk_wed_hw_init(struct mtk_wed_device *dev) wed_w32(dev, MTK_WED_TX_BM_BUF_LEN, MTK_WED_PKT_SIZE); - if (dev->hw->version == 1) { + if (mtk_wed_is_v1(dev->hw)) { wed_w32(dev, MTK_WED_TX_BM_TKID, FIELD_PREP(MTK_WED_TX_BM_TKID_START, dev->wlan.token_start) | @@ -968,7 +968,7 @@ mtk_wed_hw_init(struct mtk_wed_device *dev) mtk_wed_reset(dev, MTK_WED_RESET_TX_BM); - if (dev->hw->version == 1) { + if (mtk_wed_is_v1(dev->hw)) { wed_set(dev, MTK_WED_CTRL, MTK_WED_CTRL_WED_TX_BM_EN | MTK_WED_CTRL_WED_TX_FREE_AGENT_EN); @@ -1218,7 +1218,7 @@ mtk_wed_reset_dma(struct mtk_wed_device *dev) } dev->init_done = false; - if (dev->hw->version == 1) + if (mtk_wed_is_v1(dev->hw)) return; if (!busy) { @@ -1344,7 +1344,7 @@ mtk_wed_configure_irq(struct mtk_wed_device *dev, u32 irq_mask) MTK_WED_CTRL_WED_TX_BM_EN | MTK_WED_CTRL_WED_TX_FREE_AGENT_EN); - if (dev->hw->version == 1) { + if (mtk_wed_is_v1(dev->hw)) { wed_w32(dev, MTK_WED_PCIE_INT_TRIGGER, MTK_WED_PCIE_INT_TRIGGER_STATUS); @@ -1417,7 +1417,7 @@ mtk_wed_dma_enable(struct mtk_wed_device *dev) MTK_WDMA_GLO_CFG_RX_INFO1_PRERES | MTK_WDMA_GLO_CFG_RX_INFO2_PRERES); - if (dev->hw->version == 1) { + if (mtk_wed_is_v1(dev->hw)) { wdma_set(dev, MTK_WDMA_GLO_CFG, MTK_WDMA_GLO_CFG_RX_INFO3_PRERES); } else { @@ -1466,7 +1466,7 @@ mtk_wed_start(struct mtk_wed_device *dev, u32 irq_mask) mtk_wed_set_ext_int(dev, true); - if (dev->hw->version == 1) { + if (mtk_wed_is_v1(dev->hw)) { u32 val = dev->wlan.wpdma_phys | MTK_PCIE_MIRROR_MAP_EN | FIELD_PREP(MTK_PCIE_MIRROR_MAP_WED_ID, dev->hw->index); @@ -1551,7 +1551,7 @@ mtk_wed_attach(struct mtk_wed_device *dev) } mtk_wed_hw_init_early(dev); - if (hw->version == 1) { + if (mtk_wed_is_v1(hw)) { regmap_update_bits(hw->hifsys, HIFSYS_DMA_AG_MAP, BIT(hw->index), 0); } else { @@ -1619,7 +1619,7 @@ static int mtk_wed_txfree_ring_setup(struct mtk_wed_device *dev, void __iomem *regs) { struct mtk_wed_ring *ring = &dev->txfree_ring; - int i, index = dev->hw->version == 1; + int i, index = mtk_wed_is_v1(dev->hw); /* * For txfree event handling, the same DMA ring is shared between WED @@ -1677,7 +1677,7 @@ mtk_wed_irq_get(struct mtk_wed_device *dev, u32 mask) { u32 val, ext_mask = MTK_WED_EXT_INT_STATUS_ERROR_MASK; - if (dev->hw->version == 1) + if (mtk_wed_is_v1(dev->hw)) ext_mask |= MTK_WED_EXT_INT_STATUS_TX_DRV_R_RESP_ERR; else ext_mask |= MTK_WED_EXT_INT_STATUS_RX_FBUF_LO_TH | @@ -1844,7 +1844,7 @@ mtk_wed_setup_tc(struct mtk_wed_device *wed, struct net_device *dev, { struct mtk_wed_hw *hw = wed->hw; - if (hw->version < 2) + if (mtk_wed_is_v1(hw)) return -EOPNOTSUPP; switch (type) { @@ -1918,9 +1918,9 @@ void mtk_wed_add_hw(struct device_node *np, struct mtk_eth *eth, hw->wdma = wdma; hw->index = index; hw->irq = irq; - hw->version = mtk_is_netsys_v1(eth) ? 
1 : 2; + hw->version = eth->soc->version; - if (hw->version == 1) { + if (mtk_wed_is_v1(hw)) { hw->mirror = syscon_regmap_lookup_by_phandle(eth_np, "mediatek,pcie-mirror"); hw->hifsys = syscon_regmap_lookup_by_phandle(eth_np, diff --git a/drivers/net/ethernet/mediatek/mtk_wed.h b/drivers/net/ethernet/mediatek/mtk_wed.h index 43ab77eaf683..6f5db891a6b9 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed.h +++ b/drivers/net/ethernet/mediatek/mtk_wed.h @@ -40,6 +40,16 @@ struct mtk_wdma_info { }; #ifdef CONFIG_NET_MEDIATEK_SOC_WED +static inline bool mtk_wed_is_v1(struct mtk_wed_hw *hw) +{ + return hw->version == 1; +} + +static inline bool mtk_wed_is_v2(struct mtk_wed_hw *hw) +{ + return hw->version == 2; +} + static inline void wed_w32(struct mtk_wed_device *dev, u32 reg, u32 val) { diff --git a/drivers/net/ethernet/mediatek/mtk_wed_debugfs.c b/drivers/net/ethernet/mediatek/mtk_wed_debugfs.c index e24afeaea0da..674e919d0d3a 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_debugfs.c +++ b/drivers/net/ethernet/mediatek/mtk_wed_debugfs.c @@ -261,7 +261,7 @@ void mtk_wed_hw_add_debugfs(struct mtk_wed_hw *hw) debugfs_create_u32("regidx", 0600, dir, &hw->debugfs_reg); debugfs_create_file_unsafe("regval", 0600, dir, hw, &fops_regval); debugfs_create_file_unsafe("txinfo", 0400, dir, hw, &wed_txinfo_fops); - if (hw->version != 1) + if (!mtk_wed_is_v1(hw)) debugfs_create_file_unsafe("rxinfo", 0400, dir, hw, &wed_rxinfo_fops); } diff --git a/drivers/net/ethernet/mediatek/mtk_wed_mcu.c b/drivers/net/ethernet/mediatek/mtk_wed_mcu.c index 72bcdaed12a9..8216403e5834 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_mcu.c +++ b/drivers/net/ethernet/mediatek/mtk_wed_mcu.c @@ -207,7 +207,7 @@ int mtk_wed_mcu_msg_update(struct mtk_wed_device *dev, int id, void *data, { struct mtk_wed_wo *wo = dev->hw->wed_wo; - if (dev->hw->version == 1) + if (mtk_wed_is_v1(dev->hw)) return 0; if (WARN_ON(!wo)) From patchwork Thu Sep 14 14:38:09 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13385511 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E6D39210E6; Thu, 14 Sep 2023 14:39:22 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 10EDEC433D9; Thu, 14 Sep 2023 14:39:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1694702362; bh=3r82y1pgCz3z5rfJXgtl4TIaNeByv4XWt/0bOvFdPxI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=eZp4wlOv5KSHGEkkqKweIy/eEoPd/Eyd8ZFlyY/6PZPA3nLEfAjcHYum3GMFXt42P lDrncaOxqoxXfq8efsMosdPCEyhK/w6bfPtb6UkhlRZ1CEOpyz7DT7TNU5ivVuyv6B dMH2+gdt7pL90zfHmvrS63FK0cEyutxIc+azokCEc9pTQYUu0j14D494ecjbl78K/o LFclDexGdY2c2PMGN5RYjJPAHh+Wdt6ZR+gSkzrXsTZC5Lq8izi9ZGAqQDpXzmiZNi wdMJdQ5mQkY0xZSUbyYLsCSaJfEBi/VRVyYijHEAVTM9wu5s/mUv+u8vv3Pvm3RGLA 35s4wrUE4JqVA== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: lorenzo.bianconi@redhat.com, nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, daniel@makrotopia.org, linux-mediatek@lists.infradead.org, sujuan.chen@mediatek.com, robh+dt@kernel.org, krzysztof.kozlowski+dt@linaro.org, devicetree@vger.kernel.org Subject: [PATCH net-next 04/15] net: 
ethernet: mtk_wed: introduce mtk_wed_wdma_get_desc_size utility routine Date: Thu, 14 Sep 2023 16:38:09 +0200 Message-ID: X-Mailer: git-send-email 2.41.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org This is a preliminary patch to introduce Wireless Ethernet Dispatcher support for MT7988 SoC. Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_wed.c | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_wed.c b/drivers/net/ethernet/mediatek/mtk_wed.c index ce1ca98ea1d6..ac284b1e599f 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed.c +++ b/drivers/net/ethernet/mediatek/mtk_wed.c @@ -1245,11 +1245,20 @@ mtk_wed_ring_alloc(struct mtk_wed_device *dev, struct mtk_wed_ring *ring, return 0; } +static u32 +mtk_wed_wdma_get_desc_size(struct mtk_wed_hw *hw) +{ + if (mtk_wed_is_v1(hw)) + return sizeof(struct mtk_wdma_desc); + + return 2 * sizeof(struct mtk_wdma_desc); +} + static int mtk_wed_wdma_rx_ring_setup(struct mtk_wed_device *dev, int idx, int size, bool reset) { - u32 desc_size = sizeof(struct mtk_wdma_desc) * dev->hw->version; + u32 desc_size = mtk_wed_wdma_get_desc_size(dev->hw); struct mtk_wed_ring *wdma; if (idx >= ARRAY_SIZE(dev->rx_wdma)) @@ -1278,7 +1287,7 @@ static int mtk_wed_wdma_tx_ring_setup(struct mtk_wed_device *dev, int idx, int size, bool reset) { - u32 desc_size = sizeof(struct mtk_wdma_desc) * dev->hw->version; + u32 desc_size = mtk_wed_wdma_get_desc_size(dev->hw); struct mtk_wed_ring *wdma; if (idx >= ARRAY_SIZE(dev->tx_wdma)) From patchwork Thu Sep 14 14:38:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13385512 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F0EB418E3D; Thu, 14 Sep 2023 14:39:26 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0F523C433D9; Thu, 14 Sep 2023 14:39:25 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1694702366; bh=A/n4QYw0peCKBDyBwkcPpqsvirMbU5+9d/3PqDCAbNw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ZyE7rh29QjC8HBgGrP+Q8tj36fLZuMfkjefxd40MdGlkyZJFuM3vve/Pq0W9dF/O4 fNOk08q9BxO6GdBXo8TaOYVolKblDIxzBcEALVcxpWHIX7EJuvs6MtvhZQ1SAxYbXl 9LQMNNqhGlQSno+Zl91I6CYGX1LYBrUCXjvc88bATQPQmRR8LifXcgHH5GbFIyWDL8 GzW5pbbw/H3yuLZrcM61cJ1snxH5BRhhkXpVxnhVfufXHf9bPy6VZGcWf6shD8Yv1R utubsFYZ0rC+q8wT9FKzua0yNciYxg33G10cgwVazVntOm/i8bN6nsAMSolD2zDR3R PNjUyR9PAO44w== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: lorenzo.bianconi@redhat.com, nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, daniel@makrotopia.org, linux-mediatek@lists.infradead.org, sujuan.chen@mediatek.com, robh+dt@kernel.org, krzysztof.kozlowski+dt@linaro.org, devicetree@vger.kernel.org Subject: [PATCH net-next 05/15] net: ethernet: mtk_wed: do not configure rx offload if not supported Date: Thu, 14 Sep 2023 16:38:10 +0200 Message-ID: <2ee57f396292f604b001b277d96f1544fc1b92e6.1694701767.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.41.0 
In-Reply-To: References: Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Check if rx offload is supported running mtk_wed_get_rx_capa routine before configuring it. This is a preliminary patch to introduce Wireless Ethernet Dispatcher (WED) support for MT7988 SoC. Co-developed-by: Sujuan Chen Signed-off-by: Sujuan Chen Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_wed.c | 126 +++++++++++--------- drivers/net/ethernet/mediatek/mtk_wed_mcu.c | 2 +- 2 files changed, 70 insertions(+), 58 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_wed.c b/drivers/net/ethernet/mediatek/mtk_wed.c index ac284b1e599f..100546c63e5a 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed.c +++ b/drivers/net/ethernet/mediatek/mtk_wed.c @@ -606,7 +606,7 @@ mtk_wed_stop(struct mtk_wed_device *dev) wdma_w32(dev, MTK_WDMA_INT_GRP2, 0); wed_w32(dev, MTK_WED_WPDMA_INT_MASK, 0); - if (mtk_wed_is_v1(dev->hw)) + if (!mtk_wed_get_rx_capa(dev)) return; wed_w32(dev, MTK_WED_EXT_INT_MASK1, 0); @@ -733,16 +733,21 @@ mtk_wed_set_wpdma(struct mtk_wed_device *dev) { if (mtk_wed_is_v1(dev->hw)) { wed_w32(dev, MTK_WED_WPDMA_CFG_BASE, dev->wlan.wpdma_phys); - } else { - mtk_wed_bus_init(dev); - - wed_w32(dev, MTK_WED_WPDMA_CFG_BASE, dev->wlan.wpdma_int); - wed_w32(dev, MTK_WED_WPDMA_CFG_INT_MASK, dev->wlan.wpdma_mask); - wed_w32(dev, MTK_WED_WPDMA_CFG_TX, dev->wlan.wpdma_tx); - wed_w32(dev, MTK_WED_WPDMA_CFG_TX_FREE, dev->wlan.wpdma_txfree); - wed_w32(dev, MTK_WED_WPDMA_RX_GLO_CFG, dev->wlan.wpdma_rx_glo); - wed_w32(dev, MTK_WED_WPDMA_RX_RING, dev->wlan.wpdma_rx); + return; } + + mtk_wed_bus_init(dev); + + wed_w32(dev, MTK_WED_WPDMA_CFG_BASE, dev->wlan.wpdma_int); + wed_w32(dev, MTK_WED_WPDMA_CFG_INT_MASK, dev->wlan.wpdma_mask); + wed_w32(dev, MTK_WED_WPDMA_CFG_TX, dev->wlan.wpdma_tx); + wed_w32(dev, MTK_WED_WPDMA_CFG_TX_FREE, dev->wlan.wpdma_txfree); + + if (!mtk_wed_get_rx_capa(dev)) + return; + + wed_w32(dev, MTK_WED_WPDMA_RX_GLO_CFG, dev->wlan.wpdma_rx_glo); + wed_w32(dev, MTK_WED_WPDMA_RX_RING, dev->wlan.wpdma_rx); } static void @@ -974,15 +979,17 @@ mtk_wed_hw_init(struct mtk_wed_device *dev) MTK_WED_CTRL_WED_TX_FREE_AGENT_EN); } else { wed_clr(dev, MTK_WED_TX_TKID_CTRL, MTK_WED_TX_TKID_CTRL_PAUSE); - /* rx hw init */ - wed_w32(dev, MTK_WED_WPDMA_RX_D_RST_IDX, - MTK_WED_WPDMA_RX_D_RST_CRX_IDX | - MTK_WED_WPDMA_RX_D_RST_DRV_IDX); - wed_w32(dev, MTK_WED_WPDMA_RX_D_RST_IDX, 0); - - mtk_wed_rx_buffer_hw_init(dev); - mtk_wed_rro_hw_init(dev); - mtk_wed_route_qm_hw_init(dev); + if (mtk_wed_get_rx_capa(dev)) { + /* rx hw init */ + wed_w32(dev, MTK_WED_WPDMA_RX_D_RST_IDX, + MTK_WED_WPDMA_RX_D_RST_CRX_IDX | + MTK_WED_WPDMA_RX_D_RST_DRV_IDX); + wed_w32(dev, MTK_WED_WPDMA_RX_D_RST_IDX, 0); + + mtk_wed_rx_buffer_hw_init(dev); + mtk_wed_rro_hw_init(dev); + mtk_wed_route_qm_hw_init(dev); + } } wed_clr(dev, MTK_WED_TX_BM_CTRL, MTK_WED_TX_BM_CTRL_PAUSE); @@ -1363,8 +1370,6 @@ mtk_wed_configure_irq(struct mtk_wed_device *dev, u32 irq_mask) wed_clr(dev, MTK_WED_WDMA_INT_CTRL, wdma_mask); } else { - wdma_mask |= FIELD_PREP(MTK_WDMA_INT_MASK_TX_DONE, - GENMASK(1, 0)); /* initail tx interrupt trigger */ wed_w32(dev, MTK_WED_WPDMA_INT_CTRL_TX, MTK_WED_WPDMA_INT_CTRL_TX0_DONE_EN | @@ -1383,15 +1388,20 @@ mtk_wed_configure_irq(struct mtk_wed_device *dev, u32 irq_mask) FIELD_PREP(MTK_WED_WPDMA_INT_CTRL_TX_FREE_DONE_TRIG, dev->wlan.txfree_tbit)); - wed_w32(dev, MTK_WED_WPDMA_INT_CTRL_RX, - 
MTK_WED_WPDMA_INT_CTRL_RX0_EN | - MTK_WED_WPDMA_INT_CTRL_RX0_CLR | - MTK_WED_WPDMA_INT_CTRL_RX1_EN | - MTK_WED_WPDMA_INT_CTRL_RX1_CLR | - FIELD_PREP(MTK_WED_WPDMA_INT_CTRL_RX0_DONE_TRIG, - dev->wlan.rx_tbit[0]) | - FIELD_PREP(MTK_WED_WPDMA_INT_CTRL_RX1_DONE_TRIG, - dev->wlan.rx_tbit[1])); + if (mtk_wed_get_rx_capa(dev)) { + wed_w32(dev, MTK_WED_WPDMA_INT_CTRL_RX, + MTK_WED_WPDMA_INT_CTRL_RX0_EN | + MTK_WED_WPDMA_INT_CTRL_RX0_CLR | + MTK_WED_WPDMA_INT_CTRL_RX1_EN | + MTK_WED_WPDMA_INT_CTRL_RX1_CLR | + FIELD_PREP(MTK_WED_WPDMA_INT_CTRL_RX0_DONE_TRIG, + dev->wlan.rx_tbit[0]) | + FIELD_PREP(MTK_WED_WPDMA_INT_CTRL_RX1_DONE_TRIG, + dev->wlan.rx_tbit[1])); + + wdma_mask |= FIELD_PREP(MTK_WDMA_INT_MASK_TX_DONE, + GENMASK(1, 0)); + } wed_w32(dev, MTK_WED_WDMA_INT_CLR, wdma_mask); wed_set(dev, MTK_WED_WDMA_INT_CTRL, @@ -1410,6 +1420,8 @@ mtk_wed_configure_irq(struct mtk_wed_device *dev, u32 irq_mask) static void mtk_wed_dma_enable(struct mtk_wed_device *dev) { + int i; + wed_set(dev, MTK_WED_WPDMA_INT_CTRL, MTK_WED_WPDMA_INT_CTRL_SUBRT_ADV); wed_set(dev, MTK_WED_GLO_CFG, @@ -1429,33 +1441,33 @@ mtk_wed_dma_enable(struct mtk_wed_device *dev) if (mtk_wed_is_v1(dev->hw)) { wdma_set(dev, MTK_WDMA_GLO_CFG, MTK_WDMA_GLO_CFG_RX_INFO3_PRERES); - } else { - int i; - - wed_set(dev, MTK_WED_WPDMA_CTRL, - MTK_WED_WPDMA_CTRL_SDL1_FIXED); + return; + } - wed_set(dev, MTK_WED_WDMA_GLO_CFG, - MTK_WED_WDMA_GLO_CFG_TX_DRV_EN | - MTK_WED_WDMA_GLO_CFG_TX_DDONE_CHK); + wed_set(dev, MTK_WED_WPDMA_CTRL, + MTK_WED_WPDMA_CTRL_SDL1_FIXED); + wed_set(dev, MTK_WED_WPDMA_GLO_CFG, + MTK_WED_WPDMA_GLO_CFG_RX_DRV_R0_PKT_PROC | + MTK_WED_WPDMA_GLO_CFG_RX_DRV_R0_CRX_SYNC); + wed_clr(dev, MTK_WED_WPDMA_GLO_CFG, + MTK_WED_WPDMA_GLO_CFG_TX_TKID_KEEP | + MTK_WED_WPDMA_GLO_CFG_TX_DMAD_DW3_PREV); - wed_set(dev, MTK_WED_WPDMA_GLO_CFG, - MTK_WED_WPDMA_GLO_CFG_RX_DRV_R0_PKT_PROC | - MTK_WED_WPDMA_GLO_CFG_RX_DRV_R0_CRX_SYNC); + if (!mtk_wed_get_rx_capa(dev)) + return; - wed_clr(dev, MTK_WED_WPDMA_GLO_CFG, - MTK_WED_WPDMA_GLO_CFG_TX_TKID_KEEP | - MTK_WED_WPDMA_GLO_CFG_TX_DMAD_DW3_PREV); + wed_set(dev, MTK_WED_WDMA_GLO_CFG, + MTK_WED_WDMA_GLO_CFG_TX_DRV_EN | + MTK_WED_WDMA_GLO_CFG_TX_DDONE_CHK); - wed_set(dev, MTK_WED_WPDMA_RX_D_GLO_CFG, - MTK_WED_WPDMA_RX_D_RX_DRV_EN | - FIELD_PREP(MTK_WED_WPDMA_RX_D_RXD_READ_LEN, 0x18) | - FIELD_PREP(MTK_WED_WPDMA_RX_D_INIT_PHASE_RXEN_SEL, - 0x2)); + wed_set(dev, MTK_WED_WPDMA_RX_D_GLO_CFG, + MTK_WED_WPDMA_RX_D_RX_DRV_EN | + FIELD_PREP(MTK_WED_WPDMA_RX_D_RXD_READ_LEN, 0x18) | + FIELD_PREP(MTK_WED_WPDMA_RX_D_INIT_PHASE_RXEN_SEL, + 0x2)); - for (i = 0; i < MTK_WED_RX_QUEUES; i++) - mtk_wed_check_wfdma_rx_fill(dev, i); - } + for (i = 0; i < MTK_WED_RX_QUEUES; i++) + mtk_wed_check_wfdma_rx_fill(dev, i); } static void @@ -1482,7 +1494,7 @@ mtk_wed_start(struct mtk_wed_device *dev, u32 irq_mask) val |= BIT(0) | (BIT(1) * !!dev->hw->index); regmap_write(dev->hw->mirror, dev->hw->index * 4, val); - } else { + } else if (mtk_wed_get_rx_capa(dev)) { /* driver set mid ready and only once */ wed_w32(dev, MTK_WED_EXT_INT_MASK1, MTK_WED_EXT_INT_STATUS_WPDMA_MID_RDY); @@ -1494,7 +1506,6 @@ mtk_wed_start(struct mtk_wed_device *dev, u32 irq_mask) if (mtk_wed_rro_cfg(dev)) return; - } mtk_wed_set_512_support(dev, dev->wlan.wcid_512); @@ -1560,13 +1571,14 @@ mtk_wed_attach(struct mtk_wed_device *dev) } mtk_wed_hw_init_early(dev); - if (mtk_wed_is_v1(hw)) { + if (mtk_wed_is_v1(hw)) regmap_update_bits(hw->hifsys, HIFSYS_DMA_AG_MAP, BIT(hw->index), 0); - } else { + else dev->rev_id = wed_r32(dev, MTK_WED_REV_ID); + + if 
(mtk_wed_get_rx_capa(dev)) ret = mtk_wed_wo_init(hw); - } out: if (ret) { dev_err(dev->hw->dev, "failed to attach wed device\n"); diff --git a/drivers/net/ethernet/mediatek/mtk_wed_mcu.c b/drivers/net/ethernet/mediatek/mtk_wed_mcu.c index 8216403e5834..4e48905ac70d 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_mcu.c +++ b/drivers/net/ethernet/mediatek/mtk_wed_mcu.c @@ -207,7 +207,7 @@ int mtk_wed_mcu_msg_update(struct mtk_wed_device *dev, int id, void *data, { struct mtk_wed_wo *wo = dev->hw->wed_wo; - if (mtk_wed_is_v1(dev->hw)) + if (!mtk_wed_get_rx_capa(dev)) return 0; if (WARN_ON(!wo)) From patchwork Thu Sep 14 14:38:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13385513 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9529721117; Thu, 14 Sep 2023 14:39:30 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id C4B07C433CB; Thu, 14 Sep 2023 14:39:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1694702370; bh=gXQWsZnC/K4vRGdHADEZFnyLphroni4lZ4MEK3xmE5Q=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Jve63uts2x9V6DGU+PaObUFChZrUIEcUeedN7mWPYgE/CmoxOVMSvF7mlmd9K7w9r uvl6ocICuXB0+0MsIu/jCrqEoNHs/Nao3vM9ChHyAQV6fy1IA2STzKKkHxRY3KcI/k 1BwLEbwykYLxHVu3UnC3kUJ5vL5CcBnOfSI5Fr/ceuVX/Iks8ZIOZjp+6raBmzSfRr MVCxzi9iKBzhr7sVAqM1UKr19rfd1HHj3AwtXZa7jW9IRpmS5kgU3Aw8Pq92GeXBDw gYYiQj+a8X0Ayf+HwohIdJtELi8lLflkuw3LDUnDL+2kCr594pcIvAbaPD16GsIoUK TMidQUInwI4ug== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: lorenzo.bianconi@redhat.com, nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, daniel@makrotopia.org, linux-mediatek@lists.infradead.org, sujuan.chen@mediatek.com, robh+dt@kernel.org, krzysztof.kozlowski+dt@linaro.org, devicetree@vger.kernel.org Subject: [PATCH net-next 06/15] net: ethernet: mtk_wed: rename mtk_rxbm_desc in mtk_wed_bm_desc Date: Thu, 14 Sep 2023 16:38:11 +0200 Message-ID: <1b05e610e5c8582ddaa0c5f71e6e6e6016d50cb9.1694701767.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Rename mtk_rxbm_desc structure in mtk_wed_bm_desc since it will be used even on tx side by MT7988 SoC. 
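The renamed structure keeps the same two-word layout (buffer address plus token), which is what lets MT7988 reuse it for the tx buffer manager later in the series. A minimal sketch of pre-populating a ring of such descriptors; fill_bm_ring(), get_buf_phys() and get_token() are illustrative stand-ins, not driver functions.

#include <linux/types.h>
#include <asm/byteorder.h>

/* Same layout as the structure renamed below, shown only to illustrate
 * how a buffer-manager ring built from it is filled.
 */
struct mtk_wed_bm_desc {
	__le32 buf0;	/* DMA address of the buffer */
	__le32 token;	/* token id handed to the WED buffer manager */
} __packed __aligned(4);

/* Illustrative filler: the two callbacks stand in for whatever buffer
 * allocation scheme the WLAN driver uses.
 */
static void fill_bm_ring(struct mtk_wed_bm_desc *desc, int size,
			 dma_addr_t (*get_buf_phys)(int i),
			 u32 (*get_token)(int i))
{
	int i;

	for (i = 0; i < size; i++) {
		desc[i].buf0 = cpu_to_le32(get_buf_phys(i));
		desc[i].token = cpu_to_le32(get_token(i));
	}
}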
Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_wed.c | 4 ++-- drivers/net/wireless/mediatek/mt76/mt7915/mmio.c | 2 +- include/linux/soc/mediatek/mtk_wed.h | 4 ++-- 3 files changed, 5 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_wed.c b/drivers/net/ethernet/mediatek/mtk_wed.c index 100546c63e5a..8880b018ffca 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed.c +++ b/drivers/net/ethernet/mediatek/mtk_wed.c @@ -422,7 +422,7 @@ mtk_wed_free_tx_buffer(struct mtk_wed_device *dev) static int mtk_wed_rx_buffer_alloc(struct mtk_wed_device *dev) { - struct mtk_rxbm_desc *desc; + struct mtk_wed_bm_desc *desc; dma_addr_t desc_phys; dev->rx_buf_ring.size = dev->wlan.rx_nbuf; @@ -442,7 +442,7 @@ mtk_wed_rx_buffer_alloc(struct mtk_wed_device *dev) static void mtk_wed_free_rx_buffer(struct mtk_wed_device *dev) { - struct mtk_rxbm_desc *desc = dev->rx_buf_ring.desc; + struct mtk_wed_bm_desc *desc = dev->rx_buf_ring.desc; if (!desc) return; diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c b/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c index fc7ace638ce8..e7d8e03f826f 100644 --- a/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c +++ b/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c @@ -591,7 +591,7 @@ static void mt7915_mmio_wed_release_rx_buf(struct mtk_wed_device *wed) static u32 mt7915_mmio_wed_init_rx_buf(struct mtk_wed_device *wed, int size) { - struct mtk_rxbm_desc *desc = wed->rx_buf_ring.desc; + struct mtk_wed_bm_desc *desc = wed->rx_buf_ring.desc; struct mt76_txwi_cache *t = NULL; struct mt7915_dev *dev; struct mt76_queue *q; diff --git a/include/linux/soc/mediatek/mtk_wed.h b/include/linux/soc/mediatek/mtk_wed.h index b2b28180dff7..c6512c216b27 100644 --- a/include/linux/soc/mediatek/mtk_wed.h +++ b/include/linux/soc/mediatek/mtk_wed.h @@ -45,7 +45,7 @@ enum mtk_wed_wo_cmd { MTK_WED_WO_CMD_WED_END }; -struct mtk_rxbm_desc { +struct mtk_wed_bm_desc { __le32 buf0; __le32 token; } __packed __aligned(4); @@ -104,7 +104,7 @@ struct mtk_wed_device { struct { int size; - struct mtk_rxbm_desc *desc; + struct mtk_wed_bm_desc *desc; dma_addr_t desc_phys; } rx_buf_ring; From patchwork Thu Sep 14 14:38:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13385514 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C1C5121344; Thu, 14 Sep 2023 14:39:34 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id CAC6BC43397; Thu, 14 Sep 2023 14:39:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1694702374; bh=bptKA4oxu938HC76Gt7Y+c/P6Zyx8f+qya9UM/jezdY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=LQgRUn8d7kTWrl1/JS/ClNubszPNHA6pcsRN4+rsXGTkNvYl+sKsyw7CM1h0WAvY8 FZcRQGYtD0hFzUCjECOTcdEtnk98pPYKRWxJSxUcsZBT2PpBBk8VzuC4wgytU1rp25 gq6+1N1II972L4ai0sEE9otV8HXxBA1WDhWPdDul/PnxroyM6JivlxC8o6NSC+5JZw ADYFszokqS3Fbjdug9TKwJig0Jh28ktgDg8/L80Pyka5Eymw/ba+LqtSixhQsTl1Jg zKOBbdJa9KcJeLxiJrkc2CAbE2D2BbnQWDbNnjVj/Crc7trZxyhzIUji/9gNJPvraY M8lgQLYR2F9jg== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: lorenzo.bianconi@redhat.com, nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, 
edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, daniel@makrotopia.org, linux-mediatek@lists.infradead.org, sujuan.chen@mediatek.com, robh+dt@kernel.org, krzysztof.kozlowski+dt@linaro.org, devicetree@vger.kernel.org Subject: [PATCH net-next 07/15] net: ethernet: mtk_wed: introduce mtk_wed_buf structure Date: Thu, 14 Sep 2023 16:38:12 +0200 Message-ID: <7eede969f46fee8c05913fe1893cb60db9144edf.1694701767.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Introduce mtk_wed_buf structure to store both virtual and physical addresses allocated in mtk_wed_tx_buffer_alloc() routine. This is a preliminary patch to add WED support for MT7988 SoC since it relies on a different dma descriptor layout not storing page dma addresses. Co-developed-by: Sujuan Chen Signed-off-by: Sujuan Chen Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_wed.c | 12 ++++++------ include/linux/soc/mediatek/mtk_wed.h | 7 ++++++- 2 files changed, 12 insertions(+), 7 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_wed.c b/drivers/net/ethernet/mediatek/mtk_wed.c index 8880b018ffca..58d97be98029 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed.c +++ b/drivers/net/ethernet/mediatek/mtk_wed.c @@ -300,9 +300,9 @@ mtk_wed_assign(struct mtk_wed_device *dev) static int mtk_wed_tx_buffer_alloc(struct mtk_wed_device *dev) { + struct mtk_wed_buf *page_list; struct mtk_wdma_desc *desc; dma_addr_t desc_phys; - void **page_list; int token = dev->wlan.token_start; int ring_size; int n_pages; @@ -343,7 +343,8 @@ mtk_wed_tx_buffer_alloc(struct mtk_wed_device *dev) return -ENOMEM; } - page_list[page_idx++] = page; + page_list[page_idx].p = page; + page_list[page_idx++].phy_addr = page_phys; dma_sync_single_for_cpu(dev->hw->dev, page_phys, PAGE_SIZE, DMA_BIDIRECTIONAL); @@ -387,8 +388,8 @@ mtk_wed_tx_buffer_alloc(struct mtk_wed_device *dev) static void mtk_wed_free_tx_buffer(struct mtk_wed_device *dev) { + struct mtk_wed_buf *page_list = dev->tx_buf_ring.pages; struct mtk_wdma_desc *desc = dev->tx_buf_ring.desc; - void **page_list = dev->tx_buf_ring.pages; int page_idx; int i; @@ -400,13 +401,12 @@ mtk_wed_free_tx_buffer(struct mtk_wed_device *dev) for (i = 0, page_idx = 0; i < dev->tx_buf_ring.size; i += MTK_WED_BUF_PER_PAGE) { - void *page = page_list[page_idx++]; - dma_addr_t buf_addr; + dma_addr_t buf_addr = page_list[page_idx].phy_addr; + void *page = page_list[page_idx++].p; if (!page) break; - buf_addr = le32_to_cpu(desc[i].buf0); dma_unmap_page(dev->hw->dev, buf_addr, PAGE_SIZE, DMA_BIDIRECTIONAL); __free_page(page); diff --git a/include/linux/soc/mediatek/mtk_wed.h b/include/linux/soc/mediatek/mtk_wed.h index c6512c216b27..5f00dc26582b 100644 --- a/include/linux/soc/mediatek/mtk_wed.h +++ b/include/linux/soc/mediatek/mtk_wed.h @@ -76,6 +76,11 @@ struct mtk_wed_wo_rx_stats { __le32 rx_drop_cnt; }; +struct mtk_wed_buf { + void *p; + dma_addr_t phy_addr; +}; + struct mtk_wed_device { #ifdef CONFIG_NET_MEDIATEK_SOC_WED const struct mtk_wed_ops *ops; @@ -97,7 +102,7 @@ struct mtk_wed_device { struct { int size; - void **pages; + struct mtk_wed_buf *pages; struct mtk_wdma_desc *desc; dma_addr_t desc_phys; } tx_buf_ring; From patchwork Thu Sep 14 14:38:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13385515 
X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9D6E921352; Thu, 14 Sep 2023 14:39:38 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id DC44DC433C8; Thu, 14 Sep 2023 14:39:37 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1694702378; bh=NuGvozSv9WccAC4JHs7SvmzDslrfdkBPTRpl9vmfPbQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=XP/5j5UGQQIJMpHxUgBLukt0K67oVVXI9Tk1OgNH7LivXba9HAtQARD4VhfbvpzHP P7+oJ0VKWOhl4zljq4U3styg8PcE8T5fiK99iQpgZ7QitUK9B3D2rw8Iq8FCHIaoBK gewwfmLXfcWTDnJsoPt7M+Witqti9GaQV8R2u5qftw4nPpo7AHvM8X3qtUy4PNbt9u wHrqsMmcxCQ4IYCXoBQiKbRDvljiqH7cwMSeWtolzOCapZvDpOXwAcRmHY7cBrSS8g hkL/FLHa4k0p0F8OkFnXADaydGoVgwTy/ltnAbtokr4bj8SUiDGVs2TpjY3SNw35uC ZfAq8tLVQwqLw== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: lorenzo.bianconi@redhat.com, nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, daniel@makrotopia.org, linux-mediatek@lists.infradead.org, sujuan.chen@mediatek.com, robh+dt@kernel.org, krzysztof.kozlowski+dt@linaro.org, devicetree@vger.kernel.org Subject: [PATCH net-next 08/15] net: ethernet: mtk_wed: move mem_region array out of mtk_wed_mcu_load_firmware Date: Thu, 14 Sep 2023 16:38:13 +0200 Message-ID: <0b85368e324422b99cd29d4c1dad2e93bb7ad660.1694701767.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Remove mtk_wed_wo_memory_region boot structure in mtk_wed_wo. This is a preliminary patch to introduce WED support for MT7988 SoC. 
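With the region table at file scope, the boot-region register accessors can index it directly instead of carrying a copy in struct mtk_wed_wo. A condensed sketch of the resulting pattern follows; only the region names mirror the patch, the type and function names are simplified for illustration.

#include <linux/io.h>
#include <linux/types.h>

/* File-scope table with designated initializers, indexed by an enum,
 * so accessors no longer need a per-device copy of the boot region.
 */
enum {
	WO_REGION_EMI,
	WO_REGION_ILM,
	WO_REGION_DATA,
	WO_REGION_BOOT,
	__WO_REGION_MAX,
};

struct wo_memory_region {
	const char *name;
	void __iomem *addr;
	bool shared;
};

static struct wo_memory_region regions[__WO_REGION_MAX] = {
	[WO_REGION_EMI]  = { .name = "wo-emi" },
	[WO_REGION_ILM]  = { .name = "wo-ilm" },
	[WO_REGION_DATA] = { .name = "wo-data", .shared = true },
	[WO_REGION_BOOT] = { .name = "wo-boot" },
};

/* Mirrors the wo_r32() change in the diff below: read a boot-region
 * register through the shared table rather than through wo->boot.
 */
static u32 boot_r32(u32 reg)
{
	return readl(regions[WO_REGION_BOOT].addr + reg);
}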
Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_wed_mcu.c | 37 ++++++++++----------- drivers/net/ethernet/mediatek/mtk_wed_wo.h | 1 - 2 files changed, 18 insertions(+), 20 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_wed_mcu.c b/drivers/net/ethernet/mediatek/mtk_wed_mcu.c index 4e48905ac70d..cc54fbd7380a 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_mcu.c +++ b/drivers/net/ethernet/mediatek/mtk_wed_mcu.c @@ -16,14 +16,30 @@ #include "mtk_wed_wo.h" #include "mtk_wed.h" +static struct mtk_wed_wo_memory_region mem_region[] = { + [MTK_WED_WO_REGION_EMI] = { + .name = "wo-emi", + }, + [MTK_WED_WO_REGION_ILM] = { + .name = "wo-ilm", + }, + [MTK_WED_WO_REGION_DATA] = { + .name = "wo-data", + .shared = true, + }, + [MTK_WED_WO_REGION_BOOT] = { + .name = "wo-boot", + }, +}; + static u32 wo_r32(struct mtk_wed_wo *wo, u32 reg) { - return readl(wo->boot.addr + reg); + return readl(mem_region[MTK_WED_WO_REGION_BOOT].addr + reg); } static void wo_w32(struct mtk_wed_wo *wo, u32 reg, u32 val) { - writel(val, wo->boot.addr + reg); + writel(val, mem_region[MTK_WED_WO_REGION_BOOT].addr + reg); } static struct sk_buff * @@ -294,18 +310,6 @@ mtk_wed_mcu_run_firmware(struct mtk_wed_wo *wo, const struct firmware *fw, static int mtk_wed_mcu_load_firmware(struct mtk_wed_wo *wo) { - static struct mtk_wed_wo_memory_region mem_region[] = { - [MTK_WED_WO_REGION_EMI] = { - .name = "wo-emi", - }, - [MTK_WED_WO_REGION_ILM] = { - .name = "wo-ilm", - }, - [MTK_WED_WO_REGION_DATA] = { - .name = "wo-data", - .shared = true, - }, - }; const struct mtk_wed_fw_trailer *trailer; const struct firmware *fw; const char *fw_name; @@ -319,11 +323,6 @@ mtk_wed_mcu_load_firmware(struct mtk_wed_wo *wo) return ret; } - wo->boot.name = "wo-boot"; - ret = mtk_wed_get_memory_region(wo, &wo->boot); - if (ret) - return ret; - /* set dummy cr */ wed_w32(wo->hw->wed_dev, MTK_WED_SCR0 + 4 * MTK_WED_DUMMY_CR_FWDL, wo->hw->index + 1); diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.h b/drivers/net/ethernet/mediatek/mtk_wed_wo.h index 7a1a2a28f1ac..8ed81761bf10 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_wo.h +++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.h @@ -228,7 +228,6 @@ struct mtk_wed_wo_queue { struct mtk_wed_wo { struct mtk_wed_hw *hw; - struct mtk_wed_wo_memory_region boot; struct mtk_wed_wo_queue q_tx; struct mtk_wed_wo_queue q_rx; From patchwork Thu Sep 14 14:38:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13385516 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 834BE21352; Thu, 14 Sep 2023 14:39:42 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id D146EC433C8; Thu, 14 Sep 2023 14:39:41 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1694702382; bh=y6yDaYvJ/bA5abRx1KNylEBwaNNnEQUvQMq0/2yBjhA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=HF3ManiqI62v1YyLv3kAwbSJKc3z2b9Si/MZVsGwgRIU5BJRHKiDtUK/P7KBC+E4X kqEf6wNDpr6tmHmVBST26ZBmSsHqW6ab8pS9udjLecvCtAS1pTJMP49AzSw991zREZ 4qG6Pg3eePqaxrhzM7Ri0ynMIO6YnYIRW/QKMPTDeKHkFqp0P8KgNLIvsxmggSufZP 0IvnTvVCEO7E73u/BmSrW7jIr7kaAWgXZgsFdFBof5ShZnriHz2ZFqsnC4cZ6/DdjP 
cKmvYWIuPmJAmVoWkCFVX3JrkbLs+djR55tahOMpXUOW//PFYU45u0pT9mbHZA0qXz tv1u8fp4PIanw== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: lorenzo.bianconi@redhat.com, nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, daniel@makrotopia.org, linux-mediatek@lists.infradead.org, sujuan.chen@mediatek.com, robh+dt@kernel.org, krzysztof.kozlowski+dt@linaro.org, devicetree@vger.kernel.org Subject: [PATCH net-next 09/15] net: ethernet: mtk_wed: make memory region optional Date: Thu, 14 Sep 2023 16:38:14 +0200 Message-ID: <475f2d35ed2686c116c99fe3514f5c360a15c658.1694701767.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Make mtk_wed_wo_memory_region optionals. This is a preliminary patch to introduce Wireless Ethernet Dispatcher support for MT7988 SoC since MT7988 WED fw image will have a different layout. Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_wed_mcu.c | 23 ++++++++++++--------- 1 file changed, 13 insertions(+), 10 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_wed_mcu.c b/drivers/net/ethernet/mediatek/mtk_wed_mcu.c index cc54fbd7380a..e53531252bd9 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_mcu.c +++ b/drivers/net/ethernet/mediatek/mtk_wed_mcu.c @@ -234,19 +234,13 @@ int mtk_wed_mcu_msg_update(struct mtk_wed_device *dev, int id, void *data, } static int -mtk_wed_get_memory_region(struct mtk_wed_wo *wo, +mtk_wed_get_memory_region(struct mtk_wed_hw *hw, int index, struct mtk_wed_wo_memory_region *region) { struct reserved_mem *rmem; struct device_node *np; - int index; - index = of_property_match_string(wo->hw->node, "memory-region-names", - region->name); - if (index < 0) - return index; - - np = of_parse_phandle(wo->hw->node, "memory-region", index); + np = of_parse_phandle(hw->node, "memory-region", index); if (!np) return -ENODEV; @@ -258,7 +252,7 @@ mtk_wed_get_memory_region(struct mtk_wed_wo *wo, region->phy_addr = rmem->base; region->size = rmem->size; - region->addr = devm_ioremap(wo->hw->dev, region->phy_addr, region->size); + region->addr = devm_ioremap(hw->dev, region->phy_addr, region->size); return !region->addr ? 
-EINVAL : 0; } @@ -271,6 +265,9 @@ mtk_wed_mcu_run_firmware(struct mtk_wed_wo *wo, const struct firmware *fw, const struct mtk_wed_fw_trailer *trailer; const struct mtk_wed_fw_region *fw_region; + if (!region->phy_addr || !region->size) + return 0; + trailer_ptr = fw->data + fw->size - sizeof(*trailer); trailer = (const struct mtk_wed_fw_trailer *)trailer_ptr; region_ptr = trailer_ptr - trailer->num_region * sizeof(*fw_region); @@ -318,7 +315,13 @@ mtk_wed_mcu_load_firmware(struct mtk_wed_wo *wo) /* load firmware region metadata */ for (i = 0; i < ARRAY_SIZE(mem_region); i++) { - ret = mtk_wed_get_memory_region(wo, &mem_region[i]); + int index = of_property_match_string(wo->hw->node, + "memory-region-names", + mem_region[i].name); + if (index < 0) + continue; + + ret = mtk_wed_get_memory_region(wo->hw, index, &mem_region[i]); if (ret) return ret; } From patchwork Thu Sep 14 14:38:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13385517 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1554628E2A; Thu, 14 Sep 2023 14:39:46 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id E57DEC433CA; Thu, 14 Sep 2023 14:39:45 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1694702386; bh=/xt+Ss7I7Iyx71b6NcEH4ekpgvO95J12V3F02agSOSw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=JGHKuX9RlbtUNFBTOn8p90G11ZW2oauCv2pqJhDk0cnSiH3h0aQE4g2Jk5L/bVSfB vm0O3EiJXgReMc6YiWh09vg2erVEUKAhhNNSUZDcMxzqGxP+yfp/Ut+jgSMteFjIJ3 TUsMwK+5HJcdDCkooleKh+7hTMVIg31Fxg8WrBFuCf0YPaOPjs7Ub0I5g3JD0yBY3a 7vqvzmDlSfRTCCv+JWxo5FHndu1vcqoXKcQB9X/pkOP0YzYt/szUR3osC5vhHhUkzZ in7RJuuSA/i577Omk7cBFrucvCtgzgMVcjEqkGHj8yMFvK5UeFse3tuGKx+FlxUaoE 7Y/tJCGTYWN6Q== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: lorenzo.bianconi@redhat.com, nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, daniel@makrotopia.org, linux-mediatek@lists.infradead.org, sujuan.chen@mediatek.com, robh+dt@kernel.org, krzysztof.kozlowski+dt@linaro.org, devicetree@vger.kernel.org Subject: [PATCH net-next 10/15] net: ethernet: mtk_wed: introduce WED support for MT7988 Date: Thu, 14 Sep 2023 16:38:15 +0200 Message-ID: <330efa9f15a6da8a8e7596d3a942f3e893730e12.1694701767.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Sujuan Chen Similar to MT7986 and MT7622, enable Wireless Ethernet Ditpatcher for MT7988 in order to offload traffic forwarded from LAN/WLAN to WLAN/LAN Co-developed-by: Lorenzo Bianconi Signed-off-by: Lorenzo Bianconi Signed-off-by: Sujuan Chen --- drivers/net/ethernet/mediatek/mtk_eth_soc.c | 1 + drivers/net/ethernet/mediatek/mtk_eth_soc.h | 2 +- .../net/ethernet/mediatek/mtk_ppe_offload.c | 3 + drivers/net/ethernet/mediatek/mtk_wed.c | 458 +++++++++++++----- drivers/net/ethernet/mediatek/mtk_wed.h | 28 ++ drivers/net/ethernet/mediatek/mtk_wed_mcu.c | 33 +- drivers/net/ethernet/mediatek/mtk_wed_regs.h | 228 ++++++++- 
drivers/net/ethernet/mediatek/mtk_wed_wo.h | 2 + include/linux/soc/mediatek/mtk_wed.h | 9 +- 9 files changed, 618 insertions(+), 146 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index 3cffd1bd3067..697620c6354b 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -197,6 +197,7 @@ static const struct mtk_reg_map mt7988_reg_map = { .wdma_base = { [0] = 0x4800, [1] = 0x4c00, + [2] = 0x5000, }, .pse_iq_sta = 0x0180, .pse_oq_sta = 0x01a0, diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h index 403219d987ef..9ae3b8a71d0e 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h @@ -1132,7 +1132,7 @@ struct mtk_reg_map { u32 gdm1_cnt; u32 gdma_to_ppe; u32 ppe_base; - u32 wdma_base[2]; + u32 wdma_base[3]; u32 pse_iq_sta; u32 pse_oq_sta; }; diff --git a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c index ef3980840695..95f76975f258 100644 --- a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c +++ b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c @@ -201,6 +201,9 @@ mtk_flow_set_output_device(struct mtk_eth *eth, struct mtk_foe_entry *foe, case 1: pse_port = PSE_WDMA1_PORT; break; + case 2: + pse_port = PSE_WDMA2_PORT; + break; default: return -EINVAL; } diff --git a/drivers/net/ethernet/mediatek/mtk_wed.c b/drivers/net/ethernet/mediatek/mtk_wed.c index 58d97be98029..0d8e10df9da2 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed.c +++ b/drivers/net/ethernet/mediatek/mtk_wed.c @@ -17,17 +17,19 @@ #include #include #include "mtk_eth_soc.h" -#include "mtk_wed_regs.h" #include "mtk_wed.h" #include "mtk_ppe.h" #include "mtk_wed_wo.h" #define MTK_PCIE_BASE(n) (0x1a143000 + (n) * 0x2000) -#define MTK_WED_PKT_SIZE 1900 +#define MTK_WED_PKT_SIZE 1920 #define MTK_WED_BUF_SIZE 2048 +#define MTK_WED_PAGE_BUF_SIZE 128 #define MTK_WED_BUF_PER_PAGE (PAGE_SIZE / 2048) +#define MTK_WED_RX_PAGE_BUF_PER_PAGE (PAGE_SIZE / 128) #define MTK_WED_RX_RING_SIZE 1536 +#define MTK_WED_RX_PG_BM_CNT 8192 #define MTK_WED_TX_RING_SIZE 2048 #define MTK_WED_WDMA_RING_SIZE 1024 @@ -41,7 +43,10 @@ #define MTK_WED_RRO_QUE_CNT 8192 #define MTK_WED_MIOD_ENTRY_CNT 128 -static struct mtk_wed_hw *hw_list[2]; +#define MTK_WED_TX_BM_DMA_SIZE 65536 +#define MTK_WED_TX_BM_PKT_CNT 32768 + +static struct mtk_wed_hw *hw_list[3]; static DEFINE_MUTEX(hw_lock); struct mtk_wed_flow_block_priv { @@ -300,33 +305,39 @@ mtk_wed_assign(struct mtk_wed_device *dev) static int mtk_wed_tx_buffer_alloc(struct mtk_wed_device *dev) { + int i, page_idx = 0, n_pages, ring_size; + int token = dev->wlan.token_start; struct mtk_wed_buf *page_list; - struct mtk_wdma_desc *desc; dma_addr_t desc_phys; - int token = dev->wlan.token_start; - int ring_size; - int n_pages; - int i, page_idx; + void *desc_ptr; - ring_size = dev->wlan.nbuf & ~(MTK_WED_BUF_PER_PAGE - 1); - n_pages = ring_size / MTK_WED_BUF_PER_PAGE; + if (!mtk_wed_is_v3_or_greater(dev->hw)) { + dev->tx_buf_ring.desc_size = sizeof(struct mtk_wdma_desc); + ring_size = dev->wlan.nbuf & ~(MTK_WED_BUF_PER_PAGE - 1); + dev->tx_buf_ring.size = ring_size; + } else { + dev->tx_buf_ring.desc_size = sizeof(struct mtk_wed_bm_desc); + dev->tx_buf_ring.size = MTK_WED_TX_BM_DMA_SIZE; + ring_size = MTK_WED_TX_BM_PKT_CNT; + } + n_pages = dev->tx_buf_ring.size / MTK_WED_BUF_PER_PAGE; page_list = kcalloc(n_pages, sizeof(*page_list), GFP_KERNEL); if 
(!page_list) return -ENOMEM; - dev->tx_buf_ring.size = ring_size; dev->tx_buf_ring.pages = page_list; - desc = dma_alloc_coherent(dev->hw->dev, ring_size * sizeof(*desc), - &desc_phys, GFP_KERNEL); - if (!desc) + desc_ptr = dma_alloc_coherent(dev->hw->dev, + dev->tx_buf_ring.size * dev->tx_buf_ring.desc_size, + &desc_phys, GFP_KERNEL); + if (!desc_ptr) return -ENOMEM; - dev->tx_buf_ring.desc = desc; + dev->tx_buf_ring.desc = desc_ptr; dev->tx_buf_ring.desc_phys = desc_phys; - for (i = 0, page_idx = 0; i < ring_size; i += MTK_WED_BUF_PER_PAGE) { + for (i = 0; i < ring_size; i += MTK_WED_BUF_PER_PAGE) { dma_addr_t page_phys, buf_phys; struct page *page; void *buf; @@ -352,28 +363,34 @@ mtk_wed_tx_buffer_alloc(struct mtk_wed_device *dev) buf_phys = page_phys; for (s = 0; s < MTK_WED_BUF_PER_PAGE; s++) { - u32 txd_size; - u32 ctrl; - - txd_size = dev->wlan.init_buf(buf, buf_phys, token++); + struct mtk_wdma_desc *desc = desc_ptr; desc->buf0 = cpu_to_le32(buf_phys); - desc->buf1 = cpu_to_le32(buf_phys + txd_size); - - if (mtk_wed_is_v1(dev->hw)) - ctrl = FIELD_PREP(MTK_WDMA_DESC_CTRL_LEN0, txd_size) | - FIELD_PREP(MTK_WDMA_DESC_CTRL_LEN1, - MTK_WED_BUF_SIZE - txd_size) | - MTK_WDMA_DESC_CTRL_LAST_SEG1; - else - ctrl = FIELD_PREP(MTK_WDMA_DESC_CTRL_LEN0, txd_size) | - FIELD_PREP(MTK_WDMA_DESC_CTRL_LEN1_V2, - MTK_WED_BUF_SIZE - txd_size) | - MTK_WDMA_DESC_CTRL_LAST_SEG0; - desc->ctrl = cpu_to_le32(ctrl); - desc->info = 0; - desc++; - + if (!mtk_wed_is_v3_or_greater(dev->hw)) { + u32 txd_size, ctrl; + + txd_size = dev->wlan.init_buf(buf, buf_phys, token++); + desc->buf1 = cpu_to_le32(buf_phys + txd_size); + + if (mtk_wed_is_v1(dev->hw)) + ctrl = FIELD_PREP(MTK_WDMA_DESC_CTRL_LEN0, + txd_size) | + FIELD_PREP(MTK_WDMA_DESC_CTRL_LEN1, + MTK_WED_BUF_SIZE - txd_size) | + MTK_WDMA_DESC_CTRL_LAST_SEG1; + else + ctrl = FIELD_PREP(MTK_WDMA_DESC_CTRL_LEN0, + txd_size) | + FIELD_PREP(MTK_WDMA_DESC_CTRL_LEN1_V2, + MTK_WED_BUF_SIZE - txd_size) | + MTK_WDMA_DESC_CTRL_LAST_SEG0; + desc->ctrl = cpu_to_le32(ctrl); + desc->info = 0; + } else { + desc->ctrl = cpu_to_le32(token << 16); + } + + desc_ptr += dev->tx_buf_ring.desc_size; buf += MTK_WED_BUF_SIZE; buf_phys += MTK_WED_BUF_SIZE; } @@ -389,31 +406,29 @@ static void mtk_wed_free_tx_buffer(struct mtk_wed_device *dev) { struct mtk_wed_buf *page_list = dev->tx_buf_ring.pages; - struct mtk_wdma_desc *desc = dev->tx_buf_ring.desc; - int page_idx; - int i; + int i, page_idx = 0; if (!page_list) return; - if (!desc) + if (!dev->tx_buf_ring.desc) goto free_pagelist; - for (i = 0, page_idx = 0; i < dev->tx_buf_ring.size; - i += MTK_WED_BUF_PER_PAGE) { - dma_addr_t buf_addr = page_list[page_idx].phy_addr; + for (i = 0; i < dev->tx_buf_ring.size; i += MTK_WED_BUF_PER_PAGE) { + dma_addr_t page_phy = page_list[page_idx].phy_addr; void *page = page_list[page_idx++].p; if (!page) break; - dma_unmap_page(dev->hw->dev, buf_addr, PAGE_SIZE, + dma_unmap_page(dev->hw->dev, page_phy, PAGE_SIZE, DMA_BIDIRECTIONAL); __free_page(page); } - dma_free_coherent(dev->hw->dev, dev->tx_buf_ring.size * sizeof(*desc), - desc, dev->tx_buf_ring.desc_phys); + dma_free_coherent(dev->hw->dev, + dev->tx_buf_ring.size * dev->tx_buf_ring.desc_size, + dev->tx_buf_ring.desc, dev->tx_buf_ring.desc_phys); free_pagelist: kfree(page_list); @@ -498,13 +513,22 @@ mtk_wed_set_ext_int(struct mtk_wed_device *dev, bool en) { u32 mask = MTK_WED_EXT_INT_STATUS_ERROR_MASK; - if (mtk_wed_is_v1(dev->hw)) + switch (dev->hw->version) { + case 1: mask |= MTK_WED_EXT_INT_STATUS_TX_DRV_R_RESP_ERR; - else + break; + case 2: 
mask |= MTK_WED_EXT_INT_STATUS_RX_FBUF_LO_TH | MTK_WED_EXT_INT_STATUS_RX_FBUF_HI_TH | MTK_WED_EXT_INT_STATUS_RX_DRV_COHERENT | MTK_WED_EXT_INT_STATUS_TX_DMA_W_RESP_ERR; + break; + case 3: + mask = MTK_WED_EXT_INT_STATUS_RX_DRV_COHERENT; + break; + default: + break; + } if (!dev->hw->num_flows) mask &= ~MTK_WED_EXT_INT_STATUS_TKID_WO_PYLD; @@ -516,6 +540,9 @@ mtk_wed_set_ext_int(struct mtk_wed_device *dev, bool en) static void mtk_wed_set_512_support(struct mtk_wed_device *dev, bool enable) { + if (!mtk_wed_is_v2(dev->hw)) + return; + if (enable) { wed_w32(dev, MTK_WED_TXDP_CTRL, MTK_WED_TXDP_DW9_OVERWR); wed_w32(dev, MTK_WED_TXP_DW1, @@ -590,6 +617,14 @@ mtk_wed_dma_disable(struct mtk_wed_device *dev) MTK_WED_WPDMA_RX_D_RX_DRV_EN); wed_clr(dev, MTK_WED_WDMA_GLO_CFG, MTK_WED_WDMA_GLO_CFG_TX_DDONE_CHK); + + if (mtk_wed_is_v3_or_greater(dev->hw) && + mtk_wed_get_rx_capa(dev)) { + wdma_clr(dev, MTK_WDMA_PREF_TX_CFG, + MTK_WDMA_PREF_TX_CFG_PREF_EN); + wdma_clr(dev, MTK_WDMA_PREF_RX_CFG, + MTK_WDMA_PREF_RX_CFG_PREF_EN); + } } mtk_wed_set_512_support(dev, false); @@ -632,6 +667,14 @@ mtk_wed_deinit(struct mtk_wed_device *dev) MTK_WED_CTRL_RX_ROUTE_QM_EN | MTK_WED_CTRL_WED_RX_BM_EN | MTK_WED_CTRL_RX_RRO_QM_EN); + + if (mtk_wed_is_v3_or_greater(dev->hw)) { + wed_clr(dev, MTK_WED_CTRL, MTK_WED_CTRL_TX_AMSDU_EN); + wed_clr(dev, MTK_WED_RESET, MTK_WED_RESET_TX_AMSDU); + wed_clr(dev, MTK_WED_PCIE_INT_CTRL, + MTK_WED_PCIE_INT_CTRL_MSK_EN_POLA | + MTK_WED_PCIE_INT_CTRL_MSK_IRQ_FILTER); + } } static void @@ -681,21 +724,37 @@ mtk_wed_detach(struct mtk_wed_device *dev) mutex_unlock(&hw_lock); } -#define PCIE_BASE_ADDR0 0x11280000 static void mtk_wed_bus_init(struct mtk_wed_device *dev) { switch (dev->wlan.bus_type) { case MTK_WED_BUS_PCIE: { struct device_node *np = dev->hw->eth->dev->of_node; - struct regmap *regs; - regs = syscon_regmap_lookup_by_phandle(np, - "mediatek,wed-pcie"); - if (IS_ERR(regs)) - break; + if (mtk_wed_is_v2(dev->hw)) { + struct regmap *regs; + + regs = syscon_regmap_lookup_by_phandle(np, + "mediatek,wed-pcie"); + if (IS_ERR(regs)) + break; - regmap_update_bits(regs, 0, BIT(0), BIT(0)); + regmap_update_bits(regs, 0, BIT(0), BIT(0)); + } + + if (dev->wlan.msi) { + wed_w32(dev, MTK_WED_PCIE_CFG_INTM, + dev->hw->pcie_base | 0xc08); + wed_w32(dev, MTK_WED_PCIE_CFG_BASE, + dev->hw->pcie_base | 0xc04); + wed_w32(dev, MTK_WED_PCIE_INT_TRIGGER, BIT(8)); + } else { + wed_w32(dev, MTK_WED_PCIE_CFG_INTM, + dev->hw->pcie_base | 0x180); + wed_w32(dev, MTK_WED_PCIE_CFG_BASE, + dev->hw->pcie_base | 0x184); + wed_w32(dev, MTK_WED_PCIE_INT_TRIGGER, BIT(24)); + } wed_w32(dev, MTK_WED_PCIE_INT_CTRL, FIELD_PREP(MTK_WED_PCIE_INT_CTRL_POLL_EN, 2)); @@ -703,19 +762,9 @@ mtk_wed_bus_init(struct mtk_wed_device *dev) /* pcie interrupt control: pola/source selection */ wed_set(dev, MTK_WED_PCIE_INT_CTRL, MTK_WED_PCIE_INT_CTRL_MSK_EN_POLA | - FIELD_PREP(MTK_WED_PCIE_INT_CTRL_SRC_SEL, 1)); - wed_r32(dev, MTK_WED_PCIE_INT_CTRL); - - wed_w32(dev, MTK_WED_PCIE_CFG_INTM, PCIE_BASE_ADDR0 | 0x180); - wed_w32(dev, MTK_WED_PCIE_CFG_BASE, PCIE_BASE_ADDR0 | 0x184); - - /* pcie interrupt status trigger register */ - wed_w32(dev, MTK_WED_PCIE_INT_TRIGGER, BIT(24)); - wed_r32(dev, MTK_WED_PCIE_INT_TRIGGER); - - /* pola setting */ - wed_set(dev, MTK_WED_PCIE_INT_CTRL, - MTK_WED_PCIE_INT_CTRL_MSK_EN_POLA); + MTK_WED_PCIE_INT_CTRL_MSK_IRQ_FILTER | + FIELD_PREP(MTK_WED_PCIE_INT_CTRL_SRC_SEL, + dev->hw->index)); break; } case MTK_WED_BUS_AXI: @@ -747,7 +796,10 @@ mtk_wed_set_wpdma(struct mtk_wed_device *dev) return; wed_w32(dev, 
MTK_WED_WPDMA_RX_GLO_CFG, dev->wlan.wpdma_rx_glo); - wed_w32(dev, MTK_WED_WPDMA_RX_RING, dev->wlan.wpdma_rx); + if (mtk_wed_is_v3_or_greater(dev->hw)) + wed_w32(dev, MTK_WED_WPDMA_RX_RING0_V3, dev->wlan.wpdma_rx); + else + wed_w32(dev, MTK_WED_WPDMA_RX_RING0, dev->wlan.wpdma_rx); } static void @@ -759,12 +811,17 @@ mtk_wed_hw_init_early(struct mtk_wed_device *dev) mtk_wed_reset(dev, MTK_WED_RESET_WED); mtk_wed_set_wpdma(dev); - mask = MTK_WED_WDMA_GLO_CFG_BT_SIZE | - MTK_WED_WDMA_GLO_CFG_DYNAMIC_DMAD_RECYCLE | - MTK_WED_WDMA_GLO_CFG_RX_DIS_FSM_AUTO_IDLE; - set = FIELD_PREP(MTK_WED_WDMA_GLO_CFG_BT_SIZE, 2) | - MTK_WED_WDMA_GLO_CFG_DYNAMIC_SKIP_DMAD_PREP | - MTK_WED_WDMA_GLO_CFG_IDLE_DMAD_SUPPLY; + if (mtk_wed_is_v3_or_greater(dev->hw)) { + mask = MTK_WED_WDMA_GLO_CFG_BT_SIZE; + set = FIELD_PREP(MTK_WED_WDMA_GLO_CFG_BT_SIZE, 2); + } else { + mask = MTK_WED_WDMA_GLO_CFG_BT_SIZE | + MTK_WED_WDMA_GLO_CFG_DYNAMIC_DMAD_RECYCLE | + MTK_WED_WDMA_GLO_CFG_RX_DIS_FSM_AUTO_IDLE; + set = FIELD_PREP(MTK_WED_WDMA_GLO_CFG_BT_SIZE, 2) | + MTK_WED_WDMA_GLO_CFG_DYNAMIC_SKIP_DMAD_PREP | + MTK_WED_WDMA_GLO_CFG_IDLE_DMAD_SUPPLY; + } wed_m32(dev, MTK_WED_WDMA_GLO_CFG, mask, set); if (mtk_wed_is_v1(dev->hw)) { @@ -912,11 +969,18 @@ mtk_wed_route_qm_hw_init(struct mtk_wed_device *dev) } /* configure RX_ROUTE_QM */ - wed_clr(dev, MTK_WED_RTQM_GLO_CFG, MTK_WED_RTQM_Q_RST); - wed_clr(dev, MTK_WED_RTQM_GLO_CFG, MTK_WED_RTQM_TXDMAD_FPORT); - wed_set(dev, MTK_WED_RTQM_GLO_CFG, - FIELD_PREP(MTK_WED_RTQM_TXDMAD_FPORT, 0x3 + dev->hw->index)); - wed_clr(dev, MTK_WED_RTQM_GLO_CFG, MTK_WED_RTQM_Q_RST); + if (mtk_wed_is_v2(dev->hw)) { + wed_clr(dev, MTK_WED_RTQM_GLO_CFG, MTK_WED_RTQM_Q_RST); + wed_clr(dev, MTK_WED_RTQM_GLO_CFG, MTK_WED_RTQM_TXDMAD_FPORT); + wed_set(dev, MTK_WED_RTQM_GLO_CFG, + FIELD_PREP(MTK_WED_RTQM_TXDMAD_FPORT, + 0x3 + dev->hw->index)); + wed_clr(dev, MTK_WED_RTQM_GLO_CFG, MTK_WED_RTQM_Q_RST); + } else { + wed_set(dev, MTK_WED_RTQM_ENQ_CFG0, + FIELD_PREP(MTK_WED_RTQM_ENQ_CFG_TXDMAD_FPORT, + 0x3 + dev->hw->index)); + } /* enable RX_ROUTE_QM */ wed_set(dev, MTK_WED_CTRL, MTK_WED_CTRL_RX_ROUTE_QM_EN); } @@ -929,18 +993,17 @@ mtk_wed_hw_init(struct mtk_wed_device *dev) dev->init_done = true; mtk_wed_set_ext_int(dev, false); - wed_w32(dev, MTK_WED_TX_BM_CTRL, - MTK_WED_TX_BM_CTRL_PAUSE | - FIELD_PREP(MTK_WED_TX_BM_CTRL_VLD_GRP_NUM, - dev->tx_buf_ring.size / 128) | - FIELD_PREP(MTK_WED_TX_BM_CTRL_RSV_GRP_NUM, - MTK_WED_TX_RING_SIZE / 256)); wed_w32(dev, MTK_WED_TX_BM_BASE, dev->tx_buf_ring.desc_phys); - wed_w32(dev, MTK_WED_TX_BM_BUF_LEN, MTK_WED_PKT_SIZE); if (mtk_wed_is_v1(dev->hw)) { + wed_w32(dev, MTK_WED_TX_BM_CTRL, + MTK_WED_TX_BM_CTRL_PAUSE | + FIELD_PREP(MTK_WED_TX_BM_CTRL_VLD_GRP_NUM, + dev->tx_buf_ring.size / 128) | + FIELD_PREP(MTK_WED_TX_BM_CTRL_RSV_GRP_NUM, + MTK_WED_TX_RING_SIZE / 256)); wed_w32(dev, MTK_WED_TX_BM_TKID, FIELD_PREP(MTK_WED_TX_BM_TKID_START, dev->wlan.token_start) | @@ -951,48 +1014,93 @@ mtk_wed_hw_init(struct mtk_wed_device *dev) FIELD_PREP(MTK_WED_TX_BM_DYN_THR_LO, 1) | MTK_WED_TX_BM_DYN_THR_HI); } else { + if (mtk_wed_is_v2(dev->hw)) { + wed_w32(dev, MTK_WED_TX_BM_CTRL, + MTK_WED_TX_BM_CTRL_PAUSE | + FIELD_PREP(MTK_WED_TX_BM_CTRL_VLD_GRP_NUM, + dev->tx_buf_ring.size / 128) | + FIELD_PREP(MTK_WED_TX_BM_CTRL_RSV_GRP_NUM, + MTK_WED_TX_RING_SIZE / 256)); + wed_w32(dev, MTK_WED_TX_TKID_DYN_THR, + FIELD_PREP(MTK_WED_TX_TKID_DYN_THR_LO, 0) | + MTK_WED_TX_TKID_DYN_THR_HI); + wed_w32(dev, MTK_WED_TX_BM_DYN_THR, + FIELD_PREP(MTK_WED_TX_BM_DYN_THR_LO_V2, 0) | + MTK_WED_TX_BM_DYN_THR_HI_V2); + 
wed_w32(dev, MTK_WED_TX_TKID_CTRL, + MTK_WED_TX_TKID_CTRL_PAUSE | + FIELD_PREP(MTK_WED_TX_TKID_CTRL_VLD_GRP_NUM, + dev->tx_buf_ring.size / 128) | + FIELD_PREP(MTK_WED_TX_TKID_CTRL_RSV_GRP_NUM, + dev->tx_buf_ring.size / 128)); + } + wed_w32(dev, MTK_WED_TX_BM_TKID_V2, FIELD_PREP(MTK_WED_TX_BM_TKID_START, dev->wlan.token_start) | FIELD_PREP(MTK_WED_TX_BM_TKID_END, dev->wlan.token_start + dev->wlan.nbuf - 1)); - wed_w32(dev, MTK_WED_TX_BM_DYN_THR, - FIELD_PREP(MTK_WED_TX_BM_DYN_THR_LO_V2, 0) | - MTK_WED_TX_BM_DYN_THR_HI_V2); - wed_w32(dev, MTK_WED_TX_TKID_CTRL, - MTK_WED_TX_TKID_CTRL_PAUSE | - FIELD_PREP(MTK_WED_TX_TKID_CTRL_VLD_GRP_NUM, - dev->tx_buf_ring.size / 128) | - FIELD_PREP(MTK_WED_TX_TKID_CTRL_RSV_GRP_NUM, - dev->tx_buf_ring.size / 128)); - wed_w32(dev, MTK_WED_TX_TKID_DYN_THR, - FIELD_PREP(MTK_WED_TX_TKID_DYN_THR_LO, 0) | - MTK_WED_TX_TKID_DYN_THR_HI); } mtk_wed_reset(dev, MTK_WED_RESET_TX_BM); + if (mtk_wed_is_v3_or_greater(dev->hw)) { + /* switch to new bm architecture */ + wed_clr(dev, MTK_WED_TX_BM_CTRL, + MTK_WED_TX_BM_CTRL_LEGACY_EN); + + wed_w32(dev, MTK_WED_TX_TKID_CTRL, + MTK_WED_TX_TKID_CTRL_PAUSE | + FIELD_PREP(MTK_WED_TX_TKID_CTRL_VLD_GRP_NUM_V3, + dev->wlan.nbuf / 128) | + FIELD_PREP(MTK_WED_TX_TKID_CTRL_RSV_GRP_NUM_V3, + dev->wlan.nbuf / 128)); + /* return SKBID + SDP back to bm */ + wed_set(dev, MTK_WED_TX_TKID_CTRL, + MTK_WED_TX_TKID_CTRL_FREE_FORMAT); + + wed_w32(dev, MTK_WED_TX_BM_INIT_PTR, + MTK_WED_TX_BM_PKT_CNT | + MTK_WED_TX_BM_INIT_SW_TAIL_IDX); + } + if (mtk_wed_is_v1(dev->hw)) { wed_set(dev, MTK_WED_CTRL, MTK_WED_CTRL_WED_TX_BM_EN | MTK_WED_CTRL_WED_TX_FREE_AGENT_EN); - } else { - wed_clr(dev, MTK_WED_TX_TKID_CTRL, MTK_WED_TX_TKID_CTRL_PAUSE); - if (mtk_wed_get_rx_capa(dev)) { - /* rx hw init */ - wed_w32(dev, MTK_WED_WPDMA_RX_D_RST_IDX, - MTK_WED_WPDMA_RX_D_RST_CRX_IDX | - MTK_WED_WPDMA_RX_D_RST_DRV_IDX); - wed_w32(dev, MTK_WED_WPDMA_RX_D_RST_IDX, 0); - - mtk_wed_rx_buffer_hw_init(dev); - mtk_wed_rro_hw_init(dev); - mtk_wed_route_qm_hw_init(dev); - } + } else if (mtk_wed_get_rx_capa(dev)) { + /* rx hw init */ + wed_w32(dev, MTK_WED_WPDMA_RX_D_RST_IDX, + MTK_WED_WPDMA_RX_D_RST_CRX_IDX | + MTK_WED_WPDMA_RX_D_RST_DRV_IDX); + wed_w32(dev, MTK_WED_WPDMA_RX_D_RST_IDX, 0); + + /* reset prefetch index of ring */ + wed_set(dev, MTK_WED_WPDMA_RX_D_PREF_RX0_SIDX, + MTK_WED_WPDMA_RX_D_PREF_SIDX_IDX_CLR); + wed_clr(dev, MTK_WED_WPDMA_RX_D_PREF_RX0_SIDX, + MTK_WED_WPDMA_RX_D_PREF_SIDX_IDX_CLR); + + wed_set(dev, MTK_WED_WPDMA_RX_D_PREF_RX1_SIDX, + MTK_WED_WPDMA_RX_D_PREF_SIDX_IDX_CLR); + wed_clr(dev, MTK_WED_WPDMA_RX_D_PREF_RX1_SIDX, + MTK_WED_WPDMA_RX_D_PREF_SIDX_IDX_CLR); + + /* reset prefetch FIFO of ring */ + wed_set(dev, MTK_WED_WPDMA_RX_D_PREF_FIFO_CFG, + MTK_WED_WPDMA_RX_D_PREF_FIFO_CFG_R0_CLR | + MTK_WED_WPDMA_RX_D_PREF_FIFO_CFG_R1_CLR); + wed_w32(dev, MTK_WED_WPDMA_RX_D_PREF_FIFO_CFG, 0); + + mtk_wed_rx_buffer_hw_init(dev); + mtk_wed_rro_hw_init(dev); + mtk_wed_route_qm_hw_init(dev); } wed_clr(dev, MTK_WED_TX_BM_CTRL, MTK_WED_TX_BM_CTRL_PAUSE); + if (!mtk_wed_is_v1(dev->hw)) + wed_clr(dev, MTK_WED_TX_TKID_CTRL, MTK_WED_TX_TKID_CTRL_PAUSE); } static void @@ -1305,6 +1413,24 @@ mtk_wed_wdma_tx_ring_setup(struct mtk_wed_device *dev, int idx, int size, desc_size, true)) return -ENOMEM; + if (mtk_wed_is_v3_or_greater(dev->hw)) { + struct mtk_wdma_desc *desc = wdma->desc; + int i; + + for (i = 0; i < MTK_WED_WDMA_RING_SIZE; i++) { + desc->buf0 = 0; + desc->ctrl = cpu_to_le32(MTK_WDMA_DESC_CTRL_DMA_DONE); + desc->buf1 = 0; + desc->info = 
cpu_to_le32(MTK_WDMA_TXD0_DESC_INFO_DMA_DONE); + desc++; + desc->buf0 = 0; + desc->ctrl = cpu_to_le32(MTK_WDMA_DESC_CTRL_DMA_DONE); + desc->buf1 = 0; + desc->info = cpu_to_le32(MTK_WDMA_TXD1_DESC_INFO_DMA_DONE); + desc++; + } + } + wdma_w32(dev, MTK_WDMA_RING_TX(idx) + MTK_WED_RING_OFS_BASE, wdma->desc_phys); wdma_w32(dev, MTK_WDMA_RING_TX(idx) + MTK_WED_RING_OFS_COUNT, @@ -1370,6 +1496,9 @@ mtk_wed_configure_irq(struct mtk_wed_device *dev, u32 irq_mask) wed_clr(dev, MTK_WED_WDMA_INT_CTRL, wdma_mask); } else { + if (mtk_wed_is_v3_or_greater(dev->hw)) + wed_set(dev, MTK_WED_CTRL, MTK_WED_CTRL_TX_TKID_ALI_EN); + /* initail tx interrupt trigger */ wed_w32(dev, MTK_WED_WPDMA_INT_CTRL_TX, MTK_WED_WPDMA_INT_CTRL_TX0_DONE_EN | @@ -1422,33 +1551,60 @@ mtk_wed_dma_enable(struct mtk_wed_device *dev) { int i; - wed_set(dev, MTK_WED_WPDMA_INT_CTRL, MTK_WED_WPDMA_INT_CTRL_SUBRT_ADV); + if (!mtk_wed_is_v3_or_greater(dev->hw)) { + wed_set(dev, MTK_WED_WPDMA_INT_CTRL, + MTK_WED_WPDMA_INT_CTRL_SUBRT_ADV); + wed_set(dev, MTK_WED_WPDMA_GLO_CFG, + MTK_WED_WPDMA_GLO_CFG_TX_DRV_EN | + MTK_WED_WPDMA_GLO_CFG_RX_DRV_EN); + wdma_set(dev, MTK_WDMA_GLO_CFG, + MTK_WDMA_GLO_CFG_TX_DMA_EN | + MTK_WDMA_GLO_CFG_RX_INFO1_PRERES | + MTK_WDMA_GLO_CFG_RX_INFO2_PRERES); + wed_set(dev, MTK_WED_WPDMA_CTRL, MTK_WED_WPDMA_CTRL_SDL1_FIXED); + } else { + wed_set(dev, MTK_WED_WPDMA_GLO_CFG, + MTK_WED_WPDMA_GLO_CFG_TX_DRV_EN | + MTK_WED_WPDMA_GLO_CFG_RX_DRV_EN | + MTK_WED_WPDMA_GLO_CFG_RX_DDONE2_WR); + wdma_set(dev, MTK_WDMA_GLO_CFG, MTK_WDMA_GLO_CFG_TX_DMA_EN); + } wed_set(dev, MTK_WED_GLO_CFG, MTK_WED_GLO_CFG_TX_DMA_EN | MTK_WED_GLO_CFG_RX_DMA_EN); - wed_set(dev, MTK_WED_WPDMA_GLO_CFG, - MTK_WED_WPDMA_GLO_CFG_TX_DRV_EN | - MTK_WED_WPDMA_GLO_CFG_RX_DRV_EN); + wed_set(dev, MTK_WED_WDMA_GLO_CFG, MTK_WED_WDMA_GLO_CFG_RX_DRV_EN); - wdma_set(dev, MTK_WDMA_GLO_CFG, - MTK_WDMA_GLO_CFG_TX_DMA_EN | - MTK_WDMA_GLO_CFG_RX_INFO1_PRERES | - MTK_WDMA_GLO_CFG_RX_INFO2_PRERES); - if (mtk_wed_is_v1(dev->hw)) { wdma_set(dev, MTK_WDMA_GLO_CFG, MTK_WDMA_GLO_CFG_RX_INFO3_PRERES); return; } - wed_set(dev, MTK_WED_WPDMA_CTRL, - MTK_WED_WPDMA_CTRL_SDL1_FIXED); wed_set(dev, MTK_WED_WPDMA_GLO_CFG, MTK_WED_WPDMA_GLO_CFG_RX_DRV_R0_PKT_PROC | MTK_WED_WPDMA_GLO_CFG_RX_DRV_R0_CRX_SYNC); + + if (mtk_wed_is_v3_or_greater(dev->hw)) { + wed_set(dev, MTK_WED_WDMA_RX_PREF_CFG, + FIELD_PREP(MTK_WED_WDMA_RX_PREF_BURST_SIZE, 0x10) | + FIELD_PREP(MTK_WED_WDMA_RX_PREF_LOW_THRES, 0x8)); + wed_clr(dev, MTK_WED_WDMA_RX_PREF_CFG, + MTK_WED_WDMA_RX_PREF_DDONE2_EN); + wed_set(dev, MTK_WED_WDMA_RX_PREF_CFG, MTK_WED_WDMA_RX_PREF_EN); + + wed_clr(dev, MTK_WED_WPDMA_GLO_CFG, + MTK_WED_WPDMA_GLO_CFG_TX_DDONE_CHK_LAST); + wed_set(dev, MTK_WED_WPDMA_GLO_CFG, + MTK_WED_WPDMA_GLO_CFG_TX_DDONE_CHK | + MTK_WED_WPDMA_GLO_CFG_RX_DRV_EVENT_PKT_FMT_CHK | + MTK_WED_WPDMA_GLO_CFG_RX_DRV_UNS_VER_FORCE_4); + + wdma_set(dev, MTK_WDMA_PREF_RX_CFG, MTK_WDMA_PREF_RX_CFG_PREF_EN); + } + wed_clr(dev, MTK_WED_WPDMA_GLO_CFG, MTK_WED_WPDMA_GLO_CFG_TX_TKID_KEEP | MTK_WED_WPDMA_GLO_CFG_TX_DMAD_DW3_PREV); @@ -1460,11 +1616,22 @@ mtk_wed_dma_enable(struct mtk_wed_device *dev) MTK_WED_WDMA_GLO_CFG_TX_DRV_EN | MTK_WED_WDMA_GLO_CFG_TX_DDONE_CHK); + wed_clr(dev, MTK_WED_WPDMA_RX_D_GLO_CFG, MTK_WED_WPDMA_RX_D_RXD_READ_LEN); wed_set(dev, MTK_WED_WPDMA_RX_D_GLO_CFG, MTK_WED_WPDMA_RX_D_RX_DRV_EN | FIELD_PREP(MTK_WED_WPDMA_RX_D_RXD_READ_LEN, 0x18) | - FIELD_PREP(MTK_WED_WPDMA_RX_D_INIT_PHASE_RXEN_SEL, - 0x2)); + FIELD_PREP(MTK_WED_WPDMA_RX_D_INIT_PHASE_RXEN_SEL, 0x2)); + + if (mtk_wed_is_v3_or_greater(dev->hw)) { + 
wed_set(dev, MTK_WED_WPDMA_RX_D_PREF_CFG, + MTK_WED_WPDMA_RX_D_PREF_EN | + FIELD_PREP(MTK_WED_WPDMA_RX_D_PREF_BURST_SIZE, 0x10) | + FIELD_PREP(MTK_WED_WPDMA_RX_D_PREF_LOW_THRES, 0x8)); + + wed_set(dev, MTK_WED_RRO_RX_D_CFG(2), MTK_WED_RRO_RX_D_DRV_EN); + wdma_set(dev, MTK_WDMA_PREF_TX_CFG, MTK_WDMA_PREF_TX_CFG_PREF_EN); + wdma_set(dev, MTK_WDMA_WRBK_TX_CFG, MTK_WDMA_WRBK_TX_CFG_WRBK_EN); + } for (i = 0; i < MTK_WED_RX_QUEUES; i++) mtk_wed_check_wfdma_rx_fill(dev, i); @@ -1504,6 +1671,12 @@ mtk_wed_start(struct mtk_wed_device *dev, u32 irq_mask) wed_r32(dev, MTK_WED_EXT_INT_MASK1); wed_r32(dev, MTK_WED_EXT_INT_MASK2); + if (mtk_wed_is_v3_or_greater(dev->hw)) { + wed_w32(dev, MTK_WED_EXT_INT_MASK3, + MTK_WED_EXT_INT_STATUS_WPDMA_MID_RDY); + wed_r32(dev, MTK_WED_EXT_INT_MASK3); + } + if (mtk_wed_rro_cfg(dev)) return; } @@ -1555,6 +1728,7 @@ mtk_wed_attach(struct mtk_wed_device *dev) dev->irq = hw->irq; dev->wdma_idx = hw->index; dev->version = hw->version; + dev->hw->pcie_base = mtk_wed_get_pcie_base(dev); if (hw->eth->dma_dev == hw->eth->dev && of_dma_is_coherent(hw->eth->dev->of_node)) @@ -1622,6 +1796,23 @@ mtk_wed_tx_ring_setup(struct mtk_wed_device *dev, int idx, void __iomem *regs, ring->reg_base = MTK_WED_RING_TX(idx); ring->wpdma = regs; + if (mtk_wed_is_v3_or_greater(dev->hw) && idx == 1) { + /* reset prefetch index */ + wed_set(dev, MTK_WED_WDMA_RX_PREF_CFG, + MTK_WED_WDMA_RX_PREF_RX0_SIDX_CLR | + MTK_WED_WDMA_RX_PREF_RX1_SIDX_CLR); + + wed_clr(dev, MTK_WED_WDMA_RX_PREF_CFG, + MTK_WED_WDMA_RX_PREF_RX0_SIDX_CLR | + MTK_WED_WDMA_RX_PREF_RX1_SIDX_CLR); + + /* reset prefetch FIFO */ + wed_w32(dev, MTK_WED_WDMA_RX_PREF_FIFO_CFG, + MTK_WED_WDMA_RX_PREF_FIFO_RX0_CLR | + MTK_WED_WDMA_RX_PREF_FIFO_RX1_CLR); + wed_w32(dev, MTK_WED_WDMA_RX_PREF_FIFO_CFG, 0); + } + /* WED -> WPDMA */ wpdma_tx_w32(dev, idx, MTK_WED_RING_OFS_BASE, ring->desc_phys); wpdma_tx_w32(dev, idx, MTK_WED_RING_OFS_COUNT, MTK_WED_TX_RING_SIZE); @@ -1698,13 +1889,22 @@ mtk_wed_irq_get(struct mtk_wed_device *dev, u32 mask) { u32 val, ext_mask = MTK_WED_EXT_INT_STATUS_ERROR_MASK; - if (mtk_wed_is_v1(dev->hw)) + switch (dev->hw->version) { + case 1: ext_mask |= MTK_WED_EXT_INT_STATUS_TX_DRV_R_RESP_ERR; - else - ext_mask |= MTK_WED_EXT_INT_STATUS_RX_FBUF_LO_TH | - MTK_WED_EXT_INT_STATUS_RX_FBUF_HI_TH | + break; + case 2: + ext_mask |= MTK_WED_EXT_INT_STATUS_RX_FBUF_LO_TH_V2 | + MTK_WED_EXT_INT_STATUS_RX_FBUF_HI_TH_V2 | MTK_WED_EXT_INT_STATUS_RX_DRV_COHERENT | MTK_WED_EXT_INT_STATUS_TX_DMA_W_RESP_ERR; + break; + case 3: + ext_mask = MTK_WED_EXT_INT_STATUS_RX_DRV_COHERENT; + break; + default: + break; + } val = wed_r32(dev, MTK_WED_EXT_INT_STATUS); wed_w32(dev, MTK_WED_EXT_INT_STATUS, val); diff --git a/drivers/net/ethernet/mediatek/mtk_wed.h b/drivers/net/ethernet/mediatek/mtk_wed.h index 6f5db891a6b9..224ff00bdd8b 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed.h +++ b/drivers/net/ethernet/mediatek/mtk_wed.h @@ -9,6 +9,8 @@ #include #include +#include "mtk_wed_regs.h" + struct mtk_eth; struct mtk_wed_wo; @@ -24,6 +26,7 @@ struct mtk_wed_hw { struct dentry *debugfs_dir; struct mtk_wed_device *wed_dev; struct mtk_wed_wo *wed_wo; + u32 pcie_base; u32 debugfs_reg; u32 num_flows; u8 version; @@ -50,6 +53,16 @@ static inline bool mtk_wed_is_v2(struct mtk_wed_hw *hw) return hw->version == 2; } +static inline bool mtk_wed_is_v3(struct mtk_wed_hw *hw) +{ + return hw->version == 3; +} + +static inline bool mtk_wed_is_v3_or_greater(struct mtk_wed_hw *hw) +{ + return hw->version > 2; +} + static inline void wed_w32(struct mtk_wed_device 
*dev, u32 reg, u32 val) { @@ -132,6 +145,21 @@ wpdma_txfree_w32(struct mtk_wed_device *dev, u32 reg, u32 val) writel(val, dev->txfree_ring.wpdma + reg); } +static inline u32 mtk_wed_get_pcie_base(struct mtk_wed_device *dev) +{ + if (!mtk_wed_is_v3_or_greater(dev->hw)) + return MTK_WED_PCIE_BASE; + + switch (dev->hw->index) { + case 1: + return MTK_WED_PCIE_BASE1; + case 2: + return MTK_WED_PCIE_BASE2; + default: + return MTK_WED_PCIE_BASE0; + } +} + void mtk_wed_add_hw(struct device_node *np, struct mtk_eth *eth, void __iomem *wdma, phys_addr_t wdma_phy, int index); diff --git a/drivers/net/ethernet/mediatek/mtk_wed_mcu.c b/drivers/net/ethernet/mediatek/mtk_wed_mcu.c index e53531252bd9..65a78e274009 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_mcu.c +++ b/drivers/net/ethernet/mediatek/mtk_wed_mcu.c @@ -331,10 +331,22 @@ mtk_wed_mcu_load_firmware(struct mtk_wed_wo *wo) wo->hw->index + 1); /* load firmware */ - if (of_device_is_compatible(wo->hw->node, "mediatek,mt7981-wed")) - fw_name = MT7981_FIRMWARE_WO; - else - fw_name = wo->hw->index ? MT7986_FIRMWARE_WO1 : MT7986_FIRMWARE_WO0; + switch (wo->hw->version) { + case 2: + if (of_device_is_compatible(wo->hw->node, + "mediatek,mt7981-wed")) + fw_name = MT7981_FIRMWARE_WO; + else + fw_name = wo->hw->index ? MT7986_FIRMWARE_WO1 + : MT7986_FIRMWARE_WO0; + break; + case 3: + fw_name = wo->hw->index ? MT7988_FIRMWARE_WO1 + : MT7988_FIRMWARE_WO0; + break; + default: + return -EINVAL; + } ret = request_firmware(&fw, fw_name, wo->hw->dev); if (ret) @@ -355,15 +367,16 @@ mtk_wed_mcu_load_firmware(struct mtk_wed_wo *wo) } /* set the start address */ - boot_cr = wo->hw->index ? MTK_WO_MCU_CFG_LS_WA_BOOT_ADDR_ADDR - : MTK_WO_MCU_CFG_LS_WM_BOOT_ADDR_ADDR; + if (!mtk_wed_is_v3_or_greater(wo->hw) && wo->hw->index) + boot_cr = MTK_WO_MCU_CFG_LS_WA_BOOT_ADDR_ADDR; + else + boot_cr = MTK_WO_MCU_CFG_LS_WM_BOOT_ADDR_ADDR; wo_w32(wo, boot_cr, mem_region[MTK_WED_WO_REGION_EMI].phy_addr >> 16); /* wo firmware reset */ wo_w32(wo, MTK_WO_MCU_CFG_LS_WF_MCCR_CLR_ADDR, 0xc00); - val = wo_r32(wo, MTK_WO_MCU_CFG_LS_WF_MCU_CFG_WM_WA_ADDR); - val |= wo->hw->index ? 
MTK_WO_MCU_CFG_LS_WF_WM_WA_WA_CPU_RSTB_MASK - : MTK_WO_MCU_CFG_LS_WF_WM_WA_WM_CPU_RSTB_MASK; + val = wo_r32(wo, MTK_WO_MCU_CFG_LS_WF_MCU_CFG_WM_WA_ADDR) | + MTK_WO_MCU_CFG_LS_WF_WM_WA_WM_CPU_RSTB_MASK; wo_w32(wo, MTK_WO_MCU_CFG_LS_WF_MCU_CFG_WM_WA_ADDR, val); out: release_firmware(fw); @@ -398,3 +411,5 @@ int mtk_wed_mcu_init(struct mtk_wed_wo *wo) MODULE_FIRMWARE(MT7981_FIRMWARE_WO); MODULE_FIRMWARE(MT7986_FIRMWARE_WO0); MODULE_FIRMWARE(MT7986_FIRMWARE_WO1); +MODULE_FIRMWARE(MT7988_FIRMWARE_WO0); +MODULE_FIRMWARE(MT7988_FIRMWARE_WO1); diff --git a/drivers/net/ethernet/mediatek/mtk_wed_regs.h b/drivers/net/ethernet/mediatek/mtk_wed_regs.h index 47ea69feb3b2..d50ccdd3a69b 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_regs.h +++ b/drivers/net/ethernet/mediatek/mtk_wed_regs.h @@ -13,6 +13,9 @@ #define MTK_WDMA_DESC_CTRL_LAST_SEG0 BIT(30) #define MTK_WDMA_DESC_CTRL_DMA_DONE BIT(31) +#define MTK_WDMA_TXD0_DESC_INFO_DMA_DONE BIT(29) +#define MTK_WDMA_TXD1_DESC_INFO_DMA_DONE BIT(31) + struct mtk_wdma_desc { __le32 buf0; __le32 ctrl; @@ -37,6 +40,7 @@ struct mtk_wdma_desc { #define MTK_WED_RESET_WDMA_INT_AGENT BIT(19) #define MTK_WED_RESET_RX_RRO_QM BIT(20) #define MTK_WED_RESET_RX_ROUTE_QM BIT(21) +#define MTK_WED_RESET_TX_AMSDU BIT(22) #define MTK_WED_RESET_WED BIT(31) #define MTK_WED_CTRL 0x00c @@ -44,6 +48,9 @@ struct mtk_wdma_desc { #define MTK_WED_CTRL_WPDMA_INT_AGENT_BUSY BIT(1) #define MTK_WED_CTRL_WDMA_INT_AGENT_EN BIT(2) #define MTK_WED_CTRL_WDMA_INT_AGENT_BUSY BIT(3) +#define MTK_WED_CTRL_WED_RX_IND_CMD_EN BIT(5) +#define MTK_WED_CTRL_WED_RX_PG_BM_EN BIT(6) +#define MTK_WED_CTRL_WED_RX_PG_BM_BUSU BIT(7) #define MTK_WED_CTRL_WED_TX_BM_EN BIT(8) #define MTK_WED_CTRL_WED_TX_BM_BUSY BIT(9) #define MTK_WED_CTRL_WED_TX_FREE_AGENT_EN BIT(10) @@ -54,9 +61,14 @@ struct mtk_wdma_desc { #define MTK_WED_CTRL_RX_RRO_QM_BUSY BIT(15) #define MTK_WED_CTRL_RX_ROUTE_QM_EN BIT(16) #define MTK_WED_CTRL_RX_ROUTE_QM_BUSY BIT(17) +#define MTK_WED_CTRL_TX_TKID_ALI_EN BIT(20) +#define MTK_WED_CTRL_TX_TKID_ALI_BUSY BIT(21) +#define MTK_WED_CTRL_TX_AMSDU_EN BIT(22) +#define MTK_WED_CTRL_TX_AMSDU_BUSY BIT(23) #define MTK_WED_CTRL_FINAL_DIDX_READ BIT(24) #define MTK_WED_CTRL_ETH_DMAD_FMT BIT(25) #define MTK_WED_CTRL_MIB_READ_CLEAR BIT(28) +#define MTK_WED_CTRL_FLD_MIB_RD_CLR BIT(28) #define MTK_WED_EXT_INT_STATUS 0x020 #define MTK_WED_EXT_INT_STATUS_TF_LEN_ERR BIT(0) @@ -64,6 +76,8 @@ struct mtk_wdma_desc { #define MTK_WED_EXT_INT_STATUS_TKID_TITO_INVALID BIT(4) #define MTK_WED_EXT_INT_STATUS_TX_FBUF_LO_TH BIT(8) #define MTK_WED_EXT_INT_STATUS_TX_FBUF_HI_TH BIT(9) +#define MTK_WED_EXT_INT_STATUS_RX_FBUF_LO_TH_V2 BIT(10) +#define MTK_WED_EXT_INT_STATUS_RX_FBUF_HI_TH_V2 BIT(11) #define MTK_WED_EXT_INT_STATUS_RX_FBUF_LO_TH BIT(12) #define MTK_WED_EXT_INT_STATUS_RX_FBUF_HI_TH BIT(13) #define MTK_WED_EXT_INT_STATUS_RX_DRV_R_RESP_ERR BIT(16) @@ -89,6 +103,7 @@ struct mtk_wdma_desc { #define MTK_WED_EXT_INT_MASK 0x028 #define MTK_WED_EXT_INT_MASK1 0x02c #define MTK_WED_EXT_INT_MASK2 0x030 +#define MTK_WED_EXT_INT_MASK3 0x034 #define MTK_WED_STATUS 0x060 #define MTK_WED_STATUS_TX GENMASK(15, 8) @@ -96,9 +111,14 @@ struct mtk_wdma_desc { #define MTK_WED_TX_BM_CTRL 0x080 #define MTK_WED_TX_BM_CTRL_VLD_GRP_NUM GENMASK(6, 0) #define MTK_WED_TX_BM_CTRL_RSV_GRP_NUM GENMASK(22, 16) +#define MTK_WED_TX_BM_CTRL_LEGACY_EN BIT(26) +#define MTK_WED_TX_TKID_CTRL_FREE_FORMAT BIT(27) #define MTK_WED_TX_BM_CTRL_PAUSE BIT(28) #define MTK_WED_TX_BM_BASE 0x084 +#define MTK_WED_TX_BM_INIT_PTR 0x088 +#define MTK_WED_TX_BM_SW_TAIL_IDX 
GENMASK(16, 0) +#define MTK_WED_TX_BM_INIT_SW_TAIL_IDX BIT(16) #define MTK_WED_TX_BM_TKID 0x088 #define MTK_WED_TX_BM_TKID_V2 0x0c8 @@ -124,6 +144,9 @@ struct mtk_wdma_desc { #define MTK_WED_TX_TKID_CTRL_RSV_GRP_NUM GENMASK(22, 16) #define MTK_WED_TX_TKID_CTRL_PAUSE BIT(28) +#define MTK_WED_TX_TKID_CTRL_VLD_GRP_NUM_V3 GENMASK(7, 0) +#define MTK_WED_TX_TKID_CTRL_RSV_GRP_NUM_V3 GENMASK(23, 16) + #define MTK_WED_TX_TKID_DYN_THR 0x0e0 #define MTK_WED_TX_TKID_DYN_THR_LO GENMASK(6, 0) #define MTK_WED_TX_TKID_DYN_THR_HI GENMASK(22, 16) @@ -204,12 +227,15 @@ struct mtk_wdma_desc { #define MTK_WED_WPDMA_GLO_CFG_RX_DRV_R1_PKT_PROC BIT(5) #define MTK_WED_WPDMA_GLO_CFG_RX_DRV_R0_CRX_SYNC BIT(6) #define MTK_WED_WPDMA_GLO_CFG_RX_DRV_R1_CRX_SYNC BIT(7) -#define MTK_WED_WPDMA_GLO_CFG_RX_DRV_EVENT_PKT_FMT_VER GENMASK(18, 16) +#define MTK_WED_WPDMA_GLO_CFG_RX_DRV_EVENT_PKT_FMT_VER GENMASK(15, 12) +#define MTK_WED_WPDMA_GLO_CFG_RX_DRV_UNS_VER_FORCE_4 BIT(18) #define MTK_WED_WPDMA_GLO_CFG_RX_DRV_UNSUPPORT_FMT BIT(19) -#define MTK_WED_WPDMA_GLO_CFG_RX_DRV_UEVENT_PKT_FMT_CHK BIT(20) +#define MTK_WED_WPDMA_GLO_CFG_RX_DRV_EVENT_PKT_FMT_CHK BIT(20) #define MTK_WED_WPDMA_GLO_CFG_RX_DDONE2_WR BIT(21) #define MTK_WED_WPDMA_GLO_CFG_TX_TKID_KEEP BIT(24) +#define MTK_WED_WPDMA_GLO_CFG_TX_DDONE_CHK_LAST BIT(25) #define MTK_WED_WPDMA_GLO_CFG_TX_DMAD_DW3_PREV BIT(28) +#define MTK_WED_WPDMA_GLO_CFG_TX_DDONE_CHK BIT(30) #define MTK_WED_WPDMA_RESET_IDX 0x50c #define MTK_WED_WPDMA_RESET_IDX_TX GENMASK(3, 0) @@ -255,9 +281,10 @@ struct mtk_wdma_desc { #define MTK_WED_PCIE_INT_TRIGGER_STATUS BIT(16) #define MTK_WED_PCIE_INT_CTRL 0x57c -#define MTK_WED_PCIE_INT_CTRL_MSK_EN_POLA BIT(20) -#define MTK_WED_PCIE_INT_CTRL_SRC_SEL GENMASK(17, 16) #define MTK_WED_PCIE_INT_CTRL_POLL_EN GENMASK(13, 12) +#define MTK_WED_PCIE_INT_CTRL_SRC_SEL GENMASK(17, 16) +#define MTK_WED_PCIE_INT_CTRL_MSK_EN_POLA BIT(20) +#define MTK_WED_PCIE_INT_CTRL_MSK_IRQ_FILTER BIT(21) #define MTK_WED_WPDMA_CFG_BASE 0x580 #define MTK_WED_WPDMA_CFG_INT_MASK 0x584 @@ -286,12 +313,27 @@ struct mtk_wdma_desc { #define MTK_WED_WPDMA_RX_D_RST_DRV_IDX GENMASK(25, 24) #define MTK_WED_WPDMA_RX_GLO_CFG 0x76c -#define MTK_WED_WPDMA_RX_RING 0x770 +#define MTK_WED_WPDMA_RX_RING0 0x770 +#define MTK_WED_WPDMA_RX_RING0_V3 0x7d0 #define MTK_WED_WPDMA_RX_D_MIB(_n) (0x774 + (_n) * 4) #define MTK_WED_WPDMA_RX_D_PROCESSED_MIB(_n) (0x784 + (_n) * 4) #define MTK_WED_WPDMA_RX_D_COHERENT_MIB 0x78c +#define MTK_WED_WPDMA_RX_D_PREF_CFG 0x7b4 +#define MTK_WED_WPDMA_RX_D_PREF_EN BIT(0) +#define MTK_WED_WPDMA_RX_D_PREF_BURST_SIZE GENMASK(12, 8) +#define MTK_WED_WPDMA_RX_D_PREF_LOW_THRES GENMASK(21, 16) + +#define MTK_WED_WPDMA_RX_D_PREF_RX0_SIDX 0x7b8 +#define MTK_WED_WPDMA_RX_D_PREF_SIDX_IDX_CLR BIT(15) + +#define MTK_WED_WPDMA_RX_D_PREF_RX1_SIDX 0x7bc + +#define MTK_WED_WPDMA_RX_D_PREF_FIFO_CFG 0x7c0 +#define MTK_WED_WPDMA_RX_D_PREF_FIFO_CFG_R0_CLR BIT(0) +#define MTK_WED_WPDMA_RX_D_PREF_FIFO_CFG_R1_CLR BIT(16) + #define MTK_WED_WDMA_RING_TX 0x800 #define MTK_WED_WDMA_TX_MIB 0x810 @@ -299,6 +341,18 @@ struct mtk_wdma_desc { #define MTK_WED_WDMA_RING_RX(_n) (0x900 + (_n) * 0x10) #define MTK_WED_WDMA_RX_THRES(_n) (0x940 + (_n) * 0x4) +#define MTK_WED_WDMA_RX_PREF_CFG 0x950 +#define MTK_WED_WDMA_RX_PREF_EN BIT(0) +#define MTK_WED_WDMA_RX_PREF_BURST_SIZE GENMASK(12, 8) +#define MTK_WED_WDMA_RX_PREF_LOW_THRES GENMASK(21, 16) +#define MTK_WED_WDMA_RX_PREF_RX0_SIDX_CLR BIT(24) +#define MTK_WED_WDMA_RX_PREF_RX1_SIDX_CLR BIT(25) +#define MTK_WED_WDMA_RX_PREF_DDONE2_EN BIT(26) + +#define 
MTK_WED_WDMA_RX_PREF_FIFO_CFG 0x95C +#define MTK_WED_WDMA_RX_PREF_FIFO_RX0_CLR BIT(0) +#define MTK_WED_WDMA_RX_PREF_FIFO_RX1_CLR BIT(16) + #define MTK_WED_WDMA_GLO_CFG 0xa04 #define MTK_WED_WDMA_GLO_CFG_TX_DRV_EN BIT(0) #define MTK_WED_WDMA_GLO_CFG_TX_DDONE_CHK BIT(1) @@ -331,6 +385,7 @@ struct mtk_wdma_desc { #define MTK_WED_WDMA_INT_TRIGGER_RX_DONE GENMASK(17, 16) #define MTK_WED_WDMA_INT_CTRL 0xa2c +#define MTK_WED_WDMA_INT_POLL_PRD GENMASK(7, 0) #define MTK_WED_WDMA_INT_CTRL_POLL_SRC_SEL GENMASK(17, 16) #define MTK_WED_WDMA_CFG_BASE 0xaa0 @@ -394,6 +449,18 @@ struct mtk_wdma_desc { #define MTK_WDMA_INT_GRP1 0x250 #define MTK_WDMA_INT_GRP2 0x254 +#define MTK_WDMA_PREF_TX_CFG 0x2d0 +#define MTK_WDMA_PREF_TX_CFG_PREF_EN BIT(0) + +#define MTK_WDMA_PREF_RX_CFG 0x2dc +#define MTK_WDMA_PREF_RX_CFG_PREF_EN BIT(0) + +#define MTK_WDMA_WRBK_TX_CFG 0x300 +#define MTK_WDMA_WRBK_TX_CFG_WRBK_EN BIT(30) + +#define MTK_WDMA_WRBK_RX_CFG 0x344 +#define MTK_WDMA_WRBK_RX_CFG_WRBK_EN BIT(30) + #define MTK_PCIE_MIRROR_MAP(n) ((n) ? 0x4 : 0x0) #define MTK_PCIE_MIRROR_MAP_EN BIT(0) #define MTK_PCIE_MIRROR_MAP_WED_ID BIT(1) @@ -407,6 +474,30 @@ struct mtk_wdma_desc { #define MTK_WED_RTQM_Q_DBG_BYPASS BIT(5) #define MTK_WED_RTQM_TXDMAD_FPORT GENMASK(23, 20) +#define MTK_WED_RTQM_IGRS0_I2HW_DMAD_CNT 0xb1c +#define MTK_WED_RTQM_IGRS0_I2H_DMAD_CNT(_n) (0xb20 + (_n) * 0x4) +#define MTK_WED_RTQM_IGRS0_I2HW_PKT_CNT 0xb28 +#define MTK_WED_RTQM_IGRS0_I2H_PKT_CNT(_n) (0xb2c + (_n) * 0x4) +#define MTK_WED_RTQM_IGRS0_FDROP_CNT 0xb34 + +#define MTK_WED_RTQM_IGRS1_I2HW_DMAD_CNT 0xb44 +#define MTK_WED_RTQM_IGRS1_I2H_DMAD_CNT(_n) (0xb48 + (_n) * 0x4) +#define MTK_WED_RTQM_IGRS1_I2HW_PKT_CNT 0xb50 +#define MTK_WED_RTQM_IGRS1_I2H_PKT_CNT(_n) (0xb54 + (_n) * 0x4) +#define MTK_WED_RTQM_IGRS1_FDROP_CNT 0xb5c + +#define MTK_WED_RTQM_IGRS2_I2HW_DMAD_CNT 0xb6c +#define MTK_WED_RTQM_IGRS2_I2H_DMAD_CNT(_n) (0xb70 + (_n) * 0x4) +#define MTK_WED_RTQM_IGRS2_I2HW_PKT_CNT 0xb78 +#define MTK_WED_RTQM_IGRS2_I2H_PKT_CNT(_n) (0xb7c + (_n) * 0x4) +#define MTK_WED_RTQM_IGRS2_FDROP_CNT 0xb84 + +#define MTK_WED_RTQM_IGRS3_I2HW_DMAD_CNT 0xb94 +#define MTK_WED_RTQM_IGRS3_I2H_DMAD_CNT(_n) (0xb98 + (_n) * 0x4) +#define MTK_WED_RTQM_IGRS3_I2HW_PKT_CNT 0xba0 +#define MTK_WED_RTQM_IGRS3_I2H_PKT_CNT(_n) (0xba4 + (_n) * 0x4) +#define MTK_WED_RTQM_IGRS3_FDROP_CNT 0xbac + #define MTK_WED_RTQM_R2H_MIB(_n) (0xb70 + (_n) * 0x4) #define MTK_WED_RTQM_R2Q_MIB(_n) (0xb78 + (_n) * 0x4) #define MTK_WED_RTQM_Q2N_MIB 0xb80 @@ -415,6 +506,24 @@ struct mtk_wdma_desc { #define MTK_WED_RTQM_Q2B_MIB 0xb8c #define MTK_WED_RTQM_PFDBK_MIB 0xb90 +#define MTK_WED_RTQM_ENQ_CFG0 0xbb8 +#define MTK_WED_RTQM_ENQ_CFG_TXDMAD_FPORT GENMASK(15, 12) + +#define MTK_WED_RTQM_FDROP_MIB 0xb84 +#define MTK_WED_RTQM_ENQ_I2Q_DMAD_CNT 0xbbc +#define MTK_WED_RTQM_ENQ_I2N_DMAD_CNT 0xbc0 +#define MTK_WED_RTQM_ENQ_I2Q_PKT_CNT 0xbc4 +#define MTK_WED_RTQM_ENQ_I2N_PKT_CNT 0xbc8 +#define MTK_WED_RTQM_ENQ_USED_ENTRY_CNT 0xbcc +#define MTK_WED_RTQM_ENQ_ERR_CNT 0xbd0 + +#define MTK_WED_RTQM_DEQ_DMAD_CNT 0xbd8 +#define MTK_WED_RTQM_DEQ_Q2I_DMAD_CNT 0xbdc +#define MTK_WED_RTQM_DEQ_PKT_CNT 0xbe0 +#define MTK_WED_RTQM_DEQ_Q2I_PKT_CNT 0xbe4 +#define MTK_WED_RTQM_DEQ_USED_PFDBK_CNT 0xbe8 +#define MTK_WED_RTQM_DEQ_ERR_CNT 0xbec + #define MTK_WED_RROQM_GLO_CFG 0xc04 #define MTK_WED_RROQM_RST_IDX 0xc08 #define MTK_WED_RROQM_RST_IDX_MIOD BIT(0) @@ -464,7 +573,116 @@ struct mtk_wdma_desc { #define MTK_WED_RX_BM_INTF 0xd9c #define MTK_WED_RX_BM_ERR_STS 0xda8 +#define MTK_RRO_IND_CMD_SIGNATURE 0xe00 +#define 
MTK_RRO_IND_CMD_DMA_IDX GENMASK(11, 0) +#define MTK_RRO_IND_CMD_MAGIC_CNT GENMASK(30, 28) + +#define MTK_WED_IND_CMD_RX_CTRL0 0xe04 +#define MTK_WED_IND_CMD_PROC_IDX GENMASK(11, 0) +#define MTK_WED_IND_CMD_PREFETCH_FREE_CNT GENMASK(19, 16) +#define MTK_WED_IND_CMD_MAGIC_CNT GENMASK(30, 28) + +#define MTK_WED_IND_CMD_RX_CTRL1 0xe08 +#define MTK_WED_IND_CMD_RX_CTRL2 0xe0c +#define MTK_WED_IND_CMD_MAX_CNT GENMASK(11, 0) +#define MTK_WED_IND_CMD_BASE_M GENMASK(19, 16) + +#define MTK_WED_RRO_CFG0 0xe10 +#define MTK_WED_RRO_CFG1 0xe14 +#define MTK_WED_RRO_CFG1_MAX_WIN_SZ GENMASK(31, 29) +#define MTK_WED_RRO_CFG1_ACK_SN_BASE_M GENMASK(19, 16) +#define MTK_WED_RRO_CFG1_PARTICL_SE_ID GENMASK(11, 0) + +#define MTK_WED_ADDR_ELEM_CFG0 0xe18 +#define MTK_WED_ADDR_ELEM_CFG1 0xe1c +#define MTK_WED_ADDR_ELEM_PREFETCH_FREE_CNT GENMASK(19, 16) + +#define MTK_WED_ADDR_ELEM_TBL_CFG 0xe20 +#define MTK_WED_ADDR_ELEM_TBL_OFFSET GENMASK(6, 0) +#define MTK_WED_ADDR_ELEM_TBL_RD_RDY BIT(28) +#define MTK_WED_ADDR_ELEM_TBL_WR_RDY BIT(29) +#define MTK_WED_ADDR_ELEM_TBL_RD BIT(30) +#define MTK_WED_ADDR_ELEM_TBL_WR BIT(31) + +#define MTK_WED_RADDR_ELEM_TBL_WDATA 0xe24 +#define MTK_WED_RADDR_ELEM_TBL_RDATA 0xe28 + +#define MTK_WED_PN_CHECK_CFG 0xe30 +#define MTK_WED_PN_CHECK_SE_ID GENMASK(11, 0) +#define MTK_WED_PN_CHECK_RD_RDY BIT(28) +#define MTK_WED_PN_CHECK_WR_RDY BIT(29) +#define MTK_WED_PN_CHECK_RD BIT(30) +#define MTK_WED_PN_CHECK_WR BIT(31) + +#define MTK_WED_PN_CHECK_WDATA_M 0xe38 +#define MTK_WED_PN_CHECK_IS_FIRST BIT(17) + +#define MTK_WED_RRO_MSDU_PG_RING_CFG(_n) (0xe44 + (_n) * 0x8) + +#define MTK_WED_RRO_MSDU_PG_RING2_CFG 0xe58 +#define MTK_WED_RRO_MSDU_PG_DRV_CLR BIT(26) +#define MTK_WED_RRO_MSDU_PG_DRV_EN BIT(31) + +#define MTK_WED_RRO_MSDU_PG_CTRL0(_n) (0xe5c + (_n) * 0xc) +#define MTK_WED_RRO_MSDU_PG_CTRL1(_n) (0xe60 + (_n) * 0xc) +#define MTK_WED_RRO_MSDU_PG_CTRL2(_n) (0xe64 + (_n) * 0xc) + +#define MTK_WED_RRO_RX_D_RX(_n) (0xe80 + (_n) * 0x10) + +#define MTK_WED_RRO_RX_MAGIC_CNT BIT(13) + +#define MTK_WED_RRO_RX_D_CFG(_n) (0xea0 + (_n) * 0x4) +#define MTK_WED_RRO_RX_D_DRV_CLR BIT(26) +#define MTK_WED_RRO_RX_D_DRV_EN BIT(31) + +#define MTK_WED_RRO_PG_BM_RX_DMAM 0xeb0 +#define MTK_WED_RRO_PG_BM_RX_SDL0 GENMASK(13, 0) + +#define MTK_WED_RRO_PG_BM_BASE 0xeb4 +#define MTK_WED_RRO_PG_BM_INIT_PTR 0xeb8 +#define MTK_WED_RRO_PG_BM_SW_TAIL_IDX GENMASK(15, 0) +#define MTK_WED_RRO_PG_BM_INIT_SW_TAIL_IDX BIT(16) + +#define MTK_WED_WPDMA_INT_CTRL_RRO_RX 0xeec +#define MTK_WED_WPDMA_INT_CTRL_RRO_RX0_EN BIT(0) +#define MTK_WED_WPDMA_INT_CTRL_RRO_RX0_CLR BIT(1) +#define MTK_WED_WPDMA_INT_CTRL_RRO_RX0_DONE_TRIG GENMASK(6, 2) +#define MTK_WED_WPDMA_INT_CTRL_RRO_RX1_EN BIT(8) +#define MTK_WED_WPDMA_INT_CTRL_RRO_RX1_CLR BIT(9) +#define MTK_WED_WPDMA_INT_CTRL_RRO_RX1_DONE_TRIG GENMASK(14, 10) + +#define MTK_WED_WPDMA_INT_CTRL_RRO_MSDU_PG 0xef4 +#define MTK_WED_WPDMA_INT_CTRL_RRO_PG0_EN BIT(0) +#define MTK_WED_WPDMA_INT_CTRL_RRO_PG0_CLR BIT(1) +#define MTK_WED_WPDMA_INT_CTRL_RRO_PG0_DONE_TRIG GENMASK(6, 2) +#define MTK_WED_WPDMA_INT_CTRL_RRO_PG1_EN BIT(8) +#define MTK_WED_WPDMA_INT_CTRL_RRO_PG1_CLR BIT(9) +#define MTK_WED_WPDMA_INT_CTRL_RRO_PG1_DONE_TRIG GENMASK(14, 10) +#define MTK_WED_WPDMA_INT_CTRL_RRO_PG2_EN BIT(16) +#define MTK_WED_WPDMA_INT_CTRL_RRO_PG2_CLR BIT(17) +#define MTK_WED_WPDMA_INT_CTRL_RRO_PG2_DONE_TRIG GENMASK(22, 18) + +#define MTK_WED_RX_IND_CMD_CNT0 0xf20 +#define MTK_WED_RX_IND_CMD_DBG_CNT_EN BIT(31) + +#define MTK_WED_RX_IND_CMD_CNT(_n) (0xf20 + (_n) * 0x4) +#define MTK_WED_IND_CMD_MAGIC_CNT_FAIL_CNT 
GENMASK(15, 0) + +#define MTK_WED_RX_ADDR_ELEM_CNT(_n) (0xf48 + (_n) * 0x4) +#define MTK_WED_ADDR_ELEM_SIG_FAIL_CNT GENMASK(15, 0) +#define MTK_WED_ADDR_ELEM_FIRST_SIG_FAIL_CNT GENMASK(31, 16) +#define MTK_WED_ADDR_ELEM_ACKSN_CNT GENMASK(27, 0) + +#define MTK_WED_RX_MSDU_PG_CNT(_n) (0xf5c + (_n) * 0x4) + +#define MTK_WED_RX_PN_CHK_CNT 0xf70 +#define MTK_WED_PN_CHK_FAIL_CNT GENMASK(15, 0) + #define MTK_WED_WOCPU_VIEW_MIOD_BASE 0x8000 #define MTK_WED_PCIE_INT_MASK 0x0 +#define MTK_WED_PCIE_BASE 0x11280000 +#define MTK_WED_PCIE_BASE0 0x11300000 +#define MTK_WED_PCIE_BASE1 0x11310000 +#define MTK_WED_PCIE_BASE2 0x11290000 #endif diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.h b/drivers/net/ethernet/mediatek/mtk_wed_wo.h index 8ed81761bf10..87a67fa3868d 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_wo.h +++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.h @@ -91,6 +91,8 @@ enum mtk_wed_dummy_cr_idx { #define MT7981_FIRMWARE_WO "mediatek/mt7981_wo.bin" #define MT7986_FIRMWARE_WO0 "mediatek/mt7986_wo_0.bin" #define MT7986_FIRMWARE_WO1 "mediatek/mt7986_wo_1.bin" +#define MT7988_FIRMWARE_WO0 "mediatek/mt7988_wo_0.bin" +#define MT7988_FIRMWARE_WO1 "mediatek/mt7988_wo_1.bin" #define MTK_WO_MCU_CFG_LS_BASE 0 #define MTK_WO_MCU_CFG_LS_HW_VER_ADDR (MTK_WO_MCU_CFG_LS_BASE + 0x000) diff --git a/include/linux/soc/mediatek/mtk_wed.h b/include/linux/soc/mediatek/mtk_wed.h index 5f00dc26582b..0beccbe45585 100644 --- a/include/linux/soc/mediatek/mtk_wed.h +++ b/include/linux/soc/mediatek/mtk_wed.h @@ -102,6 +102,7 @@ struct mtk_wed_device { struct { int size; + int desc_size; struct mtk_wed_buf *pages; struct mtk_wdma_desc *desc; dma_addr_t desc_phys; @@ -138,6 +139,8 @@ struct mtk_wed_device { u32 wpdma_rx; bool wcid_512; + bool hw_rro; + bool msi; u16 token_start; unsigned int nbuf; @@ -211,10 +214,12 @@ mtk_wed_device_attach(struct mtk_wed_device *dev) return ret; } -static inline bool -mtk_wed_get_rx_capa(struct mtk_wed_device *dev) +static inline bool mtk_wed_get_rx_capa(struct mtk_wed_device *dev) { #ifdef CONFIG_NET_MEDIATEK_SOC_WED + if (dev->version == 3) + return dev->wlan.hw_rro; + return dev->version != 1; #else return false; From patchwork Thu Sep 14 14:38:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13385518 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 72F092137A; Thu, 14 Sep 2023 14:39:50 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id E5747C433C8; Thu, 14 Sep 2023 14:39:49 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1694702390; bh=pdNAyyWoHcPfZPFOImFhDdd+CXIxgjMrheWwMocSdGE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=rzPzWRoGVvuvniyzNWIbBjubfdWysnp5iMNgFhbJoUEqQWFFK+rwMgfM74k8+jxNv 5+umePkV47dgFUkknvZX8m0/j04B8U5uSr3FymaEXvCHhLG6kLQERqNvmBrv3Hqtp/ sco96PwiV4lV5tgTQwX0jcJXVPuhXv8IQRRLnrIBHdNK6B29H19DwtNd3BJn6+pdPo NGVJnIqDkIZSYRgDQmFgRZs4NDrHkDk9g/zNvJ3wLrmrjkF4lunpM1Wz2frjM/z09V FeG9cMgaAV8TWxl61SQl9K+VOhjrI0+uBrbl/hyakK5AlwIPg7P1FD207ZeIrXAAMS ZJWBhEjWZEyzA== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: lorenzo.bianconi@redhat.com, nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, 
edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, daniel@makrotopia.org, linux-mediatek@lists.infradead.org, sujuan.chen@mediatek.com, robh+dt@kernel.org, krzysztof.kozlowski+dt@linaro.org, devicetree@vger.kernel.org Subject: [PATCH net-next 11/15] net: ethernet: mtk_wed: refactor mtk_wed_check_wfdma_rx_fill routine Date: Thu, 14 Sep 2023 16:38:16 +0200 Message-ID: <7cc25d625181a5a73c8dab3f79fff310de95b7c3.1694701767.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Refactor mtk_wed_check_wfdma_rx_fill() in order to be reused adding HW receive offload support for MT7988 SoC. Co-developed-by: Sujuan Chen Signed-off-by: Sujuan Chen Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_wed.c | 44 +++++++++++++++---------- 1 file changed, 27 insertions(+), 17 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_wed.c b/drivers/net/ethernet/mediatek/mtk_wed.c index 0d8e10df9da2..ad7cd6c88d64 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed.c +++ b/drivers/net/ethernet/mediatek/mtk_wed.c @@ -554,22 +554,15 @@ mtk_wed_set_512_support(struct mtk_wed_device *dev, bool enable) } } -#define MTK_WFMDA_RX_DMA_EN BIT(2) -static void -mtk_wed_check_wfdma_rx_fill(struct mtk_wed_device *dev, int idx) +static int +mtk_wed_check_wfdma_rx_fill(struct mtk_wed_device *dev, + struct mtk_wed_ring *ring) { - u32 val; int i; - if (!(dev->rx_ring[idx].flags & MTK_WED_RING_CONFIGURED)) - return; /* queue is not configured by mt76 */ - for (i = 0; i < 3; i++) { - u32 cur_idx; + u32 cur_idx = readl(ring->wpdma + MTK_WED_RING_OFS_CPU_IDX); - cur_idx = wed_r32(dev, - MTK_WED_WPDMA_RING_RX_DATA(idx) + - MTK_WED_RING_OFS_CPU_IDX); if (cur_idx == MTK_WED_RX_RING_SIZE - 1) break; @@ -578,12 +571,10 @@ mtk_wed_check_wfdma_rx_fill(struct mtk_wed_device *dev, int idx) if (i == 3) { dev_err(dev->hw->dev, "rx dma enable failed\n"); - return; + return -ETIMEDOUT; } - val = wifi_r32(dev, dev->wlan.wpdma_rx_glo - dev->wlan.phy_base) | - MTK_WFMDA_RX_DMA_EN; - wifi_w32(dev, dev->wlan.wpdma_rx_glo - dev->wlan.phy_base, val); + return 0; } static void @@ -1546,6 +1537,7 @@ mtk_wed_configure_irq(struct mtk_wed_device *dev, u32 irq_mask) wed_w32(dev, MTK_WED_INT_MASK, irq_mask); } +#define MTK_WFMDA_RX_DMA_EN BIT(2) static void mtk_wed_dma_enable(struct mtk_wed_device *dev) { @@ -1633,8 +1625,26 @@ mtk_wed_dma_enable(struct mtk_wed_device *dev) wdma_set(dev, MTK_WDMA_WRBK_TX_CFG, MTK_WDMA_WRBK_TX_CFG_WRBK_EN); } - for (i = 0; i < MTK_WED_RX_QUEUES; i++) - mtk_wed_check_wfdma_rx_fill(dev, i); + for (i = 0; i < MTK_WED_RX_QUEUES; i++) { + struct mtk_wed_ring *ring = &dev->rx_ring[i]; + u32 val; + + if (!(ring->flags & MTK_WED_RING_CONFIGURED)) + continue; /* queue is not configured by mt76 */ + + if (mtk_wed_check_wfdma_rx_fill(dev, ring)) { + dev_err(dev->hw->dev, + "rx_ring(%d) dma enable failed\n", i); + continue; + } + + val = wifi_r32(dev, + dev->wlan.wpdma_rx_glo - + dev->wlan.phy_base) | MTK_WFMDA_RX_DMA_EN; + wifi_w32(dev, + dev->wlan.wpdma_rx_glo - dev->wlan.phy_base, + val); + } } static void From patchwork Thu Sep 14 14:38:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13385519 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) 
(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D38862137A; Thu, 14 Sep 2023 14:39:54 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id AAD41C43397; Thu, 14 Sep 2023 14:39:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1694702394; bh=LVQpf2Ee+KLxBywq1NdTuKmHlHiNzxizey4ccDcwL/Y=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=VlViPAG5f/lE9LsPIcAJ6rqOQ0k1gpAiq2XIZifOnTDZ8Hcmig0rjYX7JhdSmzZ1a r7a1EPGRlJSb+LpPUpmCf9hopWsKI3wQ88uQIDBh4Os4Nf1dRgH9DYk90QHpSuq2F0 399C24WZWrdDbxd+FFRMkKkGCmpNQq1m1H1Rcs9oKH5ZCaVUGG57udeqbUs75sxDkh FIIoE+Z2xN3wJCaT9LUb2ssf8GHx2MPKyD05BJkJ93jFIe/3Hwe437TMptmXTH0TVM BJ3VVkN+/JdPbrpHlENZWD5L1nqk9LI944X8z3+kx7YVWbUlAFv/jJG7vyvUKH11Xj htkdDfTTlipoA== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: lorenzo.bianconi@redhat.com, nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, daniel@makrotopia.org, linux-mediatek@lists.infradead.org, sujuan.chen@mediatek.com, robh+dt@kernel.org, krzysztof.kozlowski+dt@linaro.org, devicetree@vger.kernel.org Subject: [PATCH net-next 12/15] net: ethernet: mtk_wed: introduce partial AMSDU offload support for MT7988 Date: Thu, 14 Sep 2023 16:38:17 +0200 Message-ID: X-Mailer: git-send-email 2.41.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Sujuan Chen Introduce partial AMSDU offload support for MT7988 SoC in order to merge in hw packets belonging to the same AMSDU before passing them to the WLAN nic. 
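As a rough usage sketch (not part of this patch), a WLAN driver opting into the offload is expected to advertise its AMSDU limits through the mtk_wed_device fields added by this series before attaching to WED; the field and helper names below come from the series, while the function itself and the numeric values are purely hypothetical placeholders:

/* hypothetical example: advertise AMSDU limits to WED before
 * mtk_wed_device_attach(); the values below are placeholders.
 */
static void wlan_setup_wed_amsdu(struct mtk_wed_device *wed)
{
	wed->wlan.id = 0x7991;			/* wlan chip id checked by WED */
	wed->wlan.amsdu_max_len = 8192;		/* programmed into hw as len >> 8 */
	wed->wlan.amsdu_max_subframes = 8;	/* max MSDUs merged per AMSDU */

	if (!mtk_wed_is_amsdu_supported(wed))
		pr_info("AMSDU offload not supported by this WED version\n");
}

mtk_wed_amsdu_init() then programs these limits into MTK_WED_AMSDU_STA_INFO_INIT when the device is started.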
Co-developed-by: Lorenzo Bianconi Signed-off-by: Lorenzo Bianconi Signed-off-by: Sujuan Chen --- drivers/net/ethernet/mediatek/mtk_ppe.c | 4 +- drivers/net/ethernet/mediatek/mtk_ppe.h | 19 ++- .../net/ethernet/mediatek/mtk_ppe_offload.c | 3 +- drivers/net/ethernet/mediatek/mtk_wed.c | 154 ++++++++++++++++-- drivers/net/ethernet/mediatek/mtk_wed.h | 7 + drivers/net/ethernet/mediatek/mtk_wed_regs.h | 76 +++++++++ include/linux/netdevice.h | 1 + include/linux/soc/mediatek/mtk_wed.h | 12 ++ 8 files changed, 248 insertions(+), 28 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.c b/drivers/net/ethernet/mediatek/mtk_ppe.c index 86f32f486043..b2a5d9c3733d 100644 --- a/drivers/net/ethernet/mediatek/mtk_ppe.c +++ b/drivers/net/ethernet/mediatek/mtk_ppe.c @@ -425,7 +425,8 @@ int mtk_foe_entry_set_pppoe(struct mtk_eth *eth, struct mtk_foe_entry *entry, } int mtk_foe_entry_set_wdma(struct mtk_eth *eth, struct mtk_foe_entry *entry, - int wdma_idx, int txq, int bss, int wcid) + int wdma_idx, int txq, int bss, int wcid, + bool amsdu_en) { struct mtk_foe_mac_info *l2 = mtk_foe_entry_l2(eth, entry); u32 *ib2 = mtk_foe_entry_ib2(eth, entry); @@ -437,6 +438,7 @@ int mtk_foe_entry_set_wdma(struct mtk_eth *eth, struct mtk_foe_entry *entry, MTK_FOE_IB2_WDMA_WINFO_V2; l2->w3info = FIELD_PREP(MTK_FOE_WINFO_WCID_V3, wcid) | FIELD_PREP(MTK_FOE_WINFO_BSS_V3, bss); + l2->amsdu = FIELD_PREP(MTK_FOE_WINFO_AMSDU_EN, amsdu_en); break; case 2: *ib2 &= ~MTK_FOE_IB2_PORT_MG_V2; diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.h b/drivers/net/ethernet/mediatek/mtk_ppe.h index e3d0ec72bc69..691806bca372 100644 --- a/drivers/net/ethernet/mediatek/mtk_ppe.h +++ b/drivers/net/ethernet/mediatek/mtk_ppe.h @@ -88,13 +88,13 @@ enum { #define MTK_FOE_WINFO_BSS_V3 GENMASK(23, 16) #define MTK_FOE_WINFO_WCID_V3 GENMASK(15, 0) -#define MTK_FOE_WINFO_PAO_USR_INFO GENMASK(15, 0) -#define MTK_FOE_WINFO_PAO_TID GENMASK(19, 16) -#define MTK_FOE_WINFO_PAO_IS_FIXEDRATE BIT(20) -#define MTK_FOE_WINFO_PAO_IS_PRIOR BIT(21) -#define MTK_FOE_WINFO_PAO_IS_SP BIT(22) -#define MTK_FOE_WINFO_PAO_HF BIT(23) -#define MTK_FOE_WINFO_PAO_AMSDU_EN BIT(24) +#define MTK_FOE_WINFO_AMSDU_USR_INFO GENMASK(15, 0) +#define MTK_FOE_WINFO_AMSDU_TID GENMASK(19, 16) +#define MTK_FOE_WINFO_AMSDU_IS_FIXEDRATE BIT(20) +#define MTK_FOE_WINFO_AMSDU_IS_PRIOR BIT(21) +#define MTK_FOE_WINFO_AMSDU_IS_SP BIT(22) +#define MTK_FOE_WINFO_AMSDU_HF BIT(23) +#define MTK_FOE_WINFO_AMSDU_EN BIT(24) enum { MTK_FOE_STATE_INVALID, @@ -123,7 +123,7 @@ struct mtk_foe_mac_info { /* netsys_v3 */ u32 w3info; - u32 wpao; + u32 amsdu; }; /* software-only entry type */ @@ -392,7 +392,8 @@ int mtk_foe_entry_set_vlan(struct mtk_eth *eth, struct mtk_foe_entry *entry, int mtk_foe_entry_set_pppoe(struct mtk_eth *eth, struct mtk_foe_entry *entry, int sid); int mtk_foe_entry_set_wdma(struct mtk_eth *eth, struct mtk_foe_entry *entry, - int wdma_idx, int txq, int bss, int wcid); + int wdma_idx, int txq, int bss, int wcid, + bool amsdu_en); int mtk_foe_entry_set_queue(struct mtk_eth *eth, struct mtk_foe_entry *entry, unsigned int queue); int mtk_foe_entry_commit(struct mtk_ppe *ppe, struct mtk_flow_entry *entry); diff --git a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c index 95f76975f258..e073d2b5542c 100644 --- a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c +++ b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c @@ -111,6 +111,7 @@ mtk_flow_get_wdma_info(struct net_device *dev, const u8 *addr, struct mtk_wdma_i info->queue = 
path->mtk_wdma.queue; info->bss = path->mtk_wdma.bss; info->wcid = path->mtk_wdma.wcid; + info->amsdu = path->mtk_wdma.amsdu; return 0; } @@ -192,7 +193,7 @@ mtk_flow_set_output_device(struct mtk_eth *eth, struct mtk_foe_entry *foe, if (mtk_flow_get_wdma_info(dev, dest_mac, &info) == 0) { mtk_foe_entry_set_wdma(eth, foe, info.wdma_idx, info.queue, - info.bss, info.wcid); + info.bss, info.wcid, info.amsdu); if (mtk_is_netsys_v2_or_greater(eth)) { switch (info.wdma_idx) { case 0: diff --git a/drivers/net/ethernet/mediatek/mtk_wed.c b/drivers/net/ethernet/mediatek/mtk_wed.c index ad7cd6c88d64..7750869509c3 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed.c +++ b/drivers/net/ethernet/mediatek/mtk_wed.c @@ -30,6 +30,8 @@ #define MTK_WED_RX_PAGE_BUF_PER_PAGE (PAGE_SIZE / 128) #define MTK_WED_RX_RING_SIZE 1536 #define MTK_WED_RX_PG_BM_CNT 8192 +#define MTK_WED_AMSDU_BUF_SIZE (PAGE_SIZE << 4) +#define MTK_WED_AMSDU_NPAGES 32 #define MTK_WED_TX_RING_SIZE 2048 #define MTK_WED_WDMA_RING_SIZE 1024 @@ -140,6 +142,23 @@ mtk_wdma_rx_reset(struct mtk_wed_device *dev) return ret; } +static u32 +mtk_wed_check_busy(struct mtk_wed_device *dev, u32 reg, u32 mask) +{ + return !!(wed_r32(dev, reg) & mask); +} + +static int +mtk_wed_poll_busy(struct mtk_wed_device *dev, u32 reg, u32 mask) +{ + int sleep = 15000; + int timeout = 100 * sleep; + u32 val; + + return read_poll_timeout(mtk_wed_check_busy, val, !val, sleep, + timeout, false, dev, reg, mask); +} + static void mtk_wdma_tx_reset(struct mtk_wed_device *dev) { @@ -302,6 +321,118 @@ mtk_wed_assign(struct mtk_wed_device *dev) return hw; } +static int +mtk_wed_amsdu_buffer_alloc(struct mtk_wed_device *dev) +{ + struct mtk_wed_hw *hw = dev->hw; + struct mtk_wed_amsdu *wed_amsdu; + int i; + + if (!mtk_wed_is_v3_or_greater(hw)) + return 0; + + wed_amsdu = devm_kcalloc(hw->dev, MTK_WED_AMSDU_NPAGES, + sizeof(*wed_amsdu), GFP_KERNEL); + if (!wed_amsdu) + return -ENOMEM; + + for (i = 0; i < MTK_WED_AMSDU_NPAGES; i++) { + void *ptr; + + /* each segment is 64K */ + ptr = (void *)__get_free_pages(GFP_KERNEL | __GFP_NOWARN | + __GFP_ZERO | __GFP_COMP | + GFP_DMA32, + get_order(MTK_WED_AMSDU_BUF_SIZE)); + if (!ptr) + goto error; + + wed_amsdu[i].txd = ptr; + wed_amsdu[i].txd_phy = dma_map_single(hw->dev, ptr, + MTK_WED_AMSDU_BUF_SIZE, + DMA_TO_DEVICE); + if (dma_mapping_error(hw->dev, wed_amsdu[i].txd_phy)) + goto error; + } + dev->hw->wed_amsdu = wed_amsdu; + + return 0; + +error: + for (i--; i >= 0; i--) + dma_unmap_single(hw->dev, wed_amsdu[i].txd_phy, + MTK_WED_AMSDU_BUF_SIZE, DMA_TO_DEVICE); + return -ENOMEM; +} + +static void +mtk_wed_amsdu_free_buffer(struct mtk_wed_device *dev) +{ + struct mtk_wed_amsdu *wed_amsdu = dev->hw->wed_amsdu; + int i; + + if (!wed_amsdu) + return; + + for (i = 0; i < MTK_WED_AMSDU_NPAGES; i++) { + dma_unmap_single(dev->hw->dev, wed_amsdu[i].txd_phy, + MTK_WED_AMSDU_BUF_SIZE, DMA_TO_DEVICE); + free_pages((unsigned long)wed_amsdu[i].txd, + get_order(MTK_WED_AMSDU_BUF_SIZE)); + } +} + +static int +mtk_wed_amsdu_init(struct mtk_wed_device *dev) +{ + struct mtk_wed_amsdu *wed_amsdu = dev->hw->wed_amsdu; + int i, ret; + + if (!wed_amsdu) + return 0; + + for (i = 0; i < MTK_WED_AMSDU_NPAGES; i++) + wed_w32(dev, MTK_WED_AMSDU_HIFTXD_BASE_L(i), + wed_amsdu[i].txd_phy); + + /* init all sta parameter */ + wed_w32(dev, MTK_WED_AMSDU_STA_INFO_INIT, MTK_WED_AMSDU_STA_RMVL | + MTK_WED_AMSDU_STA_WTBL_HDRT_MODE | + FIELD_PREP(MTK_WED_AMSDU_STA_MAX_AMSDU_LEN, + dev->wlan.amsdu_max_len >> 8) | + FIELD_PREP(MTK_WED_AMSDU_STA_MAX_AMSDU_NUM, + 
dev->wlan.amsdu_max_subframes)); + + wed_w32(dev, MTK_WED_AMSDU_STA_INFO, MTK_WED_AMSDU_STA_INFO_DO_INIT); + + ret = mtk_wed_poll_busy(dev, MTK_WED_AMSDU_STA_INFO, + MTK_WED_AMSDU_STA_INFO_DO_INIT); + if (ret) { + dev_err(dev->hw->dev, "amsdu initialization failed\n"); + return ret; + } + + /* init partial amsdu offload txd src */ + wed_set(dev, MTK_WED_AMSDU_HIFTXD_CFG, + FIELD_PREP(MTK_WED_AMSDU_HIFTXD_SRC, dev->hw->index)); + + /* init qmem */ + wed_set(dev, MTK_WED_AMSDU_PSE, MTK_WED_AMSDU_PSE_RESET); + ret = mtk_wed_poll_busy(dev, MTK_WED_MON_AMSDU_QMEM_STS1, BIT(29)); + if (ret) { + pr_info("%s: amsdu qmem initialization failed\n", __func__); + return ret; + } + + /* eagle E1 PCIE1 tx ring 22 flow control issue */ + if (dev->wlan.id == 0x7991) + wed_clr(dev, MTK_WED_AMSDU_FIFO, MTK_WED_AMSDU_IS_PRIOR0_RING); + + wed_set(dev, MTK_WED_CTRL, MTK_WED_CTRL_TX_AMSDU_EN); + + return 0; +} + static int mtk_wed_tx_buffer_alloc(struct mtk_wed_device *dev) { @@ -677,6 +808,7 @@ __mtk_wed_detach(struct mtk_wed_device *dev) mtk_wdma_rx_reset(dev); mtk_wed_reset(dev, MTK_WED_RESET_WED); + mtk_wed_amsdu_free_buffer(dev); mtk_wed_free_tx_buffer(dev); mtk_wed_free_tx_rings(dev); @@ -1114,23 +1246,6 @@ mtk_wed_ring_reset(struct mtk_wed_ring *ring, int size, bool tx) } } -static u32 -mtk_wed_check_busy(struct mtk_wed_device *dev, u32 reg, u32 mask) -{ - return !!(wed_r32(dev, reg) & mask); -} - -static int -mtk_wed_poll_busy(struct mtk_wed_device *dev, u32 reg, u32 mask) -{ - int sleep = 15000; - int timeout = 100 * sleep; - u32 val; - - return read_poll_timeout(mtk_wed_check_busy, val, !val, sleep, - timeout, false, dev, reg, mask); -} - static int mtk_wed_rx_reset(struct mtk_wed_device *dev) { @@ -1692,6 +1807,7 @@ mtk_wed_start(struct mtk_wed_device *dev, u32 irq_mask) } mtk_wed_set_512_support(dev, dev->wlan.wcid_512); + mtk_wed_amsdu_init(dev); mtk_wed_dma_enable(dev); dev->running = true; @@ -1748,6 +1864,10 @@ mtk_wed_attach(struct mtk_wed_device *dev) if (ret) goto out; + ret = mtk_wed_amsdu_buffer_alloc(dev); + if (ret) + goto out; + if (mtk_wed_get_rx_capa(dev)) { ret = mtk_wed_rro_alloc(dev); if (ret) diff --git a/drivers/net/ethernet/mediatek/mtk_wed.h b/drivers/net/ethernet/mediatek/mtk_wed.h index 224ff00bdd8b..9c443bb4a850 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed.h +++ b/drivers/net/ethernet/mediatek/mtk_wed.h @@ -14,6 +14,11 @@ struct mtk_eth; struct mtk_wed_wo; +struct mtk_wed_amsdu { + void *txd; + dma_addr_t txd_phy; +}; + struct mtk_wed_hw { struct device_node *node; struct mtk_eth *eth; @@ -26,6 +31,7 @@ struct mtk_wed_hw { struct dentry *debugfs_dir; struct mtk_wed_device *wed_dev; struct mtk_wed_wo *wed_wo; + struct mtk_wed_amsdu *wed_amsdu; u32 pcie_base; u32 debugfs_reg; u32 num_flows; @@ -40,6 +46,7 @@ struct mtk_wdma_info { u8 queue; u16 wcid; u8 bss; + u8 amsdu; }; #ifdef CONFIG_NET_MEDIATEK_SOC_WED diff --git a/drivers/net/ethernet/mediatek/mtk_wed_regs.h b/drivers/net/ethernet/mediatek/mtk_wed_regs.h index d50ccdd3a69b..6472650f3a59 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_regs.h +++ b/drivers/net/ethernet/mediatek/mtk_wed_regs.h @@ -681,6 +681,82 @@ struct mtk_wdma_desc { #define MTK_WED_WOCPU_VIEW_MIOD_BASE 0x8000 #define MTK_WED_PCIE_INT_MASK 0x0 +#define MTK_WED_AMSDU_FIFO 0x1800 +#define MTK_WED_AMSDU_IS_PRIOR0_RING BIT(10) + +#define MTK_WED_AMSDU_STA_INFO 0x01810 +#define MTK_WED_AMSDU_STA_INFO_DO_INIT BIT(0) +#define MTK_WED_AMSDU_STA_INFO_SET_INIT BIT(1) + +#define MTK_WED_AMSDU_STA_INFO_INIT 0x01814 +#define MTK_WED_AMSDU_STA_WTBL_HDRT_MODE 
BIT(0) +#define MTK_WED_AMSDU_STA_RMVL BIT(1) +#define MTK_WED_AMSDU_STA_MAX_AMSDU_LEN GENMASK(7, 2) +#define MTK_WED_AMSDU_STA_MAX_AMSDU_NUM GENMASK(11, 8) + +#define MTK_WED_AMSDU_HIFTXD_BASE_L(_n) (0x1980 + (_n) * 0x4) + +#define MTK_WED_AMSDU_PSE 0x1910 +#define MTK_WED_AMSDU_PSE_RESET BIT(16) + +#define MTK_WED_AMSDU_HIFTXD_CFG 0x1968 +#define MTK_WED_AMSDU_HIFTXD_SRC GENMASK(16, 15) + +#define MTK_WED_MON_AMSDU_FIFO_DMAD 0x1a34 + +#define MTK_WED_MON_AMSDU_ENG_DMAD(_n) (0x1a80 + (_n) * 0x50) +#define MTK_WED_MON_AMSDU_ENG_QFPL(_n) (0x1a84 + (_n) * 0x50) +#define MTK_WED_MON_AMSDU_ENG_QENI(_n) (0x1a88 + (_n) * 0x50) +#define MTK_WED_MON_AMSDU_ENG_QENO(_n) (0x1a8c + (_n) * 0x50) +#define MTK_WED_MON_AMSDU_ENG_MERG(_n) (0x1a90 + (_n) * 0x50) + +#define MTK_WED_MON_AMSDU_ENG_CNT8(_n) (0x1a94 + (_n) * 0x50) +#define MTK_WED_AMSDU_ENG_MAX_QGPP_CNT GENMASK(10, 0) +#define MTK_WED_AMSDU_ENG_MAX_PL_CNT GENMASK(27, 16) + +#define MTK_WED_MON_AMSDU_ENG_CNT9(_n) (0x1a98 + (_n) * 0x50) +#define MTK_WED_AMSDU_ENG_CUR_ENTRY GENMASK(10, 0) +#define MTK_WED_AMSDU_ENG_MAX_BUF_MERGED GENMASK(20, 16) +#define MTK_WED_AMSDU_ENG_MAX_MSDU_MERGED GENMASK(28, 24) + +#define MTK_WED_MON_AMSDU_QMEM_STS1 0x1e04 + +#define MTK_WED_MON_AMSDU_QMEM_CNT(_n) (0x1e0c + (_n) * 0x4) +#define MTK_WED_AMSDU_QMEM_FQ_CNT GENMASK(27, 16) +#define MTK_WED_AMSDU_QMEM_SP_QCNT GENMASK(11, 0) +#define MTK_WED_AMSDU_QMEM_TID0_QCNT GENMASK(27, 16) +#define MTK_WED_AMSDU_QMEM_TID1_QCNT GENMASK(11, 0) +#define MTK_WED_AMSDU_QMEM_TID2_QCNT GENMASK(27, 16) +#define MTK_WED_AMSDU_QMEM_TID3_QCNT GENMASK(11, 0) +#define MTK_WED_AMSDU_QMEM_TID4_QCNT GENMASK(27, 16) +#define MTK_WED_AMSDU_QMEM_TID5_QCNT GENMASK(11, 0) +#define MTK_WED_AMSDU_QMEM_TID6_QCNT GENMASK(27, 16) +#define MTK_WED_AMSDU_QMEM_TID7_QCNT GENMASK(11, 0) + +#define MTK_WED_MON_AMSDU_QMEM_PTR(_n) (0x1e20 + (_n) * 0x4) +#define MTK_WED_AMSDU_QMEM_FQ_HEAD GENMASK(27, 16) +#define MTK_WED_AMSDU_QMEM_SP_QHEAD GENMASK(11, 0) +#define MTK_WED_AMSDU_QMEM_TID0_QHEAD GENMASK(27, 16) +#define MTK_WED_AMSDU_QMEM_TID1_QHEAD GENMASK(11, 0) +#define MTK_WED_AMSDU_QMEM_TID2_QHEAD GENMASK(27, 16) +#define MTK_WED_AMSDU_QMEM_TID3_QHEAD GENMASK(11, 0) +#define MTK_WED_AMSDU_QMEM_TID4_QHEAD GENMASK(27, 16) +#define MTK_WED_AMSDU_QMEM_TID5_QHEAD GENMASK(11, 0) +#define MTK_WED_AMSDU_QMEM_TID6_QHEAD GENMASK(27, 16) +#define MTK_WED_AMSDU_QMEM_TID7_QHEAD GENMASK(11, 0) +#define MTK_WED_AMSDU_QMEM_FQ_TAIL GENMASK(27, 16) +#define MTK_WED_AMSDU_QMEM_SP_QTAIL GENMASK(11, 0) +#define MTK_WED_AMSDU_QMEM_TID0_QTAIL GENMASK(27, 16) +#define MTK_WED_AMSDU_QMEM_TID1_QTAIL GENMASK(11, 0) +#define MTK_WED_AMSDU_QMEM_TID2_QTAIL GENMASK(27, 16) +#define MTK_WED_AMSDU_QMEM_TID3_QTAIL GENMASK(11, 0) +#define MTK_WED_AMSDU_QMEM_TID4_QTAIL GENMASK(27, 16) +#define MTK_WED_AMSDU_QMEM_TID5_QTAIL GENMASK(11, 0) +#define MTK_WED_AMSDU_QMEM_TID6_QTAIL GENMASK(27, 16) +#define MTK_WED_AMSDU_QMEM_TID7_QTAIL GENMASK(11, 0) + +#define MTK_WED_MON_AMSDU_HIFTXD_FETCH_MSDU(_n) (0x1ec4 + (_n) * 0x4) + #define MTK_WED_PCIE_BASE 0x11280000 #define MTK_WED_PCIE_BASE0 0x11300000 #define MTK_WED_PCIE_BASE1 0x11310000 diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 0896aaa91dd7..1b00b34dc352 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -917,6 +917,7 @@ struct net_device_path { u8 queue; u16 wcid; u8 bss; + u8 amsdu; } mtk_wdma; }; }; diff --git a/include/linux/soc/mediatek/mtk_wed.h b/include/linux/soc/mediatek/mtk_wed.h index 0beccbe45585..802e38e0840d 100644 --- 
a/include/linux/soc/mediatek/mtk_wed.h +++ b/include/linux/soc/mediatek/mtk_wed.h @@ -129,6 +129,7 @@ struct mtk_wed_device { enum mtk_wed_bus_tye bus_type; void __iomem *base; u32 phy_base; + u32 id; u32 wpdma_phys; u32 wpdma_int; @@ -147,10 +148,12 @@ struct mtk_wed_device { unsigned int rx_nbuf; unsigned int rx_npkt; unsigned int rx_size; + unsigned int amsdu_max_len; u8 tx_tbit[MTK_WED_TX_QUEUES]; u8 rx_tbit[MTK_WED_RX_QUEUES]; u8 txfree_tbit; + u8 amsdu_max_subframes; u32 (*init_buf)(void *ptr, dma_addr_t phys, int token_id); int (*offload_enable)(struct mtk_wed_device *wed); @@ -226,6 +229,15 @@ static inline bool mtk_wed_get_rx_capa(struct mtk_wed_device *dev) #endif } +static inline bool mtk_wed_is_amsdu_supported(struct mtk_wed_device *dev) +{ +#ifdef CONFIG_NET_MEDIATEK_SOC_WED + return dev->version == 3; +#else + return false; +#endif +} + #ifdef CONFIG_NET_MEDIATEK_SOC_WED #define mtk_wed_device_active(_dev) !!(_dev)->ops #define mtk_wed_device_detach(_dev) (_dev)->ops->detach(_dev) From patchwork Thu Sep 14 14:38:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13385520 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7B77B24203; Thu, 14 Sep 2023 14:39:58 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 97DA8C433C8; Thu, 14 Sep 2023 14:39:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1694702398; bh=llUNlobGC49KltJ5xNJ1PuIEq+EQTg2fZSDm0l6tVTE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=QFb5uK5nIsvQkvXKkTOluXjm4tiAbt1JYWdrttW1W608Z4zz1RHOjDApdeY4GpRnJ /jUuTD18DkcX/jK88z4m5hPyD/IqZTeSoaN8sBuvOOOD3uzHJp54rOH4iWAqvWWLdv Z3s0pY+u4NihorcK+FXYyzkul5/cgRntQqURD+MaHp1Ckq94nsGc56SIQvLfGirZeg QbBv+W7fQVoFGAf940n/JZgV2A7Xwdc4Kju0SpEEKIoHbyQ0RejPhj1BGz7HzMuQav otTIfWxAPzs3EqlFZrmyOVyzJePukI3gCI3Iw7Ve5juoBTul/V6NEkDZ+B/89vo3Bz 1pwstvztta/tA== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: lorenzo.bianconi@redhat.com, nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, daniel@makrotopia.org, linux-mediatek@lists.infradead.org, sujuan.chen@mediatek.com, robh+dt@kernel.org, krzysztof.kozlowski+dt@linaro.org, devicetree@vger.kernel.org Subject: [PATCH net-next 13/15] net: ethernet: mtk_wed: introduce hw_rro support for MT7988 Date: Thu, 14 Sep 2023 16:38:18 +0200 Message-ID: X-Mailer: git-send-email 2.41.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Sujuan Chen MT7988 SoC support 802.11 receive reordering offload in hw while MT7986 SoC implements it through the firmware running on the mcu. 
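As context (not part of the patch itself), the WLAN driver opts into the hw offload by setting hw_rro and describing its RRO data rings, MSDU page rings and indication-command block through the new mtk_wed_device fields before attaching; the sketch below is purely illustrative, only the field names are taken from this series and every value is a placeholder:

/* hypothetical example: describe hw_rro resources to WED before
 * mtk_wed_device_attach(); all values are placeholders.
 */
static void wlan_setup_wed_hw_rro(struct mtk_wed_device *wed,
				  u32 rro_rx_base, u32 msdu_pg_base)
{
	wed->wlan.hw_rro = true;			/* MT7988: reordering done in hw */
	wed->wlan.wpdma_rx_rro[0] = rro_rx_base;	/* RRO rx data ring 0 */
	wed->wlan.wpdma_rx_rro[1] = rro_rx_base + 0x10;	/* RRO rx data ring 1 */
	wed->wlan.wpdma_rx_pg = msdu_pg_base;		/* MSDU page rings, 0x10 apart */

	wed->wlan.ind_cmd.win_size = 0x200;		/* reorder window size (placeholder) */
	wed->wlan.ind_cmd.particular_sid = 1;		/* placeholder session id */
}

The per-ring registers themselves are then handed over through the rro_rx, msdu_pg and ind_cmd ring setup helpers added below.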
Co-developed-by: Lorenzo Bianconi Signed-off-by: Lorenzo Bianconi Signed-off-by: Sujuan Chen --- drivers/net/ethernet/mediatek/mtk_wed.c | 304 +++++++++++++++++++++++- include/linux/soc/mediatek/mtk_wed.h | 44 ++++ 2 files changed, 346 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_wed.c b/drivers/net/ethernet/mediatek/mtk_wed.c index 7750869509c3..546397e2e40f 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed.c +++ b/drivers/net/ethernet/mediatek/mtk_wed.c @@ -27,7 +27,7 @@ #define MTK_WED_BUF_SIZE 2048 #define MTK_WED_PAGE_BUF_SIZE 128 #define MTK_WED_BUF_PER_PAGE (PAGE_SIZE / 2048) -#define MTK_WED_RX_PAGE_BUF_PER_PAGE (PAGE_SIZE / 128) +#define MTK_WED_RX_BUF_PER_PAGE (PAGE_SIZE / MTK_WED_PAGE_BUF_SIZE) #define MTK_WED_RX_RING_SIZE 1536 #define MTK_WED_RX_PG_BM_CNT 8192 #define MTK_WED_AMSDU_BUF_SIZE (PAGE_SIZE << 4) @@ -565,6 +565,73 @@ mtk_wed_free_tx_buffer(struct mtk_wed_device *dev) kfree(page_list); } +static int +mtk_wed_hwrro_buffer_alloc(struct mtk_wed_device *dev) +{ + int n_pages = MTK_WED_RX_PG_BM_CNT / MTK_WED_RX_BUF_PER_PAGE; + struct mtk_wed_buf *page_list; + struct mtk_wed_bm_desc *desc; + dma_addr_t desc_phys; + int i, page_idx = 0; + + if (!dev->wlan.hw_rro) + return 0; + + page_list = kcalloc(n_pages, sizeof(*page_list), GFP_KERNEL); + if (!page_list) + return -ENOMEM; + + dev->hw_rro.size = dev->wlan.rx_nbuf & ~(MTK_WED_BUF_PER_PAGE - 1); + dev->hw_rro.pages = page_list; + desc = dma_alloc_coherent(dev->hw->dev, + dev->wlan.rx_nbuf * sizeof(*desc), + &desc_phys, GFP_KERNEL); + if (!desc) + return -ENOMEM; + + dev->hw_rro.desc = desc; + dev->hw_rro.desc_phys = desc_phys; + + for (i = 0; i < MTK_WED_RX_PG_BM_CNT; i += MTK_WED_RX_BUF_PER_PAGE) { + dma_addr_t page_phys, buf_phys; + struct page *page; + void *buf; + int s; + + page = __dev_alloc_page(GFP_KERNEL); + if (!page) + return -ENOMEM; + + page_phys = dma_map_page(dev->hw->dev, page, 0, PAGE_SIZE, + DMA_BIDIRECTIONAL); + if (dma_mapping_error(dev->hw->dev, page_phys)) { + __free_page(page); + return -ENOMEM; + } + + page_list[page_idx].p = page; + page_list[page_idx++].phy_addr = page_phys; + dma_sync_single_for_cpu(dev->hw->dev, page_phys, PAGE_SIZE, + DMA_BIDIRECTIONAL); + + buf = page_to_virt(page); + buf_phys = page_phys; + + for (s = 0; s < MTK_WED_RX_BUF_PER_PAGE; s++) { + desc->buf0 = cpu_to_le32(buf_phys); + desc++; + + buf += MTK_WED_PAGE_BUF_SIZE; + buf_phys += MTK_WED_PAGE_BUF_SIZE; + } + + dma_sync_single_for_device(dev->hw->dev, page_phys, PAGE_SIZE, + DMA_BIDIRECTIONAL); + } + + return 0; +} + static int mtk_wed_rx_buffer_alloc(struct mtk_wed_device *dev) { @@ -582,7 +649,42 @@ mtk_wed_rx_buffer_alloc(struct mtk_wed_device *dev) dev->rx_buf_ring.desc_phys = desc_phys; dev->wlan.init_rx_buf(dev, dev->wlan.rx_npkt); - return 0; + return mtk_wed_hwrro_buffer_alloc(dev); +} + +static void +mtk_wed_hwrro_free_buffer(struct mtk_wed_device *dev) +{ + struct mtk_wed_buf *page_list = dev->hw_rro.pages; + struct mtk_wed_bm_desc *desc = dev->hw_rro.desc; + int i, page_idx = 0; + + if (!dev->wlan.hw_rro) + return; + + if (!page_list) + return; + + if (!desc) + goto free_pagelist; + + for (i = 0; i < MTK_WED_RX_PG_BM_CNT; i += MTK_WED_RX_BUF_PER_PAGE) { + dma_addr_t buf_addr = page_list[page_idx].phy_addr; + void *page = page_list[page_idx++].p; + + if (!page) + break; + + dma_unmap_page(dev->hw->dev, buf_addr, PAGE_SIZE, + DMA_BIDIRECTIONAL); + __free_page(page); + } + + dma_free_coherent(dev->hw->dev, dev->hw_rro.size * sizeof(*desc), + desc, dev->hw_rro.desc_phys); + 
+free_pagelist: + kfree(page_list); } static void @@ -596,6 +698,28 @@ mtk_wed_free_rx_buffer(struct mtk_wed_device *dev) dev->wlan.release_rx_buf(dev); dma_free_coherent(dev->hw->dev, dev->rx_buf_ring.size * sizeof(*desc), desc, dev->rx_buf_ring.desc_phys); + + mtk_wed_hwrro_free_buffer(dev); +} + +static void +mtk_wed_hwrro_init(struct mtk_wed_device *dev) +{ + if (!mtk_wed_get_rx_capa(dev) || !dev->wlan.hw_rro) + return; + + wed_set(dev, MTK_WED_RRO_PG_BM_RX_DMAM, + FIELD_PREP(MTK_WED_RRO_PG_BM_RX_SDL0, 128)); + + wed_w32(dev, MTK_WED_RRO_PG_BM_BASE, dev->hw_rro.desc_phys); + + wed_w32(dev, MTK_WED_RRO_PG_BM_INIT_PTR, + MTK_WED_RRO_PG_BM_INIT_SW_TAIL_IDX | + FIELD_PREP(MTK_WED_RRO_PG_BM_SW_TAIL_IDX, + MTK_WED_RX_PG_BM_CNT)); + + /* enable rx_page_bm to fetch dmad */ + wed_set(dev, MTK_WED_CTRL, MTK_WED_CTRL_WED_RX_PG_BM_EN); } static void @@ -609,6 +733,8 @@ mtk_wed_rx_buffer_hw_init(struct mtk_wed_device *dev) wed_w32(dev, MTK_WED_RX_BM_DYN_ALLOC_TH, FIELD_PREP(MTK_WED_RX_BM_DYN_ALLOC_TH_H, 0xffff)); wed_set(dev, MTK_WED_CTRL, MTK_WED_CTRL_WED_RX_BM_EN); + + mtk_wed_hwrro_init(dev); } static void @@ -903,6 +1029,8 @@ mtk_wed_bus_init(struct mtk_wed_device *dev) static void mtk_wed_set_wpdma(struct mtk_wed_device *dev) { + int i; + if (mtk_wed_is_v1(dev->hw)) { wed_w32(dev, MTK_WED_WPDMA_CFG_BASE, dev->wlan.wpdma_phys); return; @@ -923,6 +1051,15 @@ mtk_wed_set_wpdma(struct mtk_wed_device *dev) wed_w32(dev, MTK_WED_WPDMA_RX_RING0_V3, dev->wlan.wpdma_rx); else wed_w32(dev, MTK_WED_WPDMA_RX_RING0, dev->wlan.wpdma_rx); + + if (!dev->wlan.hw_rro) + return; + + wed_w32(dev, MTK_WED_RRO_RX_D_CFG(0), dev->wlan.wpdma_rx_rro[0]); + wed_w32(dev, MTK_WED_RRO_RX_D_CFG(1), dev->wlan.wpdma_rx_rro[1]); + for (i = 0; i < MTK_WED_RX_PAGE_QUEUES; i++) + wed_w32(dev, MTK_WED_RRO_MSDU_PG_RING_CFG(i), + dev->wlan.wpdma_rx_pg + i * 0x10); } static void @@ -1762,6 +1899,165 @@ mtk_wed_dma_enable(struct mtk_wed_device *dev) } } +static void +mtk_wed_start_hw_rro(struct mtk_wed_device *dev, u32 irq_mask) +{ + int i; + + wed_w32(dev, MTK_WED_WPDMA_INT_MASK, irq_mask); + wed_w32(dev, MTK_WED_INT_MASK, irq_mask); + + if (!mtk_wed_get_rx_capa(dev) || !dev->wlan.hw_rro) + return; + + wed_set(dev, MTK_WED_RRO_RX_D_CFG(2), MTK_WED_RRO_MSDU_PG_DRV_CLR); + wed_w32(dev, MTK_WED_RRO_MSDU_PG_RING2_CFG, + MTK_WED_RRO_MSDU_PG_DRV_CLR); + + wed_w32(dev, MTK_WED_WPDMA_INT_CTRL_RRO_RX, + MTK_WED_WPDMA_INT_CTRL_RRO_RX0_EN | + MTK_WED_WPDMA_INT_CTRL_RRO_RX0_CLR | + MTK_WED_WPDMA_INT_CTRL_RRO_RX1_EN | + MTK_WED_WPDMA_INT_CTRL_RRO_RX1_CLR | + FIELD_PREP(MTK_WED_WPDMA_INT_CTRL_RRO_RX0_DONE_TRIG, + dev->wlan.rro_rx_tbit[0]) | + FIELD_PREP(MTK_WED_WPDMA_INT_CTRL_RRO_RX1_DONE_TRIG, + dev->wlan.rro_rx_tbit[1])); + + wed_w32(dev, MTK_WED_WPDMA_INT_CTRL_RRO_MSDU_PG, + MTK_WED_WPDMA_INT_CTRL_RRO_PG0_EN | + MTK_WED_WPDMA_INT_CTRL_RRO_PG0_CLR | + MTK_WED_WPDMA_INT_CTRL_RRO_PG1_EN | + MTK_WED_WPDMA_INT_CTRL_RRO_PG1_CLR | + MTK_WED_WPDMA_INT_CTRL_RRO_PG2_EN | + MTK_WED_WPDMA_INT_CTRL_RRO_PG2_CLR | + FIELD_PREP(MTK_WED_WPDMA_INT_CTRL_RRO_PG0_DONE_TRIG, + dev->wlan.rx_pg_tbit[0]) | + FIELD_PREP(MTK_WED_WPDMA_INT_CTRL_RRO_PG1_DONE_TRIG, + dev->wlan.rx_pg_tbit[1]) | + FIELD_PREP(MTK_WED_WPDMA_INT_CTRL_RRO_PG2_DONE_TRIG, + dev->wlan.rx_pg_tbit[2])); + + /* RRO_MSDU_PG_RING2_CFG1_FLD_DRV_EN should be enabled after + * WM FWDL completed, otherwise RRO_MSDU_PG ring may broken + */ + wed_set(dev, MTK_WED_RRO_MSDU_PG_RING2_CFG, + MTK_WED_RRO_MSDU_PG_DRV_EN); + + for (i = 0; i < MTK_WED_RX_QUEUES; i++) { + struct mtk_wed_ring *ring = 
&dev->rx_rro_ring[i]; + + if (!(ring->flags & MTK_WED_RING_CONFIGURED)) + continue; + + if (mtk_wed_check_wfdma_rx_fill(dev, ring)) + dev_err(dev->hw->dev, + "rx_rro_ring(%d) initialization failed\n", i); + } + + for (i = 0; i < MTK_WED_RX_PAGE_QUEUES; i++) { + struct mtk_wed_ring *ring = &dev->rx_page_ring[i]; + + if (!(ring->flags & MTK_WED_RING_CONFIGURED)) + continue; + + if (mtk_wed_check_wfdma_rx_fill(dev, ring)) + dev_err(dev->hw->dev, + "rx_page_ring(%d) initialization failed\n", i); + } +} + +static void +mtk_wed_rro_rx_ring_setup(struct mtk_wed_device *dev, int idx, + void __iomem *regs) +{ + struct mtk_wed_ring *ring = &dev->rx_rro_ring[idx]; + + ring->wpdma = regs; + wed_w32(dev, MTK_WED_RRO_RX_D_RX(idx) + MTK_WED_RING_OFS_BASE, + readl(regs)); + wed_w32(dev, MTK_WED_RRO_RX_D_RX(idx) + MTK_WED_RING_OFS_COUNT, + readl(regs + MTK_WED_RING_OFS_COUNT)); + ring->flags |= MTK_WED_RING_CONFIGURED; +} + +static void +mtk_wed_msdu_pg_rx_ring_setup(struct mtk_wed_device *dev, int idx, void __iomem *regs) +{ + struct mtk_wed_ring *ring = &dev->rx_page_ring[idx]; + + ring->wpdma = regs; + wed_w32(dev, MTK_WED_RRO_MSDU_PG_CTRL0(idx) + MTK_WED_RING_OFS_BASE, + readl(regs)); + wed_w32(dev, MTK_WED_RRO_MSDU_PG_CTRL0(idx) + MTK_WED_RING_OFS_COUNT, + readl(regs + MTK_WED_RING_OFS_COUNT)); + ring->flags |= MTK_WED_RING_CONFIGURED; +} + +static int +mtk_wed_ind_rx_ring_setup(struct mtk_wed_device *dev, void __iomem *regs) +{ + struct mtk_wed_ring *ring = &dev->ind_cmd_ring; + u32 val = readl(regs + MTK_WED_RING_OFS_COUNT); + int i, count = 0; + + ring->wpdma = regs; + wed_w32(dev, MTK_WED_IND_CMD_RX_CTRL1 + MTK_WED_RING_OFS_BASE, + readl(regs) & 0xfffffff0); + + wed_w32(dev, MTK_WED_IND_CMD_RX_CTRL1 + MTK_WED_RING_OFS_COUNT, + readl(regs + MTK_WED_RING_OFS_COUNT)); + + /* ack sn cr */ + wed_w32(dev, MTK_WED_RRO_CFG0, dev->wlan.phy_base + + dev->wlan.ind_cmd.ack_sn_addr); + wed_w32(dev, MTK_WED_RRO_CFG1, + FIELD_PREP(MTK_WED_RRO_CFG1_MAX_WIN_SZ, + dev->wlan.ind_cmd.win_size) | + FIELD_PREP(MTK_WED_RRO_CFG1_PARTICL_SE_ID, + dev->wlan.ind_cmd.particular_sid)); + + /* particular session addr element */ + wed_w32(dev, MTK_WED_ADDR_ELEM_CFG0, + dev->wlan.ind_cmd.particular_se_phys); + + for (i = 0; i < dev->wlan.ind_cmd.se_group_nums; i++) { + wed_w32(dev, MTK_WED_RADDR_ELEM_TBL_WDATA, + dev->wlan.ind_cmd.addr_elem_phys[i] >> 4); + wed_w32(dev, MTK_WED_ADDR_ELEM_TBL_CFG, + MTK_WED_ADDR_ELEM_TBL_WR | (i & 0x7f)); + + val = wed_r32(dev, MTK_WED_ADDR_ELEM_TBL_CFG); + while (!(val & MTK_WED_ADDR_ELEM_TBL_WR_RDY) && count++ < 100) + val = wed_r32(dev, MTK_WED_ADDR_ELEM_TBL_CFG); + if (count >= 100) + dev_err(dev->hw->dev, + "write ba session base failed\n"); + } + + /* pn check init */ + for (i = 0; i < dev->wlan.ind_cmd.particular_sid; i++) { + wed_w32(dev, MTK_WED_PN_CHECK_WDATA_M, + MTK_WED_PN_CHECK_IS_FIRST); + + wed_w32(dev, MTK_WED_PN_CHECK_CFG, MTK_WED_PN_CHECK_WR | + FIELD_PREP(MTK_WED_PN_CHECK_SE_ID, i)); + + count = 0; + val = wed_r32(dev, MTK_WED_PN_CHECK_CFG); + while (!(val & MTK_WED_PN_CHECK_WR_RDY) && count++ < 100) + val = wed_r32(dev, MTK_WED_PN_CHECK_CFG); + if (count >= 100) + dev_err(dev->hw->dev, + "session(%d) initialization failed\n", i); + } + + wed_w32(dev, MTK_WED_RX_IND_CMD_CNT0, MTK_WED_RX_IND_CMD_DBG_CNT_EN); + wed_set(dev, MTK_WED_CTRL, MTK_WED_CTRL_WED_RX_IND_CMD_EN); + + return 0; +} + static void mtk_wed_start(struct mtk_wed_device *dev, u32 irq_mask) { @@ -2227,6 +2523,10 @@ void mtk_wed_add_hw(struct device_node *np, struct mtk_eth *eth, .detach = mtk_wed_detach, .ppe_check 
= mtk_wed_ppe_check, .setup_tc = mtk_wed_setup_tc, + .start_hw_rro = mtk_wed_start_hw_rro, + .rro_rx_ring_setup = mtk_wed_rro_rx_ring_setup, + .msdu_pg_rx_ring_setup = mtk_wed_msdu_pg_rx_ring_setup, + .ind_rx_ring_setup = mtk_wed_ind_rx_ring_setup, }; struct device_node *eth_np = eth->dev->of_node; struct platform_device *pdev; diff --git a/include/linux/soc/mediatek/mtk_wed.h b/include/linux/soc/mediatek/mtk_wed.h index 802e38e0840d..dc32e3529d10 100644 --- a/include/linux/soc/mediatek/mtk_wed.h +++ b/include/linux/soc/mediatek/mtk_wed.h @@ -10,6 +10,7 @@ #define MTK_WED_TX_QUEUES 2 #define MTK_WED_RX_QUEUES 2 +#define MTK_WED_RX_PAGE_QUEUES 3 #define WED_WO_STA_REC 0x6 @@ -99,6 +100,9 @@ struct mtk_wed_device { struct mtk_wed_ring txfree_ring; struct mtk_wed_ring tx_wdma[MTK_WED_TX_QUEUES]; struct mtk_wed_ring rx_wdma[MTK_WED_RX_QUEUES]; + struct mtk_wed_ring rx_rro_ring[MTK_WED_RX_QUEUES]; + struct mtk_wed_ring rx_page_ring[MTK_WED_RX_PAGE_QUEUES]; + struct mtk_wed_ring ind_cmd_ring; struct { int size; @@ -120,6 +124,13 @@ struct mtk_wed_device { dma_addr_t fdbk_phys; } rro; + struct { + int size; + struct mtk_wed_buf *pages; + struct mtk_wed_bm_desc *desc; + dma_addr_t desc_phys; + } hw_rro; + /* filled by driver: */ struct { union { @@ -138,6 +149,8 @@ struct mtk_wed_device { u32 wpdma_txfree; u32 wpdma_rx_glo; u32 wpdma_rx; + u32 wpdma_rx_rro[MTK_WED_RX_QUEUES]; + u32 wpdma_rx_pg; bool wcid_512; bool hw_rro; @@ -152,9 +165,20 @@ struct mtk_wed_device { u8 tx_tbit[MTK_WED_TX_QUEUES]; u8 rx_tbit[MTK_WED_RX_QUEUES]; + u8 rro_rx_tbit[MTK_WED_RX_QUEUES]; + u8 rx_pg_tbit[MTK_WED_RX_PAGE_QUEUES]; u8 txfree_tbit; u8 amsdu_max_subframes; + struct { + u8 se_group_nums; + u16 win_size; + u16 particular_sid; + u32 ack_sn_addr; + dma_addr_t particular_se_phys; + dma_addr_t addr_elem_phys[1024]; + } ind_cmd; + u32 (*init_buf)(void *ptr, dma_addr_t phys, int token_id); int (*offload_enable)(struct mtk_wed_device *wed); void (*offload_disable)(struct mtk_wed_device *wed); @@ -193,6 +217,13 @@ struct mtk_wed_ops { void (*irq_set_mask)(struct mtk_wed_device *dev, u32 mask); int (*setup_tc)(struct mtk_wed_device *wed, struct net_device *dev, enum tc_setup_type type, void *type_data); + void (*start_hw_rro)(struct mtk_wed_device *dev, u32 irq_mask); + void (*rro_rx_ring_setup)(struct mtk_wed_device *dev, int ring, + void __iomem *regs); + void (*msdu_pg_rx_ring_setup)(struct mtk_wed_device *dev, int ring, + void __iomem *regs); + int (*ind_rx_ring_setup)(struct mtk_wed_device *dev, + void __iomem *regs); }; extern const struct mtk_wed_ops __rcu *mtk_soc_wed_ops; @@ -264,6 +295,15 @@ static inline bool mtk_wed_is_amsdu_supported(struct mtk_wed_device *dev) #define mtk_wed_device_dma_reset(_dev) (_dev)->ops->reset_dma(_dev) #define mtk_wed_device_setup_tc(_dev, _netdev, _type, _type_data) \ (_dev)->ops->setup_tc(_dev, _netdev, _type, _type_data) +#define mtk_wed_device_start_hw_rro(_dev, _mask) \ + (_dev)->ops->start_hw_rro(_dev, _mask) +#define mtk_wed_device_rro_rx_ring_setup(_dev, _ring, _regs) \ + (_dev)->ops->rro_rx_ring_setup(_dev, _ring, _regs) +#define mtk_wed_device_msdu_pg_rx_ring_setup(_dev, _ring, _regs) \ + (_dev)->ops->msdu_pg_rx_ring_setup(_dev, _ring, _regs) +#define mtk_wed_device_ind_rx_ring_setup(_dev, _regs) \ + (_dev)->ops->ind_rx_ring_setup(_dev, _regs) + #else static inline bool mtk_wed_device_active(struct mtk_wed_device *dev) { @@ -283,6 +323,10 @@ static inline bool mtk_wed_device_active(struct mtk_wed_device *dev) #define mtk_wed_device_stop(_dev) do {} while (0) #define 
mtk_wed_device_dma_reset(_dev) do {} while (0) #define mtk_wed_device_setup_tc(_dev, _netdev, _type, _type_data) -EOPNOTSUPP +#define mtk_wed_device_start_hw_rro(_dev, _mask) do {} while (0) +#define mtk_wed_device_rro_rx_ring_setup(_dev, _ring, _regs) -ENODEV +#define mtk_wed_device_msdu_pg_rx_ring_setup(_dev, _ring, _regs) -ENODEV +#define mtk_wed_device_ind_rx_ring_setup(_dev, _regs) -ENODEV #endif #endif From patchwork Thu Sep 14 14:38:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13385521 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 110E126288; Thu, 14 Sep 2023 14:40:02 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7EFB3C433BC; Thu, 14 Sep 2023 14:40:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1694702401; bh=H1Y7Lku2iwyhO5rgpqu7MyjoFTetMe1zN/nVR4VhRB8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=I2cyuJkh/fmiY0Zp1koQQCwAbiYogAbx8JvhQNnqK0Om4YNcG6fjFOGiIYNGs2a96 nriRPwdJI0Kqfo6TuAuLcN5pPPG34WgsFRTv96/Myp6Y1XQkZ48qK/V1w8ezuv4FCH y7L+FGyNbwotDkoX1vb6QIs6opAK+6BSTIthna0ubFHdsRxt1lXGmtgnhOAGYGg0K/ 1RGqcCr+kLL0IazqEasMKt+dNaHyJ4GQT6ekz7+rFWPIn6iZtqXatGkXJ7pML2UyUL vdgWaYe6u9ZtCJS920vw2j+n3xHL+BZYf01Z+BXfyrptfaiVOVj7E+lSu2nDBZ90vk HCtfMPwYLv6Dg== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: lorenzo.bianconi@redhat.com, nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, daniel@makrotopia.org, linux-mediatek@lists.infradead.org, sujuan.chen@mediatek.com, robh+dt@kernel.org, krzysztof.kozlowski+dt@linaro.org, devicetree@vger.kernel.org Subject: [PATCH net-next 14/15] net: ethernet: mtk_wed: debugfs: move wed_v2 specific regs out of regs array Date: Thu, 14 Sep 2023 16:38:19 +0200 Message-ID: <4e6b85fe4f2ce8971447ad09f28ba480581f97ea.1694701767.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Move specific WED2.0 debugfs entries out of regs array. This is a preliminary patch to introduce WED 3.0 debugfs info. 
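To make the split concrete, here is a minimal, self-contained sketch of the pattern this patch introduces (all identifiers below are made up for illustration and are not the driver's reg_dump tables): registers shared by every WED revision live in one array that is always dumped, while revision-specific registers move to their own array that is dumped only when the matching hardware is present.

#include <linux/kernel.h>
#include <linux/printk.h>

/* Illustrative tables only; the driver's equivalents are the
 * regs_common / regs_wed_v2 arrays in the diff below.
 */
struct demo_reg {
	const char *name;
	u32 offset;
};

static const struct demo_reg demo_regs_common[] = {
	{ "common reg 0", 0x000 },
	{ "common reg 1", 0x004 },
};

static const struct demo_reg demo_regs_v2[] = {
	{ "v2-only reg", 0x100 },
};

static void demo_dump_regs(bool hw_is_v2)
{
	int i;

	/* shared registers first ... */
	for (i = 0; i < ARRAY_SIZE(demo_regs_common); i++)
		pr_info("%-16s 0x%03x\n", demo_regs_common[i].name,
			demo_regs_common[i].offset);

	/* ... then only the table matching this hardware revision */
	if (!hw_is_v2)
		return;

	for (i = 0; i < ARRAY_SIZE(demo_regs_v2); i++)
		pr_info("%-16s 0x%03x\n", demo_regs_v2[i].name,
			demo_regs_v2[i].offset);
}

The next patch in the series reuses exactly this shape to add a WED 3.0 table alongside the v2 one.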
Signed-off-by: Lorenzo Bianconi --- .../net/ethernet/mediatek/mtk_wed_debugfs.c | 33 ++++++++++--------- 1 file changed, 18 insertions(+), 15 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_wed_debugfs.c b/drivers/net/ethernet/mediatek/mtk_wed_debugfs.c index 674e919d0d3a..8999d0c743f3 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_debugfs.c +++ b/drivers/net/ethernet/mediatek/mtk_wed_debugfs.c @@ -151,7 +151,7 @@ DEFINE_SHOW_ATTRIBUTE(wed_txinfo); static int wed_rxinfo_show(struct seq_file *s, void *data) { - static const struct reg_dump regs[] = { + static const struct reg_dump regs_common[] = { DUMP_STR("WPDMA RX"), DUMP_WPDMA_RX_RING(0), DUMP_WPDMA_RX_RING(1), @@ -169,7 +169,7 @@ wed_rxinfo_show(struct seq_file *s, void *data) DUMP_WED_RING(WED_RING_RX_DATA(0)), DUMP_WED_RING(WED_RING_RX_DATA(1)), - DUMP_STR("WED RRO"), + DUMP_STR("WED WO RRO"), DUMP_WED_RRO_RING(WED_RROQM_MIOD_CTRL0), DUMP_WED(WED_RROQM_MID_MIB), DUMP_WED(WED_RROQM_MOD_MIB), @@ -180,17 +180,6 @@ wed_rxinfo_show(struct seq_file *s, void *data) DUMP_WED(WED_RROQM_FDBK_ANC_MIB), DUMP_WED(WED_RROQM_FDBK_ANC2H_MIB), - DUMP_STR("WED Route QM"), - DUMP_WED(WED_RTQM_R2H_MIB(0)), - DUMP_WED(WED_RTQM_R2Q_MIB(0)), - DUMP_WED(WED_RTQM_Q2H_MIB(0)), - DUMP_WED(WED_RTQM_R2H_MIB(1)), - DUMP_WED(WED_RTQM_R2Q_MIB(1)), - DUMP_WED(WED_RTQM_Q2H_MIB(1)), - DUMP_WED(WED_RTQM_Q2N_MIB), - DUMP_WED(WED_RTQM_Q2B_MIB), - DUMP_WED(WED_RTQM_PFDBK_MIB), - DUMP_STR("WED WDMA TX"), DUMP_WED(WED_WDMA_TX_MIB), DUMP_WED_RING(WED_WDMA_RING_TX), @@ -211,11 +200,25 @@ wed_rxinfo_show(struct seq_file *s, void *data) DUMP_WED(WED_RX_BM_INTF), DUMP_WED(WED_RX_BM_ERR_STS), }; + static const struct reg_dump regs_wed_v2[] = { + DUMP_STR("WED Route QM"), + DUMP_WED(WED_RTQM_R2H_MIB(0)), + DUMP_WED(WED_RTQM_R2Q_MIB(0)), + DUMP_WED(WED_RTQM_Q2H_MIB(0)), + DUMP_WED(WED_RTQM_R2H_MIB(1)), + DUMP_WED(WED_RTQM_R2Q_MIB(1)), + DUMP_WED(WED_RTQM_Q2H_MIB(1)), + DUMP_WED(WED_RTQM_Q2N_MIB), + DUMP_WED(WED_RTQM_Q2B_MIB), + DUMP_WED(WED_RTQM_PFDBK_MIB), + }; struct mtk_wed_hw *hw = s->private; struct mtk_wed_device *dev = hw->wed_dev; - if (dev) - dump_wed_regs(s, dev, regs, ARRAY_SIZE(regs)); + if (dev) { + dump_wed_regs(s, dev, regs_common, ARRAY_SIZE(regs_common)); + dump_wed_regs(s, dev, regs_wed_v2, ARRAY_SIZE(regs_wed_v2)); + } return 0; } From patchwork Thu Sep 14 14:38:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13385522 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2C56D26288; Thu, 14 Sep 2023 14:40:06 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 658D6C433B8; Thu, 14 Sep 2023 14:40:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1694702405; bh=uFy0yaIXeuU1WIHiqXjw8Vi32xgRKC9435bq8I2+iF0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=SXIcBOlrAXldRWLG6CkqIVCX4GvN14veWYaBfKe8QpwQbpCu54yffxtvNlL1V2iYA JOu7CuVlqxLF9vHZFcHDZ/b9Vhg1WWjBOYJlgkZFtj8i+p/+J4MjCZO69/g9TWcwyH CvF1pNmQpBaZvgnc8f19Z2goLMpGq7tLhkRTcvXObuSavC8Qvss+HlmjUqj7F3gtPw qyfRdzJBIvkmdoaDtsjTUtx01w+t72yA1I1vkg9yZm/+LpUxovdeCr0LpTBIKfbTp8 aXijRLsNk8/0kdTibOuMdgLneCvv3o2ZjU1vQivZx9VQyJvj4j8VqfUgCovKuLFM0p 8FFlr4EZeVPpw== From: Lorenzo Bianconi To: 
netdev@vger.kernel.org Cc: lorenzo.bianconi@redhat.com, nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, daniel@makrotopia.org, linux-mediatek@lists.infradead.org, sujuan.chen@mediatek.com, robh+dt@kernel.org, krzysztof.kozlowski+dt@linaro.org, devicetree@vger.kernel.org Subject: [PATCH net-next 15/15] net: ethernet: mtk_wed: debugfs: add WED 3.0 debugfs entries Date: Thu, 14 Sep 2023 16:38:20 +0200 Message-ID: X-Mailer: git-send-email 2.41.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Sujuan Chen Introduce WED3.0 debugfs entries useful for debugging. Co-developed-by: Lorenzo Bianconi Signed-off-by: Lorenzo Bianconi Signed-off-by: Sujuan Chen --- .../net/ethernet/mediatek/mtk_wed_debugfs.c | 371 +++++++++++++++++- 1 file changed, 369 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_wed_debugfs.c b/drivers/net/ethernet/mediatek/mtk_wed_debugfs.c index 8999d0c743f3..781c691473e1 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_debugfs.c +++ b/drivers/net/ethernet/mediatek/mtk_wed_debugfs.c @@ -11,6 +11,7 @@ struct reg_dump { u16 offset; u8 type; u8 base; + u32 mask; }; enum { @@ -25,6 +26,8 @@ enum { #define DUMP_STR(_str) { _str, 0, DUMP_TYPE_STRING } #define DUMP_REG(_reg, ...) { #_reg, MTK_##_reg, __VA_ARGS__ } +#define DUMP_REG_MASK(_reg, _mask) \ + { #_mask, MTK_##_reg, DUMP_TYPE_WED, 0, MTK_##_mask } #define DUMP_RING(_prefix, _base, ...) \ { _prefix " BASE", _base, __VA_ARGS__ }, \ { _prefix " CNT", _base + 0x4, __VA_ARGS__ }, \ @@ -32,6 +35,7 @@ enum { { _prefix " DIDX", _base + 0xc, __VA_ARGS__ } #define DUMP_WED(_reg) DUMP_REG(_reg, DUMP_TYPE_WED) +#define DUMP_WED_MASK(_reg, _mask) DUMP_REG_MASK(_reg, _mask) #define DUMP_WED_RING(_base) DUMP_RING(#_base, MTK_##_base, DUMP_TYPE_WED) #define DUMP_WDMA(_reg) DUMP_REG(_reg, DUMP_TYPE_WDMA) @@ -212,18 +216,372 @@ wed_rxinfo_show(struct seq_file *s, void *data) DUMP_WED(WED_RTQM_Q2B_MIB), DUMP_WED(WED_RTQM_PFDBK_MIB), }; + static const struct reg_dump regs_wed_v3[] = { + DUMP_STR("WED RX RRO DATA"), + DUMP_WED_RING(WED_RRO_RX_D_RX(0)), + DUMP_WED_RING(WED_RRO_RX_D_RX(1)), + + DUMP_STR("WED RX MSDU PAGE"), + DUMP_WED_RING(WED_RRO_MSDU_PG_CTRL0(0)), + DUMP_WED_RING(WED_RRO_MSDU_PG_CTRL0(1)), + DUMP_WED_RING(WED_RRO_MSDU_PG_CTRL0(2)), + + DUMP_STR("WED RX IND CMD"), + DUMP_WED(WED_IND_CMD_RX_CTRL1), + DUMP_WED_MASK(WED_IND_CMD_RX_CTRL2, WED_IND_CMD_MAX_CNT), + DUMP_WED_MASK(WED_IND_CMD_RX_CTRL0, WED_IND_CMD_PROC_IDX), + DUMP_WED_MASK(RRO_IND_CMD_SIGNATURE, RRO_IND_CMD_DMA_IDX), + DUMP_WED_MASK(WED_IND_CMD_RX_CTRL0, WED_IND_CMD_MAGIC_CNT), + DUMP_WED_MASK(RRO_IND_CMD_SIGNATURE, RRO_IND_CMD_MAGIC_CNT), + DUMP_WED_MASK(WED_IND_CMD_RX_CTRL0, + WED_IND_CMD_PREFETCH_FREE_CNT), + DUMP_WED_MASK(WED_RRO_CFG1, WED_RRO_CFG1_PARTICL_SE_ID), + + DUMP_STR("WED ADDR ELEM"), + DUMP_WED(WED_ADDR_ELEM_CFG0), + DUMP_WED_MASK(WED_ADDR_ELEM_CFG1, + WED_ADDR_ELEM_PREFETCH_FREE_CNT), + + DUMP_STR("WED Route QM"), + DUMP_WED(WED_RTQM_ENQ_I2Q_DMAD_CNT), + DUMP_WED(WED_RTQM_ENQ_I2N_DMAD_CNT), + DUMP_WED(WED_RTQM_ENQ_I2Q_PKT_CNT), + DUMP_WED(WED_RTQM_ENQ_I2N_PKT_CNT), + DUMP_WED(WED_RTQM_ENQ_USED_ENTRY_CNT), + DUMP_WED(WED_RTQM_ENQ_ERR_CNT), + + DUMP_WED(WED_RTQM_DEQ_DMAD_CNT), + DUMP_WED(WED_RTQM_DEQ_Q2I_DMAD_CNT), + DUMP_WED(WED_RTQM_DEQ_PKT_CNT), + DUMP_WED(WED_RTQM_DEQ_Q2I_PKT_CNT), + 
DUMP_WED(WED_RTQM_DEQ_USED_PFDBK_CNT), + DUMP_WED(WED_RTQM_DEQ_ERR_CNT), + }; struct mtk_wed_hw *hw = s->private; struct mtk_wed_device *dev = hw->wed_dev; if (dev) { dump_wed_regs(s, dev, regs_common, ARRAY_SIZE(regs_common)); - dump_wed_regs(s, dev, regs_wed_v2, ARRAY_SIZE(regs_wed_v2)); + if (mtk_wed_is_v2(hw)) + dump_wed_regs(s, dev, + regs_wed_v2, ARRAY_SIZE(regs_wed_v2)); + else + dump_wed_regs(s, dev, + regs_wed_v3, ARRAY_SIZE(regs_wed_v3)); } return 0; } DEFINE_SHOW_ATTRIBUTE(wed_rxinfo); +static int +wed_amsdu_show(struct seq_file *s, void *data) +{ + static const struct reg_dump regs[] = { + DUMP_STR("WED AMDSU INFO"), + DUMP_WED(WED_MON_AMSDU_FIFO_DMAD), + + DUMP_STR("WED AMDSU ENG0 INFO"), + DUMP_WED(WED_MON_AMSDU_ENG_DMAD(0)), + DUMP_WED(WED_MON_AMSDU_ENG_QFPL(0)), + DUMP_WED(WED_MON_AMSDU_ENG_QENI(0)), + DUMP_WED(WED_MON_AMSDU_ENG_QENO(0)), + DUMP_WED(WED_MON_AMSDU_ENG_MERG(0)), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(0), + WED_AMSDU_ENG_MAX_PL_CNT), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(0), + WED_AMSDU_ENG_MAX_QGPP_CNT), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(0), + WED_AMSDU_ENG_CUR_ENTRY), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(0), + WED_AMSDU_ENG_MAX_BUF_MERGED), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(0), + WED_AMSDU_ENG_MAX_MSDU_MERGED), + + DUMP_STR("WED AMDSU ENG1 INFO"), + DUMP_WED(WED_MON_AMSDU_ENG_DMAD(1)), + DUMP_WED(WED_MON_AMSDU_ENG_QFPL(1)), + DUMP_WED(WED_MON_AMSDU_ENG_QENI(1)), + DUMP_WED(WED_MON_AMSDU_ENG_QENO(1)), + DUMP_WED(WED_MON_AMSDU_ENG_MERG(1)), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(1), + WED_AMSDU_ENG_MAX_PL_CNT), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(1), + WED_AMSDU_ENG_MAX_QGPP_CNT), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(1), + WED_AMSDU_ENG_CUR_ENTRY), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(2), + WED_AMSDU_ENG_MAX_BUF_MERGED), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(2), + WED_AMSDU_ENG_MAX_MSDU_MERGED), + + DUMP_STR("WED AMDSU ENG2 INFO"), + DUMP_WED(WED_MON_AMSDU_ENG_DMAD(2)), + DUMP_WED(WED_MON_AMSDU_ENG_QFPL(2)), + DUMP_WED(WED_MON_AMSDU_ENG_QENI(2)), + DUMP_WED(WED_MON_AMSDU_ENG_QENO(2)), + DUMP_WED(WED_MON_AMSDU_ENG_MERG(2)), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(2), + WED_AMSDU_ENG_MAX_PL_CNT), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(2), + WED_AMSDU_ENG_MAX_QGPP_CNT), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(2), + WED_AMSDU_ENG_CUR_ENTRY), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(2), + WED_AMSDU_ENG_MAX_BUF_MERGED), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(2), + WED_AMSDU_ENG_MAX_MSDU_MERGED), + + DUMP_STR("WED AMDSU ENG3 INFO"), + DUMP_WED(WED_MON_AMSDU_ENG_DMAD(3)), + DUMP_WED(WED_MON_AMSDU_ENG_QFPL(3)), + DUMP_WED(WED_MON_AMSDU_ENG_QENI(3)), + DUMP_WED(WED_MON_AMSDU_ENG_QENO(3)), + DUMP_WED(WED_MON_AMSDU_ENG_MERG(3)), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(3), + WED_AMSDU_ENG_MAX_PL_CNT), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(3), + WED_AMSDU_ENG_MAX_QGPP_CNT), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(3), + WED_AMSDU_ENG_CUR_ENTRY), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(3), + WED_AMSDU_ENG_MAX_BUF_MERGED), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(3), + WED_AMSDU_ENG_MAX_MSDU_MERGED), + + DUMP_STR("WED AMDSU ENG4 INFO"), + DUMP_WED(WED_MON_AMSDU_ENG_DMAD(4)), + DUMP_WED(WED_MON_AMSDU_ENG_QFPL(4)), + DUMP_WED(WED_MON_AMSDU_ENG_QENI(4)), + DUMP_WED(WED_MON_AMSDU_ENG_QENO(4)), + DUMP_WED(WED_MON_AMSDU_ENG_MERG(4)), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(4), + WED_AMSDU_ENG_MAX_PL_CNT), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(4), + WED_AMSDU_ENG_MAX_QGPP_CNT), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(4), + WED_AMSDU_ENG_CUR_ENTRY), + 
DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(4), + WED_AMSDU_ENG_MAX_BUF_MERGED), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(4), + WED_AMSDU_ENG_MAX_MSDU_MERGED), + + DUMP_STR("WED AMDSU ENG5 INFO"), + DUMP_WED(WED_MON_AMSDU_ENG_DMAD(5)), + DUMP_WED(WED_MON_AMSDU_ENG_QFPL(5)), + DUMP_WED(WED_MON_AMSDU_ENG_QENI(5)), + DUMP_WED(WED_MON_AMSDU_ENG_QENO(5)), + DUMP_WED(WED_MON_AMSDU_ENG_MERG(5)), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(5), + WED_AMSDU_ENG_MAX_PL_CNT), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(5), + WED_AMSDU_ENG_MAX_QGPP_CNT), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(5), + WED_AMSDU_ENG_CUR_ENTRY), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(5), + WED_AMSDU_ENG_MAX_BUF_MERGED), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(5), + WED_AMSDU_ENG_MAX_MSDU_MERGED), + + DUMP_STR("WED AMDSU ENG6 INFO"), + DUMP_WED(WED_MON_AMSDU_ENG_DMAD(6)), + DUMP_WED(WED_MON_AMSDU_ENG_QFPL(6)), + DUMP_WED(WED_MON_AMSDU_ENG_QENI(6)), + DUMP_WED(WED_MON_AMSDU_ENG_QENO(6)), + DUMP_WED(WED_MON_AMSDU_ENG_MERG(6)), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(6), + WED_AMSDU_ENG_MAX_PL_CNT), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(6), + WED_AMSDU_ENG_MAX_QGPP_CNT), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(6), + WED_AMSDU_ENG_CUR_ENTRY), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(6), + WED_AMSDU_ENG_MAX_BUF_MERGED), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(6), + WED_AMSDU_ENG_MAX_MSDU_MERGED), + + DUMP_STR("WED AMDSU ENG7 INFO"), + DUMP_WED(WED_MON_AMSDU_ENG_DMAD(7)), + DUMP_WED(WED_MON_AMSDU_ENG_QFPL(7)), + DUMP_WED(WED_MON_AMSDU_ENG_QENI(7)), + DUMP_WED(WED_MON_AMSDU_ENG_QENO(7)), + DUMP_WED(WED_MON_AMSDU_ENG_MERG(7)), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(7), + WED_AMSDU_ENG_MAX_PL_CNT), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(7), + WED_AMSDU_ENG_MAX_QGPP_CNT), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(7), + WED_AMSDU_ENG_CUR_ENTRY), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(7), + WED_AMSDU_ENG_MAX_BUF_MERGED), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(4), + WED_AMSDU_ENG_MAX_MSDU_MERGED), + + DUMP_STR("WED AMDSU ENG8 INFO"), + DUMP_WED(WED_MON_AMSDU_ENG_DMAD(8)), + DUMP_WED(WED_MON_AMSDU_ENG_QFPL(8)), + DUMP_WED(WED_MON_AMSDU_ENG_QENI(8)), + DUMP_WED(WED_MON_AMSDU_ENG_QENO(8)), + DUMP_WED(WED_MON_AMSDU_ENG_MERG(8)), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(8), + WED_AMSDU_ENG_MAX_PL_CNT), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(8), + WED_AMSDU_ENG_MAX_QGPP_CNT), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(8), + WED_AMSDU_ENG_CUR_ENTRY), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(8), + WED_AMSDU_ENG_MAX_BUF_MERGED), + DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(8), + WED_AMSDU_ENG_MAX_MSDU_MERGED), + + DUMP_STR("WED QMEM INFO"), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_CNT(0), WED_AMSDU_QMEM_FQ_CNT), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_CNT(0), WED_AMSDU_QMEM_SP_QCNT), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_CNT(1), WED_AMSDU_QMEM_TID0_QCNT), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_CNT(1), WED_AMSDU_QMEM_TID1_QCNT), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_CNT(2), WED_AMSDU_QMEM_TID2_QCNT), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_CNT(2), WED_AMSDU_QMEM_TID3_QCNT), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_CNT(3), WED_AMSDU_QMEM_TID4_QCNT), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_CNT(3), WED_AMSDU_QMEM_TID5_QCNT), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_CNT(4), WED_AMSDU_QMEM_TID6_QCNT), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_CNT(4), WED_AMSDU_QMEM_TID7_QCNT), + + DUMP_STR("WED QMEM HEAD INFO"), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(0), WED_AMSDU_QMEM_FQ_HEAD), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(0), WED_AMSDU_QMEM_SP_QHEAD), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(1), WED_AMSDU_QMEM_TID0_QHEAD), + 
DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(1), WED_AMSDU_QMEM_TID1_QHEAD), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(2), WED_AMSDU_QMEM_TID2_QHEAD), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(2), WED_AMSDU_QMEM_TID3_QHEAD), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(3), WED_AMSDU_QMEM_TID4_QHEAD), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(3), WED_AMSDU_QMEM_TID5_QHEAD), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(4), WED_AMSDU_QMEM_TID6_QHEAD), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(4), WED_AMSDU_QMEM_TID7_QHEAD), + + DUMP_STR("WED QMEM TAIL INFO"), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(5), WED_AMSDU_QMEM_FQ_TAIL), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(5), WED_AMSDU_QMEM_SP_QTAIL), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(6), WED_AMSDU_QMEM_TID0_QTAIL), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(6), WED_AMSDU_QMEM_TID1_QTAIL), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(7), WED_AMSDU_QMEM_TID2_QTAIL), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(7), WED_AMSDU_QMEM_TID3_QTAIL), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(8), WED_AMSDU_QMEM_TID4_QTAIL), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(8), WED_AMSDU_QMEM_TID5_QTAIL), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(9), WED_AMSDU_QMEM_TID6_QTAIL), + DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(9), WED_AMSDU_QMEM_TID7_QTAIL), + + DUMP_STR("WED HIFTXD MSDU INFO"), + DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(1)), + DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(2)), + DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(3)), + DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(4)), + DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(5)), + DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(6)), + DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(7)), + DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(8)), + DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(9)), + DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(10)), + DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(11)), + DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(12)), + DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(13)), + }; + struct mtk_wed_hw *hw = s->private; + struct mtk_wed_device *dev = hw->wed_dev; + + if (dev) + dump_wed_regs(s, dev, regs, ARRAY_SIZE(regs)); + + return 0; +} +DEFINE_SHOW_ATTRIBUTE(wed_amsdu); + +static int +wed_rtqm_show(struct seq_file *s, void *data) +{ + static const struct reg_dump regs[] = { + DUMP_STR("WED Route QM IGRS0(N2H + Recycle)"), + DUMP_WED(WED_RTQM_IGRS0_I2HW_DMAD_CNT), + DUMP_WED(WED_RTQM_IGRS0_I2H_DMAD_CNT(0)), + DUMP_WED(WED_RTQM_IGRS0_I2H_DMAD_CNT(1)), + DUMP_WED(WED_RTQM_IGRS0_I2HW_PKT_CNT), + DUMP_WED(WED_RTQM_IGRS0_I2H_PKT_CNT(0)), + DUMP_WED(WED_RTQM_IGRS0_I2H_PKT_CNT(0)), + DUMP_WED(WED_RTQM_IGRS0_FDROP_CNT), + + DUMP_STR("WED Route QM IGRS1(Legacy)"), + DUMP_WED(WED_RTQM_IGRS1_I2HW_DMAD_CNT), + DUMP_WED(WED_RTQM_IGRS1_I2H_DMAD_CNT(0)), + DUMP_WED(WED_RTQM_IGRS1_I2H_DMAD_CNT(1)), + DUMP_WED(WED_RTQM_IGRS1_I2HW_PKT_CNT), + DUMP_WED(WED_RTQM_IGRS1_I2H_PKT_CNT(0)), + DUMP_WED(WED_RTQM_IGRS1_I2H_PKT_CNT(1)), + DUMP_WED(WED_RTQM_IGRS1_FDROP_CNT), + + DUMP_STR("WED Route QM IGRS2(RRO3.0)"), + DUMP_WED(WED_RTQM_IGRS2_I2HW_DMAD_CNT), + DUMP_WED(WED_RTQM_IGRS2_I2H_DMAD_CNT(0)), + DUMP_WED(WED_RTQM_IGRS2_I2H_DMAD_CNT(1)), + DUMP_WED(WED_RTQM_IGRS2_I2HW_PKT_CNT), + DUMP_WED(WED_RTQM_IGRS2_I2H_PKT_CNT(0)), + DUMP_WED(WED_RTQM_IGRS2_I2H_PKT_CNT(1)), + DUMP_WED(WED_RTQM_IGRS2_FDROP_CNT), + + DUMP_STR("WED Route QM IGRS3(DEBUG)"), + DUMP_WED(WED_RTQM_IGRS2_I2HW_DMAD_CNT), + DUMP_WED(WED_RTQM_IGRS3_I2H_DMAD_CNT(0)), + DUMP_WED(WED_RTQM_IGRS3_I2H_DMAD_CNT(1)), + DUMP_WED(WED_RTQM_IGRS3_I2HW_PKT_CNT), + DUMP_WED(WED_RTQM_IGRS3_I2H_PKT_CNT(0)), + DUMP_WED(WED_RTQM_IGRS3_I2H_PKT_CNT(1)), + 
DUMP_WED(WED_RTQM_IGRS3_FDROP_CNT), + }; + struct mtk_wed_hw *hw = s->private; + struct mtk_wed_device *dev = hw->wed_dev; + + if (dev) + dump_wed_regs(s, dev, regs, ARRAY_SIZE(regs)); + + return 0; +} +DEFINE_SHOW_ATTRIBUTE(wed_rtqm); + +static int +wed_rro_show(struct seq_file *s, void *data) +{ + static const struct reg_dump regs[] = { + DUMP_STR("RRO/IND CMD CNT"), + DUMP_WED(WED_RX_IND_CMD_CNT(1)), + DUMP_WED(WED_RX_IND_CMD_CNT(2)), + DUMP_WED(WED_RX_IND_CMD_CNT(3)), + DUMP_WED(WED_RX_IND_CMD_CNT(4)), + DUMP_WED(WED_RX_IND_CMD_CNT(5)), + DUMP_WED(WED_RX_IND_CMD_CNT(6)), + DUMP_WED(WED_RX_IND_CMD_CNT(7)), + DUMP_WED(WED_RX_IND_CMD_CNT(8)), + DUMP_WED_MASK(WED_RX_IND_CMD_CNT(9), + WED_IND_CMD_MAGIC_CNT_FAIL_CNT), + + DUMP_WED(WED_RX_ADDR_ELEM_CNT(0)), + DUMP_WED_MASK(WED_RX_ADDR_ELEM_CNT(1), + WED_ADDR_ELEM_SIG_FAIL_CNT), + DUMP_WED(WED_RX_MSDU_PG_CNT(1)), + DUMP_WED(WED_RX_MSDU_PG_CNT(2)), + DUMP_WED(WED_RX_MSDU_PG_CNT(3)), + DUMP_WED(WED_RX_MSDU_PG_CNT(4)), + DUMP_WED(WED_RX_MSDU_PG_CNT(5)), + DUMP_WED_MASK(WED_RX_PN_CHK_CNT, + WED_PN_CHK_FAIL_CNT), + }; + struct mtk_wed_hw *hw = s->private; + struct mtk_wed_device *dev = hw->wed_dev; + + if (dev) + dump_wed_regs(s, dev, regs, ARRAY_SIZE(regs)); + + return 0; +} +DEFINE_SHOW_ATTRIBUTE(wed_rro); + static int mtk_wed_reg_set(void *data, u64 val) { @@ -264,7 +622,16 @@ void mtk_wed_hw_add_debugfs(struct mtk_wed_hw *hw) debugfs_create_u32("regidx", 0600, dir, &hw->debugfs_reg); debugfs_create_file_unsafe("regval", 0600, dir, hw, &fops_regval); debugfs_create_file_unsafe("txinfo", 0400, dir, hw, &wed_txinfo_fops); - if (!mtk_wed_is_v1(hw)) + if (!mtk_wed_is_v1(hw)) { debugfs_create_file_unsafe("rxinfo", 0400, dir, hw, &wed_rxinfo_fops); + if (mtk_wed_is_v3_or_greater(hw)) { + debugfs_create_file_unsafe("amsdu", 0400, dir, hw, + &wed_amsdu_fops); + debugfs_create_file_unsafe("rtqm", 0400, dir, hw, + &wed_rtqm_fops); + debugfs_create_file_unsafe("rro", 0400, dir, hw, + &wed_rro_fops); + } + } }
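A closing note on the DUMP_WED_MASK()/DUMP_REG_MASK() entries used above: they attach a mask to a reg_dump entry so a single hardware register can be reported as several named bit-fields. The hunk that applies the mask inside dump_wed_regs() is not quoted in this excerpt, so the snippet below is only a sketch of what such an entry implies (hypothetical helper, not the driver's code): mask the raw value and shift the field down to bit 0.

#include <linux/bitops.h>
#include <linux/printk.h>
#include <linux/types.h>

/* Hypothetical helper: report a single named bit-field of a register.
 * ffs() returns the 1-based position of the mask's lowest set bit, so
 * the shift aligns the field to bit 0; a zero mask means "raw value".
 */
static void demo_dump_field(const char *name, u32 raw, u32 mask)
{
	u32 val = raw;

	if (mask)
		val = (raw & mask) >> (ffs(mask) - 1);

	pr_info("%-32s %08x\n", name, val);
}

For example, an entry such as DUMP_WED_MASK(WED_IND_CMD_RX_CTRL0, WED_IND_CMD_PROC_IDX) from the table above would then print just the PROC_IDX field of that control register rather than the whole word.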