From patchwork Wed Jan 18 11:38:50 2023
X-Patchwork-Submitter: Yanchao Yang (杨彦超)
X-Patchwork-Id: 13106230
From: Yanchao Yang
To: Loic Poulain, Sergey Ryazanov, Johannes Berg, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev ML, kernel ML
CC: Intel experts, Chetan, MTK ML, Liang Lu, Haijun Liu, Hua Yang,
    Ting Wang, Felix Chen, Mingliang Xu, Min Dong, Aiden Wang,
    Guohao Zhang, Chris Feng, Yanchao Yang, Lambert Wang,
    Mingchuang Qiao, Xiayu Zhang, Haozhe Chang
Subject: [PATCH net-next v2 03/12] net: wwan: tmi: Add control DMA interface
Date: Wed, 18 Jan 2023 19:38:50 +0800
Message-ID: <20230118113859.175836-4-yanchao.yang@mediatek.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20230118113859.175836-1-yanchao.yang@mediatek.com>
References: <20230118113859.175836-1-yanchao.yang@mediatek.com>

Cross Layer Direct Memory Access (CLDMA) is the hardware interface used by the
control plane to transfer data between the host and the device. It supports
eight hardware queues each for the device AP and the modem.

The CLDMA driver uses General Purpose Descriptors (GPDs) to describe
transaction information in a form the CLDMA hardware can recognize. Once a
CLDMA transaction is started, the hardware fetches and parses GPDs to transfer
the data correctly. To facilitate CLDMA transactions, each queue uses a GPD
ring. After a transaction is started, the CLDMA hardware traverses the GPD
ring to transfer data between the host and the device until no GPD is
available.

CLDMA TX flow:
Once a TX service receives TX data from the port layer, it uses the APIs
exported by the CLDMA driver to configure a GPD with the DMA address of the TX
data. The service then triggers CLDMA to fetch the first available GPD and
transfer the data.

CLDMA RX flow:
When there is RX data from the MD, the CLDMA hardware asserts an interrupt to
notify the host to fetch the data and dispatch it to the FSM (for handshake
messages) or to the port layer. After CLDMA opening finishes, all RX GPDs are
filled and ready to receive data from the device.
Signed-off-by: Yanchao Yang Signed-off-by: Min Dong --- drivers/net/wwan/mediatek/Makefile | 6 +- drivers/net/wwan/mediatek/mtk_cldma.c | 260 +++++ drivers/net/wwan/mediatek/mtk_cldma.h | 158 +++ drivers/net/wwan/mediatek/mtk_ctrl_plane.h | 48 + .../wwan/mediatek/pcie/mtk_cldma_drv_t800.c | 939 ++++++++++++++++++ .../wwan/mediatek/pcie/mtk_cldma_drv_t800.h | 20 + 6 files changed, 1429 insertions(+), 2 deletions(-) create mode 100644 drivers/net/wwan/mediatek/mtk_cldma.c create mode 100644 drivers/net/wwan/mediatek/mtk_cldma.h create mode 100644 drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.c create mode 100644 drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.h diff --git a/drivers/net/wwan/mediatek/Makefile b/drivers/net/wwan/mediatek/Makefile index 192f08e08a33..f607fb1dad6e 100644 --- a/drivers/net/wwan/mediatek/Makefile +++ b/drivers/net/wwan/mediatek/Makefile @@ -4,8 +4,10 @@ MODULE_NAME := mtk_tmi mtk_tmi-y = \ pcie/mtk_pci.o \ - mtk_dev.o \ - mtk_ctrl_plane.o + mtk_dev.o \ + mtk_ctrl_plane.o \ + mtk_cldma.o \ + pcie/mtk_cldma_drv_t800.o ccflags-y += -I$(srctree)/$(src)/ ccflags-y += -I$(srctree)/$(src)/pcie/ diff --git a/drivers/net/wwan/mediatek/mtk_cldma.c b/drivers/net/wwan/mediatek/mtk_cldma.c new file mode 100644 index 000000000000..f9531f48f898 --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_cldma.c @@ -0,0 +1,260 @@ +// SPDX-License-Identifier: BSD-3-Clause-Clear +/* + * Copyright (c) 2022, MediaTek Inc. + */ + +#include +#include +#include +#include + +#include "mtk_cldma.h" +#include "mtk_cldma_drv_t800.h" + +/** + * mtk_cldma_init() - Initialize CLDMA + * @trans: pointer to transaction structure + * + * Return: + * * 0 - OK + * * -ENOMEM - out of memory + */ +static int mtk_cldma_init(struct mtk_ctrl_trans *trans) +{ + struct cldma_dev *cd; + + cd = devm_kzalloc(trans->mdev->dev, sizeof(*cd), GFP_KERNEL); + if (!cd) + return -ENOMEM; + + cd->trans = trans; + cd->hw_ops.init = mtk_cldma_hw_init_t800; + cd->hw_ops.exit = mtk_cldma_hw_exit_t800; + cd->hw_ops.txq_alloc = mtk_cldma_txq_alloc_t800; + cd->hw_ops.rxq_alloc = mtk_cldma_rxq_alloc_t800; + cd->hw_ops.txq_free = mtk_cldma_txq_free_t800; + cd->hw_ops.rxq_free = mtk_cldma_rxq_free_t800; + cd->hw_ops.start_xfer = mtk_cldma_start_xfer_t800; + + trans->dev[CLDMA_CLASS_ID] = cd; + + return 0; +} + +/** + * mtk_cldma_exit() - De-Initialize CLDMA + * @trans: pointer to transaction structure + * + * Return: + * * 0 - OK + */ +static int mtk_cldma_exit(struct mtk_ctrl_trans *trans) +{ + struct cldma_dev *cd; + + cd = trans->dev[CLDMA_CLASS_ID]; + if (!cd) + return 0; + + devm_kfree(trans->mdev->dev, cd); + + return 0; +} + +/** + * mtk_cldma_open() - Initialize CLDMA hardware queue + * @cd: pointer to CLDMA device + * @skb: pointer to socket buffer + * + * Return: + * * 0 - OK + * * -EBUSY - hardware queue is busy + * * -EIO - failed to initialize hardware queue + * * -EINVAL - invalid input parameters + */ +static int mtk_cldma_open(struct cldma_dev *cd, struct sk_buff *skb) +{ + struct trb_open_priv *trb_open_priv = (struct trb_open_priv *)skb->data; + struct trb *trb = (struct trb *)skb->cb; + struct cldma_hw *hw; + struct virtq *vq; + struct txq *txq; + struct rxq *rxq; + int err = 0; + + vq = cd->trans->vq_tbl + trb->vqno; + hw = cd->cldma_hw[vq->hif_id & HIF_ID_BITMASK]; + trb_open_priv->tx_mtu = vq->tx_mtu; + trb_open_priv->rx_mtu = vq->rx_mtu; + if (unlikely(vq->rxqno < 0 || vq->rxqno >= HW_QUEUE_NUM) || + unlikely(vq->txqno < 0 || vq->txqno >= HW_QUEUE_NUM)) { + err = -EINVAL; + goto exit; + } + + if 
(hw->txq[vq->txqno] || hw->rxq[vq->rxqno]) { + err = -EBUSY; + goto exit; + } + + txq = cd->hw_ops.txq_alloc(hw, skb); + if (!txq) { + err = -EIO; + goto exit; + } + + rxq = cd->hw_ops.rxq_alloc(hw, skb); + if (!rxq) { + err = -EIO; + cd->hw_ops.txq_free(hw, trb->vqno); + goto exit; + } + +exit: + trb->status = err; + trb->trb_complete(skb); + + return err; +} + +/** + * mtk_cldma_tx() - start CLDMA TX transaction + * @cd: pointer to CLDMA device + * @skb: pointer to socket buffer + * + * Return: + * * 0 - OK + * * -EPIPE - hardware queue is broken + */ +static int mtk_cldma_tx(struct cldma_dev *cd, struct sk_buff *skb) +{ + struct trb *trb = (struct trb *)skb->cb; + struct cldma_hw *hw; + struct virtq *vq; + struct txq *txq; + + vq = cd->trans->vq_tbl + trb->vqno; + hw = cd->cldma_hw[vq->hif_id & HIF_ID_BITMASK]; + txq = hw->txq[vq->txqno]; + if (txq->is_stopping) + return -EPIPE; + + cd->hw_ops.start_xfer(hw, vq->txqno); + + return 0; +} + +/** + * mtk_cldma_close() - De-Initialize CLDMA hardware queue + * @cd: pointer to CLDMA device + * @skb: pointer to socket buffer + * + * Return: + * * 0 - OK + */ +static int mtk_cldma_close(struct cldma_dev *cd, struct sk_buff *skb) +{ + struct trb *trb = (struct trb *)skb->cb; + struct cldma_hw *hw; + struct virtq *vq; + + vq = cd->trans->vq_tbl + trb->vqno; + hw = cd->cldma_hw[vq->hif_id & HIF_ID_BITMASK]; + + cd->hw_ops.txq_free(hw, trb->vqno); + cd->hw_ops.rxq_free(hw, trb->vqno); + + trb->status = 0; + trb->trb_complete(skb); + + return 0; +} + +static int mtk_cldma_submit_tx(void *dev, struct sk_buff *skb) +{ + struct trb *trb = (struct trb *)skb->cb; + struct cldma_dev *cd = dev; + dma_addr_t data_dma_addr; + struct cldma_hw *hw; + struct tx_req *req; + struct virtq *vq; + struct txq *txq; + int err; + + vq = cd->trans->vq_tbl + trb->vqno; + hw = cd->cldma_hw[vq->hif_id & HIF_ID_BITMASK]; + txq = hw->txq[vq->txqno]; + + if (!txq->req_budget) + return -EAGAIN; + + data_dma_addr = dma_map_single(hw->mdev->dev, skb->data, skb->len, DMA_TO_DEVICE); + err = dma_mapping_error(hw->mdev->dev, data_dma_addr); + if (unlikely(err)) { + dev_err(hw->mdev->dev, "Failed to map dma!\n"); + return err; + } + + mutex_lock(&txq->lock); + txq->req_budget--; + mutex_unlock(&txq->lock); + + req = txq->req_pool + txq->wr_idx; + req->gpd->tx_gpd.debug_id = 0x01; + req->gpd->tx_gpd.data_buff_ptr_h = cpu_to_le32((u64)(data_dma_addr) >> 32); + req->gpd->tx_gpd.data_buff_ptr_l = cpu_to_le32(data_dma_addr); + req->gpd->tx_gpd.data_buff_len = cpu_to_le16(skb->len); + req->gpd->tx_gpd.gpd_flags = CLDMA_GPD_FLAG_IOC | CLDMA_GPD_FLAG_HWO; + + req->data_vm_addr = skb->data; + req->data_dma_addr = data_dma_addr; + req->data_len = skb->len; + req->skb = skb; + txq->wr_idx = (txq->wr_idx + 1) % txq->req_pool_size; + + wmb(); /* ensure GPD setup done before HW start */ + + return 0; +} + +/** + * mtk_cldma_trb_process() - Dispatch trb request to low-level CLDMA routine + * @dev: pointer to CLDMA device + * @skb: pointer to socket buffer + * + * Return: + * * 0 - OK + * * -EBUSY - hardware queue is busy + * * -EINVAL - invalid input + * * -EIO - failed to initialize hardware queue + * * -EPIPE - hardware queue is broken + */ +static int mtk_cldma_trb_process(void *dev, struct sk_buff *skb) +{ + struct trb *trb = (struct trb *)skb->cb; + struct cldma_dev *cd = dev; + int err; + + switch (trb->cmd) { + case TRB_CMD_ENABLE: + err = mtk_cldma_open(cd, skb); + break; + case TRB_CMD_TX: + err = mtk_cldma_tx(cd, skb); + break; + case TRB_CMD_DISABLE: + err = mtk_cldma_close(cd, skb); + 
break; + default: + err = -EINVAL; + } + + return err; +} + +struct hif_ops cldma_ops = { + .init = mtk_cldma_init, + .exit = mtk_cldma_exit, + .trb_process = mtk_cldma_trb_process, + .submit_tx = mtk_cldma_submit_tx, +}; diff --git a/drivers/net/wwan/mediatek/mtk_cldma.h b/drivers/net/wwan/mediatek/mtk_cldma.h new file mode 100644 index 000000000000..4fd5f826bcf6 --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_cldma.h @@ -0,0 +1,158 @@ +/* SPDX-License-Identifier: BSD-3-Clause-Clear + * + * Copyright (c) 2022, MediaTek Inc. + */ + +#ifndef __MTK_CLDMA_H__ +#define __MTK_CLDMA_H__ + +#include + +#include "mtk_ctrl_plane.h" +#include "mtk_dev.h" + +#define HW_QUEUE_NUM 8 +#define ALLQ (0XFF) +#define LINK_ERROR_VAL (0XFFFFFFFF) + +#define CLDMA_CLASS_ID 0 + +#define NR_CLDMA 2 +#define CLDMA0 (((CLDMA_CLASS_ID) << HIF_CLASS_SHIFT) + 0) +#define CLDMA1 (((CLDMA_CLASS_ID) << HIF_CLASS_SHIFT) + 1) + +#define TXQ(N) (N) +#define RXQ(N) (N) + +#define CLDMA_GPD_FLAG_HWO BIT(0) +#define CLDMA_GPD_FLAG_IOC BIT(7) + +enum mtk_ip_busy_src { + IP_BUSY_TXDONE = 0, + IP_BUSY_RXDONE = 24, +}; + +enum mtk_intr_type { + QUEUE_XFER_DONE = 0, + QUEUE_ERROR = 16, + INVALID_TYPE +}; + +enum mtk_tx_rx { + DIR_TX, + DIR_RX, + INVALID_DIR +}; + +union gpd { + struct { + u8 gpd_flags; + u8 non_used1; + __le16 data_allow_len; + __le32 next_gpd_ptr_h; + __le32 next_gpd_ptr_l; + __le32 data_buff_ptr_h; + __le32 data_buff_ptr_l; + __le16 data_recv_len; + u8 non_used2; + u8 debug_id; + } rx_gpd; + + struct { + u8 gpd_flags; + u8 non_used1; + u8 non_used2; + u8 debug_id; + __le32 next_gpd_ptr_h; + __le32 next_gpd_ptr_l; + __le32 data_buff_ptr_h; + __le32 data_buff_ptr_l; + __le16 data_buff_len; + __le16 non_used3; + } tx_gpd; +}; + +struct rx_req { + union gpd *gpd; + int mtu; + struct sk_buff *skb; + size_t data_len; + dma_addr_t gpd_dma_addr; + dma_addr_t data_dma_addr; +}; + +struct rxq { + struct cldma_hw *hw; + int rxqno; + int vqno; + struct virtq *vq; + struct work_struct rx_done_work; + struct rx_req *req_pool; + int req_pool_size; + int free_idx; + unsigned short rx_done_cnt; + void *arg; + int (*rx_done)(struct sk_buff *skb, int len, void *priv); +}; + +struct tx_req { + union gpd *gpd; + int mtu; + void *data_vm_addr; + size_t data_len; + dma_addr_t data_dma_addr; + dma_addr_t gpd_dma_addr; + struct sk_buff *skb; + int (*trb_complete)(struct sk_buff *skb); +}; + +struct txq { + struct cldma_hw *hw; + int txqno; + int vqno; + struct virtq *vq; + struct mutex lock; /* protect structure fields */ + struct work_struct tx_done_work; + struct tx_req *req_pool; + int req_pool_size; + int req_budget; + int wr_idx; + int free_idx; + bool tx_started; + bool is_stopping; + unsigned short tx_done_cnt; +}; + +struct cldma_dev; +struct cldma_hw; + +struct cldma_hw_ops { + int (*init)(struct cldma_dev *cd, int hif_id); + int (*exit)(struct cldma_dev *cd, int hif_id); + struct txq* (*txq_alloc)(struct cldma_hw *hw, struct sk_buff *skb); + struct rxq* (*rxq_alloc)(struct cldma_hw *hw, struct sk_buff *skb); + int (*txq_free)(struct cldma_hw *hw, int vqno); + int (*rxq_free)(struct cldma_hw *hw, int vqno); + int (*start_xfer)(struct cldma_hw *hw, int qno); +}; + +struct cldma_hw { + int hif_id; + int base_addr; + int pci_ext_irq_id; + struct mtk_md_dev *mdev; + struct cldma_dev *cd; + struct txq *txq[HW_QUEUE_NUM]; + struct rxq *rxq[HW_QUEUE_NUM]; + struct dma_pool *dma_pool; + struct workqueue_struct *wq; +}; + +struct cldma_dev { + struct cldma_hw *cldma_hw[NR_CLDMA]; + struct mtk_ctrl_trans *trans; + struct cldma_hw_ops 
hw_ops; +}; + +extern struct hif_ops cldma_ops; +#endif diff --git a/drivers/net/wwan/mediatek/mtk_ctrl_plane.h b/drivers/net/wwan/mediatek/mtk_ctrl_plane.h index 77af4248cb74..32cd8dc7bdb7 100644 --- a/drivers/net/wwan/mediatek/mtk_ctrl_plane.h +++ b/drivers/net/wwan/mediatek/mtk_ctrl_plane.h @@ -14,7 +14,55 @@ #define VQ_MTU_3_5K (0xE00) #define VQ_MTU_63K (0xFC00) +#define HIF_CLASS_NUM (1) +#define HIF_CLASS_SHIFT (8) +#define HIF_ID_BITMASK (0x01) + +enum mtk_trb_cmd_type { + TRB_CMD_ENABLE = 1, + TRB_CMD_TX, + TRB_CMD_DISABLE, +}; + +struct trb_open_priv { + u16 tx_mtu; + u16 rx_mtu; + int (*rx_done)(struct sk_buff *skb, int len, void *priv); +}; + +struct trb { + u8 vqno; + enum mtk_trb_cmd_type cmd; + int status; + struct kref kref; + void *priv; + int (*trb_complete)(struct sk_buff *skb); +}; + +struct virtq { + int vqno; + int hif_id; + int txqno; + int rxqno; + int tx_mtu; + int rx_mtu; + int tx_req_num; + int rx_req_num; +}; + +struct mtk_ctrl_trans; + +struct hif_ops { + int (*init)(struct mtk_ctrl_trans *trans); + int (*exit)(struct mtk_ctrl_trans *trans); + int (*submit_tx)(void *dev, struct sk_buff *skb); + int (*trb_process)(void *dev, struct sk_buff *skb); +}; + struct mtk_ctrl_trans { + struct virtq *vq_tbl; + void *dev[HIF_CLASS_NUM]; + struct hif_ops *ops[HIF_CLASS_NUM]; struct mtk_ctrl_blk *ctrl_blk; struct mtk_md_dev *mdev; }; diff --git a/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.c b/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.c new file mode 100644 index 000000000000..bd9a7a7bf18f --- /dev/null +++ b/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.c @@ -0,0 +1,939 @@ +// SPDX-License-Identifier: BSD-3-Clause-Clear +/* + * Copyright (c) 2022, MediaTek Inc. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "mtk_cldma_drv_t800.h" +#include "mtk_ctrl_plane.h" +#include "mtk_dev.h" +#include "mtk_reg.h" + +#define DMA_POOL_NAME_LEN 64 + +#define CLDMA_STOP_HW_WAIT_TIME_MS (20) +#define CLDMA_STOP_HW_POLLING_MAX_CNT (10) + +#define CLDMA0_BASE_ADDR (0x1021C000) +#define CLDMA1_BASE_ADDR (0x1021E000) + +/* CLDMA IN(Tx) */ +#define REG_CLDMA_UL_START_ADDRL_0 (0x0004) +#define REG_CLDMA_UL_START_ADDRH_0 (0x0008) +#define REG_CLDMA_UL_STATUS (0x0084) +#define REG_CLDMA_UL_START_CMD (0x0088) +#define REG_CLDMA_UL_RESUME_CMD (0x008C) +#define REG_CLDMA_UL_STOP_CMD (0x0090) +#define REG_CLDMA_UL_ERROR (0x0094) +#define REG_CLDMA_UL_CFG (0x0098) +#define REG_CLDMA_UL_DUMMY_0 (0x009C) + +/* CLDMA OUT(Rx) */ +#define REG_CLDMA_SO_START_CMD (0x0400 + 0x01BC) +#define REG_CLDMA_SO_RESUME_CMD (0x0400 + 0x01C0) +#define REG_CLDMA_SO_STOP_CMD (0x0400 + 0x01C4) +#define REG_CLDMA_SO_DUMMY_0 (0x0400 + 0x0108) +#define REG_CLDMA_SO_CFG (0x0400 + 0x0004) +#define REG_CLDMA_SO_START_ADDRL_0 (0x0400 + 0x0078) +#define REG_CLDMA_SO_START_ADDRH_0 (0x0400 + 0x007C) +#define REG_CLDMA_SO_CUR_ADDRL_0 (0x0400 + 0x00B8) +#define REG_CLDMA_SO_CUR_ADDRH_0 (0x0400 + 0x00BC) +#define REG_CLDMA_SO_STATUS (0x0400 + 0x00F8) + +/* CLDMA MISC */ +#define REG_CLDMA_L2TISAR0 (0x0800 + 0x0010) +#define REG_CLDMA_L2TISAR1 (0x0800 + 0x0014) +#define REG_CLDMA_L2TIMR0 (0x0800 + 0x0018) +#define REG_CLDMA_L2TIMR1 (0x0800 + 0x001C) +#define REG_CLDMA_L2TIMCR0 (0x0800 + 0x0020) +#define REG_CLDMA_L2TIMCR1 (0x0800 + 0x0024) +#define REG_CLDMA_L2TIMSR0 (0x0800 + 0x0028) +#define REG_CLDMA_L2TIMSR1 (0x0800 + 0x002C) +#define REG_CLDMA_L3TISAR0 (0x0800 + 0x0030) +#define REG_CLDMA_L3TISAR1 (0x0800 + 0x0034) +#define REG_CLDMA_L3TIMR0 (0x0800 + 
0x0038) +#define REG_CLDMA_L3TIMR1 (0x0800 + 0x003C) +#define REG_CLDMA_L3TIMCR0 (0x0800 + 0x0040) +#define REG_CLDMA_L3TIMCR1 (0x0800 + 0x0044) +#define REG_CLDMA_L3TIMSR0 (0x0800 + 0x0048) +#define REG_CLDMA_L3TIMSR1 (0x0800 + 0x004C) +#define REG_CLDMA_L2RISAR0 (0x0800 + 0x0050) +#define REG_CLDMA_L2RISAR1 (0x0800 + 0x0054) +#define REG_CLDMA_L3RISAR0 (0x0800 + 0x0070) +#define REG_CLDMA_L3RISAR1 (0x0800 + 0x0074) +#define REG_CLDMA_L3RIMR0 (0x0800 + 0x0078) +#define REG_CLDMA_L3RIMR1 (0x0800 + 0x007C) +#define REG_CLDMA_L3RIMCR0 (0x0800 + 0x0080) +#define REG_CLDMA_L3RIMCR1 (0x0800 + 0x0084) +#define REG_CLDMA_L3RIMSR0 (0x0800 + 0x0088) +#define REG_CLDMA_L3RIMSR1 (0x0800 + 0x008C) +#define REG_CLDMA_IP_BUSY (0x0800 + 0x00B4) +#define REG_CLDMA_L3TISAR2 (0x0800 + 0x00C0) +#define REG_CLDMA_L3TIMR2 (0x0800 + 0x00C4) +#define REG_CLDMA_L3TIMCR2 (0x0800 + 0x00C8) +#define REG_CLDMA_L3TIMSR2 (0x0800 + 0x00CC) + +#define REG_CLDMA_L2RIMR0 (0x0800 + 0x00E8) +#define REG_CLDMA_L2RIMR1 (0x0800 + 0x00EC) +#define REG_CLDMA_L2RIMCR0 (0x0800 + 0x00F0) +#define REG_CLDMA_L2RIMCR1 (0x0800 + 0x00F4) +#define REG_CLDMA_L2RIMSR0 (0x0800 + 0x00F8) +#define REG_CLDMA_L2RIMSR1 (0x0800 + 0x00FC) + +#define REG_CLDMA_INT_EAP_USIP_MASK (0x0800 + 0x011C) +#define REG_CLDMA_RQ1_GPD_DONE_CNT (0x0800 + 0x0174) +#define REG_CLDMA_TQ1_GPD_DONE_CNT (0x0800 + 0x0184) + +#define REG_CLDMA_IP_BUSY_TO_PCIE_MASK (0x0800 + 0x0194) +#define REG_CLDMA_IP_BUSY_TO_PCIE_MASK_SET (0x0800 + 0x0198) +#define REG_CLDMA_IP_BUSY_TO_PCIE_MASK_CLR (0x0800 + 0x019C) + +#define REG_CLDMA_IP_BUSY_TO_AP_MASK (0x0800 + 0x0200) +#define REG_CLDMA_IP_BUSY_TO_AP_MASK_SET (0x0800 + 0x0204) +#define REG_CLDMA_IP_BUSY_TO_AP_MASK_CLR (0x0800 + 0x0208) + +/* CLDMA RESET */ +#define REG_INFRA_RST0_SET (0x120) +#define REG_INFRA_RST0_CLR (0x124) +#define REG_CLDMA0_RST_SET_BIT (8) +#define REG_CLDMA0_RST_CLR_BIT (8) + +static void mtk_cldma_setup_start_addr(struct mtk_md_dev *mdev, int base, + enum mtk_tx_rx dir, int qno, dma_addr_t addr) +{ + unsigned int addr_l; + unsigned int addr_h; + + if (dir == DIR_TX) { + addr_l = base + REG_CLDMA_UL_START_ADDRL_0 + qno * HW_QUEUE_NUM; + addr_h = base + REG_CLDMA_UL_START_ADDRH_0 + qno * HW_QUEUE_NUM; + } else { + addr_l = base + REG_CLDMA_SO_START_ADDRL_0 + qno * HW_QUEUE_NUM; + addr_h = base + REG_CLDMA_SO_START_ADDRH_0 + qno * HW_QUEUE_NUM; + } + + mtk_hw_write32(mdev, addr_l, (u32)addr); + mtk_hw_write32(mdev, addr_h, (u32)((u64)addr >> 32)); +} + +static void mtk_cldma_mask_intr(struct mtk_md_dev *mdev, int base, + enum mtk_tx_rx dir, int qno, enum mtk_intr_type type) +{ + u32 addr; + u32 val; + + if (unlikely(qno < 0 || qno >= HW_QUEUE_NUM)) + return; + + if (dir == DIR_TX) + addr = base + REG_CLDMA_L2TIMSR0; + else + addr = base + REG_CLDMA_L2RIMSR0; + + if (qno == ALLQ) + val = qno << type; + else + val = BIT(qno) << type; + + mtk_hw_write32(mdev, addr, val); +} + +static void mtk_cldma_unmask_intr(struct mtk_md_dev *mdev, int base, + enum mtk_tx_rx dir, int qno, enum mtk_intr_type type) +{ + u32 addr; + u32 val; + + if (unlikely(qno < 0 || qno >= HW_QUEUE_NUM)) + return; + + if (dir == DIR_TX) + addr = base + REG_CLDMA_L2TIMCR0; + else + addr = base + REG_CLDMA_L2RIMCR0; + + if (qno == ALLQ) + val = qno << type; + else + val = BIT(qno) << type; + + mtk_hw_write32(mdev, addr, val); +} + +static void mtk_cldma_clr_intr_status(struct mtk_md_dev *mdev, int base, + int dir, int qno, enum mtk_intr_type type) +{ + u32 addr; + u32 val; + + if (unlikely(qno < 0 || qno >= HW_QUEUE_NUM)) + return; + + if 
(type == QUEUE_ERROR) { + if (dir == DIR_TX) { + val = mtk_hw_read32(mdev, base + REG_CLDMA_L3TISAR0); + mtk_hw_write32(mdev, base + REG_CLDMA_L3TISAR0, val); + val = mtk_hw_read32(mdev, base + REG_CLDMA_L3TISAR1); + mtk_hw_write32(mdev, base + REG_CLDMA_L3TISAR1, val); + } else { + val = mtk_hw_read32(mdev, base + REG_CLDMA_L3RISAR0); + mtk_hw_write32(mdev, base + REG_CLDMA_L3RISAR0, val); + val = mtk_hw_read32(mdev, base + REG_CLDMA_L3RISAR1); + mtk_hw_write32(mdev, base + REG_CLDMA_L3RISAR1, val); + } + } + + if (dir == DIR_TX) + addr = base + REG_CLDMA_L2TISAR0; + else + addr = base + REG_CLDMA_L2RISAR0; + + if (qno == ALLQ) + val = qno << type; + else + val = BIT(qno) << type; + + mtk_hw_write32(mdev, addr, val); + val = mtk_hw_read32(mdev, addr); +} + +static u32 mtk_cldma_check_intr_status(struct mtk_md_dev *mdev, int base, + int dir, int qno, enum mtk_intr_type type) +{ + u32 addr; + u32 val; + u32 sta; + + if (dir == DIR_TX) + addr = base + REG_CLDMA_L2TISAR0; + else + addr = base + REG_CLDMA_L2RISAR0; + + val = mtk_hw_read32(mdev, addr); + if (val == LINK_ERROR_VAL) + sta = val; + else if (qno == ALLQ) + sta = (val >> type) & 0xFF; + else + sta = (val >> type) & BIT(qno); + return sta; +} + +static void mtk_cldma_start_queue(struct mtk_md_dev *mdev, int base, enum mtk_tx_rx dir, int qno) +{ + u32 val = BIT(qno); + u32 addr; + + if (dir == DIR_TX) + addr = base + REG_CLDMA_UL_START_CMD; + else + addr = base + REG_CLDMA_SO_START_CMD; + + mtk_hw_write32(mdev, addr, val); +} + +static void mtk_cldma_resume_queue(struct mtk_md_dev *mdev, int base, enum mtk_tx_rx dir, int qno) +{ + u32 val = BIT(qno); + u32 addr; + + if (dir == DIR_TX) + addr = base + REG_CLDMA_UL_RESUME_CMD; + else + addr = base + REG_CLDMA_SO_RESUME_CMD; + + mtk_hw_write32(mdev, addr, val); +} + +static u32 mtk_cldma_queue_status(struct mtk_md_dev *mdev, int base, enum mtk_tx_rx dir, int qno) +{ + u32 addr; + u32 val; + + if (dir == DIR_TX) + addr = base + REG_CLDMA_UL_STATUS; + else + addr = base + REG_CLDMA_SO_STATUS; + + val = mtk_hw_read32(mdev, addr); + + if (qno == ALLQ || val == LINK_ERROR_VAL) + return val; + else + return val & BIT(qno); +} + +static void mtk_cldma_mask_ip_busy_to_pci(struct mtk_md_dev *mdev, + int base, int qno, enum mtk_ip_busy_src type) +{ + if (qno == ALLQ) + mtk_hw_write32(mdev, base + REG_CLDMA_IP_BUSY_TO_PCIE_MASK_SET, qno << type); + else + mtk_hw_write32(mdev, base + REG_CLDMA_IP_BUSY_TO_PCIE_MASK_SET, BIT(qno) << type); +} + +static void mtk_cldma_unmask_ip_busy_to_pci(struct mtk_md_dev *mdev, + int base, int qno, enum mtk_ip_busy_src type) +{ + if (qno == ALLQ) + mtk_hw_write32(mdev, base + REG_CLDMA_IP_BUSY_TO_PCIE_MASK_CLR, qno << type); + else + mtk_hw_write32(mdev, base + REG_CLDMA_IP_BUSY_TO_PCIE_MASK_CLR, BIT(qno) << type); +} + +static void mtk_cldma_stop_queue(struct mtk_md_dev *mdev, int base, enum mtk_tx_rx dir, int qno) +{ + u32 val = (qno == ALLQ) ? 
qno : BIT(qno); + u32 addr; + + if (dir == DIR_TX) + addr = base + REG_CLDMA_UL_STOP_CMD; + else + addr = base + REG_CLDMA_SO_STOP_CMD; + + mtk_hw_write32(mdev, addr, val); +} + +static void mtk_cldma_clear_ip_busy(struct mtk_md_dev *mdev, int base) +{ + mtk_hw_write32(mdev, base + REG_CLDMA_IP_BUSY, 0x01); +} + +static void mtk_cldma_hw_init(struct mtk_md_dev *mdev, int base) +{ + u32 val = mtk_hw_read32(mdev, base + REG_CLDMA_UL_CFG); + + val = (val & (~(0x7 << 5))) | ((0x4) << 5); + mtk_hw_write32(mdev, base + REG_CLDMA_UL_CFG, val); + + val = mtk_hw_read32(mdev, base + REG_CLDMA_SO_CFG); + val = (val & (~(0x7 << 10))) | ((0x4) << 10) | (1 << 2); + mtk_hw_write32(mdev, base + REG_CLDMA_SO_CFG, val); + + mtk_hw_write32(mdev, base + REG_CLDMA_IP_BUSY_TO_PCIE_MASK_CLR, 0); + mtk_hw_write32(mdev, base + REG_CLDMA_IP_BUSY_TO_AP_MASK_CLR, 0); + + /* enable interrupt to PCIe */ + mtk_hw_write32(mdev, base + REG_CLDMA_INT_EAP_USIP_MASK, 0); + + /* disable illegal memory check */ + mtk_hw_write32(mdev, base + REG_CLDMA_UL_DUMMY_0, 1); + mtk_hw_write32(mdev, base + REG_CLDMA_SO_DUMMY_0, 1); +} + +static void mtk_cldma_tx_done_work(struct work_struct *work) +{ + struct txq *txq = container_of(work, struct txq, tx_done_work); + struct mtk_md_dev *mdev = txq->hw->mdev; + struct tx_req *req; + unsigned int state; + struct trb *trb; + int i; + +again: + for (i = 0; i < txq->req_pool_size; i++) { + req = txq->req_pool + txq->free_idx; + if ((req->gpd->tx_gpd.gpd_flags & CLDMA_GPD_FLAG_HWO) || !req->data_vm_addr) + break; + + dma_unmap_single(mdev->dev, req->data_dma_addr, req->data_len, DMA_TO_DEVICE); + + trb = (struct trb *)req->skb->cb; + trb->status = 0; + trb->trb_complete(req->skb); + + req->data_vm_addr = NULL; + req->data_dma_addr = 0; + req->data_len = 0; + + txq->free_idx = (txq->free_idx + 1) % txq->req_pool_size; + mutex_lock(&txq->lock); + txq->req_budget++; + mutex_unlock(&txq->lock); + } + mtk_cldma_unmask_ip_busy_to_pci(mdev, txq->hw->base_addr, txq->txqno, IP_BUSY_TXDONE); + state = mtk_cldma_check_intr_status(mdev, txq->hw->base_addr, + DIR_TX, txq->txqno, QUEUE_XFER_DONE); + if (state) { + if (unlikely(state == LINK_ERROR_VAL)) + return; + + mtk_cldma_clr_intr_status(mdev, txq->hw->base_addr, DIR_TX, + txq->txqno, QUEUE_XFER_DONE); + + if (need_resched()) { + mtk_cldma_mask_ip_busy_to_pci(mdev, txq->hw->base_addr, + txq->txqno, IP_BUSY_TXDONE); + cond_resched(); + mtk_cldma_unmask_ip_busy_to_pci(mdev, txq->hw->base_addr, + txq->txqno, IP_BUSY_TXDONE); + } + + goto again; + } + + mtk_cldma_unmask_intr(mdev, txq->hw->base_addr, DIR_TX, txq->txqno, QUEUE_XFER_DONE); + mtk_cldma_clear_ip_busy(mdev, txq->hw->base_addr); +} + +static void mtk_cldma_rx_done_work(struct work_struct *work) +{ + struct rxq *rxq = container_of(work, struct rxq, rx_done_work); + struct cldma_hw *hw = rxq->hw; + u32 curr_addr_h, curr_addr_l; + struct mtk_md_dev *mdev; + struct rx_req *req; + u64 curr_addr; + int i, err; + u32 state; + u64 addr; + + mdev = hw->mdev; + + do { + for (i = 0; i < rxq->req_pool_size; i++) { + req = rxq->req_pool + rxq->free_idx; + if ((req->gpd->rx_gpd.gpd_flags & CLDMA_GPD_FLAG_HWO)) { + addr = hw->base_addr + REG_CLDMA_SO_CUR_ADDRH_0 + + (u64)rxq->rxqno * HW_QUEUE_NUM; + curr_addr_h = mtk_hw_read32(mdev, addr); + addr = hw->base_addr + REG_CLDMA_SO_CUR_ADDRL_0 + + (u64)rxq->rxqno * HW_QUEUE_NUM; + curr_addr_l = mtk_hw_read32(mdev, addr); + curr_addr = ((u64)curr_addr_h << 32) | curr_addr_l; + + if (req->gpd_dma_addr == curr_addr && + (req->gpd->rx_gpd.gpd_flags & CLDMA_GPD_FLAG_HWO)) 
+ break; + } + + dma_unmap_single(mdev->dev, req->data_dma_addr, req->mtu, DMA_FROM_DEVICE); + + rxq->rx_done(req->skb, le16_to_cpu(req->gpd->rx_gpd.data_recv_len), + rxq->arg); + + rxq->free_idx = (rxq->free_idx + 1) % rxq->req_pool_size; + req->skb = __dev_alloc_skb(rxq->vq->rx_mtu, GFP_KERNEL); + if (!req->skb) + break; + + req->data_dma_addr = dma_map_single(mdev->dev, + req->skb->data, + req->mtu, + DMA_FROM_DEVICE); + err = dma_mapping_error(mdev->dev, req->data_dma_addr); + if (unlikely(err)) { + dev_err(mdev->dev, "Failed to map dma!\n"); + dev_kfree_skb_any(req->skb); + break; + } + + req->gpd->rx_gpd.data_recv_len = 0; + req->gpd->rx_gpd.data_buff_ptr_h = + cpu_to_le32((u64)req->data_dma_addr >> 32); + req->gpd->rx_gpd.data_buff_ptr_l = cpu_to_le32(req->data_dma_addr); + req->gpd->rx_gpd.gpd_flags = CLDMA_GPD_FLAG_IOC | CLDMA_GPD_FLAG_HWO; + } + + mtk_cldma_resume_queue(mdev, rxq->hw->base_addr, DIR_RX, rxq->rxqno); + state = mtk_cldma_check_intr_status(mdev, rxq->hw->base_addr, + DIR_RX, rxq->rxqno, QUEUE_XFER_DONE); + + if (!state) + break; + + mtk_cldma_clr_intr_status(mdev, rxq->hw->base_addr, DIR_RX, + rxq->rxqno, QUEUE_XFER_DONE); + + if (need_resched()) + cond_resched(); + } while (true); + + mtk_cldma_unmask_intr(mdev, rxq->hw->base_addr, DIR_RX, rxq->rxqno, QUEUE_XFER_DONE); + mtk_cldma_mask_ip_busy_to_pci(mdev, rxq->hw->base_addr, rxq->rxqno, IP_BUSY_RXDONE); + mtk_cldma_clear_ip_busy(mdev, rxq->hw->base_addr); +} + +static int mtk_cldma_isr(int irq_id, void *param) +{ + u32 txq_xfer_done, rxq_xfer_done; + struct cldma_hw *hw = param; + u32 tx_mask, rx_mask; + u32 txq_err, rxq_err; + u32 tx_sta, rx_sta; + struct txq *txq; + struct rxq *rxq; + int i; + + tx_sta = mtk_hw_read32(hw->mdev, hw->base_addr + REG_CLDMA_L2TISAR0); + tx_mask = mtk_hw_read32(hw->mdev, hw->base_addr + REG_CLDMA_L2TIMR0); + rx_sta = mtk_hw_read32(hw->mdev, hw->base_addr + REG_CLDMA_L2RISAR0); + rx_mask = mtk_hw_read32(hw->mdev, hw->base_addr + REG_CLDMA_L2RIMR0); + + tx_sta = tx_sta & (~tx_mask); + rx_sta = rx_sta & (~rx_mask); + + if (tx_sta) { + /* TX mask */ + mtk_hw_write32(hw->mdev, hw->base_addr + REG_CLDMA_L2TIMSR0, tx_sta); + + txq_err = (tx_sta >> QUEUE_ERROR) & 0xFF; + if (txq_err) { + mtk_cldma_clr_intr_status(hw->mdev, hw->base_addr, + DIR_TX, ALLQ, QUEUE_ERROR); + mtk_hw_write32(hw->mdev, hw->base_addr + REG_CLDMA_L2TIMCR0, + (txq_err << QUEUE_ERROR)); + } + + /* TX clear */ + mtk_hw_write32(hw->mdev, hw->base_addr + REG_CLDMA_L2TISAR0, tx_sta); + + txq_xfer_done = (tx_sta >> QUEUE_XFER_DONE) & 0xFF; + if (txq_xfer_done) { + for (i = 0; i < HW_QUEUE_NUM; i++) { + if (txq_xfer_done & (1 << i)) { + txq = hw->txq[i]; + queue_work(hw->wq, &txq->tx_done_work); + } + } + } + } + + if (rx_sta) { + /* RX mask */ + mtk_hw_write32(hw->mdev, hw->base_addr + REG_CLDMA_L2RIMSR0, rx_sta); + + rxq_err = (rx_sta >> QUEUE_ERROR) & 0xFF; + if (rxq_err) { + mtk_cldma_clr_intr_status(hw->mdev, hw->base_addr, + DIR_RX, ALLQ, QUEUE_ERROR); + mtk_hw_write32(hw->mdev, hw->base_addr + REG_CLDMA_L2RIMCR0, + (rxq_err << QUEUE_ERROR)); + } + + /* RX clear */ + mtk_hw_write32(hw->mdev, hw->base_addr + REG_CLDMA_L2RISAR0, rx_sta); + + rxq_xfer_done = (rx_sta >> QUEUE_XFER_DONE) & 0xFF; + if (rxq_xfer_done) { + for (i = 0; i < HW_QUEUE_NUM; i++) { + if (rxq_xfer_done & (1 << i)) { + rxq = hw->rxq[i]; + queue_work(hw->wq, &rxq->rx_done_work); + } + } + } + } + + mtk_hw_clear_irq(hw->mdev, hw->pci_ext_irq_id); + mtk_hw_unmask_irq(hw->mdev, hw->pci_ext_irq_id); + + return IRQ_HANDLED; +} + +int 
mtk_cldma_hw_init_t800(struct cldma_dev *cd, int hif_id) +{ + char pool_name[DMA_POOL_NAME_LEN]; + struct cldma_hw *hw; + unsigned int flag; + + if (cd->cldma_hw[hif_id]) + return 0; + + hw = devm_kzalloc(cd->trans->mdev->dev, sizeof(*hw), GFP_KERNEL); + if (!hw) + return -ENOMEM; + + hw->cd = cd; + hw->mdev = cd->trans->mdev; + hw->hif_id = ((CLDMA_CLASS_ID) << 8) + hif_id; + snprintf(pool_name, DMA_POOL_NAME_LEN, "cldma%d_pool_%s", hw->hif_id, hw->mdev->dev_str); + hw->dma_pool = dma_pool_create(pool_name, hw->mdev->dev, sizeof(union gpd), 64, 0); + if (!hw->dma_pool) + goto err_exit; + + switch (hif_id) { + case CLDMA0: + hw->pci_ext_irq_id = mtk_hw_get_irq_id(hw->mdev, MTK_IRQ_SRC_CLDMA0); + hw->base_addr = CLDMA0_BASE_ADDR; + break; + case CLDMA1: + hw->pci_ext_irq_id = mtk_hw_get_irq_id(hw->mdev, MTK_IRQ_SRC_CLDMA1); + hw->base_addr = CLDMA1_BASE_ADDR; + break; + default: + break; + } + + flag = WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_HIGHPRI; + hw->wq = alloc_workqueue("cldma%d_workq_%s", flag, 0, hif_id, hw->mdev->dev_str); + + mtk_cldma_hw_init(hw->mdev, hw->base_addr); + + /* mask/clear PCI CLDMA L1 interrupt */ + mtk_hw_mask_irq(hw->mdev, hw->pci_ext_irq_id); + mtk_hw_clear_irq(hw->mdev, hw->pci_ext_irq_id); + + /* register CLDMA interrupt handler */ + mtk_hw_register_irq(hw->mdev, hw->pci_ext_irq_id, mtk_cldma_isr, hw); + + /* unmask PCI CLDMA L1 interrupt */ + mtk_hw_unmask_irq(hw->mdev, hw->pci_ext_irq_id); + + cd->cldma_hw[hif_id] = hw; + return 0; + +err_exit: + devm_kfree(hw->mdev->dev, hw); + + return -EIO; +} + +int mtk_cldma_hw_exit_t800(struct cldma_dev *cd, int hif_id) +{ + struct mtk_md_dev *mdev; + struct cldma_hw *hw; + int i; + + if (!cd->cldma_hw[hif_id]) + return 0; + + /* free cldma descriptor */ + hw = cd->cldma_hw[hif_id]; + mdev = cd->trans->mdev; + mtk_hw_mask_irq(mdev, hw->pci_ext_irq_id); + for (i = 0; i < HW_QUEUE_NUM; i++) { + if (hw->txq[i]) + cd->hw_ops.txq_free(hw, hw->txq[i]->vqno); + if (hw->rxq[i]) + cd->hw_ops.rxq_free(hw, hw->rxq[i]->vqno); + } + + flush_workqueue(hw->wq); + destroy_workqueue(hw->wq); + dma_pool_destroy(hw->dma_pool); + mtk_hw_unregister_irq(mdev, hw->pci_ext_irq_id); + + devm_kfree(mdev->dev, hw); + cd->cldma_hw[hif_id] = NULL; + + return 0; +} + +struct txq *mtk_cldma_txq_alloc_t800(struct cldma_hw *hw, struct sk_buff *skb) +{ + struct trb *trb = (struct trb *)skb->cb; + struct tx_req *next; + struct tx_req *req; + struct txq *txq; + int i; + + txq = devm_kzalloc(hw->mdev->dev, sizeof(*txq), GFP_KERNEL); + if (!txq) + return NULL; + + txq->hw = hw; + txq->vqno = trb->vqno; + txq->vq = hw->cd->trans->vq_tbl + trb->vqno; + txq->txqno = txq->vq->txqno; + txq->req_pool_size = txq->vq->tx_req_num; + txq->req_budget = txq->vq->tx_req_num; + txq->is_stopping = false; + mutex_init(&txq->lock); + if (unlikely(txq->txqno < 0 || txq->txqno >= HW_QUEUE_NUM)) + goto err_exit; + + txq->req_pool = devm_kcalloc(hw->mdev->dev, txq->req_pool_size, sizeof(*req), GFP_KERNEL); + if (!txq->req_pool) + goto err_exit; + + for (i = 0; i < txq->req_pool_size; i++) { + req = txq->req_pool + i; + req->mtu = txq->vq->tx_mtu; + req->gpd = dma_pool_zalloc(hw->dma_pool, GFP_KERNEL, &req->gpd_dma_addr); + if (!req->gpd) + goto exit_free_req; + } + + for (i = 0; i < txq->req_pool_size; i++) { + req = txq->req_pool + i; + next = txq->req_pool + ((i + 1) % txq->req_pool_size); + req->gpd->tx_gpd.next_gpd_ptr_h = cpu_to_le32((u64)(next->gpd_dma_addr) >> 32); + req->gpd->tx_gpd.next_gpd_ptr_l = cpu_to_le32(next->gpd_dma_addr); + } + + INIT_WORK(&txq->tx_done_work, 
mtk_cldma_tx_done_work); + + mtk_cldma_stop_queue(hw->mdev, hw->base_addr, DIR_TX, txq->txqno); + txq->tx_started = false; + mtk_cldma_setup_start_addr(hw->mdev, hw->base_addr, DIR_TX, txq->txqno, + txq->req_pool[0].gpd_dma_addr); + mtk_cldma_unmask_intr(hw->mdev, hw->base_addr, DIR_TX, txq->txqno, QUEUE_ERROR); + mtk_cldma_unmask_intr(hw->mdev, hw->base_addr, DIR_TX, txq->txqno, QUEUE_XFER_DONE); + + hw->txq[txq->txqno] = txq; + return txq; + +exit_free_req: + for (i--; i >= 0; i--) { + req = txq->req_pool + i; + dma_pool_free(hw->dma_pool, req->gpd, req->gpd_dma_addr); + } + + devm_kfree(hw->mdev->dev, txq->req_pool); +err_exit: + devm_kfree(hw->mdev->dev, txq); + return NULL; +} + +int mtk_cldma_txq_free_t800(struct cldma_hw *hw, int vqno) +{ + struct virtq *vq = hw->cd->trans->vq_tbl + vqno; + unsigned int active; + struct tx_req *req; + struct txq *txq; + struct trb *trb; + int cnt = 0; + int irq_id; + int txqno; + int i; + + txqno = vq->txqno; + if (unlikely(txqno < 0 || txqno >= HW_QUEUE_NUM)) + return -EINVAL; + txq = hw->txq[txqno]; + if (!txq) + return -EINVAL; + + /* stop HW tx transaction */ + mtk_cldma_stop_queue(hw->mdev, hw->base_addr, DIR_TX, txqno); + txq->tx_started = false; + do { + active = mtk_cldma_queue_status(hw->mdev, hw->base_addr, DIR_TX, txqno); + if (active == LINK_ERROR_VAL) + break; + msleep(CLDMA_STOP_HW_WAIT_TIME_MS); /* ensure HW tx transaction done */ + cnt++; + } while (active && cnt < CLDMA_STOP_HW_POLLING_MAX_CNT); + + irq_id = mtk_hw_get_virq_id(hw->mdev, hw->pci_ext_irq_id); + synchronize_irq(irq_id); + + flush_work(&txq->tx_done_work); + mtk_cldma_mask_intr(hw->mdev, hw->base_addr, DIR_TX, txqno, QUEUE_XFER_DONE); + mtk_cldma_mask_intr(hw->mdev, hw->base_addr, DIR_TX, txqno, QUEUE_ERROR); + + /* free tx req resource */ + for (i = 0; i < txq->req_pool_size; i++) { + req = txq->req_pool + i; + if (req->data_dma_addr && req->data_len) { + dma_unmap_single(hw->mdev->dev, + req->data_dma_addr, + req->data_len, + DMA_TO_DEVICE); + trb = (struct trb *)req->skb->cb; + trb->status = -EPIPE; + trb->trb_complete(req->skb); + } + dma_pool_free(hw->dma_pool, req->gpd, req->gpd_dma_addr); + } + + devm_kfree(hw->mdev->dev, txq->req_pool); + devm_kfree(hw->mdev->dev, txq); + hw->txq[txqno] = NULL; + + return 0; +} + +struct rxq *mtk_cldma_rxq_alloc_t800(struct cldma_hw *hw, struct sk_buff *skb) +{ + struct trb_open_priv *trb_open_priv = (struct trb_open_priv *)skb->data; + struct trb *trb = (struct trb *)skb->cb; + struct rx_req *next; + struct rx_req *req; + struct rxq *rxq; + int err; + int i; + + rxq = devm_kzalloc(hw->mdev->dev, sizeof(*rxq), GFP_KERNEL); + if (!rxq) + return NULL; + + rxq->hw = hw; + rxq->vqno = trb->vqno; + rxq->vq = hw->cd->trans->vq_tbl + trb->vqno; + rxq->rxqno = rxq->vq->rxqno; + rxq->req_pool_size = rxq->vq->rx_req_num; + rxq->arg = trb->priv; + rxq->rx_done = trb_open_priv->rx_done; + if (unlikely(rxq->rxqno < 0 || rxq->rxqno >= HW_QUEUE_NUM)) + goto err_exit; + + rxq->req_pool = devm_kcalloc(hw->mdev->dev, rxq->req_pool_size, sizeof(*req), GFP_KERNEL); + if (!rxq->req_pool) + goto err_exit; + + /* setup rx request */ + for (i = 0; i < rxq->req_pool_size; i++) { + req = rxq->req_pool + i; + req->mtu = rxq->vq->rx_mtu; + req->gpd = dma_pool_zalloc(hw->dma_pool, GFP_KERNEL, &req->gpd_dma_addr); + if (!req->gpd) + goto exit_free_req; + + req->skb = __dev_alloc_skb(rxq->vq->rx_mtu, GFP_KERNEL); + if (!req->skb) { + dma_pool_free(hw->dma_pool, req->gpd, req->gpd_dma_addr); + goto exit_free_req; + } + + req->data_dma_addr = 
dma_map_single(hw->mdev->dev, + req->skb->data, + req->mtu, + DMA_FROM_DEVICE); + err = dma_mapping_error(hw->mdev->dev, req->data_dma_addr); + if (unlikely(err)) { + dev_err(hw->mdev->dev, "Failed to map dma!\n"); + i++; + goto exit_free_req; + } + } + + for (i = 0; i < rxq->req_pool_size; i++) { + req = rxq->req_pool + i; + next = rxq->req_pool + ((i + 1) % rxq->req_pool_size); + req->gpd->rx_gpd.gpd_flags = CLDMA_GPD_FLAG_IOC | CLDMA_GPD_FLAG_HWO; + req->gpd->rx_gpd.data_allow_len = cpu_to_le16(req->mtu); + req->gpd->rx_gpd.next_gpd_ptr_h = cpu_to_le32((u64)(next->gpd_dma_addr) >> 32); + req->gpd->rx_gpd.next_gpd_ptr_l = cpu_to_le32(next->gpd_dma_addr); + req->gpd->rx_gpd.data_buff_ptr_h = cpu_to_le32((u64)(req->data_dma_addr) >> 32); + req->gpd->rx_gpd.data_buff_ptr_l = cpu_to_le32(req->data_dma_addr); + } + + INIT_WORK(&rxq->rx_done_work, mtk_cldma_rx_done_work); + + hw->rxq[rxq->rxqno] = rxq; + mtk_cldma_stop_queue(hw->mdev, hw->base_addr, DIR_RX, rxq->rxqno); + mtk_cldma_setup_start_addr(hw->mdev, hw->base_addr, DIR_RX, + rxq->rxqno, rxq->req_pool[0].gpd_dma_addr); + mtk_cldma_start_queue(hw->mdev, hw->base_addr, DIR_RX, rxq->rxqno); + mtk_cldma_unmask_intr(hw->mdev, hw->base_addr, DIR_RX, rxq->rxqno, QUEUE_ERROR); + mtk_cldma_unmask_intr(hw->mdev, hw->base_addr, DIR_RX, rxq->rxqno, QUEUE_XFER_DONE); + + return rxq; + +exit_free_req: + for (i--; i >= 0; i--) { + req = rxq->req_pool + i; + dma_unmap_single(hw->mdev->dev, req->data_dma_addr, req->mtu, DMA_FROM_DEVICE); + dma_pool_free(hw->dma_pool, req->gpd, req->gpd_dma_addr); + if (req->skb) + dev_kfree_skb_any(req->skb); + } + + devm_kfree(hw->mdev->dev, rxq->req_pool); +err_exit: + devm_kfree(hw->mdev->dev, rxq); + return NULL; +} + +int mtk_cldma_rxq_free_t800(struct cldma_hw *hw, int vqno) +{ + struct mtk_md_dev *mdev; + unsigned int active; + struct rx_req *req; + struct virtq *vq; + struct rxq *rxq; + int cnt = 0; + int irq_id; + int rxqno; + int i; + + mdev = hw->mdev; + vq = hw->cd->trans->vq_tbl + vqno; + rxqno = vq->rxqno; + if (unlikely(rxqno < 0 || rxqno >= HW_QUEUE_NUM)) + return -EINVAL; + rxq = hw->rxq[rxqno]; + if (!rxq) + return -EINVAL; + + mtk_cldma_stop_queue(mdev, hw->base_addr, DIR_RX, rxqno); + do { + /* check CLDMA HW state register */ + active = mtk_cldma_queue_status(mdev, hw->base_addr, DIR_RX, rxqno); + if (active == LINK_ERROR_VAL) + break; + msleep(CLDMA_STOP_HW_WAIT_TIME_MS); /* ensure HW rx transaction done */ + cnt++; + } while (active && cnt < CLDMA_STOP_HW_POLLING_MAX_CNT); + + irq_id = mtk_hw_get_virq_id(hw->mdev, hw->pci_ext_irq_id); + synchronize_irq(irq_id); + + flush_work(&rxq->rx_done_work); + mtk_cldma_mask_intr(mdev, hw->base_addr, DIR_RX, rxqno, QUEUE_XFER_DONE); + mtk_cldma_mask_intr(mdev, hw->base_addr, DIR_RX, rxqno, QUEUE_ERROR); + + /* free rx req resource */ + for (i = 0; i < rxq->req_pool_size; i++) { + req = rxq->req_pool + i; + if (!(req->gpd->rx_gpd.gpd_flags & CLDMA_GPD_FLAG_HWO) && + le16_to_cpu(req->gpd->rx_gpd.data_recv_len)) { + dma_unmap_single(mdev->dev, req->data_dma_addr, req->mtu, DMA_FROM_DEVICE); + rxq->rx_done(req->skb, le16_to_cpu(req->gpd->rx_gpd.data_recv_len), + rxq->arg); + req->skb = NULL; + } + + dma_pool_free(hw->dma_pool, req->gpd, req->gpd_dma_addr); + if (req->skb) { + dev_kfree_skb_any(req->skb); + dma_unmap_single(mdev->dev, req->data_dma_addr, req->mtu, DMA_FROM_DEVICE); + } + } + + devm_kfree(mdev->dev, rxq->req_pool); + devm_kfree(mdev->dev, rxq); + hw->rxq[rxqno] = NULL; + + return 0; +} + +int mtk_cldma_start_xfer_t800(struct cldma_hw *hw, int qno) 
+{
+	struct txq *txq;
+	u32 addr, val;
+	int idx;
+
+	txq = hw->txq[qno];
+	addr = hw->base_addr + REG_CLDMA_UL_START_ADDRL_0 + qno * HW_QUEUE_NUM;
+	val = mtk_hw_read32(hw->mdev, addr);
+	if (unlikely(!val)) {
+		mtk_cldma_hw_init(hw->mdev, hw->base_addr);
+		txq = hw->txq[qno];
+		idx = (txq->wr_idx + txq->req_pool_size - 1) % txq->req_pool_size;
+		mtk_cldma_setup_start_addr(hw->mdev, hw->base_addr, DIR_TX, qno,
+					   txq->req_pool[idx].gpd_dma_addr);
+		mtk_cldma_start_queue(hw->mdev, hw->base_addr, DIR_TX, qno);
+		txq->tx_started = true;
+	} else {
+		if (unlikely(!txq->tx_started)) {
+			mtk_cldma_start_queue(hw->mdev, hw->base_addr, DIR_TX, qno);
+			txq->tx_started = true;
+		} else {
+			mtk_cldma_resume_queue(hw->mdev, hw->base_addr, DIR_TX, qno);
+		}
+	}
+
+	return 0;
+}
diff --git a/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.h b/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.h
new file mode 100644
index 000000000000..b89d45a81c4f
--- /dev/null
+++ b/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear
+ *
+ * Copyright (c) 2022, MediaTek Inc.
+ */
+
+#ifndef __MTK_CLDMA_DRV_T800_H__
+#define __MTK_CLDMA_DRV_T800_H__
+
+#include
+
+#include "mtk_cldma.h"
+
+int mtk_cldma_hw_init_t800(struct cldma_dev *cd, int hif_id);
+int mtk_cldma_hw_exit_t800(struct cldma_dev *cd, int hif_id);
+struct txq *mtk_cldma_txq_alloc_t800(struct cldma_hw *hw, struct sk_buff *skb);
+int mtk_cldma_txq_free_t800(struct cldma_hw *hw, int vqno);
+struct rxq *mtk_cldma_rxq_alloc_t800(struct cldma_hw *hw, struct sk_buff *skb);
+int mtk_cldma_rxq_free_t800(struct cldma_hw *hw, int vqno);
+int mtk_cldma_start_xfer_t800(struct cldma_hw *hw, int qno);
+#endif

From patchwork Wed Jan 18 11:38:53 2023
X-Patchwork-Submitter: Yanchao Yang (杨彦超)
X-Patchwork-Id: 13106234
From: Yanchao Yang
To: Loic Poulain, Sergey Ryazanov, Johannes Berg, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev ML, kernel ML
CC: Intel experts, Chetan, MTK ML, Liang Lu, Haijun Liu, Hua Yang,
    Ting Wang, Felix Chen, Mingliang Xu, Min Dong, Aiden Wang,
    Guohao Zhang, Chris Feng, Yanchao Yang, Lambert Wang,
    Mingchuang Qiao, Xiayu Zhang, Haozhe Chang
Subject: [PATCH net-next v2 06/12] net: wwan: tmi: Add AT & MBIM WWAN ports
Date: Wed, 18 Jan 2023 19:38:53 +0800
Message-ID: <20230118113859.175836-7-yanchao.yang@mediatek.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20230118113859.175836-1-yanchao.yang@mediatek.com>
References: <20230118113859.175836-1-yanchao.yang@mediatek.com>

Adds AT & MBIM ports to the port infrastructure. The WWAN initialization
method is responsible for creating the corresponding ports using the WWAN
framework infrastructure. The implemented WWAN port operations are start,
stop, tx, tx_blocking and tx_poll.

Adds a Modem Logging (MDLog) port to collect modem logs for debugging
purposes. MDLog is supported by the relayfs interface. MDLog allows user-space
applications to control logging via an MBIM command and to collect logs via
the relayfs interface, while the port infrastructure facilitates communication
between the driver and the modem.
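As background for the port operations listed above, the sketch below shows how
an AT or MBIM port is typically hooked into the kernel WWAN framework. It is
illustrative only, not the patch code: the demo_* names and struct demo_port
are assumptions, and it uses the 4-argument wwan_create_port() of kernels
contemporary with this series (later kernels add a capabilities argument).

/* Illustrative sketch only -- how a driver registers an AT port with the
 * WWAN core and wires up the port operations named in the commit message.
 */
#include <linux/err.h>
#include <linux/skbuff.h>
#include <linux/wwan.h>

struct demo_port {
        struct wwan_port *wwan_port;    /* handle returned by the WWAN core */
        struct device *dev;             /* parent device, e.g. the PCI dev */
};

static int demo_wwan_start(struct wwan_port *port)
{
        /* Enable the underlying virtual queue so RX/TX can flow. */
        return 0;
}

static void demo_wwan_stop(struct wwan_port *port)
{
        /* Disable the virtual queue and drop pending data. */
}

static int demo_wwan_tx(struct wwan_port *port, struct sk_buff *skb)
{
        /* Real code would look up its state with wwan_port_get_drvdata(port)
         * and hand the skb to the control-plane TX path; this stub frees it.
         */
        kfree_skb(skb);
        return 0;
}

static const struct wwan_port_ops demo_wwan_ops = {
        .start = demo_wwan_start,
        .stop = demo_wwan_stop,
        .tx = demo_wwan_tx,
        /* .tx_blocking and .tx_poll are the optional blocking/poll variants */
};

static int demo_register_at_port(struct demo_port *dp)
{
        dp->wwan_port = wwan_create_port(dp->dev, WWAN_PORT_AT,
                                         &demo_wwan_ops, dp);
        if (IS_ERR(dp->wwan_port))
                return PTR_ERR(dp->wwan_port);

        /* Data received from the modem is later pushed up with
         * wwan_port_rx(); wwan_remove_port() tears the port down.
         */
        return 0;
}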
Signed-off-by: Yanchao Yang Signed-off-by: Felix Chen --- drivers/net/wwan/mediatek/mtk_ctrl_plane.c | 3 + drivers/net/wwan/mediatek/mtk_ctrl_plane.h | 2 +- drivers/net/wwan/mediatek/mtk_fsm.c | 9 + drivers/net/wwan/mediatek/mtk_port.c | 106 ++++- drivers/net/wwan/mediatek/mtk_port.h | 81 +++- drivers/net/wwan/mediatek/mtk_port_io.c | 478 ++++++++++++++++++++- drivers/net/wwan/mediatek/mtk_port_io.h | 11 + drivers/net/wwan/mediatek/pcie/mtk_pci.c | 18 +- 8 files changed, 699 insertions(+), 9 deletions(-) diff --git a/drivers/net/wwan/mediatek/mtk_ctrl_plane.c b/drivers/net/wwan/mediatek/mtk_ctrl_plane.c index 06932feb6bed..16626a083793 100644 --- a/drivers/net/wwan/mediatek/mtk_ctrl_plane.c +++ b/drivers/net/wwan/mediatek/mtk_ctrl_plane.c @@ -17,6 +17,9 @@ static const struct virtq vq_tbl[] = { {VQ(0), CLDMA0, TXQ(0), RXQ(0), VQ_MTU_3_5K, VQ_MTU_3_5K, TX_REQ_NUM, RX_REQ_NUM}, {VQ(1), CLDMA1, TXQ(0), RXQ(0), VQ_MTU_3_5K, VQ_MTU_3_5K, TX_REQ_NUM, RX_REQ_NUM}, + {VQ(2), CLDMA1, TXQ(2), RXQ(2), VQ_MTU_3_5K, VQ_MTU_3_5K, TX_REQ_NUM, RX_REQ_NUM}, + {VQ(3), CLDMA1, TXQ(5), RXQ(5), VQ_MTU_3_5K, VQ_MTU_3_5K, TX_REQ_NUM, RX_REQ_NUM}, + {VQ(4), CLDMA1, TXQ(7), RXQ(7), VQ_MTU_3_5K, VQ_MTU_63K, TX_REQ_NUM, RX_REQ_NUM}, }; static int mtk_ctrl_get_hif_id(unsigned char peer_id) diff --git a/drivers/net/wwan/mediatek/mtk_ctrl_plane.h b/drivers/net/wwan/mediatek/mtk_ctrl_plane.h index 0885a434616e..f8216020448f 100644 --- a/drivers/net/wwan/mediatek/mtk_ctrl_plane.h +++ b/drivers/net/wwan/mediatek/mtk_ctrl_plane.h @@ -13,7 +13,7 @@ #include "mtk_fsm.h" #define VQ(N) (N) -#define VQ_NUM (2) +#define VQ_NUM (5) #define TX_REQ_NUM (16) #define RX_REQ_NUM (TX_REQ_NUM) diff --git a/drivers/net/wwan/mediatek/mtk_fsm.c b/drivers/net/wwan/mediatek/mtk_fsm.c index cbcf2c9749c9..46feb3148342 100644 --- a/drivers/net/wwan/mediatek/mtk_fsm.c +++ b/drivers/net/wwan/mediatek/mtk_fsm.c @@ -97,6 +97,7 @@ enum ctrl_msg_id { CTRL_MSG_MDEE = 4, CTRL_MSG_MDEE_REC_OK = 6, CTRL_MSG_MDEE_PASS = 8, + CTRL_MSG_UNIFIED_PORT_CFG = 11, }; struct ctrl_msg_header { @@ -416,6 +417,14 @@ static int mtk_fsm_md_ctrl_msg_handler(void *__fsm, struct sk_buff *skb) case CTRL_MSG_MDEE_PASS: mtk_fsm_evt_submit(fsm->mdev, FSM_EVT_MDEE, FSM_F_MDEE_PASS, NULL, 0, 0); break; + case CTRL_MSG_UNIFIED_PORT_CFG: + mtk_port_tbl_update(fsm->mdev, skb->data + sizeof(*ctrl_msg_h)); + ret = mtk_port_internal_write(hs_info->ctrl_port, skb); + if (ret <= 0) + dev_err(fsm->mdev->dev, "Unable to send port config ack message.\n"); + else + need_free_data = false; + break; default: dev_err(fsm->mdev->dev, "Invalid control message id\n"); } diff --git a/drivers/net/wwan/mediatek/mtk_port.c b/drivers/net/wwan/mediatek/mtk_port.c index 6a7447ab385e..85474285f1e7 100644 --- a/drivers/net/wwan/mediatek/mtk_port.c +++ b/drivers/net/wwan/mediatek/mtk_port.c @@ -45,6 +45,9 @@ DEFINE_MUTEX(port_mngr_grp_mtx); static DEFINE_IDA(ccci_dev_ids); static const struct mtk_port_cfg port_cfg[] = { + {CCCI_UART2_TX, CCCI_UART2_RX, VQ(3), PORT_TYPE_WWAN, "AT", PORT_F_ALLOW_DROP}, + {CCCI_MD_LOG_TX, CCCI_MD_LOG_RX, VQ(4), PORT_TYPE_RELAYFS, "MDLog", PORT_F_DFLT}, + {CCCI_MBIM_TX, CCCI_MBIM_RX, VQ(2), PORT_TYPE_WWAN, "MBIM", PORT_F_ALLOW_DROP}, {CCCI_CONTROL_TX, CCCI_CONTROL_RX, VQ(1), PORT_TYPE_INTERNAL, "MDCTRL", PORT_F_ALLOW_DROP}, {CCCI_SAP_CONTROL_TX, CCCI_SAP_CONTROL_RX, VQ(0), PORT_TYPE_INTERNAL, "SAPCTRL", PORT_F_ALLOW_DROP}, @@ -302,11 +305,101 @@ static void mtk_port_tbl_destroy(struct mtk_port_mngr *port_mngr, struct mtk_sta } while (tbl_type < PORT_TBL_MAX); } +/** + * 
mtk_port_tbl_update() - Update port radix tree table. + * @mdev: pointer to mtk_md_dev. + * @data: pointer to config data from device. + * + * This function called when host driver received a control message from device. + * + * Return: 0 on success and failure value on error. + */ +int mtk_port_tbl_update(struct mtk_md_dev *mdev, void *data) +{ + struct mtk_port_cfg_header *cfg_hdr = data; + struct mtk_port_cfg_hif_info *hif_info; + struct mtk_port_cfg_ch_info *ch_info; + struct mtk_port_mngr *port_mngr; + struct mtk_ctrl_blk *ctrl_blk; + int parsed_data_len = 0; + struct mtk_port *port; + int ret = 0; + + if (unlikely(!mdev || !cfg_hdr)) { + ret = -EINVAL; + goto end; + } + + ctrl_blk = mdev->ctrl_blk; + port_mngr = ctrl_blk->port_mngr; + + if (cfg_hdr->msg_type != PORT_CFG_MSG_REQUEST) { + dev_warn(mdev->dev, "Invalid msg_type: %d\n", cfg_hdr->msg_type); + ret = -EPROTO; + goto end; + } + + if (cfg_hdr->is_enable != 1) { + dev_warn(mdev->dev, "Invalid enable flag: %d\n", cfg_hdr->is_enable); + ret = -EPROTO; + goto end; + } + switch (cfg_hdr->cfg_type) { + case PORT_CFG_CH_INFO: + while (parsed_data_len < le16_to_cpu(cfg_hdr->port_config_len)) { + ch_info = (struct mtk_port_cfg_ch_info *)(cfg_hdr->data + parsed_data_len); + parsed_data_len += sizeof(*ch_info); + + port = mtk_port_search_by_id(port_mngr, le16_to_cpu(ch_info->dl_ch_id)); + if (port) { + continue; + } else { + dev_warn(mdev->dev, + "It's not supported the extended port(%s),ch: 0x%x\n", + ch_info->port_name, le16_to_cpu(ch_info->dl_ch_id)); + } + } + cfg_hdr->msg_type = PORT_CFG_MSG_RESPONSE; + break; + case PORT_CFG_HIF_INFO: + hif_info = (struct mtk_port_cfg_hif_info *)cfg_hdr->data; + /* Clean up all the mark of the vqs before next paint, because if + * clean up at end of case PORT_CFG_CH_INFO, the ch_info may be + * NULL when cfg_hdr->port_config_len is 0, that will lead to can + * not get peer_id. + */ + mtk_ctrl_vq_color_cleanup(port_mngr->ctrl_blk, hif_info->peer_id); + + while (parsed_data_len < le16_to_cpu(cfg_hdr->port_config_len)) { + hif_info = (struct mtk_port_cfg_hif_info *) + (cfg_hdr->data + parsed_data_len); + parsed_data_len += sizeof(*hif_info); + /* Color vq means that mark the vq to configure to the port */ + mtk_ctrl_vq_color_paint(port_mngr->ctrl_blk, + hif_info->peer_id, + hif_info->ul_hw_queue_id, + hif_info->dl_hw_queue_id, + le32_to_cpu(hif_info->ul_hw_queue_mtu), + le32_to_cpu(hif_info->dl_hw_queue_mtu)); + } + cfg_hdr->msg_type = PORT_CFG_MSG_RESPONSE; + break; + default: + dev_warn(mdev->dev, "Unsupported cfg_type: %d\n", cfg_hdr->cfg_type); + cfg_hdr->is_enable = 0; + ret = -EPROTO; + break; + } + +end: + return ret; +} + static struct mtk_stale_list *mtk_port_stale_list_create(struct mtk_port_mngr *port_mngr) { struct mtk_stale_list *s_list; - /* cannot use devm_kzalloc here, because should pair with the free operation which + /* can not use devm_kzalloc here, because should pair with the free operation which * may be no dev pointer. */ s_list = kzalloc(sizeof(*s_list), GFP_KERNEL); @@ -508,7 +601,7 @@ static int mtk_port_tx_complete(struct sk_buff *skb) return 0; } -static int mtk_port_status_check(struct mtk_port *port) +int mtk_port_status_check(struct mtk_port *port) { /* If port is enable, it must on port_mngr's port_tbl, so the mdev must exist. 
*/ if (!test_bit(PORT_S_ENABLE, &port->status)) { @@ -1153,6 +1246,13 @@ void mtk_port_mngr_fsm_state_handler(struct mtk_fsm_param *fsm_param, void *arg) } port->enable = true; ports_ops[port->info.type]->enable(port); + port = mtk_port_search_by_id(port_mngr, CCCI_MD_LOG_RX); + if (!port) { + dev_err(port_mngr->ctrl_blk->mdev->dev, "Failed to find MD LOG port\n"); + goto err; + } + port->enable = true; + ports_ops[port->info.type]->enable(port); } else if (flag & FSM_F_MDEE_CLEARQ_DONE) { /* the time 2000ms recommended by device-end * it's for wait device prepares the data @@ -1184,7 +1284,7 @@ void mtk_port_mngr_fsm_state_handler(struct mtk_fsm_param *fsm_param, void *arg) * And then it will initialize port table and register fsm callback. * * Return: - * * 0: -success to initialize mtk_port_mngr + * * 0: -success to initialize mtk_port_mngr * * -ENOMEM: -alloc memory for structure failed */ int mtk_port_mngr_init(struct mtk_ctrl_blk *ctrl_blk) diff --git a/drivers/net/wwan/mediatek/mtk_port.h b/drivers/net/wwan/mediatek/mtk_port.h index 9ab1c392cde9..32ff28788773 100644 --- a/drivers/net/wwan/mediatek/mtk_port.h +++ b/drivers/net/wwan/mediatek/mtk_port.h @@ -26,6 +26,7 @@ #define MTK_PORT_NAME_HDR "wwanD" #define MTK_DFLT_MAX_DEV_CNT (10) #define MTK_DFLT_PORT_NAME_LEN (20) +#define MTK_DFLT_FULL_NAME_LEN (50) /* Mapping MTK_PEER_ID and mtk_port_tbl index */ #define MTK_PORT_TBL_TYPE(ch) (MTK_PEER_ID(ch) - 1) @@ -65,6 +66,12 @@ enum mtk_ccci_ch { /* to MD */ CCCI_CONTROL_RX = 0x2000, CCCI_CONTROL_TX = 0x2001, + CCCI_UART2_RX = 0x200A, + CCCI_UART2_TX = 0x200C, + CCCI_MD_LOG_RX = 0x202A, + CCCI_MD_LOG_TX = 0x202B, + CCCI_MBIM_RX = 0x20D0, + CCCI_MBIM_TX = 0x20D1, }; enum mtk_port_flag { @@ -82,6 +89,8 @@ enum mtk_port_tbl { enum mtk_port_type { PORT_TYPE_INTERNAL, + PORT_TYPE_WWAN, + PORT_TYPE_RELAYFS, PORT_TYPE_MAX }; @@ -90,14 +99,31 @@ struct mtk_internal_port { int (*recv_cb)(void *arg, struct sk_buff *skb); }; +struct mtk_wwan_port { + /* w_lock Protect wwan_port when recv data and disable port at the same time */ + struct mutex w_lock; + int w_type; + void *w_port; +}; + +struct mtk_relayfs_port { + struct dentry *ctrl_file; + struct dentry *d_wwan; + struct rchan *rc; + atomic_t is_full; + char ctrl_file_name[MTK_DFLT_FULL_NAME_LEN]; +}; + /** * union mtk_port_priv - Contains private data for different type of ports. - * @cdev: private data for character device port. * @i_priv: private data for internal other user. + * @w_priv: private data for wwan port. + * @rf_priv: private data for relayfs port */ union mtk_port_priv { - struct cdev *cdev; struct mtk_internal_port i_priv; + struct mtk_wwan_port w_priv; + struct mtk_relayfs_port rf_priv; }; /** @@ -209,6 +235,55 @@ struct mtk_port_enum_msg { u8 data[]; } __packed; +enum mtk_port_cfg_type { + PORT_CFG_CH_INFO = 4, + PORT_CFG_HIF_INFO, +}; + +enum mtk_port_cfg_msg_type { + PORT_CFG_MSG_REQUEST = 1, + PORT_CFG_MSG_RESPONSE, +}; + +struct mtk_port_cfg_ch_info { + __le16 dl_ch_id; + u8 dl_hw_queue_id; + u8 ul_hw_queue_id; + u8 reserve[2]; + u8 peer_id; + u8 reserved; + u8 port_name_len; + char port_name[20]; +} __packed; + +struct mtk_port_cfg_hif_info { + u8 dl_hw_queue_id; + u8 ul_hw_queue_id; + u8 peer_id; + u8 reserved; + __le32 dl_hw_queue_mtu; + __le32 ul_hw_queue_mtu; +} __packed; + +/** + * struct mtk_port_cfg_header - Message from device to configure unified port + * @port_config_len: data length. 
+ * @cfg_type: 4:Channel info/ 5:Hif info + * @msg_type: 1:request/ 2:response + * @is_enable: 0:disable/ 1:enable + * @reserve: reserve bytes. + * @data: the data is channel config information @ref mtk_port_cfg_ch_info or + * hif config information @ref mtk_port_cfg_hif_info, following the cfg_type value. + */ +struct mtk_port_cfg_header { + __le16 port_config_len; + u8 cfg_type; + u8 msg_type; + u8 is_enable; + u8 reserve[3]; + u8 data[]; +} __packed; + struct mtk_ccci_header { __le32 packet_header; __le32 packet_len; @@ -223,8 +298,10 @@ struct mtk_port *mtk_port_search_by_name(struct mtk_port_mngr *port_mngr, char * void mtk_port_stale_list_grp_cleanup(void); int mtk_port_add_header(struct sk_buff *skb); struct mtk_ccci_header *mtk_port_strip_header(struct sk_buff *skb); +int mtk_port_status_check(struct mtk_port *port); int mtk_port_send_data(struct mtk_port *port, void *data); int mtk_port_status_update(struct mtk_md_dev *mdev, void *data); +int mtk_port_tbl_update(struct mtk_md_dev *mdev, void *data); int mtk_port_vq_enable(struct mtk_port *port); int mtk_port_vq_disable(struct mtk_port *port); void mtk_port_mngr_fsm_state_handler(struct mtk_fsm_param *fsm_param, void *arg); diff --git a/drivers/net/wwan/mediatek/mtk_port_io.c b/drivers/net/wwan/mediatek/mtk_port_io.c index 050ec0a1bb04..1116370c8d6b 100644 --- a/drivers/net/wwan/mediatek/mtk_port_io.c +++ b/drivers/net/wwan/mediatek/mtk_port_io.c @@ -3,9 +3,25 @@ * Copyright (c) 2022, MediaTek Inc. */ +#ifdef CONFIG_COMPAT +#include +#endif +#include +#include +#include +#include +#include +#include +#include + #include "mtk_port_io.h" +#define MTK_CCCI_CLASS_NAME "ccci_node" #define MTK_DFLT_READ_TIMEOUT (1 * HZ) +#define MTK_RELAYFS_N_SUB_BUFF 16 +#define MTK_RELAYFS_CTRL_FILE_PERM 0600 + +static void *ccci_class; static int mtk_port_get_locked(struct mtk_port *port) { @@ -34,6 +50,34 @@ static void mtk_port_put_locked(struct mtk_port *port) mutex_unlock(&port_mngr_grp_mtx); } +/** + * mtk_port_io_init() - Function for initialize device driver. + * Create ccci_class and register each type of device driver into kernel. + * This function called at driver module initialize. + * + * Return:. + * * 0: success + * * error value if initialization failed + */ +int mtk_port_io_init(void) +{ + ccci_class = class_create(THIS_MODULE, MTK_CCCI_CLASS_NAME); + if (IS_ERR(ccci_class)) + return PTR_ERR(ccci_class); + return 0; +} + +/** + * mtk_port_io_exit() - Function for delete device driver. + * Unregister each type of device driver from kernel, and destroyccci_class. + * + * This function called at driver module exit. 
+ */ +void mtk_port_io_exit(void) +{ + class_destroy(ccci_class); +} + static void mtk_port_struct_init(struct mtk_port *port) { port->tx_seq = 0; @@ -45,6 +89,23 @@ static void mtk_port_struct_init(struct mtk_port *port) init_waitqueue_head(&port->trb_wq); init_waitqueue_head(&port->rx_wq); mutex_init(&port->read_buf_lock); + mutex_init(&port->write_lock); +} + +static int mtk_port_copy_data_from(void *to, union user_buf from, unsigned int len, + unsigned int offset, bool from_user_space) +{ + int ret = 0; + + if (from_user_space) { + ret = copy_from_user(to, from.ubuf + offset, len); + if (ret) + ret = -EFAULT; + } else { + memcpy(to, from.kbuf + offset, len); + } + + return ret; } static int mtk_port_internal_init(struct mtk_port *port) @@ -77,7 +138,7 @@ static int mtk_port_internal_enable(struct mtk_port *port) if (test_bit(PORT_S_ENABLE, &port->status)) { dev_info(port->port_mngr->ctrl_blk->mdev->dev, - "Skip to enable port( %s )\n", port->info.name); + "Skip to enable port(%s)\n", port->info.name); return 0; } @@ -171,6 +232,56 @@ static void mtk_port_common_close(struct mtk_port *port) skb_queue_purge(&port->rx_skb_list); } +static int mtk_port_common_write(struct mtk_port *port, union user_buf buf, unsigned int len, + bool from_user_space) +{ + unsigned int tx_cnt, left_cnt = len; + struct sk_buff *skb; + int ret; + +start_write: + ret = mtk_port_status_check(port); + if (ret) + goto end_write; + + skb = __dev_alloc_skb(port->tx_mtu, GFP_KERNEL); + if (!skb) { + dev_err(port->port_mngr->ctrl_blk->mdev->dev, + "Failed to alloc skb for port(%s)\n", port->info.name); + ret = -ENOMEM; + goto end_write; + } + + if (!(port->info.flags & PORT_F_RAW_DATA)) { + /* Reserve enough buf len for ccci header */ + skb_reserve(skb, sizeof(struct mtk_ccci_header)); + } + + tx_cnt = min(left_cnt, port->tx_mtu); + ret = mtk_port_copy_data_from(skb_put(skb, tx_cnt), buf, tx_cnt, len - left_cnt, + from_user_space); + if (ret) { + dev_err(port->port_mngr->ctrl_blk->mdev->dev, + "Failed to copy data for port(%s)\n", port->info.name); + dev_kfree_skb_any(skb); + goto end_write; + } + + ret = mtk_port_send_data(port, skb); + if (ret < 0) + goto end_write; + + left_cnt -= ret; + if (left_cnt) { + dev_dbg(port->port_mngr->ctrl_blk->mdev->dev, + "Port(%s) send %dBytes, but still left %dBytes to send\n", + port->info.name, ret, left_cnt); + goto start_write; + } +end_write: + return (len > left_cnt) ? (len - left_cnt) : ret; +} + /** * mtk_port_internal_open() - Function for open internal port. * @mdev: pointer to mtk_md_dev. 
@@ -205,7 +316,10 @@ void *mtk_port_internal_open(struct mtk_md_dev *mdev, char *name, int flag) goto err; } - port->info.flags |= PORT_F_BLOCKING; + if (flag & O_NONBLOCK) + port->info.flags &= ~PORT_F_BLOCKING; + else + port->info.flags |= PORT_F_BLOCKING; err: return port; } @@ -289,6 +403,346 @@ void mtk_port_internal_recv_register(void *i_port, priv->recv_cb = cb; } +static int mtk_port_wwan_open(struct wwan_port *w_port) +{ + struct mtk_port *port; + int ret; + + port = wwan_port_get_drvdata(w_port); + ret = mtk_port_get_locked(port); + if (ret) + return ret; + + ret = mtk_port_common_open(port); + if (ret) + mtk_port_put_locked(port); + + return ret; +} + +static void mtk_port_wwan_close(struct wwan_port *w_port) +{ + struct mtk_port *port = wwan_port_get_drvdata(w_port); + + mtk_port_common_close(port); + mtk_port_put_locked(port); +} + +static int mtk_port_wwan_write(struct wwan_port *w_port, struct sk_buff *skb) +{ + struct mtk_port *port = wwan_port_get_drvdata(w_port); + union user_buf user_buf; + + port->info.flags &= ~PORT_F_BLOCKING; + user_buf.kbuf = (void *)skb->data; + return mtk_port_common_write(port, user_buf, skb->len, false); +} + +static int mtk_port_wwan_write_blocking(struct wwan_port *w_port, struct sk_buff *skb) +{ + struct mtk_port *port = wwan_port_get_drvdata(w_port); + union user_buf user_buf; + + port->info.flags |= PORT_F_BLOCKING; + user_buf.kbuf = (void *)skb->data; + return mtk_port_common_write(port, user_buf, skb->len, false); +} + +static __poll_t mtk_port_wwan_poll(struct wwan_port *w_port, struct file *file, + struct poll_table_struct *poll) +{ + struct mtk_port *port = wwan_port_get_drvdata(w_port); + struct mtk_ctrl_blk *ctrl_blk; + __poll_t mask = 0; + + if (mtk_port_status_check(port)) + goto end_poll; + + ctrl_blk = port->port_mngr->ctrl_blk; + poll_wait(file, &port->trb_wq, poll); + if (!VQ_LIST_FULL(ctrl_blk->trans, port->info.vq_id)) + mask |= EPOLLOUT | EPOLLWRNORM; + else + dev_info(ctrl_blk->mdev->dev, "VQ(%d) skb_list_len is %d\n", + port->info.vq_id, ctrl_blk->trans->skb_list[port->info.vq_id].qlen); + +end_poll: + return mask; +} + +static const struct wwan_port_ops wwan_ops = { + .start = mtk_port_wwan_open, + .stop = mtk_port_wwan_close, + .tx = mtk_port_wwan_write, + .tx_blocking = mtk_port_wwan_write_blocking, + .tx_poll = mtk_port_wwan_poll, +}; + +static int mtk_port_wwan_init(struct mtk_port *port) +{ + mtk_port_struct_init(port); + port->enable = false; + + mutex_init(&port->priv.w_priv.w_lock); + + switch (port->info.rx_ch) { + case CCCI_MBIM_RX: + port->priv.w_priv.w_type = WWAN_PORT_MBIM; + break; + case CCCI_UART2_RX: + port->priv.w_priv.w_type = WWAN_PORT_AT; + break; + default: + port->priv.w_priv.w_type = WWAN_PORT_UNKNOWN; + break; + } + + return 0; +} + +static int mtk_port_wwan_exit(struct mtk_port *port) +{ + if (test_bit(PORT_S_ENABLE, &port->status)) + ports_ops[port->info.type]->disable(port); + + pr_err("[TMI] WWAN port(%s) exit is complete\n", port->info.name); + + return 0; +} + +static int mtk_port_wwan_enable(struct mtk_port *port) +{ + struct mtk_port_mngr *port_mngr; + int ret = 0; + + port_mngr = port->port_mngr; + + if (test_bit(PORT_S_ENABLE, &port->status)) { + dev_err(port_mngr->ctrl_blk->mdev->dev, + "Skip to enable port( %s )\n", port->info.name); + goto end; + } + + ret = mtk_port_vq_enable(port); + if (ret && ret != -EBUSY) + goto end; + + port->priv.w_priv.w_port = wwan_create_port(port_mngr->ctrl_blk->mdev->dev, + port->priv.w_priv.w_type, &wwan_ops, port); + if 
(IS_ERR(port->priv.w_priv.w_port)) { + dev_err(port_mngr->ctrl_blk->mdev->dev, + "Failed to create wwan port for (%s)\n", port->info.name); + ret = PTR_ERR(port->priv.w_priv.w_port); + goto end; + } + + set_bit(PORT_S_RDWR, &port->status); + set_bit(PORT_S_ENABLE, &port->status); + dev_info(port_mngr->ctrl_blk->mdev->dev, + "Port(%s) enable is complete\n", port->info.name); + + return 0; +end: + return ret; +} + +static int mtk_port_wwan_disable(struct mtk_port *port) +{ + struct wwan_port *w_port; + + if (!test_and_clear_bit(PORT_S_ENABLE, &port->status)) { + dev_info(port->port_mngr->ctrl_blk->mdev->dev, + "Skip to disable port(%s)\n", port->info.name); + return 0; + } + + clear_bit(PORT_S_RDWR, &port->status); + w_port = port->priv.w_priv.w_port; + /* When the port is being disabled, port manager may receive RX data + * and try to call wwan_port_rx(). So the w_lock is to protect w_port + * from using by disable flow and receive flow at the same time. + */ + mutex_lock(&port->priv.w_priv.w_lock); + port->priv.w_priv.w_port = NULL; + mutex_unlock(&port->priv.w_priv.w_lock); + + wwan_remove_port(w_port); + + mtk_port_vq_disable(port); + + dev_info(port->port_mngr->ctrl_blk->mdev->dev, + "Port(%s) disable is complete\n", port->info.name); + + return 0; +} + +static int mtk_port_wwan_recv(struct mtk_port *port, struct sk_buff *skb) +{ + if (!test_bit(PORT_S_OPEN, &port->status)) { + /* If current port is not opened by any user, the received data will be dropped */ + dev_warn_ratelimited(port->port_mngr->ctrl_blk->mdev->dev, + "Unabled to recv: (%s) not opened\n", port->info.name); + goto drop_data; + } + + /* Protect w_port from using by disable flow and receive flow at the same time. */ + mutex_lock(&port->priv.w_priv.w_lock); + if (!port->priv.w_priv.w_port) { + mutex_unlock(&port->priv.w_priv.w_lock); + dev_warn_ratelimited(port->port_mngr->ctrl_blk->mdev->dev, + "Invalid (%s) wwan_port, drop packet\n", port->info.name); + goto drop_data; + } + + wwan_port_rx(port->priv.w_priv.w_port, skb); + mutex_unlock(&port->priv.w_priv.w_lock); + return 0; + +drop_data: + dev_kfree_skb_any(skb); + return -ENXIO; +} + +static struct dentry *trace_create_buf_file_handler(const char *filename, struct dentry *parent, + umode_t mode, struct rchan_buf *buf, + int *is_global) +{ + *is_global = 1; + return debugfs_create_file(filename, mode, parent, buf, &relay_file_operations); +} + +static int trace_remove_buf_file_handler(struct dentry *dentry) +{ + debugfs_remove_recursive(dentry); + return 0; +} + +static int trace_subbuf_start_handler(struct rchan_buf *buf, void *subbuf, + void *prev_subbuf, size_t prev_padding) +{ + struct mtk_port *port = buf->chan->private_data; + + if (relay_buf_full(buf)) { + pr_err_ratelimited("Failed to write relayfs buffer"); + atomic_set(&port->priv.rf_priv.is_full, 1); + return 0; + } + atomic_set(&port->priv.rf_priv.is_full, 0); + return 1; +} + +static struct rchan_callbacks relay_callbacks = { + .subbuf_start = trace_subbuf_start_handler, + .create_buf_file = trace_create_buf_file_handler, + .remove_buf_file = trace_remove_buf_file_handler, +}; + +static int mtk_port_relayfs_enable(struct mtk_port *port) +{ + struct dentry *debugfs_pdev = wwan_get_debugfs_dir(port->port_mngr->ctrl_blk->mdev->dev); + int ret; + + if (IS_ERR_OR_NULL(debugfs_pdev)) { + dev_err(port->port_mngr->ctrl_blk->mdev->dev, + "Failed to get wwan debugfs dentry port(%s)\n", port->info.name); + return 0; + } + port->priv.rf_priv.d_wwan = debugfs_pdev; + + if (test_bit(PORT_S_ENABLE, &port->status)) { + 
dev_info(port->port_mngr->ctrl_blk->mdev->dev, + "Skip to enable port( %s )\n", port->info.name); + return 0; + } + + ret = mtk_port_vq_enable(port); + if (ret && ret != -EBUSY) + goto err_open_vq; + + dev_info(port->port_mngr->ctrl_blk->mdev->dev, + "Port(%s) enable is complete, rx_buf_size: %d * %d\n", + port->info.name, port->rx_mtu, MTK_RELAYFS_N_SUB_BUFF); + port->priv.rf_priv.rc = relay_open(port->info.name, + debugfs_pdev, + port->rx_mtu, + MTK_RELAYFS_N_SUB_BUFF, + &relay_callbacks, port); + if (!port->priv.rf_priv.rc) + goto err_open_relay; + + set_bit(PORT_S_RDWR, &port->status); + set_bit(PORT_S_ENABLE, &port->status); + /* Open port and allow to receive data */ + ret = mtk_port_common_open(port); + if (ret) + goto err_open_port; + port->info.flags &= ~PORT_F_BLOCKING; + return 0; + +err_open_port: + relay_close(port->priv.rf_priv.rc); +err_open_relay: + mtk_port_vq_disable(port); +err_open_vq: + wwan_put_debugfs_dir(port->priv.rf_priv.d_wwan); + return ret; +} + +static int mtk_port_relayfs_disable(struct mtk_port *port) +{ + if (!test_and_clear_bit(PORT_S_ENABLE, &port->status)) { + dev_info(port->port_mngr->ctrl_blk->mdev->dev, + "Skip to disable port(%s)\n", port->info.name); + goto out; + } + clear_bit(PORT_S_RDWR, &port->status); + mtk_port_common_close(port); + + relay_close(port->priv.rf_priv.rc); + wwan_put_debugfs_dir(port->priv.rf_priv.d_wwan); + mtk_port_vq_disable(port); + dev_info(port->port_mngr->ctrl_blk->mdev->dev, + "Port(%s) disable is complete\n", port->info.name); +out: + return 0; +} + +static int mtk_port_relayfs_recv(struct mtk_port *port, struct sk_buff *skb) +{ + struct mtk_relayfs_port *relayfs_port = &port->priv.rf_priv; + + while (test_bit(PORT_S_OPEN, &port->status) && test_bit(PORT_S_ENABLE, &port->status)) { + __relay_write(relayfs_port->rc, skb->data, skb->len); + if (atomic_read(&port->priv.rf_priv.is_full)) { + msleep(20); + continue; + } else { + break; + } + } + + dev_kfree_skb_any(skb); + return 0; +} + +static int mtk_port_relayfs_init(struct mtk_port *port) +{ + mtk_port_struct_init(port); + port->enable = false; + atomic_set(&port->priv.rf_priv.is_full, 0); + + return 0; +} + +static int mtk_port_relayfs_exit(struct mtk_port *port) +{ + if (test_bit(PORT_S_ENABLE, &port->status)) + ports_ops[port->info.type]->disable(port); + + pr_err("[TMI] RelayFS Port(%s) exit is complete\n", port->info.name); + return 0; +} + static const struct port_ops port_internal_ops = { .init = mtk_port_internal_init, .exit = mtk_port_internal_exit, @@ -298,6 +752,26 @@ static const struct port_ops port_internal_ops = { .recv = mtk_port_internal_recv, }; +static const struct port_ops port_wwan_ops = { + .init = mtk_port_wwan_init, + .exit = mtk_port_wwan_exit, + .reset = mtk_port_reset, + .enable = mtk_port_wwan_enable, + .disable = mtk_port_wwan_disable, + .recv = mtk_port_wwan_recv, +}; + +static const struct port_ops port_relayfs_ops = { + .init = mtk_port_relayfs_init, + .exit = mtk_port_relayfs_exit, + .reset = mtk_port_reset, + .enable = mtk_port_relayfs_enable, + .disable = mtk_port_relayfs_disable, + .recv = mtk_port_relayfs_recv, +}; + const struct port_ops *ports_ops[PORT_TYPE_MAX] = { &port_internal_ops, + &port_wwan_ops, + &port_relayfs_ops }; diff --git a/drivers/net/wwan/mediatek/mtk_port_io.h b/drivers/net/wwan/mediatek/mtk_port_io.h index 30e1d4149881..034b5a2d8f12 100644 --- a/drivers/net/wwan/mediatek/mtk_port_io.h +++ b/drivers/net/wwan/mediatek/mtk_port_io.h @@ -9,9 +9,12 @@ #include #include +#include "mtk_ctrl_plane.h" +#include "mtk_dev.h" 
#include "mtk_port.h" #define MTK_RX_BUF_SIZE (1024 * 1024) +#define MTK_RX_BUF_MAX_SIZE (2 * 1024 * 1024) extern struct mutex port_mngr_grp_mtx; @@ -24,6 +27,14 @@ struct port_ops { int (*recv)(struct mtk_port *port, struct sk_buff *skb); }; +union user_buf { + void __user *ubuf; + void *kbuf; +}; + +int mtk_port_io_init(void); +void mtk_port_io_exit(void); + void *mtk_port_internal_open(struct mtk_md_dev *mdev, char *name, int flag); int mtk_port_internal_close(void *i_port); int mtk_port_internal_write(void *i_port, struct sk_buff *skb); diff --git a/drivers/net/wwan/mediatek/pcie/mtk_pci.c b/drivers/net/wwan/mediatek/pcie/mtk_pci.c index 5a821e55771f..5b91da25eb08 100644 --- a/drivers/net/wwan/mediatek/pcie/mtk_pci.c +++ b/drivers/net/wwan/mediatek/pcie/mtk_pci.c @@ -1129,13 +1129,29 @@ static struct pci_driver mtk_pci_drv = { static int __init mtk_drv_init(void) { - return pci_register_driver(&mtk_pci_drv); + int ret; + + ret = mtk_port_io_init(); + if (ret) + goto err_init_devid; + + ret = pci_register_driver(&mtk_pci_drv); + if (ret) + goto err_pci_drv; + + return 0; +err_pci_drv: + mtk_port_io_exit(); +err_init_devid: + + return ret; } module_init(mtk_drv_init); static void __exit mtk_drv_exit(void) { pci_unregister_driver(&mtk_pci_drv); + mtk_port_io_exit(); mtk_port_stale_list_grp_cleanup(); } module_exit(mtk_drv_exit); From patchwork Wed Jan 18 11:38:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?b?WWFuY2hhbyBZYW5nICjmnajlvabotoUp?= X-Patchwork-Id: 13106235 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 92CA1C32793 for ; Wed, 18 Jan 2023 11:55:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Type:MIME-Version: References:In-Reply-To:Message-ID:Date:Subject:CC:To:From:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=pKUgvxbLZX9tcX5BwsfEkkvcXvcM9UnngeQJrj/BzXU=; b=yxKEmYS0UR8F1+GLhpQt3f9okD Cil/1VgUgeja7r/0yLO2cGjvE0SqKK75xFk38na+YL6r/rRnlc8ClGa27s0PKoFfmAw5KbCJtXP/U 4rhKafVzoJUuI6y4+CMst1JlCtFofROo897FulKu/lJqk23422vbiSwU/Qvlqm/yWE/ViDCke9Wkh 6xpXaCV1V3kjYMWjRas55X6ghMpfBPkN/9LGVNXKr3SB0ufWijIvNMivl3aHgLGnxirMYPhrF6QmO id6TYtIngdYzTbibIpAQ+0kPWAvAooUVHPxsb1raQBXwfRtb0ZoFjNUwkDqR9Zr65tFbnSn3wCNox vyVba0ig==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pI71p-000cb4-1i; Wed, 18 Jan 2023 11:54:53 +0000 Received: from mailgw01.mediatek.com ([216.200.240.184]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pI71j-000cZI-1x for linux-mediatek@lists.infradead.org; Wed, 18 Jan 2023 11:54:51 +0000 X-UUID: e4805f52972611edbbe3f76fe852e059-20230118 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=mediatek.com; s=dk; h=Content-Type:MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:CC:To:From; bh=pKUgvxbLZX9tcX5BwsfEkkvcXvcM9UnngeQJrj/BzXU=; 
b=lcgq11MccuMuURKUIM5ReYAcv5qI5mMvQz3mmoGmYI1B6NQYxCCctwwEtfWUqEaNPDZZIIrj+8/nnQF1WYXcKQIObtXBUDdTo9qjy+BelVkmojGaV7JuYmi1LKbXE1BHAeYfUSktTc22QZTpXXhYeg9FJ+eWn+bgaLSzBBfsRpQ=; X-CID-P-RULE: Release_Ham X-CID-O-INFO: VERSION:1.1.18,REQID:55bb433d-f0b9-4c0a-8421-6a6a901eb73d,IP:0,U RL:0,TC:0,Content:-25,EDM:0,RT:0,SF:0,FILE:0,BULK:0,RULE:Release_Ham,ACTIO N:release,TS:-25 X-CID-META: VersionHash:3ca2d6b,CLOUDID:e5e10355-dd49-462e-a4be-2143a3ddc739,B ulkID:nil,BulkQuantity:0,Recheck:0,SF:102,TC:nil,Content:0,EDM:-3,IP:nil,U RL:0,File:nil,Bulk:nil,QS:nil,BEC:nil,COL:1,OSI:0,OSA:0 X-CID-BVR: 2,OSH X-UUID: e4805f52972611edbbe3f76fe852e059-20230118 Received: from mtkmbs11n1.mediatek.inc [(172.21.101.185)] by mailgw01.mediatek.com (envelope-from ) (musrelay.mediatek.com ESMTP with TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384 256/256) with ESMTP id 2123748209; Wed, 18 Jan 2023 04:54:41 -0700 Received: from mtkmbs13n1.mediatek.inc (172.21.101.193) by mtkmbs10n2.mediatek.inc (172.21.101.183) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.792.3; Wed, 18 Jan 2023 19:44:06 +0800 Received: from mcddlt001.gcn.mediatek.inc (10.19.240.15) by mtkmbs13n1.mediatek.inc (172.21.101.73) with Microsoft SMTP Server id 15.2.792.15 via Frontend Transport; Wed, 18 Jan 2023 19:44:04 +0800 From: Yanchao Yang To: Loic Poulain , Sergey Ryazanov , Johannes Berg , "David S . Miller" , Eric Dumazet , "Jakub Kicinski" , Paolo Abeni , netdev ML , kernel ML CC: Intel experts , Chetan , MTK ML , Liang Lu , Haijun Liu , Hua Yang , Ting Wang , Felix Chen , Mingliang Xu , Min Dong , Aiden Wang , Guohao Zhang , Chris Feng , "Yanchao Yang" , Lambert Wang , Mingchuang Qiao , Xiayu Zhang , Haozhe Chang Subject: [PATCH net-next v2 07/12] net: wwan: tmi: Introduce data plane hardware interface Date: Wed, 18 Jan 2023 19:38:54 +0800 Message-ID: <20230118113859.175836-8-yanchao.yang@mediatek.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20230118113859.175836-1-yanchao.yang@mediatek.com> References: <20230118113859.175836-1-yanchao.yang@mediatek.com> MIME-Version: 1.0 X-MTK: N X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230118_035447_193536_08E80B6A X-CRM114-Status: GOOD ( 21.00 ) X-BeenThere: linux-mediatek@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "Linux-mediatek" Errors-To: linux-mediatek-bounces+linux-mediatek=archiver.kernel.org@lists.infradead.org Data Plane Modem AP Interface (DPMAIF) hardware layer provides hardware abstraction for the upper layer (DPMAIF HIF). It implements functions to do the data plane hardware's configuration, TX/RX control and interrupt handling. 
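
To make the layering concrete, the sketch below shows roughly how the upper
layer (the DPMAIF HIF, introduced later in this series) is expected to drive
this abstraction through the struct dpmaif_drv_ops hooks and their inline
wrappers. It is an illustration only, not code from this patch: the bring-up
helper name and its error handling are hypothetical, while struct
dpmaif_drv_info, struct dpmaif_drv_cfg, the mtk_dpmaif_drv_*() wrappers and
dpmaif_drv_ops_t800 are the ones added by mtk_dpmaif_drv.h and
mtk_dpmaif_drv_t800.c below.

static int mtk_dpmaif_hw_bringup_sketch(struct dpmaif_drv_info *drv_info,
                                        struct dpmaif_drv_cfg *drv_cfg)
{
        int ret;

        /* Bind the T800 implementation of the management hooks. */
        drv_info->drv_ops = &dpmaif_drv_ops_t800;

        /* Program rings, interrupt masks and HPC/LRO settings into HW. */
        ret = mtk_dpmaif_drv_init(drv_info, drv_cfg);
        if (ret)
                return ret;

        /* Start both directions; doorbells are honored from here on. */
        ret = mtk_dpmaif_drv_start_queue(drv_info, DPMAIF_TX);
        if (ret)
                return ret;

        ret = mtk_dpmaif_drv_start_queue(drv_info, DPMAIF_RX);
        if (ret)
                return ret;

        /* Example TX kick: tell UL queue 0 that one new DRB entry is ready. */
        return mtk_dpmaif_drv_send_doorbell(drv_info, DPMAIF_DRB, 0, 1);
}

Keeping the chip-specific register programming behind dpmaif_drv_ops keeps the
HIF layer hardware-agnostic; only dpmaif_drv_ops_t800 would need to change for
a different chip.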
Signed-off-by: Yanchao Yang Signed-off-by: Hua Yang --- drivers/net/wwan/mediatek/Makefile | 1 + drivers/net/wwan/mediatek/mtk_dpmaif_drv.h | 277 +++ .../wwan/mediatek/pcie/mtk_dpmaif_drv_t800.c | 2115 +++++++++++++++++ .../wwan/mediatek/pcie/mtk_dpmaif_reg_t800.h | 368 +++ 4 files changed, 2761 insertions(+) create mode 100644 drivers/net/wwan/mediatek/mtk_dpmaif_drv.h create mode 100644 drivers/net/wwan/mediatek/pcie/mtk_dpmaif_drv_t800.c create mode 100644 drivers/net/wwan/mediatek/pcie/mtk_dpmaif_reg_t800.h diff --git a/drivers/net/wwan/mediatek/Makefile b/drivers/net/wwan/mediatek/Makefile index a6c1252dfe46..1049b0a0a339 100644 --- a/drivers/net/wwan/mediatek/Makefile +++ b/drivers/net/wwan/mediatek/Makefile @@ -8,6 +8,7 @@ mtk_tmi-y = \ mtk_ctrl_plane.o \ mtk_cldma.o \ pcie/mtk_cldma_drv_t800.o \ + pcie/mtk_dpmaif_drv_t800.o \ mtk_port.o \ mtk_port_io.o \ mtk_fsm.o diff --git a/drivers/net/wwan/mediatek/mtk_dpmaif_drv.h b/drivers/net/wwan/mediatek/mtk_dpmaif_drv.h new file mode 100644 index 000000000000..29b6c99bba42 --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_dpmaif_drv.h @@ -0,0 +1,277 @@ +/* SPDX-License-Identifier: BSD-3-Clause-Clear + * + * Copyright (c) 2022, MediaTek Inc. + */ + +#ifndef __MTK_DPMAIF_DRV_H__ +#define __MTK_DPMAIF_DRV_H__ + +enum dpmaif_drv_dir { + DPMAIF_TX, + DPMAIF_RX, +}; + +enum mtk_data_hw_feature_type { + DATA_HW_F_LRO = BIT(0), + DATA_HW_F_FRAG = BIT(1), +}; + +enum dpmaif_drv_cmd { + DATA_HW_INTR_COALESCE_SET, + DATA_HW_HASH_GET, + DATA_HW_HASH_SET, + DATA_HW_HASH_KEY_SIZE_GET, + DATA_HW_INDIR_GET, + DATA_HW_INDIR_SET, + DATA_HW_INDIR_SIZE_GET, + DATA_HW_LRO_SET, +}; + +struct dpmaif_drv_intr { + enum dpmaif_drv_dir dir; + unsigned int q_mask; + unsigned int mode; + unsigned int pkt_threshold; + unsigned int time_threshold; +}; + +struct dpmaif_hpc_rule { + unsigned int type:4; + unsigned int flow_lab:20; /* only use for ipv6 */ + unsigned int hop_lim:8; /* only use for ipv6 */ + unsigned short src_port; + unsigned short dst_port; + union{ + struct{ + unsigned int v4src_addr; + unsigned int v4dst_addr; + unsigned int resv[6]; + }; + struct{ + unsigned int v6src_addr3; + unsigned int v6dst_addr3; + unsigned int v6src_addr0; + unsigned int v6src_addr1; + unsigned int v6src_addr2; + unsigned int v6dst_addr0; + unsigned int v6dst_addr1; + unsigned int v6dst_addr2; + }; + }; +}; + +enum mtk_drv_err { + DATA_ERR_STOP_MAX = 10, + DATA_HW_REG_TIMEOUT, + DATA_HW_REG_CHK_FAIL, + DATA_FLOW_CHK_ERR, + DATA_DMA_MAP_ERR, + DATA_DL_ONCE_MORE, + DATA_PIT_SEQ_CHK_FAIL, + DATA_LOW_MEM_TYPE_MAX, + DATA_LOW_MEM_DRB, + DATA_LOW_MEM_SKB, +}; + +#define DPMAIF_RXQ_CNT_MAX 2 +#define DPMAIF_TXQ_CNT_MAX 5 +#define DPMAIF_IRQ_CNT_MAX 3 + +#define DPMAIF_PIT_SEQ_MAX 251 + +#define DPMAIF_HW_PKT_ALIGN 64 +#define DPMAIF_HW_BAT_RSVLEN 0 + +enum { + DPMAIF_CLEAR_INTR, + DPMAIF_UNMASK_INTR, +}; + +enum dpmaif_drv_dlq_id { + DPMAIF_DLQ0 = 0, + DPMAIF_DLQ1, +}; + +struct dpmaif_drv_dlq { + bool q_started; + dma_addr_t pit_base; + u32 pit_size; +}; + +struct dpmaif_drv_ulq { + bool q_started; + dma_addr_t drb_base; + u32 drb_size; +}; + +struct dpmaif_drv_data_ring { + dma_addr_t normal_bat_base; + u32 normal_bat_size; + dma_addr_t frag_bat_base; + u32 frag_bat_size; + u32 normal_bat_remain_size; + u32 normal_bat_pkt_bufsz; + u32 frag_bat_pkt_bufsz; + u32 normal_bat_rsv_length; + u32 pkt_bid_max_cnt; + u32 pkt_alignment; + u32 mtu; + u32 chk_pit_num; + u32 chk_normal_bat_num; + u32 chk_frag_bat_num; +}; + +struct dpmaif_drv_property { + u32 features; + struct dpmaif_drv_dlq 
dlq[DPMAIF_RXQ_CNT_MAX]; + struct dpmaif_drv_ulq ulq[DPMAIF_TXQ_CNT_MAX]; + struct dpmaif_drv_data_ring ring; +}; + +enum dpmaif_drv_ring_type { + DPMAIF_PIT, + DPMAIF_BAT, + DPMAIF_FRAG, + DPMAIF_DRB, +}; + +enum dpmaif_drv_ring_idx { + DPMAIF_PIT_WIDX, + DPMAIF_PIT_RIDX, + DPMAIF_BAT_WIDX, + DPMAIF_BAT_RIDX, + DPMAIF_FRAG_WIDX, + DPMAIF_FRAG_RIDX, + DPMAIF_DRB_WIDX, + DPMAIF_DRB_RIDX, +}; + +struct dpmaif_drv_irq_en_mask { + u32 ap_ul_l2intr_en_mask; + u32 ap_dl_l2intr_en_mask; + u32 ap_udl_ip_busy_en_mask; +}; + +struct dpmaif_drv_info { + struct mtk_md_dev *mdev; + bool ulq_all_enable, dlq_all_enable; + struct dpmaif_drv_property drv_property; + struct dpmaif_drv_irq_en_mask drv_irq_en_mask; + struct dpmaif_drv_ops *drv_ops; +}; + +struct dpmaif_drv_cfg { + dma_addr_t drb_base[DPMAIF_TXQ_CNT_MAX]; + u32 drb_cnt[DPMAIF_TXQ_CNT_MAX]; + dma_addr_t pit_base[DPMAIF_RXQ_CNT_MAX]; + u32 pit_cnt[DPMAIF_RXQ_CNT_MAX]; + dma_addr_t normal_bat_base; + u32 normal_bat_cnt; + dma_addr_t frag_bat_base; + u32 frag_bat_cnt; + u32 normal_bat_buf_size; + u32 frag_bat_buf_size; + u32 max_mtu; + u32 features; +}; + +enum dpmaif_drv_intr_type { + DPMAIF_INTR_MIN = 0, + /* uplink part */ + DPMAIF_INTR_UL_DONE, + /* downlink part */ + DPMAIF_INTR_DL_BATCNT_LEN_ERR, + DPMAIF_INTR_DL_FRGCNT_LEN_ERR, + DPMAIF_INTR_DL_PITCNT_LEN_ERR, + DPMAIF_INTR_DL_DONE, + DPMAIF_INTR_MAX +}; + +#define DPMAIF_INTR_COUNT ((DPMAIF_INTR_MAX) - (DPMAIF_INTR_MIN) - 1) + +struct dpmaif_drv_intr_info { + unsigned char intr_cnt; + enum dpmaif_drv_intr_type intr_types[DPMAIF_INTR_COUNT]; + /* it's a queue mask or queue index */ + u32 intr_queues[DPMAIF_INTR_COUNT]; +}; + +/* This structure defines the management hooks for dpmaif devices. */ +struct dpmaif_drv_ops { + /* Initialize dpmaif hardware. */ + int (*init)(struct dpmaif_drv_info *drv_info, void *data); + /* Start dpmaif hardware transaction and unmask dpmaif interrupt. */ + int (*start_queue)(struct dpmaif_drv_info *drv_info, enum dpmaif_drv_dir dir); + /* Stop dpmaif hardware transaction and mask dpmaif interrupt. */ + int (*stop_queue)(struct dpmaif_drv_info *drv_info, enum dpmaif_drv_dir dir); + /* Check, mask and clear the dpmaif interrupts, + * and then, collect interrupt information for data plane transaction layer. + */ + int (*intr_handle)(struct dpmaif_drv_info *drv_info, void *data, u8 irq_id); + /* Unmask or clear dpmaif interrupt. 
*/ + int (*intr_complete)(struct dpmaif_drv_info *drv_info, enum dpmaif_drv_intr_type type, + u8 q_id, u64 data); + int (*clear_ip_busy)(struct dpmaif_drv_info *drv_info); + int (*send_doorbell)(struct dpmaif_drv_info *drv_info, enum dpmaif_drv_ring_type type, + u8 q_id, u32 cnt); + int (*get_ring_idx)(struct dpmaif_drv_info *drv_info, enum dpmaif_drv_ring_idx index, + u8 q_id); + int (*feature_cmd)(struct dpmaif_drv_info *drv_info, enum dpmaif_drv_cmd cmd, void *data); + void (*dump)(struct dpmaif_drv_info *drv_info); +}; + +static inline int mtk_dpmaif_drv_init(struct dpmaif_drv_info *drv_info, void *data) +{ + return drv_info->drv_ops->init(drv_info, data); +} + +static inline int mtk_dpmaif_drv_start_queue(struct dpmaif_drv_info *drv_info, + enum dpmaif_drv_dir dir) +{ + return drv_info->drv_ops->start_queue(drv_info, dir); +} + +static inline int mtk_dpmaif_drv_stop_queue(struct dpmaif_drv_info *drv_info, + enum dpmaif_drv_dir dir) +{ + return drv_info->drv_ops->stop_queue(drv_info, dir); +} + +static inline int mtk_dpmaif_drv_intr_handle(struct dpmaif_drv_info *drv_info, + void *data, u8 irq_id) +{ + return drv_info->drv_ops->intr_handle(drv_info, data, irq_id); +} + +static inline int mtk_dpmaif_drv_intr_complete(struct dpmaif_drv_info *drv_info, + enum dpmaif_drv_intr_type type, u8 q_id, u64 data) +{ + return drv_info->drv_ops->intr_complete(drv_info, type, q_id, data); +} + +static inline int mtk_dpmaif_drv_clear_ip_busy(struct dpmaif_drv_info *drv_info) +{ + return drv_info->drv_ops->clear_ip_busy(drv_info); +} + +static inline int mtk_dpmaif_drv_send_doorbell(struct dpmaif_drv_info *drv_info, + enum dpmaif_drv_ring_type type, u8 q_id, u32 cnt) +{ + return drv_info->drv_ops->send_doorbell(drv_info, type, q_id, cnt); +} + +static inline int mtk_dpmaif_drv_get_ring_idx(struct dpmaif_drv_info *drv_info, + enum dpmaif_drv_ring_idx index, u8 q_id) +{ + return drv_info->drv_ops->get_ring_idx(drv_info, index, q_id); +} + +static inline int mtk_dpmaif_drv_feature_cmd(struct dpmaif_drv_info *drv_info, + enum dpmaif_drv_cmd cmd, void *data) +{ + return drv_info->drv_ops->feature_cmd(drv_info, cmd, data); +} + +extern struct dpmaif_drv_ops dpmaif_drv_ops_t800; + +#endif diff --git a/drivers/net/wwan/mediatek/pcie/mtk_dpmaif_drv_t800.c b/drivers/net/wwan/mediatek/pcie/mtk_dpmaif_drv_t800.c new file mode 100644 index 000000000000..c9a1cb431cbe --- /dev/null +++ b/drivers/net/wwan/mediatek/pcie/mtk_dpmaif_drv_t800.c @@ -0,0 +1,2115 @@ +// SPDX-License-Identifier: BSD-3-Clause-Clear +/* + * Copyright (c) 2022, MediaTek Inc. + */ + +#include +#include + +#include "mtk_dev.h" +#include "mtk_dpmaif_drv.h" +#include "mtk_dpmaif_reg_t800.h" + +#define DRV_TO_MDEV(__drv_info) ((__drv_info)->mdev) + +/* 2ms -> 2 * 1000 / 10 = 200 */ +#define POLL_MAX_TIMES 200 +#define POLL_INTERVAL_US 10 + +static void mtk_dpmaif_drv_reset(struct dpmaif_drv_info *drv_info) +{ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AP_AO_RGU_ASSERT, DPMAIF_AP_AO_RST_BIT); + /* Delay 2 us to wait for hardware ready. 
*/ + udelay(2); + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AP_RGU_ASSERT, DPMAIF_AP_RST_BIT); + udelay(2); + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AP_AO_RGU_DEASSERT, DPMAIF_AP_AO_RST_BIT); + udelay(2); + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AP_RGU_DEASSERT, DPMAIF_AP_RST_BIT); + udelay(2); +} + +static bool mtk_dpmaif_drv_sram_init(struct dpmaif_drv_info *drv_info) +{ + u32 val, cnt = 0; + bool ret = true; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AP_MISC_RSTR_CLR); + val |= DPMAIF_MEM_CLR_MASK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AP_MISC_RSTR_CLR, val); + + do { + if (!(mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AP_MISC_RSTR_CLR) & + DPMAIF_MEM_CLR_MASK)) + break; + + cnt++; + udelay(POLL_INTERVAL_US); + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to initialize sram.\n"); + return false; + } + return ret; +} + +static bool mtk_dpmaif_drv_config(struct dpmaif_drv_info *drv_info) +{ + u32 val; + + /* Reset dpmaif HW setting. */ + mtk_dpmaif_drv_reset(drv_info); + + /* Initialize dpmaif sram. */ + if (!mtk_dpmaif_drv_sram_init(drv_info)) + return false; + + /* Set DPMAIF AP port mode. */ + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_RDY_CHK_THRES); + val &= ~DPMAIF_PORT_MODE_MSK; + val |= DPMAIF_PORT_MODE_PCIE; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_RDY_CHK_THRES, val); + + /* Set CG enable. */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AP_MISC_CG_EN, 0x7f); + return true; +} + +static bool mtk_dpmaif_drv_init_intr(struct dpmaif_drv_info *drv_info) +{ + struct dpmaif_drv_irq_en_mask *irq_en_mask; + u32 cnt = 0, cfg; + + irq_en_mask = &drv_info->drv_irq_en_mask; + + /* Set SW UL interrupt. */ + irq_en_mask->ap_ul_l2intr_en_mask = DPMAIF_AP_UL_L2INTR_EN_MASK; + + /* Clear dummy status. */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TISAR0, 0xFFFFFFFF); + + /* Set HW UL interrupt enable mask. */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TICR0, + irq_en_mask->ap_ul_l2intr_en_mask); + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TISR0, + ~(irq_en_mask->ap_ul_l2intr_en_mask)); + mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TISR0); + + /* Check UL interrupt mask set done. */ + do { + if (!((mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TIMR0) & + irq_en_mask->ap_ul_l2intr_en_mask) == irq_en_mask->ap_ul_l2intr_en_mask)) + break; + + cnt++; + udelay(POLL_INTERVAL_US); + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to set UL interrupt mask.\n"); + return false; + } + + /* Set SW DL interrupt. */ + irq_en_mask->ap_dl_l2intr_en_mask = DPMAIF_AP_DL_L2INTR_EN_MASK; + + /* Clear dummy status. */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISAR0, 0xFFFFFFFF); + + /* Set HW DL interrupt enable mask. */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISR0, + ~(irq_en_mask->ap_dl_l2intr_en_mask)); + mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISR0); + + /* Check DL interrupt mask set done. */ + cnt = 0; + do { + if (!((mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TIMR0) & + irq_en_mask->ap_dl_l2intr_en_mask) == irq_en_mask->ap_dl_l2intr_en_mask)) + break; + + cnt++; + udelay(POLL_INTERVAL_US); + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to set DL interrupt mask\n"); + return false; + } + + /* Set SW AP IP busy. 
*/ + irq_en_mask->ap_udl_ip_busy_en_mask = DPMAIF_AP_UDL_IP_BUSY_EN_MASK; + + /* Clear dummy status. */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_IP_BUSY, 0xFFFFFFFF); + + /* Set HW IP busy mask. */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DLUL_IP_BUSY_MASK, + irq_en_mask->ap_udl_ip_busy_en_mask); + + /* DLQ HPC setting. */ + cfg = mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_UL_AP_L1TIMR0); + cfg |= DPMAIF_DL_INT_Q2APTOP_MSK | DPMAIF_DL_INT_Q2TOQ1_MSK | DPMAIF_UL_TOP0_INT_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_UL_AP_L1TIMR0, cfg); + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_HPC_INTR_MASK, 0xffff); + + dev_info(DRV_TO_MDEV(drv_info)->dev, + "ul_mask=0x%08x, dl_mask=0x%08x, busy_mask=0x%08x\n", + irq_en_mask->ap_ul_l2intr_en_mask, + irq_en_mask->ap_dl_l2intr_en_mask, + irq_en_mask->ap_udl_ip_busy_en_mask); + return true; +} + +static void mtk_dpmaif_drv_set_property(struct dpmaif_drv_info *drv_info, + struct dpmaif_drv_cfg *drv_cfg) +{ + struct dpmaif_drv_property *drv_property = &drv_info->drv_property; + struct dpmaif_drv_data_ring *ring; + struct dpmaif_drv_dlq *dlq; + struct dpmaif_drv_ulq *ulq; + u32 i; + + drv_property->features = drv_cfg->features; + + for (i = 0; i < DPMAIF_DLQ_NUM; i++) { + dlq = &drv_property->dlq[i]; + dlq->pit_base = drv_cfg->pit_base[i]; + dlq->pit_size = drv_cfg->pit_cnt[i]; + dlq->q_started = true; + } + + for (i = 0; i < DPMAIF_ULQ_NUM; i++) { + ulq = &drv_property->ulq[i]; + ulq->drb_base = drv_cfg->drb_base[i]; + ulq->drb_size = drv_cfg->drb_cnt[i]; + ulq->q_started = true; + } + + ring = &drv_property->ring; + + /* Normal bat setting. */ + ring->normal_bat_base = drv_cfg->normal_bat_base; + ring->normal_bat_size = drv_cfg->normal_bat_cnt; + ring->normal_bat_pkt_bufsz = drv_cfg->normal_bat_buf_size; + ring->normal_bat_remain_size = DPMAIF_HW_BAT_REMAIN; + ring->normal_bat_rsv_length = DPMAIF_HW_BAT_RSVLEN; + ring->chk_normal_bat_num = DPMAIF_HW_CHK_BAT_NUM; + + /* Frag bat setting. */ + if (drv_property->features & DATA_HW_F_FRAG) { + ring->frag_bat_base = drv_cfg->frag_bat_base; + ring->frag_bat_size = drv_cfg->frag_bat_cnt; + ring->frag_bat_pkt_bufsz = drv_cfg->frag_bat_buf_size; + ring->chk_frag_bat_num = DPMAIF_HW_CHK_FRG_NUM; + } + + ring->mtu = drv_cfg->max_mtu; + ring->pkt_bid_max_cnt = DPMAIF_HW_PKT_BIDCNT; + ring->pkt_alignment = DPMAIF_HW_PKT_ALIGN; + ring->chk_pit_num = DPMAIF_HW_CHK_PIT_NUM; +} + +static void mtk_dpmaif_drv_init_common_hw(struct dpmaif_drv_info *drv_info) +{ + u32 val; + + /* Config PCIe mode. */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_UL_RESERVE_AO_RW, + DPMAIF_PCIE_MODE_SET_VALUE); + + /* Bat cache enable. */ + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT_CON1); + val |= DPMAIF_DL_BAT_CACHE_PRI; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT_CON1, val); + + /* Pit burst enable. */ + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_RDY_CHK_THRES); + val |= DPMAIF_DL_BURST_PIT_EN; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_RDY_CHK_THRES, val); +} + +static void mtk_dpmaif_drv_set_hpc_cntl(struct dpmaif_drv_info *drv_info) +{ + u32 cfg = 0; + + cfg = (DPMAIF_HPC_LRO_PATH_DF & 0x3) << 0; + cfg |= (DPMAIF_HPC_ADD_MODE_DF & 0x3) << 2; + cfg |= (DPMAIF_HASH_PRIME_DF & 0xf) << 4; + cfg |= (DPMAIF_HPC_TOTAL_NUM & 0xff) << 8; + + /* Configuration include hpc dlq path, + * hpc add mode, hash prime, hpc total number. 
+ */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_HPC_CNTL, cfg); +} + +static void mtk_dpmaif_drv_set_agg_cfg(struct dpmaif_drv_info *drv_info) +{ + u32 cfg; + + cfg = (DPMAIF_AGG_MAX_LEN_DF & 0xffff) << 0; + cfg |= (DPMAIF_AGG_TBL_ENT_NUM_DF & 0xffff) << 16; + + /* Configuration include agg max length, agg table number. */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_LRO_AGG_CFG, cfg); + + /* enable/disable AGG */ + cfg = mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_RDY_CHK_FRG_THRES); + if (drv_info->drv_property.features & DATA_HW_F_LRO) + mtk_hw_write32(DRV_TO_MDEV(drv_info), + NRL2_DPMAIF_AO_DL_RDY_CHK_FRG_THRES, cfg | (0xff << 20)); + else + mtk_hw_write32(DRV_TO_MDEV(drv_info), + NRL2_DPMAIF_AO_DL_RDY_CHK_FRG_THRES, cfg & 0xf00fffff); +} + +static void mtk_dpmaif_drv_set_hash_bit_choose(struct dpmaif_drv_info *drv_info) +{ + u32 cfg; + + cfg = (DPMAIF_LRO_HASH_BIT_CHOOSE_DF & 0x7) << 0; + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_LROPIT_INIT_CON5, cfg); +} + +static void mtk_dpmaif_drv_set_mid_pit_timeout_threshold(struct dpmaif_drv_info *drv_info) +{ + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_LROPIT_TIMEOUT0, + DPMAIF_MID_TIMEOUT_THRES_DF); +} + +static void mtk_dpmaif_drv_set_dlq_timeout_threshold(struct dpmaif_drv_info *drv_info) +{ + u32 val, i; + + for (i = 0; i < DPMAIF_HPC_MAX_TOTAL_NUM; i++) { + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), + NRL2_DPMAIF_AO_DL_LROPIT_TIMEOUT1 + 4 * (i / 2)); + + if (i % 2) + val = (val & 0xFFFF) | (DPMAIF_LRO_TIMEOUT_THRES_DF << 16); + else + val = (val & 0xFFFF0000) | (DPMAIF_LRO_TIMEOUT_THRES_DF); + + mtk_hw_write32(DRV_TO_MDEV(drv_info), + NRL2_DPMAIF_AO_DL_LROPIT_TIMEOUT1 + (4 * (i / 2)), val); + } +} + +static void mtk_dpmaif_drv_set_dlq_start_prs_threshold(struct dpmaif_drv_info *drv_info) +{ + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_LROPIT_TRIG_THRES, + DPMAIF_LRO_PRS_THRES_DF & 0x3FFFF); +} + +static void mtk_dpmaif_drv_toeplitz_hash_enable(struct dpmaif_drv_info *drv_info, u32 enable) +{ + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_REG_TOE_HASH_EN, enable); +} + +static void mtk_dpmaif_drv_hash_default_value_set(struct dpmaif_drv_info *drv_info, u32 hash) +{ + u32 val = mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_REG_HASH_CFG_CON); + + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_REG_HASH_CFG_CON, + (val & DPMAIF_HASH_DEFAULT_V_MASK) | hash); +} + +static int mtk_dpmaif_drv_hash_sec_key_set(struct dpmaif_drv_info *drv_info, u8 *hash_key) +{ + u32 i, cnt = 0; + u32 index; + u32 val; + + for (i = 0; i < DPMAIF_HASH_SEC_KEY_NUM / 4; i++) { + index = i << 2; + val = hash_key[index] << 24 | hash_key[index + 1] << 16 | + hash_key[index + 2] << 8 | hash_key[index + 3]; + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_REG_HASH_SEC_KEY_0 + index, val); + } + + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_REG_HASH_SEC_KEY_UPD, 1); + + do { + if (!(mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_REG_HASH_SEC_KEY_UPD))) + break; + + cnt++; + udelay(POLL_INTERVAL_US); + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) + return -DATA_HW_REG_TIMEOUT; + + return 0; +} + +static int mtk_dpmaif_drv_hash_sec_key_get(struct dpmaif_drv_info *drv_info, u8 *hash_key) +{ + u32 index; + u32 val; + u32 i; + + for (i = 0; i < DPMAIF_HASH_SEC_KEY_NUM / 4; i++) { + index = i << 2; + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_REG_HASH_SEC_KEY_0 + index); + hash_key[index] = val >> 24 & 0xff; + hash_key[index + 1] = val >> 16 & 0xff; + hash_key[index + 2] = val >> 8 & 0xff; + 
hash_key[index + 3] = val & 0xff; + } + + return 0; +} + +static void mtk_dpmaif_drv_hash_bit_mask_set(struct dpmaif_drv_info *drv_info, u32 mask) +{ + u32 val = mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_REG_HASH_CFG_CON); + + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_REG_HASH_CFG_CON, + (val & DPMAIF_HASH_BIT_MASK) | (mask << 8)); +} + +static void mtk_dpmaif_drv_hash_indir_mask_set(struct dpmaif_drv_info *drv_info, u32 mask) +{ + u32 val = mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_REG_HASH_CFG_CON); + + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_REG_HASH_CFG_CON, + (val & DPMAIF_HASH_INDR_MASK) | (mask << 16)); +} + +static u32 mtk_dpmaif_drv_hash_indir_mask_get(struct dpmaif_drv_info *drv_info) +{ + u32 val = mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_REG_HASH_CFG_CON); + + return (val & (~DPMAIF_HASH_INDR_MASK)) >> 16; +} + +static void mtk_dpmaif_drv_hpc_stats_thres_set(struct dpmaif_drv_info *drv_info, u32 thres) +{ + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_REG_HPC_STATS_THRES, thres); +} + +static void mtk_dpmaif_drv_hpc_stats_time_cfg_set(struct dpmaif_drv_info *drv_info, u32 time_cfg) +{ + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_REG_HPC_STATS_TIMER_CFG, time_cfg); +} + +static void mtk_dpmaif_drv_init_dl_hpc_hw(struct dpmaif_drv_info *drv_info) +{ + u8 hash_key[DPMAIF_HASH_SEC_KEY_NUM]; + + mtk_dpmaif_drv_set_hpc_cntl(drv_info); + mtk_dpmaif_drv_set_agg_cfg(drv_info); + mtk_dpmaif_drv_set_hash_bit_choose(drv_info); + mtk_dpmaif_drv_set_mid_pit_timeout_threshold(drv_info); + mtk_dpmaif_drv_set_dlq_timeout_threshold(drv_info); + mtk_dpmaif_drv_set_dlq_start_prs_threshold(drv_info); + mtk_dpmaif_drv_toeplitz_hash_enable(drv_info, DPMAIF_TOEPLITZ_HASH_EN); + mtk_dpmaif_drv_hash_default_value_set(drv_info, DPMAIF_HASH_DEFAULT_VALUE); + get_random_bytes(hash_key, sizeof(hash_key)); + mtk_dpmaif_drv_hash_sec_key_set(drv_info, hash_key); + mtk_dpmaif_drv_hash_bit_mask_set(drv_info, DPMAIF_HASH_BIT_MASK_DF); + mtk_dpmaif_drv_hash_indir_mask_set(drv_info, DPMAIF_HASH_INDR_MASK_DF); + mtk_dpmaif_drv_hpc_stats_thres_set(drv_info, DPMAIF_HPC_STATS_THRESHOLD); + mtk_dpmaif_drv_hpc_stats_time_cfg_set(drv_info, DPMAIF_HPC_STATS_TIMER_CFG); +} + +static void mtk_dpmaif_drv_dl_set_ao_remain_minsz(struct dpmaif_drv_info *drv_info, u32 sz) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PKTINFO_CONO); + val &= ~DPMAIF_BAT_REMAIN_MINSZ_MSK; + val |= ((sz / DPMAIF_BAT_REMAIN_SZ_BASE) << 8) & DPMAIF_BAT_REMAIN_MINSZ_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PKTINFO_CONO, val); +} + +static void mtk_dpmaif_drv_dl_set_ao_bat_bufsz(struct dpmaif_drv_info *drv_info, u32 buf_sz) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PKTINFO_CON2); + val &= ~DPMAIF_BAT_BUF_SZ_MSK; + val |= ((buf_sz / DPMAIF_BAT_BUFFER_SZ_BASE) << 8) & DPMAIF_BAT_BUF_SZ_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PKTINFO_CON2, val); +} + +static void mtk_dpmaif_drv_dl_set_ao_bat_rsv_length(struct dpmaif_drv_info *drv_info, u32 length) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PKTINFO_CON2); + val &= ~DPMAIF_BAT_RSV_LEN_MSK; + val |= length & DPMAIF_BAT_RSV_LEN_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PKTINFO_CON2, val); +} + +static void mtk_dpmaif_drv_dl_set_ao_bid_maxcnt(struct dpmaif_drv_info *drv_info, u32 cnt) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PKTINFO_CONO); + val &= ~DPMAIF_BAT_BID_MAXCNT_MSK; + val |= (cnt << 16) & DPMAIF_BAT_BID_MAXCNT_MSK; + 
mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PKTINFO_CONO, val); +} + +static void mtk_dpmaif_drv_dl_set_pkt_alignment(struct dpmaif_drv_info *drv_info, + bool enable, u32 mode) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_RDY_CHK_THRES); + val &= ~DPMAIF_PKT_ALIGN_MSK; + if (enable) { + val |= DPMAIF_PKT_ALIGN_EN; + val |= (mode << 22) & DPMAIF_PKT_ALIGN_MSK; + } + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_RDY_CHK_THRES, val); +} + +static void mtk_dpmaif_drv_dl_set_pit_seqnum(struct dpmaif_drv_info *drv_info, u32 seq) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_PIT_SEQ_END); + val &= ~DPMAIF_DL_PIT_SEQ_MSK; + val |= seq & DPMAIF_DL_PIT_SEQ_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_PIT_SEQ_END, val); +} + +static void mtk_dpmaif_drv_dl_set_ao_mtu(struct dpmaif_drv_info *drv_info, u32 mtu_sz) +{ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PKTINFO_CON1, mtu_sz); +} + +static void mtk_dpmaif_drv_dl_set_ao_pit_chknum(struct dpmaif_drv_info *drv_info, u32 number) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PKTINFO_CON2); + val &= ~DPMAIF_PIT_CHK_NUM_MSK; + val |= (number << 24) & DPMAIF_PIT_CHK_NUM_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PKTINFO_CON2, val); +} + +static void mtk_dpmaif_drv_dl_set_ao_bat_check_threshold(struct dpmaif_drv_info *drv_info, u32 size) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_RDY_CHK_THRES); + val &= ~DPMAIF_BAT_CHECK_THRES_MSK; + val |= (size << 16) & DPMAIF_BAT_CHECK_THRES_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_RDY_CHK_THRES, val); +} + +static void mtk_dpmaif_drv_dl_frg_ao_en(struct dpmaif_drv_info *drv_info, bool enable) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_FRG_CHK_THRES); + if (enable) + val |= DPMAIF_FRG_EN_MSK; + else + val &= ~DPMAIF_FRG_EN_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_FRG_CHK_THRES, val); +} + +static void mtk_dpmaif_drv_dl_set_ao_frg_bufsz(struct dpmaif_drv_info *drv_info, u32 buf_sz) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_FRG_CHK_THRES); + val &= ~DPMAIF_FRG_BUF_SZ_MSK; + val |= ((buf_sz / DPMAIF_FRG_BUFFER_SZ_BASE) << 8) & DPMAIF_FRG_BUF_SZ_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_FRG_CHK_THRES, val); +} + +static void mtk_dpmaif_drv_dl_set_ao_frg_check_threshold(struct dpmaif_drv_info *drv_info, u32 size) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_FRG_CHK_THRES); + val &= ~DPMAIF_FRG_CHECK_THRES_MSK; + val |= size & DPMAIF_FRG_CHECK_THRES_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_FRG_CHK_THRES, val); +} + +static void mtk_dpmaif_drv_dl_set_bat_base_addr(struct dpmaif_drv_info *drv_info, u64 addr) +{ + u32 lb_addr = (u32)(addr & 0xFFFFFFFF); + u32 hb_addr = (u32)(addr >> 32); + + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT_CON0, lb_addr); + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT_CON3, hb_addr); +} + +static void mtk_dpmaif_drv_dl_set_bat_size(struct dpmaif_drv_info *drv_info, u32 size) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT_CON1); + val &= ~DPMAIF_BAT_SIZE_MSK; + val |= size & DPMAIF_BAT_SIZE_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT_CON1, val); +} + +static void mtk_dpmaif_drv_dl_bat_en(struct dpmaif_drv_info *drv_info, bool enable) +{ + u32 val; + + val = 
mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT_CON1); + if (enable) + val |= DPMAIF_BAT_EN_MSK; + else + val &= ~DPMAIF_BAT_EN_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT_CON1, val); +} + +static void mtk_dpmaif_drv_dl_bat_init_done(struct dpmaif_drv_info *drv_info, bool frag_en) +{ + u32 cnt = 0, dl_bat_init; + + dl_bat_init = DPMAIF_DL_BAT_INIT_ALLSET; + dl_bat_init |= DPMAIF_DL_BAT_INIT_EN; + + if (frag_en) + dl_bat_init |= DPMAIF_DL_BAT_FRG_INIT; + + do { + if (!(mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT) & + DPMAIF_DL_BAT_INIT_NOT_READY)) { + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT, dl_bat_init); + break; + } + cnt++; + udelay(POLL_INTERVAL_US); + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to initialize bat.\n"); + return; + } + + cnt = 0; + do { + if (!((mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT) & + DPMAIF_DL_BAT_INIT_NOT_READY) == DPMAIF_DL_BAT_INIT_NOT_READY)) + break; + + cnt++; + udelay(POLL_INTERVAL_US); + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Initialize bat is not ready.\n"); + return; + } +} + +static void mtk_dpmaif_drv_dl_set_pit_base_addr(struct dpmaif_drv_info *drv_info, u64 addr) +{ + u32 lb_addr = (u32)(addr & 0xFFFFFFFF); + u32 hb_addr = (u32)(addr >> 32); + + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_INIT_CON0, lb_addr); + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_INIT_CON4, hb_addr); +} + +static void mtk_dpmaif_drv_dl_set_pit_size(struct dpmaif_drv_info *drv_info, u32 size) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_INIT_CON1); + val &= ~DPMAIF_PIT_SIZE_MSK; + val |= size & DPMAIF_PIT_SIZE_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_INIT_CON1, val); + + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_INIT_CON2, 0); + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_INIT_CON3, 0); + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_INIT_CON5, 0); + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_INIT_CON6, 0); +} + +static void mtk_dpmaif_drv_dl_pit_en(struct dpmaif_drv_info *drv_info, bool enable) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_INIT_CON3); + if (enable) + val |= DPMAIF_LROPIT_EN_MSK; + else + val &= ~DPMAIF_LROPIT_EN_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_INIT_CON3, val); +} + +static void mtk_dpmaif_drv_dl_pit_init_done(struct dpmaif_drv_info *drv_info, u32 pit_idx) +{ + int cnt = 0, dl_pit_init; + + dl_pit_init = DPMAIF_DL_PIT_INIT_ALLSET; + dl_pit_init |= pit_idx << DPMAIF_LROPIT_CHAN_OFS; + dl_pit_init |= DPMAIF_DL_PIT_INIT_EN; + + do { + if (!(mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_INIT) & + DPMAIF_DL_PIT_INIT_NOT_READY)) { + mtk_hw_write32(DRV_TO_MDEV(drv_info), + NRL2_DPMAIF_DL_LROPIT_INIT, dl_pit_init); + break; + } + udelay(POLL_INTERVAL_US); + cnt++; + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to initialize pit.\n"); + return; + } + + cnt = 0; + do { + if (!((mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_INIT) & + DPMAIF_DL_PIT_INIT_NOT_READY) == DPMAIF_DL_PIT_INIT_NOT_READY)) + break; + + udelay(POLL_INTERVAL_US); + cnt++; + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + 
dev_err(DRV_TO_MDEV(drv_info)->dev, "Initialize pit is not ready.\n"); + return; + } +} + +static void mtk_dpmaif_drv_config_dlq_pit_hw(struct dpmaif_drv_info *drv_info, u8 q_num, + struct dpmaif_drv_dlq *dlq) +{ + mtk_dpmaif_drv_dl_set_pit_base_addr(drv_info, (u64)dlq->pit_base); + mtk_dpmaif_drv_dl_set_pit_size(drv_info, dlq->pit_size); + mtk_dpmaif_drv_dl_pit_en(drv_info, true); + mtk_dpmaif_drv_dl_pit_init_done(drv_info, q_num); +} + +static int mtk_dpmaif_drv_dlq_all_en(struct dpmaif_drv_info *drv_info, bool enable) +{ + u32 val, dl_bat_init, cnt = 0; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT_CON1); + + if (enable) + val |= DPMAIF_BAT_EN_MSK; + else + val &= ~DPMAIF_BAT_EN_MSK; + + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT_CON1, val); + mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT_CON1); + + dl_bat_init = DPMAIF_DL_BAT_INIT_ONLY_ENABLE_BIT; + dl_bat_init |= DPMAIF_DL_BAT_INIT_EN; + + /* Update DL bat setting to HW */ + do { + if (!(mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT) & + DPMAIF_DL_BAT_INIT_NOT_READY)) { + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT, dl_bat_init); + break; + } + udelay(POLL_INTERVAL_US); + cnt++; + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to enable all dl queue.\n"); + return -DATA_HW_REG_TIMEOUT; + } + + /* Wait HW update done */ + cnt = 0; + do { + if (!((mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT) & + DPMAIF_DL_BAT_INIT_NOT_READY) == DPMAIF_DL_BAT_INIT_NOT_READY)) + break; + + udelay(POLL_INTERVAL_US); + cnt++; + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Enable all dl queue is not ready.\n"); + return -DATA_HW_REG_TIMEOUT; + } + + return 0; +} + +static bool mtk_dpmaif_drv_dl_idle_check(struct dpmaif_drv_info *drv_info) +{ + bool is_idle = false; + u32 dl_dbg_sta; + + dl_dbg_sta = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_DBG_STA1); + + /* If all the queues are idle, DL idle is true. 
*/ + if ((dl_dbg_sta & DPMAIF_DL_IDLE_STS) == DPMAIF_DL_IDLE_STS) + is_idle = true; + + return is_idle; +} + +static u32 mtk_dpmaif_drv_dl_get_wridx(struct dpmaif_drv_info *drv_info) +{ + return ((mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PIT_STA3)) & + DPMAIF_DL_PIT_WRIDX_MSK); +} + +static u32 mtk_dpmaif_drv_dl_get_pit_ridx(struct dpmaif_drv_info *drv_info) +{ + return ((mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PIT_STA2)) & + DPMAIF_DL_PIT_WRIDX_MSK); +} + +static void mtk_dpmaif_drv_dl_set_pkt_checksum(struct dpmaif_drv_info *drv_info) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_RDY_CHK_THRES); + val |= DPMAIF_DL_PKT_CHECKSUM_EN; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_RDY_CHK_THRES, val); +} + +static bool mtk_dpmaif_drv_config_dlq_hw(struct dpmaif_drv_info *drv_info) +{ + struct dpmaif_drv_property *drv_property = &drv_info->drv_property; + struct dpmaif_drv_data_ring *ring = &drv_property->ring; + struct dpmaif_drv_dlq *dlq; + u32 i; + + mtk_dpmaif_drv_init_dl_hpc_hw(drv_info); + mtk_dpmaif_drv_dl_set_ao_remain_minsz(drv_info, ring->normal_bat_remain_size); + mtk_dpmaif_drv_dl_set_ao_bat_bufsz(drv_info, ring->normal_bat_pkt_bufsz); + mtk_dpmaif_drv_dl_set_ao_bat_rsv_length(drv_info, ring->normal_bat_rsv_length); + mtk_dpmaif_drv_dl_set_ao_bid_maxcnt(drv_info, ring->pkt_bid_max_cnt); + + if (ring->pkt_alignment == 64) + mtk_dpmaif_drv_dl_set_pkt_alignment(drv_info, true, DPMAIF_PKT_ALIGN64_MODE); + else if (ring->pkt_alignment == 128) + mtk_dpmaif_drv_dl_set_pkt_alignment(drv_info, true, DPMAIF_PKT_ALIGN128_MODE); + else + mtk_dpmaif_drv_dl_set_pkt_alignment(drv_info, false, 0); + + mtk_dpmaif_drv_dl_set_pit_seqnum(drv_info, DPMAIF_PIT_SEQ_MAX); + mtk_dpmaif_drv_dl_set_ao_mtu(drv_info, ring->mtu); + mtk_dpmaif_drv_dl_set_ao_pit_chknum(drv_info, ring->chk_pit_num); + mtk_dpmaif_drv_dl_set_ao_bat_check_threshold(drv_info, ring->chk_normal_bat_num); + + /* Initialize frag bat. */ + if (drv_property->features & DATA_HW_F_FRAG) { + mtk_dpmaif_drv_dl_frg_ao_en(drv_info, true); + mtk_dpmaif_drv_dl_set_ao_frg_bufsz(drv_info, ring->frag_bat_pkt_bufsz); + mtk_dpmaif_drv_dl_set_ao_frg_check_threshold(drv_info, ring->chk_frag_bat_num); + mtk_dpmaif_drv_dl_set_bat_base_addr(drv_info, (u64)ring->frag_bat_base); + mtk_dpmaif_drv_dl_set_bat_size(drv_info, ring->frag_bat_size); + mtk_dpmaif_drv_dl_bat_en(drv_info, true); + mtk_dpmaif_drv_dl_bat_init_done(drv_info, true); + } + + /* Initialize normal bat. */ + mtk_dpmaif_drv_dl_set_bat_base_addr(drv_info, (u64)ring->normal_bat_base); + mtk_dpmaif_drv_dl_set_bat_size(drv_info, ring->normal_bat_size); + mtk_dpmaif_drv_dl_bat_en(drv_info, false); + mtk_dpmaif_drv_dl_bat_init_done(drv_info, false); + + /* Initialize pit information. 
*/ + for (i = 0; i < DPMAIF_DLQ_NUM; i++) { + dlq = &drv_property->dlq[i]; + mtk_dpmaif_drv_config_dlq_pit_hw(drv_info, i, dlq); + } + + if (mtk_dpmaif_drv_dlq_all_en(drv_info, true)) + return false; + mtk_dpmaif_drv_dl_set_pkt_checksum(drv_info); + return true; +} + +static void mtk_dpmaif_drv_ul_update_drb_size(struct dpmaif_drv_info *drv_info, u8 q_num, u32 size) +{ + u32 old_size; + u64 addr; + + addr = DPMAIF_UL_DRBSIZE_ADDRH_N(q_num); + + old_size = mtk_hw_read32(DRV_TO_MDEV(drv_info), addr); + old_size &= ~DPMAIF_DRB_SIZE_MSK; + old_size |= size & DPMAIF_DRB_SIZE_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), addr, old_size); +} + +static void mtk_dpmaif_drv_ul_update_drb_base_addr(struct dpmaif_drv_info *drv_info, + u8 q_num, u64 addr) +{ + u32 lb_addr = (u32)(addr & 0xFFFFFFFF); + u32 hb_addr = (u32)(addr >> 32); + + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_ULQSAR_N(q_num), lb_addr); + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_UL_DRB_ADDRH_N(q_num), hb_addr); +} + +static void mtk_dpmaif_drv_ul_rdy_en(struct dpmaif_drv_info *drv_info, u8 q_num, bool ready) +{ + u32 ul_rdy_en; + + ul_rdy_en = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_UL_CHNL_ARB0); + if (ready) + ul_rdy_en |= (1 << q_num); + else + ul_rdy_en &= ~(1 << q_num); + + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_UL_CHNL_ARB0, ul_rdy_en); +} + +static void mtk_dpmaif_drv_ul_arb_en(struct dpmaif_drv_info *drv_info, u8 q_num, bool enable) +{ + u32 ul_arb_en; + + ul_arb_en = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_UL_CHNL_ARB0); + if (enable) + ul_arb_en |= (1 << (q_num + 8)); + else + ul_arb_en &= ~(1 << (q_num + 8)); + + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_UL_CHNL_ARB0, ul_arb_en); +} + +static void mtk_dpmaif_drv_config_ulq_hw(struct dpmaif_drv_info *drv_info) +{ + struct dpmaif_drv_ulq *ulq; + u32 i; + + for (i = 0; i < DPMAIF_ULQ_NUM; i++) { + ulq = &drv_info->drv_property.ulq[i]; + mtk_dpmaif_drv_ul_update_drb_size(drv_info, i, + (ulq->drb_size * DPMAIF_UL_DRB_ENTRY_WORD)); + mtk_dpmaif_drv_ul_update_drb_base_addr(drv_info, i, (u64)ulq->drb_base); + mtk_dpmaif_drv_ul_rdy_en(drv_info, i, true); + mtk_dpmaif_drv_ul_arb_en(drv_info, i, true); + } +} + +static bool mtk_dpmaif_drv_init_done(struct dpmaif_drv_info *drv_info) +{ + u32 val, cnt = 0; + + /* Sync default value to SRAM. */ + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AP_MISC_OVERWRITE_CFG); + val |= DPMAIF_SRAM_SYNC_MASK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AP_MISC_OVERWRITE_CFG, val); + do { + if (!(mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AP_MISC_OVERWRITE_CFG) & + DPMAIF_SRAM_SYNC_MASK)) + break; + + udelay(POLL_INTERVAL_US); + cnt++; + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to sync default value to sram\n"); + return false; + } + + /* UL configure done. */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_UL_INIT_SET, DPMAIF_UL_INIT_DONE_MASK); + + /* DL configure done. 
*/ + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_INIT_SET, DPMAIF_DL_INIT_DONE_MASK); + return true; +} + +static bool mtk_dpmaif_drv_cfg_hw(struct dpmaif_drv_info *drv_info) +{ + mtk_dpmaif_drv_init_common_hw(drv_info); + if (!mtk_dpmaif_drv_config_dlq_hw(drv_info)) + return false; + mtk_dpmaif_drv_config_ulq_hw(drv_info); + if (!mtk_dpmaif_drv_init_done(drv_info)) + return false; + + drv_info->ulq_all_enable = true; + drv_info->dlq_all_enable = true; + + return true; +} + +static void mtk_dpmaif_drv_clr_ul_all_intr(struct dpmaif_drv_info *drv_info) +{ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TISAR0, 0xFFFFFFFF); +} + +static void mtk_dpmaif_drv_clr_dl_all_intr(struct dpmaif_drv_info *drv_info) +{ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISAR0, 0xFFFFFFFF); +} + +static int mtk_dpmaif_drv_init_t800(struct dpmaif_drv_info *drv_info, void *data) +{ + struct dpmaif_drv_cfg *drv_cfg = data; + + if (!drv_cfg) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Invalid parameter\n"); + return -DATA_FLOW_CHK_ERR; + } + + /* Initialize port mode and clock. */ + if (!mtk_dpmaif_drv_config(drv_info)) + return DATA_HW_REG_CHK_FAIL; + + /* Initialize dpmaif interrupt. */ + if (!mtk_dpmaif_drv_init_intr(drv_info)) + return DATA_HW_REG_CHK_FAIL; + + /* Get initialization information from trans layer. */ + mtk_dpmaif_drv_set_property(drv_info, drv_cfg); + + /* Configure HW queue setting. */ + if (!mtk_dpmaif_drv_cfg_hw(drv_info)) + return DATA_HW_REG_CHK_FAIL; + + /* Clear all interrupt status. */ + mtk_dpmaif_drv_clr_ul_all_intr(drv_info); + mtk_dpmaif_drv_clr_dl_all_intr(drv_info); + + return 0; +} + +static int mtk_dpmaif_drv_ulq_all_en(struct dpmaif_drv_info *drv_info, bool enable) +{ + u32 ul_arb_en; + + ul_arb_en = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_UL_CHNL_ARB0); + if (enable) + ul_arb_en |= DPMAIF_UL_ALL_QUE_ARB_EN; + else + ul_arb_en &= ~DPMAIF_UL_ALL_QUE_ARB_EN; + + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_UL_CHNL_ARB0, ul_arb_en); + mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_UL_CHNL_ARB0); + + return 0; +} + +static bool mtk_dpmaif_drv_ul_all_idle_check(struct dpmaif_drv_info *drv_info) +{ + bool is_idle = false; + u32 ul_dbg_sta; + + ul_dbg_sta = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_UL_DBG_STA2); + /* If all the queues are idle, UL idle is true. 
*/ + if ((ul_dbg_sta & DPMAIF_UL_IDLE_STS_MSK) == DPMAIF_UL_IDLE_STS) + is_idle = true; + + return is_idle; +} + +static int mtk_dpmaif_drv_unmask_ulq_intr(struct dpmaif_drv_info *drv_info, u32 q_num) +{ + u32 ui_que_done_mask; + + ui_que_done_mask = (1 << (q_num + DP_UL_INT_DONE_OFFSET)) & DPMAIF_UL_INT_QDONE_MSK; + drv_info->drv_irq_en_mask.ap_ul_l2intr_en_mask |= ui_que_done_mask; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TICR0, ui_que_done_mask); + + return 0; +} + +static int mtk_dpmaif_drv_ul_unmask_all_tx_done_intr(struct dpmaif_drv_info *drv_info) +{ + int ret; + u8 i; + + for (i = 0; i < DPMAIF_ULQ_NUM; i++) { + ret = mtk_dpmaif_drv_unmask_ulq_intr(drv_info, i); + if (ret < 0) + break; + } + + return ret; +} + +static int mtk_dpmaif_drv_dl_unmask_rx_done_intr(struct dpmaif_drv_info *drv_info, u8 qno) +{ + u32 di_que_done_mask; + + if (qno == DPMAIF_DLQ0) + di_que_done_mask = DPMAIF_DL_INT_DLQ0_QDONE_MSK; + else + di_que_done_mask = DPMAIF_DL_INT_DLQ1_QDONE_MSK; + + drv_info->drv_irq_en_mask.ap_dl_l2intr_en_mask |= di_que_done_mask; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TICR0, di_que_done_mask); + + return 0; +} + +static int mtk_dpmaif_drv_dl_unmask_all_rx_done_intr(struct dpmaif_drv_info *drv_info) +{ + int ret; + u8 i; + + for (i = 0; i < DPMAIF_DLQ_NUM; i++) { + ret = mtk_dpmaif_drv_dl_unmask_rx_done_intr(drv_info, i); + if (ret < 0) + break; + } + + return ret; +} + +static int mtk_dpmaif_drv_dlq_mask_rx_done_intr(struct dpmaif_drv_info *drv_info, u8 qno) +{ + u32 cnt = 0, di_que_done_mask; + + if (qno == DPMAIF_DLQ0) + di_que_done_mask = DPMAIF_DL_INT_DLQ0_QDONE_MSK; + else + di_que_done_mask = DPMAIF_DL_INT_DLQ1_QDONE_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISR0, di_que_done_mask); + mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISR0); + + /* Check mask status. 
*/ + do { + if (!((mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TIMR0) & + di_que_done_mask) != di_que_done_mask)) + break; + + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to mask dlq%u interrupt done-0x%08x\n", + qno, mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TIMR0)); + udelay(POLL_INTERVAL_US); + cnt++; + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to mask dlq0 interrupt done\n"); + return -DATA_HW_REG_TIMEOUT; + } + + drv_info->drv_irq_en_mask.ap_dl_l2intr_en_mask &= ~di_que_done_mask; + + return 0; +} + +static int mtk_dpmaif_drv_dl_mask_all_rx_done_intr(struct dpmaif_drv_info *drv_info) +{ + int ret; + u8 i; + + for (i = 0; i < DPMAIF_DLQ_NUM; i++) { + ret = mtk_dpmaif_drv_dlq_mask_rx_done_intr(drv_info, i); + if (ret < 0) + break; + } + + return ret; +} + +static void mtk_dpmaif_drv_mask_dl_batcnt_len_err_intr(struct dpmaif_drv_info *drv_info, u32 q_num) +{ + drv_info->drv_irq_en_mask.ap_dl_l2intr_en_mask &= ~DPMAIF_DL_INT_BATCNT_LEN_ERR_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISR0, + DPMAIF_DL_INT_BATCNT_LEN_ERR_MSK); + mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISR0); +} + +static void mtk_dpmaif_drv_unmask_dl_batcnt_len_err_intr(struct dpmaif_drv_info *drv_info) +{ + drv_info->drv_irq_en_mask.ap_dl_l2intr_en_mask |= DPMAIF_DL_INT_BATCNT_LEN_ERR_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TICR0, + DPMAIF_DL_INT_BATCNT_LEN_ERR_MSK); +} + +static int mtk_dpmaif_drv_mask_dl_frgcnt_len_err_intr(struct dpmaif_drv_info *drv_info, u32 q_num) +{ + drv_info->drv_irq_en_mask.ap_dl_l2intr_en_mask &= ~DPMAIF_DL_INT_FRG_LEN_ERR_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISR0, + DPMAIF_DL_INT_FRG_LEN_ERR_MSK); + mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISR0); + + return 0; +} + +static void mtk_dpmaif_drv_unmask_dl_frgcnt_len_err_intr(struct dpmaif_drv_info *drv_info) +{ + drv_info->drv_irq_en_mask.ap_dl_l2intr_en_mask |= DPMAIF_DL_INT_FRG_LEN_ERR_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TICR0, + DPMAIF_DL_INT_FRG_LEN_ERR_MSK); +} + +static int mtk_dpmaif_drv_dlq_mask_pit_cnt_len_err_intr(struct dpmaif_drv_info *drv_info, u8 qno) +{ + if (qno == DPMAIF_DLQ0) + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_UL_APDL_L2TIMSR0, + DPMAIF_DL_INT_DLQ0_PITCNT_LEN_ERR_MSK); + else + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_UL_APDL_L2TIMSR0, + DPMAIF_DL_INT_DLQ1_PITCNT_LEN_ERR_MSK); + + mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_UL_APDL_L2TIMSR0); + + return 0; +} + +static int mtk_dpmaif_drv_dlq_unmask_pit_cnt_len_err_intr(struct dpmaif_drv_info *drv_info, u8 qno) +{ + if (qno == DPMAIF_DLQ0) + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_UL_APDL_L2TIMCR0, + DPMAIF_DL_INT_DLQ0_PITCNT_LEN_ERR_MSK); + else + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_UL_APDL_L2TIMCR0, + DPMAIF_DL_INT_DLQ1_PITCNT_LEN_ERR_MSK); + + return 0; +} + +static int mtk_dpmaif_drv_start_queue_t800(struct dpmaif_drv_info *drv_info, + enum dpmaif_drv_dir dir) +{ + int ret; + + if (dir == DPMAIF_TX) { + if (unlikely(drv_info->ulq_all_enable)) { + dev_info(DRV_TO_MDEV(drv_info)->dev, "ulq all enabled\n"); + return 0; + } + + ret = mtk_dpmaif_drv_ulq_all_en(drv_info, true); + if (ret < 0) + return ret; + + ret = mtk_dpmaif_drv_ul_unmask_all_tx_done_intr(drv_info); + if (ret < 0) + return ret; + + drv_info->ulq_all_enable = true; + } else { + if (unlikely(drv_info->dlq_all_enable)) { + 
dev_info(DRV_TO_MDEV(drv_info)->dev, "dlq all enabled\n"); + return 0; + } + + ret = mtk_dpmaif_drv_dlq_all_en(drv_info, true); + if (ret < 0) + return ret; + + ret = mtk_dpmaif_drv_dl_unmask_all_rx_done_intr(drv_info); + if (ret < 0) + return ret; + + drv_info->dlq_all_enable = true; + } + + return 0; +} + +static int mtk_dpmaif_drv_stop_ulq(struct dpmaif_drv_info *drv_info) +{ + int cnt = 0; + + /* Disable HW arb and check idle. */ + mtk_dpmaif_drv_ulq_all_en(drv_info, false); + do { + if (++cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to stop ul queue, 0x%x\n", + mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_UL_DBG_STA2)); + return -DATA_HW_REG_TIMEOUT; + } + udelay(POLL_INTERVAL_US); + } while (!mtk_dpmaif_drv_ul_all_idle_check(drv_info)); + + return 0; +} + +static int mtk_dpmaif_drv_mask_ulq_intr(struct dpmaif_drv_info *drv_info, u32 q_num) +{ + u32 cnt = 0, ui_que_done_mask; + + ui_que_done_mask = (1 << (q_num + DP_UL_INT_DONE_OFFSET)) & DPMAIF_UL_INT_QDONE_MSK; + + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TISR0, ui_que_done_mask); + mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TISR0); + + do { + if (!((mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TIMR0) & + ui_que_done_mask) != ui_que_done_mask)) + break; + + dev_err(DRV_TO_MDEV(drv_info)->dev, + "Failed to mask ul%u interrupt done-0x%08x\n", q_num, + mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TIMR0)); + udelay(POLL_INTERVAL_US); + cnt++; + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to mask dlq0 interrupt done\n"); + return -DATA_HW_REG_TIMEOUT; + } + drv_info->drv_irq_en_mask.ap_ul_l2intr_en_mask &= ~ui_que_done_mask; + + return 0; +} + +static void mtk_dpmaif_drv_ul_mask_multi_tx_done_intr(struct dpmaif_drv_info *drv_info, u8 q_mask) +{ + u32 i; + + for (i = 0; i < DPMAIF_ULQ_NUM; i++) { + if (q_mask & (1 << i)) + mtk_dpmaif_drv_mask_ulq_intr(drv_info, i); + } +} + +static int mtk_dpmaif_drv_ul_mask_all_tx_done_intr(struct dpmaif_drv_info *drv_info) +{ + int ret; + u8 i; + + for (i = 0; i < DPMAIF_ULQ_NUM; i++) { + ret = mtk_dpmaif_drv_mask_ulq_intr(drv_info, i); + if (ret < 0) + break; + } + + return ret; +} + +static int mtk_dpmaif_drv_stop_dlq(struct dpmaif_drv_info *drv_info) +{ + u32 cnt = 0, wridx, ridx; + + /* Disable HW arb and check idle. */ + mtk_dpmaif_drv_dlq_all_en(drv_info, false); + do { + if (++cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to stop dl queue, 0x%x\n", + mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_DBG_STA1)); + return -DATA_HW_REG_TIMEOUT; + } + udelay(POLL_INTERVAL_US); + } while (!mtk_dpmaif_drv_dl_idle_check(drv_info)); + + /* Check middle pit sync done. 
*/ + cnt = 0; + do { + wridx = mtk_dpmaif_drv_dl_get_wridx(drv_info); + ridx = mtk_dpmaif_drv_dl_get_pit_ridx(drv_info); + if (wridx == ridx) + break; + + udelay(POLL_INTERVAL_US); + cnt++; + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to check middle pit sync\n"); + return -DATA_HW_REG_TIMEOUT; + } + + return 0; +} + +static int mtk_dpmaif_drv_stop_queue_t800(struct dpmaif_drv_info *drv_info, enum dpmaif_drv_dir dir) +{ + int ret; + + if (dir == DPMAIF_TX) { + if (unlikely(!drv_info->ulq_all_enable)) { + dev_info(DRV_TO_MDEV(drv_info)->dev, "ulq all disabled\n"); + return 0; + } + + ret = mtk_dpmaif_drv_stop_ulq(drv_info); + if (ret < 0) + return ret; + + ret = mtk_dpmaif_drv_ul_mask_all_tx_done_intr(drv_info); + if (ret < 0) + return ret; + + drv_info->ulq_all_enable = false; + } else { + if (unlikely(!drv_info->dlq_all_enable)) { + dev_info(DRV_TO_MDEV(drv_info)->dev, "dlq all disabled\n"); + return 0; + } + + ret = mtk_dpmaif_drv_stop_dlq(drv_info); + if (ret < 0) + return ret; + + ret = mtk_dpmaif_drv_dl_mask_all_rx_done_intr(drv_info); + if (ret < 0) + return ret; + + drv_info->dlq_all_enable = false; + } + + return 0; +} + +static u32 mtk_dpmaif_drv_get_dl_lv2_sts(struct dpmaif_drv_info *drv_info) +{ + return mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISAR0); +} + +static u32 mtk_dpmaif_drv_get_ul_lv2_sts(struct dpmaif_drv_info *drv_info) +{ + return mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TISAR0); +} + +static u32 mtk_dpmaif_drv_get_ul_intr_mask(struct dpmaif_drv_info *drv_info) +{ + return mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TIMR0); +} + +static u32 mtk_dpmaif_drv_get_dl_intr_mask(struct dpmaif_drv_info *drv_info) +{ + return mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TIMR0); +} + +static bool mtk_dpmaif_drv_check_clr_ul_done_status(struct dpmaif_drv_info *drv_info, u8 qno) +{ + u32 val, l2tisar0; + bool ret = false; + /* get TX interrupt status. */ + l2tisar0 = mtk_dpmaif_drv_get_ul_lv2_sts(drv_info); + val = l2tisar0 & DPMAIF_UL_INT_QDONE & (1 << (DP_UL_INT_DONE_OFFSET + qno)); + + /* ulq status. 
*/ + if (val) { + /* clear ulq done status */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TISAR0, val); + ret = true; + } + + return ret; +} + +static u32 mtk_dpmaif_drv_irq_src0_dl_filter(struct dpmaif_drv_info *drv_info, u32 l2risar0, + u32 l2rimr0) +{ + if (l2rimr0 & DPMAIF_DL_INT_DLQ0_QDONE_MSK) + l2risar0 &= ~DPMAIF_DL_INT_DLQ0_QDONE; + + if (l2rimr0 & DPMAIF_DL_INT_DLQ0_PITCNT_LEN_ERR_MSK) + l2risar0 &= ~DPMAIF_DL_INT_DLQ0_PITCNT_LEN_ERR; + + if (l2rimr0 & DPMAIF_DL_INT_FRG_LEN_ERR_MSK) + l2risar0 &= ~DPMAIF_DL_INT_FRG_LEN_ERR; + + if (l2rimr0 & DPMAIF_DL_INT_BATCNT_LEN_ERR_MSK) + l2risar0 &= ~DPMAIF_DL_INT_BATCNT_LEN_ERR; + + return l2risar0; +} + +static u32 mtk_dpmaif_drv_irq_src1_dl_filter(struct dpmaif_drv_info *drv_info, u32 l2risar0, + u32 l2rimr0) +{ + if (l2rimr0 & DPMAIF_DL_INT_DLQ1_QDONE_MSK) + l2risar0 &= ~DPMAIF_DL_INT_DLQ1_QDONE; + + if (l2rimr0 & DPMAIF_DL_INT_DLQ1_PITCNT_LEN_ERR_MSK) + l2risar0 &= ~DPMAIF_DL_INT_DLQ1_PITCNT_LEN_ERR; + + return l2risar0; +} + +static int mtk_dpmaif_drv_irq_src0(struct dpmaif_drv_info *drv_info, + struct dpmaif_drv_intr_info *intr_info) +{ + u32 val, l2risar0, l2rimr0; + + l2risar0 = mtk_dpmaif_drv_get_dl_lv2_sts(drv_info); + l2rimr0 = mtk_dpmaif_drv_get_dl_intr_mask(drv_info); + + l2risar0 &= DPMAIF_SRC0_DL_STATUS_MASK; + if (l2risar0) { + /* Filter to get DL unmasked interrupts */ + l2risar0 = mtk_dpmaif_drv_irq_src0_dl_filter(drv_info, l2risar0, l2rimr0); + + val = l2risar0 & DPMAIF_DL_INT_BATCNT_LEN_ERR; + if (val) { + intr_info->intr_types[intr_info->intr_cnt] = DPMAIF_INTR_DL_BATCNT_LEN_ERR; + intr_info->intr_queues[intr_info->intr_cnt] = DPMAIF_DLQ0; + intr_info->intr_cnt++; + mtk_dpmaif_drv_mask_dl_batcnt_len_err_intr(drv_info, DPMAIF_DLQ0); + } + + val = l2risar0 & DPMAIF_DL_INT_FRG_LEN_ERR; + if (val) { + intr_info->intr_types[intr_info->intr_cnt] = DPMAIF_INTR_DL_FRGCNT_LEN_ERR; + intr_info->intr_queues[intr_info->intr_cnt] = DPMAIF_DLQ0; + intr_info->intr_cnt++; + mtk_dpmaif_drv_mask_dl_frgcnt_len_err_intr(drv_info, DPMAIF_DLQ0); + } + + val = l2risar0 & DPMAIF_DL_INT_DLQ0_PITCNT_LEN_ERR; + if (val) { + intr_info->intr_types[intr_info->intr_cnt] = DPMAIF_INTR_DL_PITCNT_LEN_ERR; + intr_info->intr_queues[intr_info->intr_cnt] = 0x01 << DPMAIF_DLQ0; + intr_info->intr_cnt++; + mtk_dpmaif_drv_dlq_mask_pit_cnt_len_err_intr(drv_info, DPMAIF_DLQ0); + } + + val = l2risar0 & DPMAIF_DL_INT_DLQ0_QDONE; + if (val) { + if (!mtk_dpmaif_drv_dlq_mask_rx_done_intr(drv_info, DPMAIF_DLQ0)) { + intr_info->intr_types[intr_info->intr_cnt] = DPMAIF_INTR_DL_DONE; + intr_info->intr_queues[intr_info->intr_cnt] = 0x01 << DPMAIF_DLQ0; + intr_info->intr_cnt++; + } + } + + /* Clear interrupt status. */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISAR0, l2risar0); + } + + return 0; +} + +static int mtk_dpmaif_drv_irq_src1(struct dpmaif_drv_info *drv_info, + struct dpmaif_drv_intr_info *intr_info) +{ + u32 val, l2risar0, l2rimr0; + + l2risar0 = mtk_dpmaif_drv_get_dl_lv2_sts(drv_info); + l2rimr0 = mtk_dpmaif_drv_get_dl_intr_mask(drv_info); + + /* Check and process interrupt. 
*/ + l2risar0 &= DPMAIF_SRC1_DL_STATUS_MASK; + if (l2risar0) { + /* Filter to get DL unmasked interrupts */ + l2risar0 = mtk_dpmaif_drv_irq_src1_dl_filter(drv_info, l2risar0, l2rimr0); + + val = l2risar0 & DPMAIF_DL_INT_DLQ1_PITCNT_LEN_ERR; + if (val) { + intr_info->intr_types[intr_info->intr_cnt] = DPMAIF_INTR_DL_PITCNT_LEN_ERR; + intr_info->intr_queues[intr_info->intr_cnt] = 0x01 << DPMAIF_DLQ1; + intr_info->intr_cnt++; + mtk_dpmaif_drv_dlq_mask_pit_cnt_len_err_intr(drv_info, DPMAIF_DLQ1); + } + + val = l2risar0 & DPMAIF_DL_INT_DLQ1_QDONE; + if (val) { + if (!mtk_dpmaif_drv_dlq_mask_rx_done_intr(drv_info, DPMAIF_DLQ1)) { + intr_info->intr_types[intr_info->intr_cnt] = DPMAIF_INTR_DL_DONE; + intr_info->intr_queues[intr_info->intr_cnt] = 0x01 << DPMAIF_DLQ1; + intr_info->intr_cnt++; + } + } + + /* Clear interrupt status. */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISAR0, l2risar0); + } + + return 0; +} + +static int mtk_dpmaif_drv_irq_src2(struct dpmaif_drv_info *drv_info, + struct dpmaif_drv_intr_info *intr_info) +{ + u32 l2tisar0, l2timr0; + u8 q_mask; + u32 val; + + l2tisar0 = mtk_dpmaif_drv_get_ul_lv2_sts(drv_info); + l2timr0 = mtk_dpmaif_drv_get_ul_intr_mask(drv_info); + + /* Check and process interrupt. */ + l2tisar0 &= (~l2timr0); + if (l2tisar0) { + val = l2tisar0 & DPMAIF_UL_INT_QDONE; + if (val) { + q_mask = val >> DP_UL_INT_DONE_OFFSET & DPMAIF_ULQS; + mtk_dpmaif_drv_ul_mask_multi_tx_done_intr(drv_info, q_mask); + intr_info->intr_types[intr_info->intr_cnt] = DPMAIF_INTR_UL_DONE; + intr_info->intr_queues[intr_info->intr_cnt] = val >> DP_UL_INT_DONE_OFFSET; + intr_info->intr_cnt++; + } + + /* clear interrupt status */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TISAR0, l2tisar0); + } + + return 0; +} + +static int mtk_dpmaif_drv_intr_handle_t800(struct dpmaif_drv_info *drv_info, void *data, u8 irq_id) +{ + switch (irq_id) { + case MTK_IRQ_SRC_DPMAIF: + mtk_dpmaif_drv_irq_src0(drv_info, data); + break; + case MTK_IRQ_SRC_DPMAIF2: + mtk_dpmaif_drv_irq_src1(drv_info, data); + break; + case MTK_IRQ_SRC_DPMAIF3: + mtk_dpmaif_drv_irq_src2(drv_info, data); + break; + default: + break; + } + + return 0; +} + +static int mtk_dpmaif_drv_intr_complete_t800(struct dpmaif_drv_info *drv_info, + enum dpmaif_drv_intr_type type, u8 q_id, u64 data) +{ + int ret = 0; + + switch (type) { + case DPMAIF_INTR_UL_DONE: + if (data == DPMAIF_CLEAR_INTR) + mtk_dpmaif_drv_check_clr_ul_done_status(drv_info, q_id); + else + ret = mtk_dpmaif_drv_unmask_ulq_intr(drv_info, q_id); + break; + case DPMAIF_INTR_DL_BATCNT_LEN_ERR: + mtk_dpmaif_drv_unmask_dl_batcnt_len_err_intr(drv_info); + break; + case DPMAIF_INTR_DL_FRGCNT_LEN_ERR: + mtk_dpmaif_drv_unmask_dl_frgcnt_len_err_intr(drv_info); + break; + case DPMAIF_INTR_DL_PITCNT_LEN_ERR: + ret = mtk_dpmaif_drv_dlq_unmask_pit_cnt_len_err_intr(drv_info, q_id); + break; + case DPMAIF_INTR_DL_DONE: + ret = mtk_dpmaif_drv_dl_unmask_rx_done_intr(drv_info, q_id); + break; + default: + break; + } + + return ret; +} + +static int mtk_dpmaif_drv_clr_ip_busy_sts_t800(struct dpmaif_drv_info *drv_info) +{ + u32 ip_busy_sts; + + /* Get AP IP busy status. */ + ip_busy_sts = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_IP_BUSY); + + /* Clear AP IP busy. 
*/ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_IP_BUSY, ip_busy_sts); + + return 0; +} + +static int mtk_dpmaif_drv_dl_add_pit_cnt(struct dpmaif_drv_info *drv_info, + u32 qno, u32 pit_remain_cnt) +{ + u32 cnt = 0, dl_update; + + dl_update = pit_remain_cnt & 0x0003ffff; + dl_update |= DPMAIF_DL_ADD_UPDATE | (qno << DPMAIF_ADD_LRO_PIT_CHAN_OFS); + + do { + if ((mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_ADD) & + DPMAIF_DL_ADD_NOT_READY) == 0) { + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_ADD, dl_update); + break; + } + + udelay(POLL_INTERVAL_US); + cnt++; + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to add dlq%u pit-1, cnt=%u\n", + qno, pit_remain_cnt); + return -DATA_HW_REG_TIMEOUT; + } + + cnt = 0; + do { + if (!((mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_ADD) & + DPMAIF_DL_ADD_NOT_READY) == DPMAIF_DL_ADD_NOT_READY)) + break; + + cnt++; + udelay(POLL_INTERVAL_US); + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to add dlq%u pit-2, cnt=%u\n", + qno, pit_remain_cnt); + return -DATA_HW_REG_TIMEOUT; + } + + return 0; +} + +static int mtk_dpmaif_drv_dl_add_bat_cnt(struct dpmaif_drv_info *drv_info, u32 bat_entry_cnt) +{ + u32 cnt = 0, dl_bat_update; + + dl_bat_update = bat_entry_cnt & 0xffff; + dl_bat_update |= DPMAIF_DL_ADD_UPDATE; + do { + if ((mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_ADD) & + DPMAIF_DL_ADD_NOT_READY) == 0) { + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_ADD, dl_bat_update); + break; + } + + udelay(POLL_INTERVAL_US); + cnt++; + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, + "Failed to add bat-1, cnt=%u\n", bat_entry_cnt); + return -DATA_HW_REG_TIMEOUT; + } + + cnt = 0; + do { + if (!((mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_ADD) & + DPMAIF_DL_ADD_NOT_READY) == DPMAIF_DL_ADD_NOT_READY)) + break; + + cnt++; + udelay(POLL_INTERVAL_US); + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to add bat-2, cnt=%u\n", + bat_entry_cnt); + return -DATA_HW_REG_TIMEOUT; + } + return 0; +} + +static int mtk_dpmaif_drv_dl_add_frg_cnt(struct dpmaif_drv_info *drv_info, u32 frg_entry_cnt) +{ + u32 cnt = 0, dl_frg_update; + int ret = 0; + + dl_frg_update = frg_entry_cnt & 0xffff; + dl_frg_update |= DPMAIF_DL_FRG_ADD_UPDATE; + dl_frg_update |= DPMAIF_DL_ADD_UPDATE; + + do { + if (!(mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_ADD) + & DPMAIF_DL_ADD_NOT_READY)) { + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_ADD, dl_frg_update); + break; + } + + udelay(POLL_INTERVAL_US); + cnt++; + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to add frag bat-1, cnt=%u\n", + frg_entry_cnt); + return -DATA_HW_REG_TIMEOUT; + } + + cnt = 0; + do { + if (!((mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_ADD) & + DPMAIF_DL_ADD_NOT_READY) == DPMAIF_DL_ADD_NOT_READY)) + break; + + cnt++; + udelay(POLL_INTERVAL_US); + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to add frag bat-2, cnt=%u\n", + frg_entry_cnt); + return -DATA_HW_REG_TIMEOUT; + } + return ret; +} + +static int mtk_dpmaif_drv_ul_add_drb(struct dpmaif_drv_info *drv_info, u8 q_num, u32 drb_cnt) +{ + u32 drb_entry_cnt = drb_cnt * DPMAIF_UL_DRB_ENTRY_WORD; + u32 cnt = 0,
ul_update; + u64 addr; + + ul_update = drb_entry_cnt & 0x0000ffff; + ul_update |= DPMAIF_UL_ADD_UPDATE; + + if (q_num == 4) + addr = NRL2_DPMAIF_UL_ADD_DESC_CH4; + else + addr = DPMAIF_ULQ_ADD_DESC_CH_N(q_num); + + do { + if (!(mtk_hw_read32(DRV_TO_MDEV(drv_info), addr) & DPMAIF_UL_ADD_NOT_READY)) { + mtk_hw_write32(DRV_TO_MDEV(drv_info), addr, ul_update); + break; + } + + udelay(POLL_INTERVAL_US); + cnt++; + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to add ulq%u drb-1, cnt=%u\n", + q_num, drb_cnt); + return -DATA_HW_REG_TIMEOUT; + } + + cnt = 0; + do { + if (!((mtk_hw_read32(DRV_TO_MDEV(drv_info), addr) & + DPMAIF_UL_ADD_NOT_READY) == DPMAIF_UL_ADD_NOT_READY)) + break; + + cnt++; + udelay(POLL_INTERVAL_US); + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to add ulq%u drb-2, cnt=%u\n", + q_num, drb_cnt); + return -DATA_HW_REG_TIMEOUT; + } + return 0; +} + +static int mtk_dpmaif_drv_send_doorbell_t800(struct dpmaif_drv_info *drv_info, + enum dpmaif_drv_ring_type type, + u8 q_id, u32 cnt) +{ + int ret = 0; + + switch (type) { + case DPMAIF_PIT: + ret = mtk_dpmaif_drv_dl_add_pit_cnt(drv_info, q_id, cnt); + break; + case DPMAIF_BAT: + ret = mtk_dpmaif_drv_dl_add_bat_cnt(drv_info, cnt); + break; + case DPMAIF_FRAG: + ret = mtk_dpmaif_drv_dl_add_frg_cnt(drv_info, cnt); + break; + case DPMAIF_DRB: + ret = mtk_dpmaif_drv_ul_add_drb(drv_info, q_id, cnt); + break; + default: + break; + } + + return ret; +} + +static int mtk_dpmaif_drv_dl_get_pit_wridx(struct dpmaif_drv_info *drv_info, u32 qno) +{ + u32 pit_wridx; + + pit_wridx = (mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_LRO_STA5 + qno * 0x20)) + & DPMAIF_DL_PIT_WRIDX_MSK; + if (unlikely(pit_wridx >= drv_info->drv_property.dlq[qno].pit_size)) + return -DATA_HW_REG_CHK_FAIL; + + return pit_wridx; +} + +static int mtk_dpmaif_drv_dl_get_pit_rdidx(struct dpmaif_drv_info *drv_info, u32 qno) +{ + u32 pit_rdidx; + + pit_rdidx = (mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_LRO_STA6 + qno * 0x20)) + & DPMAIF_DL_PIT_WRIDX_MSK; + if (unlikely(pit_rdidx >= drv_info->drv_property.dlq[qno].pit_size)) + return -DATA_HW_REG_CHK_FAIL; + + return pit_rdidx; +} + +static int mtk_dpmaif_drv_dl_get_bat_ridx(struct dpmaif_drv_info *drv_info) +{ + u32 bat_ridx; + + bat_ridx = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_BAT_STA2) & + DPMAIF_DL_BAT_WRIDX_MSK; + + if (unlikely(bat_ridx >= drv_info->drv_property.ring.normal_bat_size)) + return -DATA_HW_REG_CHK_FAIL; + + return bat_ridx; +} + +static int mtk_dpmaif_drv_dl_get_bat_wridx(struct dpmaif_drv_info *drv_info) +{ + u32 bat_wridx; + + bat_wridx = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_BAT_STA3) & + DPMAIF_DL_BAT_WRIDX_MSK; + if (unlikely(bat_wridx >= drv_info->drv_property.ring.normal_bat_size)) + return -DATA_HW_REG_CHK_FAIL; + + return bat_wridx; +} + +static int mtk_dpmaif_drv_dl_get_frg_ridx(struct dpmaif_drv_info *drv_info) +{ + u32 frg_ridx; + + frg_ridx = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_FRG_STA2) & + DPMAIF_DL_FRG_WRIDX_MSK; + if (unlikely(frg_ridx >= drv_info->drv_property.ring.frag_bat_size)) + return -DATA_HW_REG_CHK_FAIL; + + return frg_ridx; +} + +static int mtk_dpmaif_drv_ul_get_drb_ridx(struct dpmaif_drv_info *drv_info, u8 q_num) +{ + u32 drb_ridx; + u64 addr; + + addr = DPMAIF_ULQ_STA0_N(q_num); + + drb_ridx = mtk_hw_read32(DRV_TO_MDEV(drv_info), addr) >> 16; + drb_ridx = drb_ridx / DPMAIF_UL_DRB_ENTRY_WORD; + + if 
(unlikely(drb_ridx >= drv_info->drv_property.ulq[q_num].drb_size)) + return -DATA_HW_REG_CHK_FAIL; + + return drb_ridx; +} + +static int mtk_dpmaif_drv_get_ring_idx_t800(struct dpmaif_drv_info *drv_info, + enum dpmaif_drv_ring_idx index, u8 q_id) +{ + int ret = 0; + + switch (index) { + case DPMAIF_PIT_WIDX: + ret = mtk_dpmaif_drv_dl_get_pit_wridx(drv_info, q_id); + break; + case DPMAIF_PIT_RIDX: + ret = mtk_dpmaif_drv_dl_get_pit_rdidx(drv_info, q_id); + break; + case DPMAIF_BAT_WIDX: + ret = mtk_dpmaif_drv_dl_get_bat_wridx(drv_info); + break; + case DPMAIF_BAT_RIDX: + ret = mtk_dpmaif_drv_dl_get_bat_ridx(drv_info); + break; + case DPMAIF_FRAG_RIDX: + ret = mtk_dpmaif_drv_dl_get_frg_ridx(drv_info); + break; + case DPMAIF_DRB_RIDX: + ret = mtk_dpmaif_drv_ul_get_drb_ridx(drv_info, q_id); + break; + default: + break; + } + + return ret; +} + +static u32 mtk_dpmaif_drv_hash_indir_get(struct dpmaif_drv_info *drv_info, u32 *indir) +{ + u32 val = mtk_dpmaif_drv_hash_indir_mask_get(drv_info); + u8 i; + + for (i = 0; i < DPMAIF_HASH_INDR_SIZE; i++) { + if (val & (0x01 << i)) + indir[i] = 1; + else + indir[i] = 0; + } + + return 0; +} + +static u32 mtk_dpmaif_drv_hash_indir_set(struct dpmaif_drv_info *drv_info, u32 *indir) +{ + u32 val = 0; + u8 i; + + for (i = 0; i < DPMAIF_HASH_INDR_SIZE; i++) { + if (indir[i]) + val |= (0x01 << i); + } + mtk_dpmaif_drv_hash_indir_mask_set(drv_info, val); + + return 0; +} + +static u32 mtk_dpmaif_drv_5tuple_trig(struct dpmaif_drv_info *drv_info, + struct dpmaif_hpc_rule *rule, u32 sw_add, + u32 agg_en, u32 ovw_en) +{ + u32 cnt, i, *val = (u32 *)rule; + + for (i = 0; i < sizeof(*rule) / sizeof(u32); i++) + mtk_hw_write32(DRV_TO_MDEV(drv_info), + NRL2_DPMAIF_HPC_SW_ADD_RULE0 + 4 * i, + *(val + i)); + + mtk_hw_write32(DRV_TO_MDEV(drv_info), + NRL2_DPMAIF_HPC_SW_5TUPLE_TRIG, + (ovw_en << 3) | (agg_en << 2) | (sw_add << 1) | 0x1); + + /* wait hw 5-tuple process finish */ + cnt = 0; + do { + if (!(mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_HPC_SW_5TUPLE_TRIG) & 0x1)) + break; + + cnt++; + udelay(POLL_INTERVAL_US); + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to 5tuple trigger\n"); + return -DATA_HW_REG_TIMEOUT; + } + if (mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_HPC_5TUPLE_STS)) + return -DATA_HW_REG_CHK_FAIL; + + return 0; +} + +static int mtk_dpmaif_drv_ul_set_delay_intr(struct dpmaif_drv_info *drv_info, + u8 q_num, u8 mode, u32 time_us, u32 pkt_cnt) +{ + u32 ret = 0, cfg; + + cfg = ((mode & 0x3) << 30) | ((pkt_cnt & 0x3fff) << 16) | (time_us & 0xffff); + + switch (q_num) { + case 0: + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DLY_IRQ_TIMER3, cfg); + break; + case 1: + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DLY_IRQ_TIMER4, cfg); + break; + case 2: + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DLY_IRQ_TIMER5, cfg); + break; + case 3: + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DLY_IRQ_TIMER6, cfg); + break; + case 4: + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DLY_IRQ_TIMER7, cfg); + break; + default: + dev_err(DRV_TO_MDEV(drv_info)->dev, "Invalid ulq=%d!\n", q_num); + ret = -EINVAL; + } + + return ret; +} + +static int mtk_dpmaif_drv_dl_set_delay_intr(struct dpmaif_drv_info *drv_info, + u8 q_num, u8 mode, u32 time_us, u32 pkt_cnt) +{ + int ret = 0; + u32 cfg = 0; + + cfg = ((mode & 0x3) << 30) | ((pkt_cnt & 0x3fff) << 16) | (time_us & 0xffff); + + switch (q_num) { + case 0: + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_DLY_IRQ_TIMER1, 
cfg); + break; + case 1: + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_DLY_IRQ_TIMER2, cfg); + break; + default: + dev_info(DRV_TO_MDEV(drv_info)->dev, "Invalid dlq=%d!\n", q_num); + ret = -EINVAL; + } + + return ret; +} + +static int mtk_dpmaif_drv_intr_coalesce_set(struct dpmaif_drv_info *drv_info, + struct dpmaif_drv_intr *intr) +{ + u8 i; + + if (intr->dir == DPMAIF_TX) { + for (i = 0; i < DPMAIF_ULQ_NUM; i++) { + if (intr->q_mask & (1 << i)) + mtk_dpmaif_drv_ul_set_delay_intr(drv_info, i, intr->mode, + intr->time_threshold, + intr->pkt_threshold); + } + } else { + for (i = 0; i < DPMAIF_DLQ_NUM; i++) { + if (intr->q_mask & (1 << i)) + mtk_dpmaif_drv_dl_set_delay_intr(drv_info, i, intr->mode, + intr->time_threshold, + intr->pkt_threshold); + } + } + + return 0; +} + +static int mtk_dpmaif_drv_feature_cmd_t800(struct dpmaif_drv_info *drv_info, + enum dpmaif_drv_cmd cmd, void *data) +{ + int ret = 0; + + switch (cmd) { + case DATA_HW_INTR_COALESCE_SET: + ret = mtk_dpmaif_drv_intr_coalesce_set(drv_info, data); + break; + case DATA_HW_HASH_GET: + ret = mtk_dpmaif_drv_hash_sec_key_get(drv_info, data); + break; + case DATA_HW_HASH_SET: + ret = mtk_dpmaif_drv_hash_sec_key_set(drv_info, data); + break; + case DATA_HW_HASH_KEY_SIZE_GET: + *(u32 *)data = DPMAIF_HASH_SEC_KEY_NUM; + break; + case DATA_HW_INDIR_GET: + ret = mtk_dpmaif_drv_hash_indir_get(drv_info, data); + break; + case DATA_HW_INDIR_SET: + ret = mtk_dpmaif_drv_hash_indir_set(drv_info, data); + break; + case DATA_HW_INDIR_SIZE_GET: + *(u32 *)data = DPMAIF_HASH_INDR_SIZE; + break; + case DATA_HW_LRO_SET: + ret = mtk_dpmaif_drv_5tuple_trig(drv_info, data, 1, 1, 1); + break; + default: + dev_info(DRV_TO_MDEV(drv_info)->dev, "Unsupported cmd=%d\n", cmd); + ret = -EOPNOTSUPP; + break; + } + + return ret; +} + +struct dpmaif_drv_ops dpmaif_drv_ops_t800 = { + .init = mtk_dpmaif_drv_init_t800, + .start_queue = mtk_dpmaif_drv_start_queue_t800, + .stop_queue = mtk_dpmaif_drv_stop_queue_t800, + .intr_handle = mtk_dpmaif_drv_intr_handle_t800, + .intr_complete = mtk_dpmaif_drv_intr_complete_t800, + .clear_ip_busy = mtk_dpmaif_drv_clr_ip_busy_sts_t800, + .send_doorbell = mtk_dpmaif_drv_send_doorbell_t800, + .get_ring_idx = mtk_dpmaif_drv_get_ring_idx_t800, + .feature_cmd = mtk_dpmaif_drv_feature_cmd_t800, +}; diff --git a/drivers/net/wwan/mediatek/pcie/mtk_dpmaif_reg_t800.h b/drivers/net/wwan/mediatek/pcie/mtk_dpmaif_reg_t800.h new file mode 100644 index 000000000000..8db2cd782a80 --- /dev/null +++ b/drivers/net/wwan/mediatek/pcie/mtk_dpmaif_reg_t800.h @@ -0,0 +1,368 @@ +/* SPDX-License-Identifier: BSD-3-Clause-Clear + * + * Copyright (c) 2022, MediaTek Inc.
+ */ + +#ifndef __MTK_DPMAIF_DRV_T800_H__ +#define __MTK_DPMAIF_DRV_T800_H__ + +#define DPMAIF_DEV_PD_BASE (0x1022D000) +#define DPMAIF_DEV_AO_BASE (0x10011000) + +#define DPMAIF_PD_BASE DPMAIF_DEV_PD_BASE +#define DPMAIF_AO_BASE DPMAIF_DEV_AO_BASE + +#define BASE_NADDR_NRL2_DPMAIF_UL ((unsigned long)(DPMAIF_PD_BASE)) +#define BASE_NADDR_NRL2_DPMAIF_DL ((unsigned long)(DPMAIF_PD_BASE + 0x100)) +#define BASE_NADDR_NRL2_DPMAIF_AP_MISC ((unsigned long)(DPMAIF_PD_BASE + 0x400)) +#define BASE_NADDR_NRL2_DPMAIF_PD_SRAM_UL ((unsigned long)(DPMAIF_PD_BASE + 0xD00)) +#define BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL ((unsigned long)(DPMAIF_PD_BASE + 0xC00)) +#define BASE_NADDR_NRL2_DPMAIF_DL_LRO_REMOVEAO_IDX ((unsigned long)(DPMAIF_PD_BASE + 0x900)) +#define BASE_NADDR_NRL2_DPMAIF_MMW_HPC ((unsigned long)(DPMAIF_PD_BASE + 0x600)) +#define BASE_NADDR_NRL2_DPMAIF_PD_SRAM_MISC2 ((unsigned long)(DPMAIF_PD_BASE + 0xF00)) +#define BASE_NADDR_NRL2_DPMAIF_AO_UL ((unsigned long)(DPMAIF_AO_BASE)) +#define BASE_NADDR_NRL2_DPMAIF_AO_DL ((unsigned long)(DPMAIF_AO_BASE + 0x400)) + +/* dpmaif uplink part registers. */ +#define NRL2_DPMAIF_UL_ADD_DESC (BASE_NADDR_NRL2_DPMAIF_UL + 0x00) +#define NRL2_DPMAIF_UL_DBG_STA2 (BASE_NADDR_NRL2_DPMAIF_UL + 0x88) +#define NRL2_DPMAIF_UL_RESERVE_AO_RW (BASE_NADDR_NRL2_DPMAIF_UL + 0xAC) +#define NRL2_DPMAIF_UL_ADD_DESC_CH0 (BASE_NADDR_NRL2_DPMAIF_UL + 0xB0) +#define NRL2_DPMAIF_UL_ADD_DESC_CH4 (BASE_NADDR_NRL2_DPMAIF_UL + 0xE0) + +/* dpmaif downlink part registers. */ +#define NRL2_DPMAIF_DL_BAT_INIT (BASE_NADDR_NRL2_DPMAIF_DL + 0x00) +#define NRL2_DPMAIF_DL_BAT_INIT (BASE_NADDR_NRL2_DPMAIF_DL + 0x00) +#define NRL2_DPMAIF_DL_BAT_ADD (BASE_NADDR_NRL2_DPMAIF_DL + 0x04) +#define NRL2_DPMAIF_DL_BAT_INIT_CON0 (BASE_NADDR_NRL2_DPMAIF_DL + 0x08) +#define NRL2_DPMAIF_DL_BAT_INIT_CON1 (BASE_NADDR_NRL2_DPMAIF_DL + 0x0C) +#define NRL2_DPMAIF_DL_BAT_INIT_CON3 (BASE_NADDR_NRL2_DPMAIF_DL + 0x50) +#define NRL2_DPMAIF_DL_DBG_STA1 (BASE_NADDR_NRL2_DPMAIF_DL + 0xB4) + +/* dpmaif ap misc part registers. */ +#define NRL2_DPMAIF_AP_MISC_AP_L2TISAR0 (BASE_NADDR_NRL2_DPMAIF_AP_MISC + 0x00) +#define NRL2_DPMAIF_AP_MISC_APDL_L2TISAR0 (BASE_NADDR_NRL2_DPMAIF_AP_MISC + 0x50) +#define NRL2_DPMAIF_AP_MISC_AP_IP_BUSY (BASE_NADDR_NRL2_DPMAIF_AP_MISC + 0x60) +#define NRL2_DPMAIF_AP_MISC_CG_EN (BASE_NADDR_NRL2_DPMAIF_AP_MISC + 0x68) +#define NRL2_DPMAIF_AP_MISC_OVERWRITE_CFG (BASE_NADDR_NRL2_DPMAIF_AP_MISC + 0x90) +#define NRL2_DPMAIF_AP_MISC_RSTR_CLR (BASE_NADDR_NRL2_DPMAIF_AP_MISC + 0x94) + +/* dpmaif uplink ao part registers. */ +#define NRL2_DPMAIF_AO_UL_INIT_SET (BASE_NADDR_NRL2_DPMAIF_AO_UL + 0x0) +#define NRL2_DPMAIF_AO_UL_CHNL_ARB0 (BASE_NADDR_NRL2_DPMAIF_AO_UL + 0x1C) +#define NRL2_DPMAIF_AO_UL_AP_L2TIMR0 (BASE_NADDR_NRL2_DPMAIF_AO_UL + 0x80) +#define NRL2_DPMAIF_AO_UL_AP_L2TIMCR0 (BASE_NADDR_NRL2_DPMAIF_AO_UL + 0x84) +#define NRL2_DPMAIF_AO_UL_AP_L2TIMSR0 (BASE_NADDR_NRL2_DPMAIF_AO_UL + 0x88) +#define NRL2_DPMAIF_AO_UL_AP_L1TIMR0 (BASE_NADDR_NRL2_DPMAIF_AO_UL + 0x8C) +#define NRL2_DPMAIF_AO_UL_APDL_L2TIMR0 (BASE_NADDR_NRL2_DPMAIF_AO_UL + 0x90) +#define NRL2_DPMAIF_AO_UL_APDL_L2TIMCR0 (BASE_NADDR_NRL2_DPMAIF_AO_UL + 0x94) +#define NRL2_DPMAIF_AO_UL_APDL_L2TIMSR0 (BASE_NADDR_NRL2_DPMAIF_AO_UL + 0x98) +#define NRL2_DPMAIF_AO_UL_AP_DL_UL_IP_BUSY_MASK (BASE_NADDR_NRL2_DPMAIF_AO_UL + 0x9C) + +/* dpmaif uplink pd sram part registers. 
*/ +#define NRL2_DPMAIF_AO_UL_CHNL0_CON0 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_UL + 0x10) +#define NRL2_DPMAIF_AO_UL_CHNL0_CON1 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_UL + 0x14) +#define NRL2_DPMAIF_AO_UL_CHNL0_CON2 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_UL + 0x18) +#define NRL2_DPMAIF_DLY_IRQ_TIMER3 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_UL + 0x1C) +#define NRL2_DPMAIF_DLY_IRQ_TIMER4 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_UL + 0x2C) +#define NRL2_DPMAIF_DLY_IRQ_TIMER5 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_UL + 0x3C) +#define NRL2_DPMAIF_DLY_IRQ_TIMER6 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_UL + 0x60) +#define NRL2_DPMAIF_DLY_IRQ_TIMER7 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_UL + 0x64) +#define NRL2_DPMAIF_AO_UL_CH0_STA (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_UL + 0xE0) + +/* dpmaif downlink ao part registers. */ +#define NRL2_DPMAIF_AO_DL_INIT_SET (BASE_NADDR_NRL2_DPMAIF_AO_DL + 0x0) +#define NRL2_DPMAIF_AO_DL_LROPIT_INIT_CON5 (BASE_NADDR_NRL2_DPMAIF_AO_DL + 0x28) +#define NRL2_DPMAIF_AO_DL_LROPIT_TRIG_THRES (BASE_NADDR_NRL2_DPMAIF_AO_DL + 0x34) + +/* dpmaif downlink pd sram part registers. */ +#define NRL2_DPMAIF_AO_DL_PKTINFO_CON0 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x0) +#define NRL2_DPMAIF_AO_DL_PKTINFO_CON1 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x4) +#define NRL2_DPMAIF_AO_DL_PKTINFO_CON2 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x8) +#define NRL2_DPMAIF_AO_DL_RDY_CHK_THRES (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0xC) +#define NRL2_DPMAIF_AO_DL_RDY_CHK_FRG_THRES (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x10) +#define NRL2_DPMAIF_AO_DL_LRO_AGG_CFG (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x20) +#define NRL2_DPMAIF_AO_DL_LROPIT_TIMEOUT0 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x24) +#define NRL2_DPMAIF_AO_DL_LROPIT_TIMEOUT1 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x28) +#define NRL2_DPMAIF_AO_DL_HPC_CNTL (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x38) +#define NRL2_DPMAIF_AO_DL_PIT_SEQ_END (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x40) +#define NRL2_DPMAIF_AO_DL_DLY_IRQ_TIMER1 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x58) +#define NRL2_DPMAIF_AO_DL_DLY_IRQ_TIMER2 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x5C) +#define NRL2_DPMAIF_AO_DL_BAT_STA2 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0xD8) +#define NRL2_DPMAIF_AO_DL_BAT_STA3 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0xDC) +#define NRL2_DPMAIF_AO_DL_PIT_STA2 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0xEC) +#define NRL2_DPMAIF_AO_DL_PIT_STA3 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x60) +#define NRL2_DPMAIF_AO_DL_FRGBAT_STA2 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x78) +#define NRL2_DPMAIF_AO_DL_LRO_STA5 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0xA4) +#define NRL2_DPMAIF_AO_DL_LRO_STA6 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0xA8) + +/* dpmaif hpc part registers. */ +#define NRL2_DPMAIF_HPC_SW_5TUPLE_TRIG (BASE_NADDR_NRL2_DPMAIF_MMW_HPC + 0x030) +#define NRL2_DPMAIF_HPC_5TUPLE_STS (BASE_NADDR_NRL2_DPMAIF_MMW_HPC + 0x034) +#define NRL2_DPMAIF_HPC_SW_ADD_RULE0 (BASE_NADDR_NRL2_DPMAIF_MMW_HPC + 0x060) +#define NRL2_DPMAIF_HPC_INTR_MASK (BASE_NADDR_NRL2_DPMAIF_MMW_HPC + 0x0F4) + +/* dpmaif LRO part registers. 
*/ +#define NRL2_DPMAIF_DL_LROPIT_INIT (BASE_NADDR_NRL2_DPMAIF_DL_LRO_REMOVEAO_IDX + 0x0) +#define NRL2_DPMAIF_DL_LROPIT_ADD (BASE_NADDR_NRL2_DPMAIF_DL_LRO_REMOVEAO_IDX + 0x10) +#define NRL2_DPMAIF_DL_LROPIT_INIT_CON0 (BASE_NADDR_NRL2_DPMAIF_DL_LRO_REMOVEAO_IDX + 0x14) +#define NRL2_DPMAIF_DL_LROPIT_INIT_CON1 (BASE_NADDR_NRL2_DPMAIF_DL_LRO_REMOVEAO_IDX + 0x18) +#define NRL2_DPMAIF_DL_LROPIT_INIT_CON2 (BASE_NADDR_NRL2_DPMAIF_DL_LRO_REMOVEAO_IDX + 0x1C) +#define NRL2_DPMAIF_DL_LROPIT_INIT_CON5 (BASE_NADDR_NRL2_DPMAIF_DL_LRO_REMOVEAO_IDX + 0x28) +#define NRL2_DPMAIF_DL_LROPIT_INIT_CON3 (BASE_NADDR_NRL2_DPMAIF_DL_LRO_REMOVEAO_IDX + 0x20) +#define NRL2_DPMAIF_DL_LROPIT_INIT_CON4 (BASE_NADDR_NRL2_DPMAIF_DL_LRO_REMOVEAO_IDX + 0x24) +#define NRL2_DPMAIF_DL_LROPIT_INIT_CON6 (BASE_NADDR_NRL2_DPMAIF_DL_LRO_REMOVEAO_IDX + 0x2C) + +/* dpmaif pd sram misc2 part registers. */ +#define NRL2_REG_TOE_HASH_EN (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_MISC2 + 0x0) +#define NRL2_REG_HASH_CFG_CON (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_MISC2 + 0x4) +#define NRL2_REG_HASH_SEC_KEY_0 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_MISC2 + 0x8) +#define NRL2_REG_HPC_STATS_THRES (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_MISC2 + 0x30) +#define NRL2_REG_HPC_STATS_TIMER_CFG (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_MISC2 + 0x34) +#define NRL2_REG_HASH_SEC_KEY_UPD (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_MISC2 + 0X70) + +/* dpmaif pd ul, ao ul config. */ +#define DPMAIF_PD_UL_CHNL_ARB0 NRL2_DPMAIF_AO_UL_CHNL_ARB0 +#define DPMAIF_PD_UL_CHNL0_CON0 NRL2_DPMAIF_AO_UL_CHNL0_CON0 +#define DPMAIF_PD_UL_CHNL0_CON1 NRL2_DPMAIF_AO_UL_CHNL0_CON1 +#define DPMAIF_PD_UL_CHNL0_CON2 NRL2_DPMAIF_AO_UL_CHNL0_CON2 +#define DPMAIF_PD_UL_ADD_DESC_CH NRL2_DPMAIF_UL_ADD_DESC_CH0 +#define DPMAIF_PD_UL_DBG_STA2 NRL2_DPMAIF_UL_DBG_STA2 + +/* dpmaif pd dl config. */ +#define DPMAIF_PD_DL_BAT_INIT NRL2_DPMAIF_DL_BAT_INIT +#define DPMAIF_PD_DL_BAT_ADD NRL2_DPMAIF_DL_BAT_ADD +#define DPMAIF_PD_DL_BAT_INIT_CON0 NRL2_DPMAIF_DL_BAT_INIT_CON0 +#define DPMAIF_PD_DL_BAT_INIT_CON1 NRL2_DPMAIF_DL_BAT_INIT_CON1 +#define DPMAIF_PD_DL_BAT_INIT_CON3 NRL2_DPMAIF_DL_BAT_INIT_CON3 +#define DPMAIF_PD_DL_DBG_STA1 NRL2_DPMAIF_DL_DBG_STA1 + +/* dpmaif pd ap misc, ao ul misc config. */ +#define DPMAIF_PD_AP_UL_L2TISAR0 NRL2_DPMAIF_AP_MISC_AP_L2TISAR0 +#define DPMAIF_PD_AP_UL_L2TIMR0 NRL2_DPMAIF_AO_UL_AP_L2TIMR0 +#define DPMAIF_PD_AP_UL_L2TICR0 NRL2_DPMAIF_AO_UL_AP_L2TIMCR0 +#define DPMAIF_PD_AP_UL_L2TISR0 NRL2_DPMAIF_AO_UL_AP_L2TIMSR0 +#define DPMAIF_PD_AP_DL_L2TISAR0 NRL2_DPMAIF_AP_MISC_APDL_L2TISAR0 +#define DPMAIF_PD_AP_DL_L2TIMR0 NRL2_DPMAIF_AO_UL_APDL_L2TIMR0 +#define DPMAIF_PD_AP_DL_L2TICR0 NRL2_DPMAIF_AO_UL_APDL_L2TIMCR0 +#define DPMAIF_PD_AP_DL_L2TISR0 NRL2_DPMAIF_AO_UL_APDL_L2TIMSR0 +#define DPMAIF_PD_AP_IP_BUSY NRL2_DPMAIF_AP_MISC_AP_IP_BUSY +#define DPMAIF_PD_AP_DLUL_IP_BUSY_MASK NRL2_DPMAIF_AO_UL_AP_DL_UL_IP_BUSY_MASK + +/* dpmaif ao dl config. 
*/ +#define DPMAIF_AO_DL_PKTINFO_CONO NRL2_DPMAIF_AO_DL_PKTINFO_CON0 +#define DPMAIF_AO_DL_PKTINFO_CON1 NRL2_DPMAIF_AO_DL_PKTINFO_CON1 +#define DPMAIF_AO_DL_PKTINFO_CON2 NRL2_DPMAIF_AO_DL_PKTINFO_CON2 +#define DPMAIF_AO_DL_RDY_CHK_THRES NRL2_DPMAIF_AO_DL_RDY_CHK_THRES +#define DPMAIF_AO_DL_BAT_STA2 NRL2_DPMAIF_AO_DL_BAT_STA2 +#define DPMAIF_AO_DL_BAT_STA3 NRL2_DPMAIF_AO_DL_BAT_STA3 +#define DPMAIF_AO_DL_PIT_STA2 NRL2_DPMAIF_AO_DL_PIT_STA2 +#define DPMAIF_AO_DL_PIT_STA3 NRL2_DPMAIF_AO_DL_PIT_STA3 +#define DPMAIF_AO_DL_FRG_CHK_THRES NRL2_DPMAIF_AO_DL_RDY_CHK_FRG_THRES +#define DPMAIF_AO_DL_FRG_STA2 NRL2_DPMAIF_AO_DL_FRGBAT_STA2 + +/* DPMAIF AO register */ +#define DPMAIF_AP_RGU_ASSERT 0x10001120 +#define DPMAIF_AP_RGU_DEASSERT 0x10001124 +#define DPMAIF_AP_RST_BIT BIT(4) +#define DPMAIF_AP_AO_RGU_ASSERT 0x10001140 +#define DPMAIF_AP_AO_RGU_DEASSERT 0x10001144 +#define DPMAIF_AP_AO_RST_BIT BIT(3) + +/* hw configuration */ +#define DPMAIF_ULQSAR_N(q_num)\ + ((DPMAIF_PD_UL_CHNL0_CON0) + (0x10 * (q_num))) + +#define DPMAIF_UL_DRBSIZE_ADDRH_N(q_num)\ + ((DPMAIF_PD_UL_CHNL0_CON1) + (0x10 * (q_num))) + +#define DPMAIF_UL_DRB_ADDRH_N(q_num)\ + ((DPMAIF_PD_UL_CHNL0_CON2) + (0x10 * (q_num))) + +#define DPMAIF_ULQ_STA0_N(q_num)\ + ((NRL2_DPMAIF_AO_UL_CH0_STA) + (0x04 * (q_num))) + +#define DPMAIF_ULQ_ADD_DESC_CH_N(q_num)\ + ((DPMAIF_PD_UL_ADD_DESC_CH) + (0x04 * (q_num))) + +#define DPMAIF_ULQS 0x1F + +#define DPMAIF_UL_ADD_NOT_READY BIT(31) +#define DPMAIF_UL_ADD_UPDATE BIT(31) +#define DPMAIF_UL_ALL_QUE_ARB_EN (DPMAIF_ULQS << 8) + +#define DPMAIF_DL_ADD_UPDATE BIT(31) +#define DPMAIF_DL_ADD_NOT_READY BIT(31) +#define DPMAIF_DL_FRG_ADD_UPDATE BIT(16) + +#define DPMAIF_DL_BAT_INIT_ALLSET BIT(0) +#define DPMAIF_DL_BAT_FRG_INIT BIT(16) +#define DPMAIF_DL_BAT_INIT_EN BIT(31) +#define DPMAIF_DL_BAT_INIT_NOT_READY BIT(31) +#define DPMAIF_DL_BAT_INIT_ONLY_ENABLE_BIT 0 + +#define DPMAIF_DL_PIT_INIT_ALLSET BIT(0) +#define DPMAIF_DL_PIT_INIT_EN BIT(31) +#define DPMAIF_DL_PIT_INIT_NOT_READY BIT(31) + +#define DPMAIF_PKT_ALIGN64_MODE 0 +#define DPMAIF_PKT_ALIGN128_MODE 1 + +#define DPMAIF_BAT_REMAIN_SZ_BASE 16 +#define DPMAIF_BAT_BUFFER_SZ_BASE 128 +#define DPMAIF_FRG_BUFFER_SZ_BASE 128 + +#define DPMAIF_PIT_SIZE_MSK 0x3FFFF + +#define DPMAIF_BAT_EN_MSK BIT(16) +#define DPMAIF_FRG_EN_MSK BIT(28) +#define DPMAIF_BAT_SIZE_MSK 0xFFFF + +#define DPMAIF_BAT_BID_MAXCNT_MSK 0xFFFF0000 +#define DPMAIF_BAT_REMAIN_MINSZ_MSK 0x0000FF00 +#define DPMAIF_PIT_CHK_NUM_MSK 0xFF000000 +#define DPMAIF_BAT_BUF_SZ_MSK 0x0001FF00 +#define DPMAIF_FRG_BUF_SZ_MSK 0x0001FF00 +#define DPMAIF_BAT_RSV_LEN_MSK 0x000000FF +#define DPMAIF_PKT_ALIGN_MSK (0x3 << 22) + +#define DPMAIF_BAT_CHECK_THRES_MSK (0x3F << 16) +#define DPMAIF_FRG_CHECK_THRES_MSK 0xFF +#define DPMAIF_PKT_ALIGN_EN BIT(23) +#define DPMAIF_DRB_SIZE_MSK 0x0000FFFF + +#define DPMAIF_DL_PIT_WRIDX_MSK 0x3FFFF +#define DPMAIF_DL_BAT_WRIDX_MSK 0x3FFFF +#define DPMAIF_DL_FRG_WRIDX_MSK 0x3FFFF + +/* DPMAIF_PD_UL_DBG_STA2 */ +#define DPMAIF_UL_IDLE_STS_MSK BIT(11) +#define DPMAIF_UL_IDLE_STS BIT(11) + +/* DPMAIF_PD_DL_DBG_STA1 */ +#define DPMAIF_DL_IDLE_STS BIT(23) +#define DPMAIF_DL_PKT_CHECKSUM_EN BIT(31) +#define DPMAIF_PORT_MODE_MSK BIT(30) +#define DPMAIF_PORT_MODE_PCIE BIT(30) + +/* BASE_NADDR_NRL2_DPMAIF_WDMA */ +#define DPMAIF_DL_BAT_CACHE_PRI BIT(22) +#define DPMAIF_DL_BURST_PIT_EN BIT(13) +#define DPMAIF_MEM_CLR_MASK BIT(0) +#define DPMAIF_SRAM_SYNC_MASK BIT(0) +#define DPMAIF_UL_INIT_DONE_MASK BIT(0) +#define DPMAIF_DL_INIT_DONE_MASK BIT(0) + +#define DPMAIF_DL_PIT_SEQ_MSK 
0xFF +#define DPMAIF_PCIE_MODE_SET_VALUE 0x55 + +#define DPMAIF_UDL_IP_BUSY_MSK BIT(0) + +#define DP_UL_INT_DONE_OFFSET 0 +#define DP_UL_INT_EMPTY_OFFSET 5 +#define DP_UL_INT_MD_NOTRDY_OFFSET 10 +#define DP_UL_INT_PWR_NOTRDY_OFFSET 15 +#define DP_UL_INT_LEN_ERR_OFFSET 20 + +/* Enable and mask/unmaks UL interrupt */ +#define DPMAIF_UL_INT_QDONE_MSK (DPMAIF_ULQS << DP_UL_INT_DONE_OFFSET) +#define DPMAIF_UL_TOP0_INT_MSK BIT(9) + +/* UL interrupt status */ +#define DPMAIF_UL_INT_QDONE (DPMAIF_ULQS << DP_UL_INT_DONE_OFFSET) + +/* Enable and Mask/unmask DL interrupt */ +#define DPMAIF_DL_INT_BATCNT_LEN_ERR_MSK BIT(2) +#define DPMAIF_DL_INT_FRG_LEN_ERR_MSK BIT(7) +#define DPMAIF_DL_INT_DLQ0_QDONE_MSK BIT(8) +#define DPMAIF_DL_INT_DLQ1_QDONE_MSK BIT(9) +#define DPMAIF_DL_INT_DLQ0_PITCNT_LEN_ERR_MSK BIT(10) +#define DPMAIF_DL_INT_DLQ1_PITCNT_LEN_ERR_MSK BIT(11) +#define DPMAIF_DL_INT_Q2TOQ1_MSK BIT(24) +#define DPMAIF_DL_INT_Q2APTOP_MSK BIT(25) + +/* DL interrupt status */ +#define DPMAIF_DL_INT_DUMMY_STATUS BIT(0) +#define DPMAIF_DL_INT_BATCNT_LEN_ERR BIT(2) +#define DPMAIF_DL_INT_FRG_LEN_ERR BIT(7) +#define DPMAIF_DL_INT_DLQ0_PITCNT_LEN_ERR BIT(8) +#define DPMAIF_DL_INT_DLQ1_PITCNT_LEN_ERR BIT(9) +#define DPMAIF_DL_INT_DLQ0_QDONE BIT(13) +#define DPMAIF_DL_INT_DLQ1_QDONE BIT(14) + +/* DPMAIF LRO HW configure */ +#define DPMAIF_HPC_LRO_PATH_DF 3 +/* 0: HPC rules add by HW; 1: HPC rules add by Host */ +#define DPMAIF_HPC_ADD_MODE_DF 0 +#define DPMAIF_HPC_TOTAL_NUM 8 +#define DPMAIF_HPC_MAX_TOTAL_NUM 8 +#define DPMAIF_AGG_MAX_LEN_DF 65535 +#define DPMAIF_AGG_TBL_ENT_NUM_DF 50 +#define DPMAIF_HASH_PRIME_DF 13 +#define DPMAIF_MID_TIMEOUT_THRES_DF 100 +#define DPMAIF_LRO_TIMEOUT_THRES_DF 100 +#define DPMAIF_LRO_PRS_THRES_DF 10 +#define DPMAIF_LRO_HASH_BIT_CHOOSE_DF 0 + +#define DPMAIF_LROPIT_EN_MSK 0x100000 +#define DPMAIF_LROPIT_CHAN_OFS 16 +#define DPMAIF_ADD_LRO_PIT_CHAN_OFS 20 + +#define DPMAIF_DL_PIT_BYTE_SIZE 16 +#define DPMAIF_DL_BAT_BYTE_SIZE 8 +#define DPMAIF_DL_FRG_BYTE_SIZE 8 +#define DPMAIF_UL_DRB_BYTE_SIZE 16 + +#define DPMAIF_UL_DRB_ENTRY_WORD (DPMAIF_UL_DRB_BYTE_SIZE >> 2) +#define DPMAIF_DL_PIT_ENTRY_WORD (DPMAIF_DL_PIT_BYTE_SIZE >> 2) +#define DPMAIF_DL_BAT_ENTRY_WORD (DPMAIF_DL_BAT_BYTE_SIZE >> 2) + +#define DPMAIF_HW_BAT_REMAIN 64 +#define DPMAIF_HW_PKT_BIDCNT 1 + +#define DPMAIF_HW_CHK_BAT_NUM 62 +#define DPMAIF_HW_CHK_FRG_NUM 3 +#define DPMAIF_HW_CHK_PIT_NUM (2 * DPMAIF_HW_CHK_BAT_NUM) + +#define DPMAIF_DLQ_NUM 2 +#define DPMAIF_ULQ_NUM 5 +#define DPMAIF_PKT_BIDCNT 1 + +#define DPMAIF_TOEPLITZ_HASH_EN 1 + +/* word num */ +#define DPMAIF_HASH_SEC_KEY_NUM 40 +#define DPMAIF_HASH_DEFAULT_VALUE 0 +#define DPMAIF_HASH_BIT_MASK_DF 0x7 +#define DPMAIF_HASH_INDR_MASK_DF 0xF0 + +/* 10k */ +#define DPMAIF_HPC_STATS_THRESHOLD 0x2800 + +/* 0x7A1- 1s: unit:512us */ +#define DPMAIF_HPC_STATS_TIMER_CFG 0 + +#define DPMAIF_HASH_INDR_SIZE (DPMAIF_HASH_BIT_MASK_DF + 1) +#define DPMAIF_HASH_INDR_MASK 0xFF00FFFF +#define DPMAIF_HASH_DEFAULT_V_MASK 0xFFFFFF00 +#define DPMAIF_HASH_BIT_MASK 0xFFFFF0FF + +/* dpmaif interrupt configuration */ +#define DPMAIF_AP_UL_L2INTR_EN_MASK DPMAIF_UL_INT_QDONE_MSK + +#define DPMAIF_AP_DL_L2INTR_EN_MASK\ + (DPMAIF_DL_INT_DLQ0_QDONE_MSK | DPMAIF_DL_INT_DLQ1_QDONE_MSK |\ + DPMAIF_DL_INT_DLQ0_PITCNT_LEN_ERR_MSK | DPMAIF_DL_INT_DLQ1_PITCNT_LEN_ERR_MSK |\ + DPMAIF_DL_INT_BATCNT_LEN_ERR_MSK | DPMAIF_DL_INT_FRG_LEN_ERR_MSK) + +#define DPMAIF_AP_UDL_IP_BUSY_EN_MASK (DPMAIF_UDL_IP_BUSY_MSK) + +/* dpmaif interrupt mask status by interrupt source */ +#define 
DPMAIF_SRC0_DL_STATUS_MASK\ + (DPMAIF_DL_INT_DLQ0_QDONE | DPMAIF_DL_INT_DLQ0_PITCNT_LEN_ERR |\ + DPMAIF_DL_INT_BATCNT_LEN_ERR | DPMAIF_DL_INT_FRG_LEN_ERR | DPMAIF_DL_INT_DUMMY_STATUS) + +#define DPMAIF_SRC1_DL_STATUS_MASK\ + (DPMAIF_DL_INT_DLQ1_QDONE | DPMAIF_DL_INT_DLQ1_PITCNT_LEN_ERR) + +#endif From patchwork Wed Jan 18 11:38:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?b?WWFuY2hhbyBZYW5nICjmnajlvabotoUp?= X-Patchwork-Id: 13106231 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 5D014C38147 for ; Wed, 18 Jan 2023 11:46:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Type:MIME-Version: References:In-Reply-To:Message-ID:Date:Subject:CC:To:From:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=dMrUkkKRvfRKWJqThYBbZyBLLI3Nu5HE3sYuJb99UUg=; b=DDrSrZcHpCUl7nKAg9HzfoOeV4 d0mkUkBXoA+jATRS7+5NEp2tCkO+rurfCrlkqxHlmKDgVXygtYgdPf5vnJnMkoFvmh1W0vKZoGpn0 luZIO3rgpnO6JK9iLyAEj7Y5hqsG47Dw8OXp+ynNfDgfNqoCmfaJ5MEN0hGzOUD9c2e5tAGMUgndX onWXKUNFjr5tIWBLHEfDMUdlStS6d/FGhZM0T+isWIOuFGK8pLZbwpn5m1F4DgtrzEYMaeKWWLSV/ J22S1I5HQ5jiGMSpQuFCjuIckAD7wI02EHI3jlITFPokTGLV04DgmsQ1DUDxFyrV7sZeRDk6i4XF9 fin34uaQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pI6tA-000bND-Qz; Wed, 18 Jan 2023 11:45:56 +0000 Received: from mailgw01.mediatek.com ([216.200.240.184]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pI6t5-000bLr-G4 for linux-mediatek@lists.infradead.org; Wed, 18 Jan 2023 11:45:54 +0000 X-UUID: a578a36a972511edbbe3f76fe852e059-20230118 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=mediatek.com; s=dk; h=Content-Type:MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:CC:To:From; bh=dMrUkkKRvfRKWJqThYBbZyBLLI3Nu5HE3sYuJb99UUg=; b=i88omke6mKce52zll0jyrNo1UfEv/puGMPgR6CHgqACX/PSTMGaXhKIZrqnzD6bGxcN6PEuP2HS6y/vUtLEep3Q7SM7SnUMiodpSUUC7l+DogM66whlt8USJpywDZA7Kr6TQYNiUk6z/CDXJvoXNYXxh5/VTUBC7QcUDm92iGgI=; X-CID-P-RULE: Release_Ham X-CID-O-INFO: VERSION:1.1.18,REQID:647398a2-2799-48fd-b7fa-20f1d7820770,IP:0,U RL:0,TC:0,Content:-25,EDM:0,RT:0,SF:0,FILE:0,BULK:0,RULE:Release_Ham,ACTIO N:release,TS:-25 X-CID-META: VersionHash:3ca2d6b,CLOUDID:adbf0355-dd49-462e-a4be-2143a3ddc739,B ulkID:nil,BulkQuantity:0,Recheck:0,SF:102,TC:nil,Content:0,EDM:-3,IP:nil,U RL:11|1,File:nil,Bulk:nil,QS:nil,BEC:nil,COL:0,OSI:0,OSA:0 X-CID-BVR: 0 X-UUID: a578a36a972511edbbe3f76fe852e059-20230118 Received: from mtkmbs11n1.mediatek.inc [(172.21.101.185)] by mailgw01.mediatek.com (envelope-from ) (musrelay.mediatek.com ESMTP with TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384 256/256) with ESMTP id 1876583153; Wed, 18 Jan 2023 04:45:46 -0700 Received: from mtkmbs13n1.mediatek.inc (172.21.101.193) by mtkmbs10n2.mediatek.inc (172.21.101.183) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.792.3; Wed, 18 Jan 2023 
19:45:05 +0800 Received: from mcddlt001.gcn.mediatek.inc (10.19.240.15) by mtkmbs13n1.mediatek.inc (172.21.101.73) with Microsoft SMTP Server id 15.2.792.15 via Frontend Transport; Wed, 18 Jan 2023 19:45:03 +0800 From: Yanchao Yang To: Loic Poulain , Sergey Ryazanov , Johannes Berg , "David S . Miller" , Eric Dumazet , "Jakub Kicinski" , Paolo Abeni , netdev ML , kernel ML CC: Intel experts , Chetan , MTK ML , Liang Lu , Haijun Liu , Hua Yang , Ting Wang , Felix Chen , Mingliang Xu , Min Dong , Aiden Wang , Guohao Zhang , Chris Feng , "Yanchao Yang" , Lambert Wang , Mingchuang Qiao , Xiayu Zhang , Haozhe Chang Subject: [PATCH net-next v2 09/12] net: wwan: tmi: Introduce WWAN interface Date: Wed, 18 Jan 2023 19:38:56 +0800 Message-ID: <20230118113859.175836-10-yanchao.yang@mediatek.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20230118113859.175836-1-yanchao.yang@mediatek.com> References: <20230118113859.175836-1-yanchao.yang@mediatek.com> MIME-Version: 1.0 X-MTK: N X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230118_034551_590012_67107AD4 X-CRM114-Status: GOOD ( 27.29 ) X-BeenThere: linux-mediatek@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "Linux-mediatek" Errors-To: linux-mediatek-bounces+linux-mediatek=archiver.kernel.org@lists.infradead.org Creates the WWAN interface which implements the wwan_ops for registration with the WWAN framework. WWAN interface also implements the net_device_ops functions used by the network devices. Network device operations include open, stop, start transmission and get states. Signed-off-by: Yanchao Yang Signed-off-by: Hua Yang --- drivers/net/wwan/mediatek/Makefile | 4 +- drivers/net/wwan/mediatek/mtk_data_plane.h | 25 +- drivers/net/wwan/mediatek/mtk_dpmaif.c | 76 ++- drivers/net/wwan/mediatek/mtk_dpmaif_drv.h | 10 +- drivers/net/wwan/mediatek/mtk_ethtool.c | 179 ++++++ drivers/net/wwan/mediatek/mtk_wwan.c | 662 +++++++++++++++++++++ 6 files changed, 943 insertions(+), 13 deletions(-) create mode 100644 drivers/net/wwan/mediatek/mtk_ethtool.c create mode 100644 drivers/net/wwan/mediatek/mtk_wwan.c diff --git a/drivers/net/wwan/mediatek/Makefile b/drivers/net/wwan/mediatek/Makefile index 8c37a7f9d598..6a5e699987ef 100644 --- a/drivers/net/wwan/mediatek/Makefile +++ b/drivers/net/wwan/mediatek/Makefile @@ -12,7 +12,9 @@ mtk_tmi-y = \ mtk_port.o \ mtk_port_io.o \ mtk_fsm.o \ - mtk_dpmaif.o + mtk_dpmaif.o \ + mtk_wwan.o \ + mtk_ethtool.o ccflags-y += -I$(srctree)/$(src)/ ccflags-y += -I$(srctree)/$(src)/pcie/ diff --git a/drivers/net/wwan/mediatek/mtk_data_plane.h b/drivers/net/wwan/mediatek/mtk_data_plane.h index 4daf3ec32c91..40c48b01e02c 100644 --- a/drivers/net/wwan/mediatek/mtk_data_plane.h +++ b/drivers/net/wwan/mediatek/mtk_data_plane.h @@ -22,11 +22,12 @@ enum mtk_data_feature { DATA_F_RXFH = BIT(1), DATA_F_INTR_COALESCE = BIT(2), DATA_F_MULTI_NETDEV = BIT(16), - DATA_F_ETH_PDN = BIT(17), + DATA_F_ETH_PDN = BIT(17) }; struct mtk_data_blk { struct mtk_md_dev *mdev; + struct mtk_wwan_ctlb *wcb; struct mtk_dpmaif_ctlb *dcb; }; @@ -85,6 +86,16 @@ struct mtk_data_trans_ops { struct sk_buff *skb, u64 data); }; +enum mtk_data_evt { + DATA_EVT_MIN, + DATA_EVT_TX_START, + DATA_EVT_TX_STOP, + DATA_EVT_RX_STOP, + DATA_EVT_REG_DEV, + DATA_EVT_UNREG_DEV, + DATA_EVT_MAX +}; + struct mtk_data_trans_info { u32 cap; unsigned char rxq_cnt; @@ -93,9 +104,21 @@ struct mtk_data_trans_info { struct 
napi_struct **napis; }; +struct mtk_data_port_ops { + int (*init)(struct mtk_data_blk *data_blk, struct mtk_data_trans_info *trans_info); + void (*exit)(struct mtk_data_blk *data_blk); + int (*recv)(struct mtk_data_blk *data_blk, struct sk_buff *skb, + unsigned char q_id, unsigned char if_id); + void (*notify)(struct mtk_data_blk *data_blk, enum mtk_data_evt evt, u64 data); +}; + +void mtk_ethtool_set_ops(struct net_device *dev); +int mtk_wwan_cmd_execute(struct net_device *dev, enum mtk_data_cmd_type cmd, void *data); +u16 mtk_wwan_select_queue(struct net_device *dev, struct sk_buff *skb, struct net_device *sb_dev); int mtk_data_init(struct mtk_md_dev *mdev); int mtk_data_exit(struct mtk_md_dev *mdev); +extern struct mtk_data_port_ops data_port_ops; extern struct mtk_data_trans_ops data_trans_ops; #endif /* __MTK_DATA_PLANE_H__ */ diff --git a/drivers/net/wwan/mediatek/mtk_dpmaif.c b/drivers/net/wwan/mediatek/mtk_dpmaif.c index 36f247146bca..246b093a8cf6 100644 --- a/drivers/net/wwan/mediatek/mtk_dpmaif.c +++ b/drivers/net/wwan/mediatek/mtk_dpmaif.c @@ -426,6 +426,7 @@ enum dpmaif_dump_flag { struct mtk_dpmaif_ctlb { struct mtk_data_blk *data_blk; + struct mtk_data_port_ops *port_ops; struct dpmaif_drv_info *drv_info; struct napi_struct *napi[DPMAIF_RXQ_CNT_MAX]; @@ -707,10 +708,10 @@ static int mtk_dpmaif_reload_rx_page(struct mtk_dpmaif_ctlb *dcb, page_info->offset = data - page_address(page_info->page); page_info->data_len = bat_ring->buf_size; page_info->data_dma_addr = dma_map_page(DCB_TO_MDEV(dcb)->dev, - page_info->page, - page_info->offset, - page_info->data_len, - DMA_FROM_DEVICE); + page_info->page, + page_info->offset, + page_info->data_len, + DMA_FROM_DEVICE); ret = dma_mapping_error(DCB_TO_MDEV(dcb)->dev, page_info->data_dma_addr); if (unlikely(ret)) { dev_err(DCB_TO_MDEV(dcb)->dev, "Failed to map dma!\n"); @@ -1421,6 +1422,8 @@ static int mtk_dpmaif_tx_rel_internal(struct dpmaif_txq *txq, txq->drb_rel_rd_idx = cur_idx; atomic_inc(&txq->budget); + if (atomic_read(&txq->budget) > txq->drb_cnt / 8) + dcb->port_ops->notify(dcb->data_blk, DATA_EVT_TX_START, (u64)1 << txq->id); } *real_rel_cnt = i; @@ -2795,6 +2798,40 @@ static int mtk_dpmaif_irq_exit(struct mtk_dpmaif_ctlb *dcb) return 0; } +static int mtk_dpmaif_port_init(struct mtk_dpmaif_ctlb *dcb) +{ + struct mtk_data_trans_info trans_info; + struct dpmaif_rxq *rxq; + int ret; + int i; + + memset(&trans_info, 0x00, sizeof(struct mtk_data_trans_info)); + trans_info.cap = dcb->res_cfg->cap; + trans_info.txq_cnt = dcb->res_cfg->txq_cnt; + trans_info.rxq_cnt = dcb->res_cfg->rxq_cnt; + trans_info.max_mtu = dcb->bat_info.max_mtu; + + for (i = 0; i < trans_info.rxq_cnt; i++) { + rxq = &dcb->rxqs[i]; + dcb->napi[i] = &rxq->napi; + } + trans_info.napis = dcb->napi; + + /* Initialize data port layer. */ + dcb->port_ops = &data_port_ops; + ret = dcb->port_ops->init(dcb->data_blk, &trans_info); + if (ret < 0) + dev_err(DCB_TO_DEV(dcb), + "Failed to initialize data port layer, ret=%d\n", ret); + + return ret; +} + +static void mtk_dpmaif_port_exit(struct mtk_dpmaif_ctlb *dcb) +{ + dcb->port_ops->exit(dcb->data_blk); +} + static int mtk_dpmaif_hw_init(struct mtk_dpmaif_ctlb *dcb) { struct dpmaif_bat_ring *bat_ring; @@ -2940,11 +2977,18 @@ static int mtk_dpmaif_stop(struct mtk_dpmaif_ctlb *dcb) */ dcb->dpmaif_state = DPMAIF_STATE_PWROFF; + /* Stop data port layer tx. */ + dcb->port_ops->notify(dcb->data_blk, DATA_EVT_TX_STOP, 0xff); + /* Stop all tx service. */ mtk_dpmaif_tx_srvs_stop(dcb); /* Stop dpmaif tx/rx handle. 
*/ mtk_dpmaif_trans_ctl(dcb, false); + + /* Stop data port layer rx. */ + dcb->port_ops->notify(dcb->data_blk, DATA_EVT_RX_STOP, 0xff); + out: return 0; } @@ -2962,6 +3006,11 @@ static void mtk_dpmaif_fsm_callback(struct mtk_fsm_param *fsm_param, void *data) case FSM_STATE_OFF: mtk_dpmaif_stop(dcb); + /* Unregister data port, because data port will be + * registered again in FSM_STATE_READY stage. + */ + dcb->port_ops->notify(dcb->data_blk, DATA_EVT_UNREG_DEV, 0); + /* Flush all cmd process. */ flush_work(&dcb->cmd_srv.work); @@ -2973,6 +3022,7 @@ static void mtk_dpmaif_fsm_callback(struct mtk_fsm_param *fsm_param, void *data) mtk_dpmaif_start(dcb); break; case FSM_STATE_READY: + dcb->port_ops->notify(dcb->data_blk, DATA_EVT_REG_DEV, 0); break; case FSM_STATE_MDEE: if (fsm_param->fsm_flag == FSM_F_MDEE_INIT) @@ -3056,6 +3106,12 @@ static int mtk_dpmaif_sw_init(struct mtk_data_blk *data_blk, const struct dpmaif goto err_init_drv_res; } + ret = mtk_dpmaif_port_init(dcb); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to initialize data port, ret=%d\n", ret); + goto err_init_port; + } + ret = mtk_dpmaif_fsm_init(dcb); if (ret < 0) { dev_err(DCB_TO_DEV(dcb), "Failed to initialize dpmaif fsm, ret=%d\n", ret); @@ -3073,6 +3129,8 @@ static int mtk_dpmaif_sw_init(struct mtk_data_blk *data_blk, const struct dpmaif err_init_irq: mtk_dpmaif_fsm_exit(dcb); err_init_fsm: + mtk_dpmaif_port_exit(dcb); +err_init_port: mtk_dpmaif_drv_res_exit(dcb); err_init_drv_res: mtk_dpmaif_cmd_srvs_exit(dcb); @@ -3100,6 +3158,7 @@ static int mtk_dpmaif_sw_exit(struct mtk_data_blk *data_blk) mtk_dpmaif_irq_exit(dcb); mtk_dpmaif_fsm_exit(dcb); + mtk_dpmaif_port_exit(dcb); mtk_dpmaif_drv_res_exit(dcb); mtk_dpmaif_cmd_srvs_exit(dcb); mtk_dpmaif_tx_srvs_exit(dcb); @@ -3521,6 +3580,8 @@ static int mtk_dpmaif_rx_skb(struct dpmaif_rxq *rxq, struct dpmaif_rx_record *rx skb_record_rx_queue(new_skb, rxq->id); + /* Send skb to data port. */ + ret = dcb->port_ops->recv(dcb->data_blk, new_skb, rxq->id, rx_record->cur_ch_id); dcb->traffic_stats.rx_packets[rxq->id]++; out: rx_record->lro_parent = NULL; @@ -3883,10 +3944,13 @@ static int mtk_dpmaif_send_pkt(struct mtk_dpmaif_ctlb *dcb, struct sk_buff *skb, vq = &dcb->tx_vqs[vq_id]; srv_id = dcb->res_cfg->tx_vq_srv_map[vq_id]; - if (likely(skb_queue_len(&vq->list) < vq->max_len)) + if (likely(skb_queue_len(&vq->list) < vq->max_len)) { skb_queue_tail(&vq->list, skb); - else + } else { + /* Notify to data port layer, data port should carry off the net device tx queue. 
*/ + dcb->port_ops->notify(dcb->data_blk, DATA_EVT_TX_STOP, (u64)1 << vq_id); ret = -EBUSY; + } mtk_dpmaif_wake_up_tx_srv(&dcb->tx_srvs[srv_id]); diff --git a/drivers/net/wwan/mediatek/mtk_dpmaif_drv.h b/drivers/net/wwan/mediatek/mtk_dpmaif_drv.h index 29b6c99bba42..34ec846e6336 100644 --- a/drivers/net/wwan/mediatek/mtk_dpmaif_drv.h +++ b/drivers/net/wwan/mediatek/mtk_dpmaif_drv.h @@ -84,12 +84,12 @@ enum mtk_drv_err { enum { DPMAIF_CLEAR_INTR, - DPMAIF_UNMASK_INTR, + DPMAIF_UNMASK_INTR }; enum dpmaif_drv_dlq_id { DPMAIF_DLQ0 = 0, - DPMAIF_DLQ1, + DPMAIF_DLQ1 }; struct dpmaif_drv_dlq { @@ -132,7 +132,7 @@ enum dpmaif_drv_ring_type { DPMAIF_PIT, DPMAIF_BAT, DPMAIF_FRAG, - DPMAIF_DRB, + DPMAIF_DRB }; enum dpmaif_drv_ring_idx { @@ -143,7 +143,7 @@ enum dpmaif_drv_ring_idx { DPMAIF_FRAG_WIDX, DPMAIF_FRAG_RIDX, DPMAIF_DRB_WIDX, - DPMAIF_DRB_RIDX, + DPMAIF_DRB_RIDX }; struct dpmaif_drv_irq_en_mask { @@ -184,7 +184,7 @@ enum dpmaif_drv_intr_type { DPMAIF_INTR_DL_FRGCNT_LEN_ERR, DPMAIF_INTR_DL_PITCNT_LEN_ERR, DPMAIF_INTR_DL_DONE, - DPMAIF_INTR_MAX + DPMAIF_INTR_MAX, }; #define DPMAIF_INTR_COUNT ((DPMAIF_INTR_MAX) - (DPMAIF_INTR_MIN) - 1) diff --git a/drivers/net/wwan/mediatek/mtk_ethtool.c b/drivers/net/wwan/mediatek/mtk_ethtool.c new file mode 100644 index 000000000000..b052d41027c2 --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_ethtool.c @@ -0,0 +1,179 @@ +// SPDX-License-Identifier: BSD-3-Clause-Clear +/* + * Copyright (c) 2022, MediaTek Inc. + */ + +#include +#include + +#include "mtk_data_plane.h" + +#define MTK_MAX_COALESCE_TIME 3 +#define MTK_MAX_COALESCE_FRAMES 1000 + +static int mtk_ethtool_cmd_execute(struct net_device *dev, enum mtk_data_cmd_type cmd, void *data) +{ + return mtk_wwan_cmd_execute(dev, cmd, data); +} + +static void mtk_ethtool_get_strings(struct net_device *dev, u32 sset, u8 *data) +{ + if (sset != ETH_SS_STATS) + return; + + mtk_ethtool_cmd_execute(dev, DATA_CMD_STRING_GET, data); +} + +static int mtk_ethtool_get_sset_count(struct net_device *dev, int sset) +{ + int s_count = 0; + int ret; + + if (sset != ETH_SS_STATS) + return -EOPNOTSUPP; + + ret = mtk_ethtool_cmd_execute(dev, DATA_CMD_STRING_CNT_GET, &s_count); + + if (ret) + return ret; + + return s_count; +} + +static void mtk_ethtool_get_stats(struct net_device *dev, + struct ethtool_stats *stats, u64 *data) +{ + mtk_ethtool_cmd_execute(dev, DATA_CMD_TRANS_DUMP, data); +} + +static int mtk_ethtool_get_coalesce(struct net_device *dev, + struct ethtool_coalesce *ec, + struct kernel_ethtool_coalesce *kec, + struct netlink_ext_ack *ack) +{ + struct mtk_data_intr_coalesce intr_get; + int ret; + + ret = mtk_ethtool_cmd_execute(dev, DATA_CMD_INTR_COALESCE_GET, &intr_get); + + if (ret) + return ret; + + ec->rx_coalesce_usecs = intr_get.rx_coalesce_usecs; + ec->tx_coalesce_usecs = intr_get.tx_coalesce_usecs; + ec->rx_max_coalesced_frames = intr_get.rx_coalesced_frames; + ec->tx_max_coalesced_frames = intr_get.tx_coalesced_frames; + + return 0; +} + +static int mtk_ethtool_set_coalesce(struct net_device *dev, + struct ethtool_coalesce *ec, + struct kernel_ethtool_coalesce *kec, + struct netlink_ext_ack *ack) +{ + struct mtk_data_intr_coalesce intr_set; + + if (ec->rx_coalesce_usecs > MTK_MAX_COALESCE_TIME) + return -EINVAL; + if (ec->tx_coalesce_usecs > MTK_MAX_COALESCE_TIME) + return -EINVAL; + if (ec->rx_max_coalesced_frames > MTK_MAX_COALESCE_FRAMES) + return -EINVAL; + if (ec->tx_max_coalesced_frames > MTK_MAX_COALESCE_FRAMES) + return -EINVAL; + + intr_set.rx_coalesce_usecs = ec->rx_coalesce_usecs; + 
intr_set.tx_coalesce_usecs = ec->tx_coalesce_usecs; + intr_set.rx_coalesced_frames = ec->rx_max_coalesced_frames; + intr_set.tx_coalesced_frames = ec->tx_max_coalesced_frames; + + return mtk_ethtool_cmd_execute(dev, DATA_CMD_INTR_COALESCE_SET, &intr_set); +} + +static int mtk_ethtool_get_rxfh(struct net_device *dev, u32 *indir, u8 *key, u8 *hfunc) +{ + struct mtk_data_rxfh rxfh; + + if (!indir && !key) + return 0; + + if (hfunc) + *hfunc = ETH_RSS_HASH_TOP; + + rxfh.indir = indir; + rxfh.key = key; + + return mtk_ethtool_cmd_execute(dev, DATA_CMD_RXFH_GET, &rxfh); +} + +static int mtk_ethtool_set_rxfh(struct net_device *dev, const u32 *indir, + const u8 *key, const u8 hfunc) +{ + struct mtk_data_rxfh rxfh; + + if (hfunc != ETH_RSS_HASH_NO_CHANGE) + return -EOPNOTSUPP; + + if (!indir && !key) + return 0; + + rxfh.indir = (u32 *)indir; + rxfh.key = (u8 *)key; + + return mtk_ethtool_cmd_execute(dev, DATA_CMD_RXFH_SET, &rxfh); +} + +static int mtk_ethtool_get_rxfhc(struct net_device *dev, + struct ethtool_rxnfc *rxnfc, u32 *rule_locs) +{ + u32 rx_rings; + int ret; + + /* Only supported %ETHTOOL_GRXRINGS */ + if (!rxnfc || rxnfc->cmd != ETHTOOL_GRXRINGS) + return -EOPNOTSUPP; + + ret = mtk_ethtool_cmd_execute(dev, DATA_CMD_RXQ_NUM_GET, &rx_rings); + if (!ret) + rxnfc->data = rx_rings; + + return ret; +} + +static u32 mtk_ethtool_get_indir_size(struct net_device *dev) +{ + u32 indir_size = 0; + + mtk_ethtool_cmd_execute(dev, DATA_CMD_INDIR_SIZE_GET, &indir_size); + + return indir_size; +} + +static u32 mtk_ethtool_get_hkey_size(struct net_device *dev) +{ + u32 hkey_size = 0; + + mtk_ethtool_cmd_execute(dev, DATA_CMD_HKEY_SIZE_GET, &hkey_size); + + return hkey_size; +} + +static const struct ethtool_ops mtk_wwan_ethtool_ops = { + .supported_coalesce_params = ETHTOOL_COALESCE_USECS | ETHTOOL_COALESCE_MAX_FRAMES, + .get_ethtool_stats = mtk_ethtool_get_stats, + .get_sset_count = mtk_ethtool_get_sset_count, + .get_strings = mtk_ethtool_get_strings, + .get_coalesce = mtk_ethtool_get_coalesce, + .set_coalesce = mtk_ethtool_set_coalesce, + .get_rxfh = mtk_ethtool_get_rxfh, + .set_rxfh = mtk_ethtool_set_rxfh, + .get_rxnfc = mtk_ethtool_get_rxfhc, + .get_rxfh_indir_size = mtk_ethtool_get_indir_size, + .get_rxfh_key_size = mtk_ethtool_get_hkey_size, +}; + +void mtk_ethtool_set_ops(struct net_device *dev) +{ + dev->ethtool_ops = &mtk_wwan_ethtool_ops; +} diff --git a/drivers/net/wwan/mediatek/mtk_wwan.c b/drivers/net/wwan/mediatek/mtk_wwan.c new file mode 100644 index 000000000000..e232d403a983 --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_wwan.c @@ -0,0 +1,662 @@ +// SPDX-License-Identifier: BSD-3-Clause-Clear +/* + * Copyright (c) 2022, MediaTek Inc. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "mtk_data_plane.h" +#include "mtk_dev.h" + +#define MTK_NETDEV_MAX 20 +#define MTK_DFLT_INTF_ID 0 +#define MTK_NETDEV_WDT (HZ) +#define MTK_CMD_WDT (HZ) +#define MTK_MAX_INTF_ID (MTK_NETDEV_MAX - 1) +#define MTK_NAPI_POLL_WEIGHT 128 + +static unsigned int napi_budget = MTK_NAPI_POLL_WEIGHT; + +/** + * struct mtk_wwan_instance - Information about netdevice. + * @wcb: Contains all information about WWAN port layer. + * @stats: Statistics of netdevice's tx/rx packets. + * @tx_busy: Statistics of netdevice's busy counts. + * @netdev: Pointer to netdevice structure. 
+ * @intf_id: The netdevice's interface id + */ +struct mtk_wwan_instance { + struct mtk_wwan_ctlb *wcb; + struct rtnl_link_stats64 stats; + unsigned long long tx_busy; + struct net_device *netdev; + unsigned int intf_id; +}; + +/** + * struct mtk_wwan_ctlb - Information about port layer and needed trans layer. + * @data_blk: Contains data port, trans layer, md_dev structure. + * @mdev: Pointer of mtk_md_dev. + * @trans_ops: Contains trans layer ops: send, select_txq, napi_poll. + * @wwan_inst: Instance of network device. + * @napis: Trans layer alloc napi structure by rx queue. + * @dummy_dev: Used for multiple network devices share one napi. + * @cap: Contains different hardware capabilities. + * @max_mtu: The max MTU supported. + * @napi_enabled: Mark for napi state. + * @active_cnt: The counter of network devices that are UP. + * @txq_num: Total TX qdisc number. + * @rxq_num: Total RX qdisc number. + * @reg_done: Mark for ntwork devices register state. + */ +struct mtk_wwan_ctlb { + struct mtk_data_blk *data_blk; + struct mtk_md_dev *mdev; + struct mtk_data_trans_ops *trans_ops; + struct mtk_wwan_instance __rcu *wwan_inst[MTK_NETDEV_MAX]; + struct napi_struct **napis; + struct net_device dummy_dev; + + u32 cap; + atomic_t napi_enabled; + unsigned int max_mtu; + unsigned int active_cnt; + unsigned char txq_num; + unsigned char rxq_num; + bool reg_done; +}; + +static void mtk_wwan_set_skb(struct sk_buff *skb, struct net_device *netdev) +{ + unsigned int pkt_type; + + pkt_type = skb->data[0] & 0xF0; + + if (pkt_type == IPV4_VERSION) + skb->protocol = htons(ETH_P_IP); + else + skb->protocol = htons(ETH_P_IPV6); + + skb->dev = netdev; +} + +static int mtk_wwan_data_recv(struct mtk_data_blk *data_blk, struct sk_buff *skb, + unsigned char q_id, unsigned char intf_id) +{ + struct mtk_wwan_instance *wwan_inst; + struct net_device *netdev; + struct napi_struct *napi; + + if (unlikely(!data_blk || !data_blk->wcb)) + goto err_rx; + + if (intf_id > MTK_MAX_INTF_ID) { + dev_err(data_blk->mdev->dev, "Invalid interface id=%d\n", intf_id); + goto err_rx; + } + + rcu_read_lock(); + wwan_inst = rcu_dereference(data_blk->wcb->wwan_inst[intf_id]); + + if (unlikely(!wwan_inst)) { + dev_err(data_blk->mdev->dev, "Invalid pointer wwan_inst is NULL\n"); + rcu_read_unlock(); + goto err_rx; + } + + napi = data_blk->wcb->napis[q_id]; + netdev = wwan_inst->netdev; + + mtk_wwan_set_skb(skb, netdev); + + wwan_inst->stats.rx_packets++; + wwan_inst->stats.rx_bytes += skb->len; + + napi_gro_receive(napi, skb); + + rcu_read_unlock(); + return 0; + +err_rx: + dev_kfree_skb_any(skb); + return -EINVAL; +} + +static void mtk_wwan_napi_enable(struct mtk_wwan_ctlb *wcb) +{ + int i; + + if (atomic_cmpxchg(&wcb->napi_enabled, 0, 1) == 0) { + for (i = 0; i < wcb->rxq_num; i++) + napi_enable(wcb->napis[i]); + } +} + +static void mtk_wwan_napi_disable(struct mtk_wwan_ctlb *wcb) +{ + int i; + + if (atomic_cmpxchg(&wcb->napi_enabled, 1, 0) == 1) { + for (i = 0; i < wcb->rxq_num; i++) { + napi_synchronize(wcb->napis[i]); + napi_disable(wcb->napis[i]); + } + } +} + +static int mtk_wwan_open(struct net_device *dev) +{ + struct mtk_wwan_instance *wwan_inst = wwan_netdev_drvpriv(dev); + struct mtk_wwan_ctlb *wcb = wwan_inst->wcb; + struct mtk_data_trans_ctl trans_ctl; + int ret; + + if (wcb->active_cnt == 0) { + mtk_wwan_napi_enable(wcb); + trans_ctl.enable = true; + ret = mtk_wwan_cmd_execute(dev, DATA_CMD_TRANS_CTL, &trans_ctl); + if (ret < 0) { + dev_err(wcb->mdev->dev, "Failed to enable trans\n"); + goto err_ctl; + } + } + + 
wcb->active_cnt++; + + netif_tx_start_all_queues(dev); + netif_carrier_on(dev); + + return 0; + +err_ctl: + mtk_wwan_napi_disable(wcb); + return ret; +} + +static int mtk_wwan_stop(struct net_device *dev) +{ + struct mtk_wwan_instance *wwan_inst = wwan_netdev_drvpriv(dev); + struct mtk_wwan_ctlb *wcb = wwan_inst->wcb; + struct mtk_data_trans_ctl trans_ctl; + int ret; + + netif_carrier_off(dev); + netif_tx_disable(dev); + + if (wcb->active_cnt == 1) { + trans_ctl.enable = false; + ret = mtk_wwan_cmd_execute(dev, DATA_CMD_TRANS_CTL, &trans_ctl); + if (ret < 0) + dev_err(wcb->mdev->dev, "Failed to disable trans\n"); + mtk_wwan_napi_disable(wcb); + } + wcb->active_cnt--; + + return 0; +} + +static void mtk_wwan_select_txq(struct mtk_wwan_instance *wwan_inst, struct sk_buff *skb, + enum mtk_pkt_type pkt_type) +{ + u16 qid; + + qid = wwan_inst->wcb->trans_ops->select_txq(skb, pkt_type); + if (qid > wwan_inst->wcb->txq_num) + qid = 0; + + skb_set_queue_mapping(skb, qid); +} + +static netdev_tx_t mtk_wwan_start_xmit(struct sk_buff *skb, struct net_device *dev) +{ + struct mtk_wwan_instance *wwan_inst = wwan_netdev_drvpriv(dev); + unsigned int intf_id = wwan_inst->intf_id; + unsigned int skb_len = skb->len; + int ret; + + if (unlikely(skb->len > dev->mtu)) { + dev_err(wwan_inst->wcb->mdev->dev, + "Failed to write skb,netdev=%s,len=0x%x,MTU=0x%x\n", + dev->name, skb->len, dev->mtu); + goto err_tx; + } + + /* select trans layer virtual queue */ + mtk_wwan_select_txq(wwan_inst, skb, PURE_IP); + + /* Forward skb to trans layer(DPMAIF). */ + ret = wwan_inst->wcb->trans_ops->send(wwan_inst->wcb->data_blk, DATA_PKT, skb, intf_id); + if (ret == -EBUSY) { + wwan_inst->tx_busy++; + return NETDEV_TX_BUSY; + } else if (ret == -EINVAL) { + goto err_tx; + } + + wwan_inst->stats.tx_packets++; + wwan_inst->stats.tx_bytes += skb_len; + goto out; + +err_tx: + wwan_inst->stats.tx_errors++; + wwan_inst->stats.tx_dropped++; + dev_kfree_skb_any(skb); +out: + return NETDEV_TX_OK; +} + +static void mtk_wwan_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats) +{ + struct mtk_wwan_instance *wwan_inst = wwan_netdev_drvpriv(dev); + + memcpy(stats, &wwan_inst->stats, sizeof(*stats)); +} + +static const struct net_device_ops mtk_netdev_ops = { + .ndo_open = mtk_wwan_open, + .ndo_stop = mtk_wwan_stop, + .ndo_start_xmit = mtk_wwan_start_xmit, + .ndo_get_stats64 = mtk_wwan_get_stats, +}; + +static void mtk_wwan_cmd_complete(void *data) +{ + struct mtk_data_cmd *event; + struct sk_buff *skb = data; + + event = (struct mtk_data_cmd *)skb->data; + complete(&event->done); +} + +static int mtk_wwan_cmd_check(struct net_device *dev, enum mtk_data_cmd_type cmd) +{ + struct mtk_wwan_instance *wwan_inst = wwan_netdev_drvpriv(dev); + int ret = 0; + + switch (cmd) { + case DATA_CMD_INTR_COALESCE_GET: + fallthrough; + case DATA_CMD_INTR_COALESCE_SET: + if (!(wwan_inst->wcb->cap & DATA_F_INTR_COALESCE)) + ret = -EOPNOTSUPP; + break; + case DATA_CMD_INDIR_SIZE_GET: + fallthrough; + case DATA_CMD_HKEY_SIZE_GET: + fallthrough; + case DATA_CMD_RXFH_GET: + fallthrough; + case DATA_CMD_RXFH_SET: + if (!(wwan_inst->wcb->cap & DATA_F_RXFH)) + ret = -EOPNOTSUPP; + break; + case DATA_CMD_RXQ_NUM_GET: + fallthrough; + case DATA_CMD_TRANS_DUMP: + fallthrough; + case DATA_CMD_STRING_CNT_GET: + fallthrough; + case DATA_CMD_STRING_GET: + break; + case DATA_CMD_TRANS_CTL: + break; + default: + ret = -EOPNOTSUPP; + break; + } + + return ret; +} + +static struct sk_buff *mtk_wwan_cmd_alloc(enum mtk_data_cmd_type cmd, unsigned int len) + +{ + 
struct mtk_data_cmd *event; + struct sk_buff *skb; + + skb = dev_alloc_skb(sizeof(*event) + len); + if (unlikely(!skb)) + return NULL; + + skb_put(skb, len + sizeof(*event)); + event = (struct mtk_data_cmd *)skb->data; + event->cmd = cmd; + event->len = len; + + init_completion(&event->done); + event->data_complete = mtk_wwan_cmd_complete; + + return skb; +} + +static int mtk_wwan_cmd_send(struct net_device *dev, struct sk_buff *skb) +{ + struct mtk_wwan_instance *wwan_inst = wwan_netdev_drvpriv(dev); + struct mtk_data_cmd *event = (struct mtk_data_cmd *)skb->data; + int ret; + + ret = wwan_inst->wcb->trans_ops->send(wwan_inst->wcb->data_blk, DATA_CMD, skb, 0); + if (ret < 0) + return ret; + + if (!wait_for_completion_timeout(&event->done, MTK_CMD_WDT)) + return -ETIMEDOUT; + + if (event->ret < 0) + return event->ret; + + return 0; +} + +int mtk_wwan_cmd_execute(struct net_device *dev, + enum mtk_data_cmd_type cmd, void *data) +{ + struct mtk_wwan_instance *wwan_inst; + struct sk_buff *skb; + int ret; + + if (mtk_wwan_cmd_check(dev, cmd)) + return -EOPNOTSUPP; + + skb = mtk_wwan_cmd_alloc(cmd, sizeof(void *)); + if (unlikely(!skb)) + return -ENOMEM; + + SKB_TO_CMD_DATA(skb) = data; + + ret = mtk_wwan_cmd_send(dev, skb); + if (ret < 0) { + wwan_inst = wwan_netdev_drvpriv(dev); + dev_err(wwan_inst->wcb->mdev->dev, + "Failed to excute command:ret=%d,cmd=%d\n", ret, cmd); + } + + if (likely(skb)) + dev_kfree_skb_any(skb); + + return ret; +} + +static int mtk_wwan_start_txq(struct mtk_wwan_ctlb *wcb, u32 qmask) +{ + struct mtk_wwan_instance *wwan_inst; + struct net_device *dev; + int i; + + rcu_read_lock(); + /* All wwan network devices share same HIF queue */ + for (i = 0; i < MTK_NETDEV_MAX; i++) { + wwan_inst = rcu_dereference(wcb->wwan_inst[i]); + if (!wwan_inst) + continue; + + dev = wwan_inst->netdev; + + if (!(dev->flags & IFF_UP)) + continue; + + netif_tx_wake_all_queues(dev); + netif_carrier_on(dev); + } + rcu_read_unlock(); + + return 0; +} + +static int mtk_wwan_stop_txq(struct mtk_wwan_ctlb *wcb, u32 qmask) +{ + struct mtk_wwan_instance *wwan_inst; + struct net_device *dev; + int i; + + rcu_read_lock(); + /* All wwan network devices share same HIF queue */ + for (i = 0; i < MTK_NETDEV_MAX; i++) { + wwan_inst = rcu_dereference(wcb->wwan_inst[i]); + if (!wwan_inst) + continue; + + dev = wwan_inst->netdev; + + if (!(dev->flags & IFF_UP)) + continue; + + netif_carrier_off(dev); + /* the network transmit lock has already been held in the ndo_start_xmit context */ + netif_tx_stop_all_queues(dev); + } + rcu_read_unlock(); + + return 0; +} + +static void mtk_wwan_napi_exit(struct mtk_wwan_ctlb *wcb) +{ + int i; + + for (i = 0; i < wcb->rxq_num; i++) { + if (!wcb->napis[i]) + continue; + netif_napi_del(wcb->napis[i]); + } +} + +static int mtk_wwan_napi_init(struct mtk_wwan_ctlb *wcb, struct net_device *dev) +{ + int i; + + for (i = 0; i < wcb->rxq_num; i++) { + if (!wcb->napis[i]) { + dev_err(wcb->mdev->dev, "Invalid napi pointer, napi=%d", i); + goto err; + } + netif_napi_add_weight(dev, wcb->napis[i], wcb->trans_ops->poll, napi_budget); + } + + return 0; + +err: + for (--i; i >= 0; i--) + netif_napi_del(wcb->napis[i]); + return -EINVAL; +} + +static void mtk_wwan_setup(struct net_device *dev) +{ + dev->watchdog_timeo = MTK_NETDEV_WDT; + dev->mtu = ETH_DATA_LEN; + dev->min_mtu = ETH_MIN_MTU; + + dev->features = NETIF_F_SG; + dev->hw_features = NETIF_F_SG; + + dev->features |= NETIF_F_HW_CSUM; + dev->hw_features |= NETIF_F_HW_CSUM; + + dev->features |= NETIF_F_RXCSUM; + dev->hw_features |= 
NETIF_F_RXCSUM; + + dev->features |= NETIF_F_GRO; + dev->hw_features |= NETIF_F_GRO; + + dev->features |= NETIF_F_RXHASH; + dev->hw_features |= NETIF_F_RXHASH; + + dev->addr_len = ETH_ALEN; + dev->tx_queue_len = DEFAULT_TX_QUEUE_LEN; + + /* Pure IP device. */ + dev->flags = IFF_NOARP; + dev->type = ARPHRD_NONE; + + dev->needs_free_netdev = true; + + dev->netdev_ops = &mtk_netdev_ops; + mtk_ethtool_set_ops(dev); +} + +static int mtk_wwan_newlink(void *ctxt, struct net_device *dev, u32 intf_id, + struct netlink_ext_ack *extack) +{ + struct mtk_wwan_instance *wwan_inst = wwan_netdev_drvpriv(dev); + struct mtk_wwan_ctlb *wcb = ctxt; + int ret; + + if (intf_id > MTK_MAX_INTF_ID) { + ret = -EINVAL; + goto err; + } + + dev->max_mtu = wcb->max_mtu; + + wwan_inst->wcb = wcb; + wwan_inst->netdev = dev; + wwan_inst->intf_id = intf_id; + + if (rcu_access_pointer(wcb->wwan_inst[intf_id])) { + ret = -EBUSY; + goto err; + } + + ret = register_netdevice(dev); + if (ret) + goto err; + + rcu_assign_pointer(wcb->wwan_inst[intf_id], wwan_inst); + + netif_device_attach(dev); + + return 0; +err: + return ret; +} + +static void mtk_wwan_dellink(void *ctxt, struct net_device *dev, + struct list_head *head) +{ + struct mtk_wwan_instance *wwan_inst = wwan_netdev_drvpriv(dev); + int intf_id = wwan_inst->intf_id; + struct mtk_wwan_ctlb *wcb = ctxt; + + if (WARN_ON(rcu_access_pointer(wcb->wwan_inst[intf_id]) != wwan_inst)) + return; + + RCU_INIT_POINTER(wcb->wwan_inst[intf_id], NULL); + unregister_netdevice_queue(dev, head); +} + +static const struct wwan_ops mtk_wwan_ops = { + .priv_size = sizeof(struct mtk_wwan_instance), + .setup = mtk_wwan_setup, + .newlink = mtk_wwan_newlink, + .dellink = mtk_wwan_dellink, +}; + +static void mtk_wwan_notify(struct mtk_data_blk *data_blk, enum mtk_data_evt evt, u64 data) +{ + struct mtk_wwan_ctlb *wcb; + + if (unlikely(!data_blk || !data_blk->wcb)) + return; + + wcb = data_blk->wcb; + + switch (evt) { + case DATA_EVT_TX_START: + mtk_wwan_start_txq(wcb, data); + break; + case DATA_EVT_TX_STOP: + mtk_wwan_stop_txq(wcb, data); + break; + + case DATA_EVT_RX_STOP: + mtk_wwan_napi_disable(wcb); + break; + + case DATA_EVT_REG_DEV: + if (!wcb->reg_done) { + wwan_register_ops(wcb->mdev->dev, &mtk_wwan_ops, wcb, MTK_DFLT_INTF_ID); + wcb->reg_done = true; + } + break; + + case DATA_EVT_UNREG_DEV: + if (wcb->reg_done) { + wwan_unregister_ops(wcb->mdev->dev); + wcb->reg_done = false; + } + break; + + default: + break; + } +} + +static int mtk_wwan_init(struct mtk_data_blk *data_blk, struct mtk_data_trans_info *trans_info) +{ + struct mtk_wwan_ctlb *wcb; + int ret; + + if (unlikely(!data_blk || !trans_info)) + return -EINVAL; + + wcb = devm_kzalloc(data_blk->mdev->dev, sizeof(*wcb), GFP_KERNEL); + if (unlikely(!wcb)) + return -ENOMEM; + + wcb->trans_ops = &data_trans_ops; + wcb->mdev = data_blk->mdev; + wcb->data_blk = data_blk; + wcb->napis = trans_info->napis; + wcb->max_mtu = trans_info->max_mtu; + wcb->cap = trans_info->cap; + wcb->rxq_num = trans_info->rxq_cnt; + wcb->txq_num = trans_info->txq_cnt; + atomic_set(&wcb->napi_enabled, 0); + init_dummy_netdev(&wcb->dummy_dev); + + data_blk->wcb = wcb; + + /* Multiple virtual network devices share one physical device, + * so we use dummy device to enable NAPI for multiple virtual network devices. 
+ */ + ret = mtk_wwan_napi_init(wcb, &wcb->dummy_dev); + if (ret < 0) + goto err_napi_init; + + return 0; +err_napi_init: + devm_kfree(data_blk->mdev->dev, wcb); + data_blk->wcb = NULL; + + return ret; +} + +static void mtk_wwan_exit(struct mtk_data_blk *data_blk) +{ + struct mtk_wwan_ctlb *wcb; + + if (unlikely(!data_blk || !data_blk->wcb)) + return; + + wcb = data_blk->wcb; + mtk_wwan_napi_exit(wcb); + devm_kfree(data_blk->mdev->dev, wcb); + data_blk->wcb = NULL; +} + +struct mtk_data_port_ops data_port_ops = { + .init = mtk_wwan_init, + .exit = mtk_wwan_exit, + .recv = mtk_wwan_data_recv, + .notify = mtk_wwan_notify, +}; From patchwork Wed Jan 18 11:38:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?b?WWFuY2hhbyBZYW5nICjmnajlvabotoUp?= X-Patchwork-Id: 13106232 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id AA2F2C32793 for ; Wed, 18 Jan 2023 11:46:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Type:MIME-Version: References:In-Reply-To:Message-ID:Date:Subject:CC:To:From:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=Cubr0SMDp+3uBznVIz+WdXPSfLWYS+oCicJuucvO70E=; b=4m9uUgw7688iSTw+Nosuwzod3z nhVxkzVxJGYML0HAVoYSPHshqjDDA5eMoHObf/AKbboBa/FFyTVNS978kgDd/ihLerDtu9XCr/bTf E/+TFxaF3QTKp01NSN5cjQ313iWx7HUzkxGZ+Psj5w5ru+G5VWx38sXZxu6CLB5a+3tQ6+D+BoU61 pxEczSigAKGEqMWD4BhqVHdhUSRG9HG7niWDKa4vmE1ww8tmaUlTdIxlCs677s10E4liB/sY/Fxik EkO0QTNZDLNk0n5zBfIYLK1zYx7u32VpqBCEW5RLAgmebZrcD22s0DN3Fay0alfnmaL62n5BF7tHQ 0PSkWJcQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pI6t9-000bMZ-2T; Wed, 18 Jan 2023 11:45:55 +0000 Received: from mailgw01.mediatek.com ([216.200.240.184]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pI6t4-000bL0-LM for linux-mediatek@lists.infradead.org; Wed, 18 Jan 2023 11:45:53 +0000 X-UUID: a60f9ffe972511edbbe3f76fe852e059-20230118 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=mediatek.com; s=dk; h=Content-Type:MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:CC:To:From; bh=Cubr0SMDp+3uBznVIz+WdXPSfLWYS+oCicJuucvO70E=; b=AseWCofpE48dOlEIylyYR1aODtmHcSy5+TMfiChkI9l2LQXZKUHDZ+ddDSx9g0g9WTZIred8E0kr0X73CCREcvwZ/ZzDk37nNiljyBrb4lRIza5hrUE540avrvkaNgeX6kBeUqLdRNzLFg6gzKsJd44I759ou1vuCRHc8DnE18Y=; X-CID-P-RULE: Release_Ham X-CID-O-INFO: VERSION:1.1.18,REQID:88c8a8f3-ad7b-4a0b-95ca-be49d5a8b942,IP:0,U RL:0,TC:0,Content:-25,EDM:0,RT:0,SF:0,FILE:0,BULK:0,RULE:Release_Ham,ACTIO N:release,TS:-25 X-CID-META: VersionHash:3ca2d6b,CLOUDID:908a2df6-ff42-4fb0-b929-626456a83c14,B ulkID:nil,BulkQuantity:0,Recheck:0,SF:102,TC:nil,Content:0,EDM:-3,IP:nil,U RL:0,File:nil,Bulk:nil,QS:nil,BEC:nil,COL:0,OSI:0,OSA:0 X-CID-BVR: 0,NGT X-UUID: a60f9ffe972511edbbe3f76fe852e059-20230118 Received: from mtkmbs11n1.mediatek.inc [(172.21.101.185)] by mailgw01.mediatek.com (envelope-from ) (musrelay.mediatek.com 
ESMTP with TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384 256/256) with ESMTP id 2122985488; Wed, 18 Jan 2023 04:45:47 -0700 Received: from mtkmbs13n1.mediatek.inc (172.21.101.193) by mtkmbs10n2.mediatek.inc (172.21.101.183) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.792.3; Wed, 18 Jan 2023 19:45:35 +0800 Received: from mcddlt001.gcn.mediatek.inc (10.19.240.15) by mtkmbs13n1.mediatek.inc (172.21.101.73) with Microsoft SMTP Server id 15.2.792.15 via Frontend Transport; Wed, 18 Jan 2023 19:45:33 +0800 From: Yanchao Yang To: Loic Poulain , Sergey Ryazanov , Johannes Berg , "David S . Miller" , Eric Dumazet , "Jakub Kicinski" , Paolo Abeni , netdev ML , kernel ML CC: Intel experts , Chetan , MTK ML , Liang Lu , Haijun Liu , Hua Yang , Ting Wang , Felix Chen , Mingliang Xu , Min Dong , Aiden Wang , Guohao Zhang , Chris Feng , "Yanchao Yang" , Lambert Wang , Mingchuang Qiao , Xiayu Zhang , Haozhe Chang Subject: [PATCH net-next v2 10/12] net: wwan: tmi: Add exception handling service Date: Wed, 18 Jan 2023 19:38:57 +0800 Message-ID: <20230118113859.175836-11-yanchao.yang@mediatek.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20230118113859.175836-1-yanchao.yang@mediatek.com> References: <20230118113859.175836-1-yanchao.yang@mediatek.com> MIME-Version: 1.0 X-MTK: N X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230118_034550_759463_BA5177FC X-CRM114-Status: GOOD ( 30.66 ) X-BeenThere: linux-mediatek@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "Linux-mediatek" Errors-To: linux-mediatek-bounces+linux-mediatek=archiver.kernel.org@lists.infradead.org The exception handling service aims to recover the entire system when the host driver detects some exceptions. The scenarios that could trigger exceptions include: - Read/Write error from the transaction layer when the PCIe link brokes. - An RGU interrupt is received. - The OS reports PCIe link failure, e.g., an AER is detected. When an exception happens, the exception module will receive an exception event, and it will use FLDR or PLDR to reset the device. The exception module will also start a timer to check if the PCIe link is back by reading the vendor ID of the device, and it will re-initialize the host driver when the PCIe link comes back. 
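To make the recovery flow above concrete, here is a minimal sketch of the link-monitor idea (not taken from this patch; the mtk_link_monitor names and the reinit hook are made up for illustration): a timer periodically reads the PCIe vendor ID from config space and re-initializes the driver once the device answers again.

#include <linux/pci.h>
#include <linux/timer.h>
#include <linux/jiffies.h>

/* Hypothetical example, not part of this patch. */
struct mtk_link_monitor {
	struct pci_dev *pdev;
	struct timer_list timer;
	void (*reinit)(struct pci_dev *pdev);	/* driver re-initialization hook */
};

static void mtk_link_monitor_cb(struct timer_list *t)
{
	struct mtk_link_monitor *mon = from_timer(mon, t, timer);
	u16 vid = 0xffff;

	/* A broken link reads back all-ones; a valid vendor ID means the
	 * device has come back and the driver can be re-initialized.
	 */
	pci_read_config_word(mon->pdev, PCI_VENDOR_ID, &vid);
	if (vid != 0xffff)
		mon->reinit(mon->pdev);
	else
		mod_timer(&mon->timer, jiffies + HZ);	/* poll again in 1s */
}

static void mtk_link_monitor_start(struct mtk_link_monitor *mon, unsigned long delay)
{
	timer_setup(&mon->timer, mtk_link_monitor_cb, 0);
	mod_timer(&mon->timer, jiffies + delay);
}

In the patch itself this role is played by mtk_except_start_monitor() and mtk_except_link_monitor() below, which go through the driver FSM (FSM_EVT_REINIT with FSM_F_FULL_REINIT) instead of calling a reinit hook directly.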
Signed-off-by: Yanchao Yang Signed-off-by: Mingliang Xu --- drivers/net/wwan/mediatek/Makefile | 3 +- drivers/net/wwan/mediatek/mtk_cldma.c | 15 +- drivers/net/wwan/mediatek/mtk_dev.c | 8 + drivers/net/wwan/mediatek/mtk_dev.h | 79 ++++++++ drivers/net/wwan/mediatek/mtk_dpmaif.c | 14 +- drivers/net/wwan/mediatek/mtk_dpmaif_drv.h | 10 +- drivers/net/wwan/mediatek/mtk_except.c | 176 ++++++++++++++++++ drivers/net/wwan/mediatek/mtk_fsm.c | 2 + .../wwan/mediatek/pcie/mtk_cldma_drv_t800.c | 15 +- drivers/net/wwan/mediatek/pcie/mtk_pci.c | 47 +++++ 10 files changed, 353 insertions(+), 16 deletions(-) create mode 100644 drivers/net/wwan/mediatek/mtk_except.c diff --git a/drivers/net/wwan/mediatek/Makefile b/drivers/net/wwan/mediatek/Makefile index 6a5e699987ef..e29d9711e900 100644 --- a/drivers/net/wwan/mediatek/Makefile +++ b/drivers/net/wwan/mediatek/Makefile @@ -14,7 +14,8 @@ mtk_tmi-y = \ mtk_fsm.o \ mtk_dpmaif.o \ mtk_wwan.o \ - mtk_ethtool.o + mtk_ethtool.o \ + mtk_except.o ccflags-y += -I$(srctree)/$(src)/ ccflags-y += -I$(srctree)/$(src)/pcie/ diff --git a/drivers/net/wwan/mediatek/mtk_cldma.c b/drivers/net/wwan/mediatek/mtk_cldma.c index 03190c5a01b2..47b10207cdc0 100644 --- a/drivers/net/wwan/mediatek/mtk_cldma.c +++ b/drivers/net/wwan/mediatek/mtk_cldma.c @@ -180,14 +180,22 @@ static int mtk_cldma_submit_tx(void *dev, struct sk_buff *skb) struct tx_req *req; struct virtq *vq; struct txq *txq; + int ret = 0; int err; vq = cd->trans->vq_tbl + trb->vqno; hw = cd->cldma_hw[vq->hif_id & HIF_ID_BITMASK]; txq = hw->txq[vq->txqno]; - if (!txq->req_budget) - return -EAGAIN; + if (!txq->req_budget) { + if (mtk_hw_mmio_check(hw->mdev)) { + mtk_except_report_evt(hw->mdev, EXCEPT_LINK_ERR); + ret = -EFAULT; + } else { + ret = -EAGAIN; + } + goto err; + } data_dma_addr = dma_map_single(hw->mdev->dev, skb->data, skb->len, DMA_TO_DEVICE); err = dma_mapping_error(hw->mdev->dev, data_dma_addr); @@ -215,7 +223,8 @@ static int mtk_cldma_submit_tx(void *dev, struct sk_buff *skb) wmb(); /* ensure GPD setup done before HW start */ - return 0; +err: + return ret; } /** diff --git a/drivers/net/wwan/mediatek/mtk_dev.c b/drivers/net/wwan/mediatek/mtk_dev.c index 50a05921e698..d64b597bad0c 100644 --- a/drivers/net/wwan/mediatek/mtk_dev.c +++ b/drivers/net/wwan/mediatek/mtk_dev.c @@ -24,6 +24,13 @@ int mtk_dev_init(struct mtk_md_dev *mdev) if (ret) goto err_data_init; + ret = mtk_except_init(mdev); + if (ret) + goto err_except_init; + + return 0; +err_except_init: + mtk_data_exit(mdev); err_data_init: mtk_ctrl_exit(mdev); err_ctrl_init: @@ -38,6 +45,7 @@ void mtk_dev_exit(struct mtk_md_dev *mdev) EVT_MODE_BLOCKING | EVT_MODE_TOHEAD); mtk_data_exit(mdev); mtk_ctrl_exit(mdev); + mtk_except_exit(mdev); mtk_fsm_exit(mdev); } diff --git a/drivers/net/wwan/mediatek/mtk_dev.h b/drivers/net/wwan/mediatek/mtk_dev.h index 0dc73b40554f..3bcf8072feea 100644 --- a/drivers/net/wwan/mediatek/mtk_dev.h +++ b/drivers/net/wwan/mediatek/mtk_dev.h @@ -39,6 +39,7 @@ enum mtk_reset_type { RESET_FLDR, RESET_PLDR, RESET_RGU, + RESET_NONE }; enum mtk_reinit_type { @@ -51,6 +52,15 @@ enum mtk_l1ss_grp { L1SS_EXT_EVT, }; +enum mtk_except_evt { + EXCEPT_LINK_ERR, + EXCEPT_RGU, + EXCEPT_AER_DETECTED, + EXCEPT_AER_RESET, + EXCEPT_AER_RESUME, + EXCEPT_MAX +}; + #define L1SS_BIT_L1(grp) BIT(((grp) << 2) + 1) #define L1SS_BIT_L1_1(grp) BIT(((grp) << 2) + 2) #define L1SS_BIT_L1_2(grp) BIT(((grp) << 2) + 3) @@ -87,6 +97,7 @@ struct mtk_md_dev; * @reset: Callback to reset device. * @reinit: Callback to execute device re-initialization. 
* @mmio_check: Callback to check whether it is available to mmio access device. + * @link_check: Callback to execute hardware link check. * @get_hp_status: Callback to get link hotplug status. */ struct mtk_hw_ops { @@ -118,10 +129,18 @@ struct mtk_hw_ops { int (*reset)(struct mtk_md_dev *mdev, enum mtk_reset_type type); int (*reinit)(struct mtk_md_dev *mdev, enum mtk_reinit_type type); + bool (*link_check)(struct mtk_md_dev *mdev); bool (*mmio_check)(struct mtk_md_dev *mdev); int (*get_hp_status)(struct mtk_md_dev *mdev); }; +struct mtk_md_except { + atomic_t flag; + enum mtk_reset_type type; + int pci_ext_irq_id; + struct timer_list timer; +}; + /** * struct mtk_md_dev - defines the context structure of MTK modem device. * @dev: pointer to the generic device object. @@ -134,6 +153,7 @@ struct mtk_hw_ops { * @ctrl_blk: pointer to the context of control plane submodule. * @data_blk: pointer to the context of data plane submodule. * @bm_ctrl: pointer to the context of buffer management submodule. + * @except: pointer to the context of driver exception submodule. */ struct mtk_md_dev { struct device *dev; @@ -147,6 +167,7 @@ struct mtk_md_dev { void *ctrl_blk; void *data_blk; struct mtk_bm_ctrl *bm_ctrl; + struct mtk_md_except except; }; int mtk_dev_init(struct mtk_md_dev *mdev); @@ -461,6 +482,19 @@ static inline int mtk_hw_reinit(struct mtk_md_dev *mdev, enum mtk_reinit_type ty return mdev->hw_ops->reinit(mdev, type); } +/** + * mtk_hw_link_check() - Check if the link is down. + * @mdev: Device instance. + * + * Return: + * * 0 - indicates link normally. + * * other value - indicates link down. + */ +static inline bool mtk_hw_link_check(struct mtk_md_dev *mdev) +{ + return mdev->hw_ops->link_check(mdev); +} + /** * mtk_hw_mmio_check() - Check if the PCIe MMIO is ready. * @mdev: Device instance. @@ -487,4 +521,49 @@ static inline int mtk_hw_get_hp_status(struct mtk_md_dev *mdev) return mdev->hw_ops->get_hp_status(mdev); } +/** + * mtk_except_report_evt() - Report exception event. + * @mdev: pointer to mtk_md_dev + * @evt: exception event + * + * Return: + * * 0 - OK + * * -EFAULT - exception feature is not ready + */ +int mtk_except_report_evt(struct mtk_md_dev *mdev, enum mtk_except_evt evt); + +/** + * mtk_except_start() - Start exception service. + * @mdev: pointer to mtk_md_dev + * + * Return: void + */ +void mtk_except_start(struct mtk_md_dev *mdev); + +/** + * mtk_except_stop() - Stop exception service. + * @mdev: pointer to mtk_md_dev + * + * Return: void + */ +void mtk_except_stop(struct mtk_md_dev *mdev); + +/** + * mtk_except_init() - Initialize exception feature. + * @mdev: pointer to mtk_md_dev + * + * Return: + * * 0 - OK + */ +int mtk_except_init(struct mtk_md_dev *mdev); + +/** + * mtk_except_exit() - De-Initialize exception feature. 
+ * @mdev: pointer to mtk_md_dev + * + * Return: + * * 0 - OK + */ +int mtk_except_exit(struct mtk_md_dev *mdev); + #endif /* __MTK_DEV_H__ */ diff --git a/drivers/net/wwan/mediatek/mtk_dpmaif.c b/drivers/net/wwan/mediatek/mtk_dpmaif.c index 246b093a8cf6..44cd129b9544 100644 --- a/drivers/net/wwan/mediatek/mtk_dpmaif.c +++ b/drivers/net/wwan/mediatek/mtk_dpmaif.c @@ -534,10 +534,12 @@ static void mtk_dpmaif_common_err_handle(struct mtk_dpmaif_ctlb *dcb, bool is_hw return; } - if (mtk_hw_mmio_check(DCB_TO_MDEV(dcb))) + if (mtk_hw_mmio_check(DCB_TO_MDEV(dcb))) { dev_err(DCB_TO_DEV(dcb), "Failed to access mmio\n"); - else + mtk_except_report_evt(DCB_TO_MDEV(dcb), EXCEPT_LINK_ERR); + } else { mtk_dpmaif_trigger_dev_exception(dcb); + } } static unsigned int mtk_dpmaif_pit_bid(struct dpmaif_pd_pit *pit_info) @@ -708,10 +710,10 @@ static int mtk_dpmaif_reload_rx_page(struct mtk_dpmaif_ctlb *dcb, page_info->offset = data - page_address(page_info->page); page_info->data_len = bat_ring->buf_size; page_info->data_dma_addr = dma_map_page(DCB_TO_MDEV(dcb)->dev, - page_info->page, - page_info->offset, - page_info->data_len, - DMA_FROM_DEVICE); + page_info->page, + page_info->offset, + page_info->data_len, + DMA_FROM_DEVICE); ret = dma_mapping_error(DCB_TO_MDEV(dcb)->dev, page_info->data_dma_addr); if (unlikely(ret)) { dev_err(DCB_TO_MDEV(dcb)->dev, "Failed to map dma!\n"); diff --git a/drivers/net/wwan/mediatek/mtk_dpmaif_drv.h b/drivers/net/wwan/mediatek/mtk_dpmaif_drv.h index 34ec846e6336..29b6c99bba42 100644 --- a/drivers/net/wwan/mediatek/mtk_dpmaif_drv.h +++ b/drivers/net/wwan/mediatek/mtk_dpmaif_drv.h @@ -84,12 +84,12 @@ enum mtk_drv_err { enum { DPMAIF_CLEAR_INTR, - DPMAIF_UNMASK_INTR + DPMAIF_UNMASK_INTR, }; enum dpmaif_drv_dlq_id { DPMAIF_DLQ0 = 0, - DPMAIF_DLQ1 + DPMAIF_DLQ1, }; struct dpmaif_drv_dlq { @@ -132,7 +132,7 @@ enum dpmaif_drv_ring_type { DPMAIF_PIT, DPMAIF_BAT, DPMAIF_FRAG, - DPMAIF_DRB + DPMAIF_DRB, }; enum dpmaif_drv_ring_idx { @@ -143,7 +143,7 @@ enum dpmaif_drv_ring_idx { DPMAIF_FRAG_WIDX, DPMAIF_FRAG_RIDX, DPMAIF_DRB_WIDX, - DPMAIF_DRB_RIDX + DPMAIF_DRB_RIDX, }; struct dpmaif_drv_irq_en_mask { @@ -184,7 +184,7 @@ enum dpmaif_drv_intr_type { DPMAIF_INTR_DL_FRGCNT_LEN_ERR, DPMAIF_INTR_DL_PITCNT_LEN_ERR, DPMAIF_INTR_DL_DONE, - DPMAIF_INTR_MAX, + DPMAIF_INTR_MAX }; #define DPMAIF_INTR_COUNT ((DPMAIF_INTR_MAX) - (DPMAIF_INTR_MIN) - 1) diff --git a/drivers/net/wwan/mediatek/mtk_except.c b/drivers/net/wwan/mediatek/mtk_except.c new file mode 100644 index 000000000000..e35592d9d2c3 --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_except.c @@ -0,0 +1,176 @@ +// SPDX-License-Identifier: BSD-3-Clause-Clear +/* + * Copyright (c) 2022, MediaTek Inc. 
+ */ + +#include +#include +#include + +#include "mtk_dev.h" +#include "mtk_fsm.h" + +#define MTK_EXCEPT_HOST_RESET_TIME (2) +#define MTK_EXCEPT_SELF_RESET_TIME (35) +#define MTK_EXCEPT_RESET_TYPE_PLDR BIT(26) +#define MTK_EXCEPT_RESET_TYPE_FLDR BIT(27) + +static void mtk_except_start_monitor(struct mtk_md_dev *mdev, unsigned long expires) +{ + struct mtk_md_except *except = &mdev->except; + + if (!timer_pending(&except->timer) && !mtk_hw_get_hp_status(mdev)) { + except->timer.expires = jiffies + expires; + add_timer(&except->timer); + dev_info(mdev->dev, "Add timer to monitor PCI link\n"); + } +} + +int mtk_except_report_evt(struct mtk_md_dev *mdev, enum mtk_except_evt evt) +{ + struct mtk_md_except *except = &mdev->except; + int err, val; + + if (atomic_read(&except->flag) != 1) + return -EFAULT; + + switch (evt) { + case EXCEPT_LINK_ERR: + err = mtk_hw_mmio_check(mdev); + if (err) + mtk_fsm_evt_submit(mdev, FSM_EVT_LINKDOWN, FSM_F_DFLT, NULL, 0, 0); + break; + case EXCEPT_RGU: + /* delay 20ms to make sure device ready for reset */ + msleep(20); + + val = mtk_hw_get_dev_state(mdev); + dev_info(mdev->dev, "dev_state:0x%x, hw_ver:0x%x, fsm state:%d\n", + val, mdev->hw_ver, mdev->fsm->state); + + /* Invalid dev state will trigger PLDR */ + if (val & MTK_EXCEPT_RESET_TYPE_PLDR) { + except->type = RESET_PLDR; + } else if (val & MTK_EXCEPT_RESET_TYPE_FLDR) { + except->type = RESET_FLDR; + } else if (mdev->fsm->state >= FSM_STATE_READY) { + dev_info(mdev->dev, "HW reboot\n"); + except->type = RESET_NONE; + } else { + dev_info(mdev->dev, "RGU ignored\n"); + break; + } + mtk_fsm_evt_submit(mdev, FSM_EVT_DEV_RESET_REQ, FSM_F_DFLT, NULL, 0, 0); + break; + case EXCEPT_AER_DETECTED: + mtk_fsm_evt_submit(mdev, FSM_EVT_AER, FSM_F_DFLT, NULL, 0, EVT_MODE_BLOCKING); + break; + case EXCEPT_AER_RESET: + err = mtk_hw_reset(mdev, RESET_FLDR); + if (err) + mtk_hw_reset(mdev, RESET_RGU); + break; + case EXCEPT_AER_RESUME: + mtk_except_start_monitor(mdev, HZ); + break; + default: + break; + } + + return 0; +} + +void mtk_except_start(struct mtk_md_dev *mdev) +{ + struct mtk_md_except *except = &mdev->except; + + mtk_hw_unmask_irq(mdev, except->pci_ext_irq_id); +} + +void mtk_except_stop(struct mtk_md_dev *mdev) +{ + struct mtk_md_except *except = &mdev->except; + + mtk_hw_mask_irq(mdev, except->pci_ext_irq_id); +} + +static void mtk_except_fsm_handler(struct mtk_fsm_param *param, void *data) +{ + struct mtk_md_except *except = data; + enum mtk_reset_type reset_type; + struct mtk_md_dev *mdev; + unsigned long expires; + int err; + + mdev = container_of(except, struct mtk_md_dev, except); + + switch (param->to) { + case FSM_STATE_POSTDUMP: + mtk_hw_mask_irq(mdev, except->pci_ext_irq_id); + mtk_hw_clear_irq(mdev, except->pci_ext_irq_id); + mtk_hw_unmask_irq(mdev, except->pci_ext_irq_id); + break; + case FSM_STATE_OFF: + if (param->evt_id == FSM_EVT_DEV_RESET_REQ) + reset_type = except->type; + else if (param->evt_id == FSM_EVT_LINKDOWN) + reset_type = RESET_FLDR; + else + break; + + if (reset_type == RESET_NONE) { + expires = MTK_EXCEPT_SELF_RESET_TIME * HZ; + } else { + err = mtk_hw_reset(mdev, reset_type); + if (err) + expires = MTK_EXCEPT_SELF_RESET_TIME * HZ; + else + expires = MTK_EXCEPT_HOST_RESET_TIME * HZ; + } + + mtk_except_start_monitor(mdev, expires); + break; + default: + break; + } +} + +static void mtk_except_link_monitor(struct timer_list *timer) +{ + struct mtk_md_except *except = container_of(timer, struct mtk_md_except, timer); + struct mtk_md_dev *mdev = container_of(except, struct mtk_md_dev, 
except); + int err; + + err = mtk_hw_link_check(mdev); + if (!err) { + mtk_fsm_evt_submit(mdev, FSM_EVT_REINIT, FSM_F_FULL_REINIT, NULL, 0, 0); + del_timer(&except->timer); + } else { + mod_timer(timer, jiffies + HZ); + } +} + +int mtk_except_init(struct mtk_md_dev *mdev) +{ + struct mtk_md_except *except = &mdev->except; + + except->pci_ext_irq_id = mtk_hw_get_irq_id(mdev, MTK_IRQ_SRC_SAP_RGU); + + mtk_fsm_notifier_register(mdev, MTK_USER_EXCEPT, + mtk_except_fsm_handler, except, FSM_PRIO_1, false); + timer_setup(&except->timer, mtk_except_link_monitor, 0); + atomic_set(&except->flag, 1); + + return 0; +} + +int mtk_except_exit(struct mtk_md_dev *mdev) +{ + struct mtk_md_except *except = &mdev->except; + + atomic_set(&except->flag, 0); + del_timer(&except->timer); + mtk_fsm_notifier_unregister(mdev, MTK_USER_EXCEPT); + + return 0; +} diff --git a/drivers/net/wwan/mediatek/mtk_fsm.c b/drivers/net/wwan/mediatek/mtk_fsm.c index 46feb3148342..e1588b932e2a 100644 --- a/drivers/net/wwan/mediatek/mtk_fsm.c +++ b/drivers/net/wwan/mediatek/mtk_fsm.c @@ -516,6 +516,8 @@ static int mtk_fsm_early_bootup_handler(u32 status, void *__fsm) dev_stage = dev_state & REGION_BITMASK; if (dev_stage >= DEV_STAGE_MAX) { dev_err(mdev->dev, "Invalid dev state 0x%x\n", dev_state); + if (mtk_hw_link_check(mdev)) + mtk_except_report_evt(mdev, EXCEPT_LINK_ERR); return -ENXIO; } diff --git a/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.c b/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.c index 06c84afbd9ee..21e59fb07d56 100644 --- a/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.c +++ b/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.c @@ -364,8 +364,10 @@ static void mtk_cldma_tx_done_work(struct work_struct *work) state = mtk_cldma_check_intr_status(mdev, txq->hw->base_addr, DIR_TX, txq->txqno, QUEUE_XFER_DONE); if (state) { - if (unlikely(state == LINK_ERROR_VAL)) + if (unlikely(state == LINK_ERROR_VAL)) { + mtk_except_report_evt(mdev, EXCEPT_LINK_ERR); return; + } mtk_cldma_clr_intr_status(mdev, txq->hw->base_addr, DIR_TX, txq->txqno, QUEUE_XFER_DONE); @@ -451,6 +453,11 @@ static void mtk_cldma_rx_done_work(struct work_struct *work) if (!state) break; + if (unlikely(state == LINK_ERROR_VAL)) { + mtk_except_report_evt(mdev, EXCEPT_LINK_ERR); + return; + } + mtk_cldma_clr_intr_status(mdev, rxq->hw->base_addr, DIR_RX, rxq->rxqno, QUEUE_XFER_DONE); @@ -751,6 +758,9 @@ int mtk_cldma_txq_free_t800(struct cldma_hw *hw, int vqno) devm_kfree(hw->mdev->dev, txq); hw->txq[txqno] = NULL; + if (active == LINK_ERROR_VAL) + mtk_except_report_evt(hw->mdev, EXCEPT_LINK_ERR); + return 0; } @@ -906,6 +916,9 @@ int mtk_cldma_rxq_free_t800(struct cldma_hw *hw, int vqno) devm_kfree(mdev->dev, rxq); hw->rxq[rxqno] = NULL; + if (active == LINK_ERROR_VAL) + mtk_except_report_evt(mdev, EXCEPT_LINK_ERR); + return 0; } diff --git a/drivers/net/wwan/mediatek/pcie/mtk_pci.c b/drivers/net/wwan/mediatek/pcie/mtk_pci.c index 3669e5523d12..3565705754c7 100644 --- a/drivers/net/wwan/mediatek/pcie/mtk_pci.c +++ b/drivers/net/wwan/mediatek/pcie/mtk_pci.c @@ -518,6 +518,8 @@ static int mtk_pci_reset(struct mtk_md_dev *mdev, enum mtk_reset_type type) return mtk_pci_fldr(mdev); case RESET_PLDR: return mtk_pci_pldr(mdev); + default: + break; } return -EINVAL; @@ -529,6 +531,12 @@ static int mtk_pci_reinit(struct mtk_md_dev *mdev, enum mtk_reinit_type type) struct mtk_pci_priv *priv = mdev->hw_priv; int ret, ltr, l1ss; + if (type == REINIT_TYPE_EXP) { + /* We have saved it in probe() */ + pci_load_saved_state(pdev, priv->saved_state); + 
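+ /* pci_load_saved_state() only reloads the config-space snapshot taken at probe time into pdev->saved_state; pci_restore_state() below is what actually writes it back to the recovered device. */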
pci_restore_state(pdev); + } + /* restore ltr */ ltr = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_LTR); if (ltr) { @@ -553,6 +561,9 @@ static int mtk_pci_reinit(struct mtk_md_dev *mdev, enum mtk_reinit_type type) mtk_pci_set_msix_merged(priv, priv->irq_cnt); } + if (type == REINIT_TYPE_EXP) + mtk_pci_clear_irq(mdev, priv->rgu_irq_id); + mtk_pci_unmask_irq(mdev, priv->rgu_irq_id); mtk_pci_unmask_irq(mdev, priv->mhccif_irq_id); @@ -616,6 +627,7 @@ static const struct mtk_hw_ops mtk_pci_ops = { .get_ext_evt_status = mtk_mhccif_get_evt_status, .reset = mtk_pci_reset, .reinit = mtk_pci_reinit, + .link_check = mtk_pci_link_check, .mmio_check = mtk_pci_mmio_check, .get_hp_status = mtk_pci_get_hp_status, }; @@ -636,6 +648,7 @@ static void mtk_mhccif_isr_work(struct work_struct *work) if (unlikely(stat == U32_MAX && mtk_pci_link_check(mdev))) { /* When link failed, we don't need to unmask/clear. */ dev_err(mdev->dev, "Failed to check link in MHCCIF handler.\n"); + mtk_except_report_evt(mdev, EXCEPT_LINK_ERR); return; } @@ -760,6 +773,7 @@ static void mtk_rgu_work(struct work_struct *work) struct mtk_pci_priv *priv; struct mtk_md_dev *mdev; struct pci_dev *pdev; + int ret; priv = container_of(to_delayed_work(work), struct mtk_pci_priv, rgu_work); mdev = priv->mdev; @@ -770,6 +784,10 @@ static void mtk_rgu_work(struct work_struct *work) mtk_pci_mask_irq(mdev, priv->rgu_irq_id); mtk_pci_clear_irq(mdev, priv->rgu_irq_id); + ret = mtk_except_report_evt(mdev, EXCEPT_RGU); + if (ret) + dev_err(mdev->dev, "Failed to report exception with EXCEPT_RGU\n"); + if (!pdev->msix_enabled) return; @@ -782,8 +800,14 @@ static int mtk_rgu_irq_cb(int irq_id, void *data) struct mtk_pci_priv *priv; priv = mdev->hw_priv; + + if (delayed_work_pending(&priv->rgu_work)) + goto exit; + schedule_delayed_work(&priv->rgu_work, msecs_to_jiffies(1)); + dev_info(mdev->dev, "RGU IRQ arrived\n"); +exit: return 0; } @@ -1105,16 +1129,39 @@ static void mtk_pci_remove(struct pci_dev *pdev) static pci_ers_result_t mtk_pci_error_detected(struct pci_dev *pdev, pci_channel_state_t state) { + struct mtk_md_dev *mdev = pci_get_drvdata(pdev); + int ret; + + ret = mtk_except_report_evt(mdev, EXCEPT_AER_DETECTED); + if (ret) + dev_err(mdev->dev, "Failed to call excpetion report API with EXCEPT_AER_DETECTED!\n"); + dev_info(mdev->dev, "AER detected: pci_channel_state_t=%d\n", state); + return PCI_ERS_RESULT_NEED_RESET; } static pci_ers_result_t mtk_pci_slot_reset(struct pci_dev *pdev) { + struct mtk_md_dev *mdev = pci_get_drvdata(pdev); + int ret; + + ret = mtk_except_report_evt(mdev, EXCEPT_AER_RESET); + if (ret) + dev_err(mdev->dev, "Failed to call excpetion report API with EXCEPT_AER_RESET!\n"); + dev_info(mdev->dev, "Slot reset!\n"); + return PCI_ERS_RESULT_RECOVERED; } static void mtk_pci_io_resume(struct pci_dev *pdev) { + struct mtk_md_dev *mdev = pci_get_drvdata(pdev); + int ret; + + ret = mtk_except_report_evt(mdev, EXCEPT_AER_RESUME); + if (ret) + dev_err(mdev->dev, "Failed to call excpetion report API with EXCEPT_AER_RESUME!\n"); + dev_info(mdev->dev, "IO resume!\n"); } static const struct pci_error_handlers mtk_pci_err_handler = {