From patchwork Fri Feb 28 10:01:04 2025
X-Patchwork-Submitter: Frank Sae
X-Patchwork-Id: 13996107
X-Patchwork-Delegate: kuba@kernel.org
From: Frank Sae
To: Jakub Kicinski, Paolo Abeni, Andrew Lunn, Heiner Kallweit, Russell King,
 "David S. Miller", Eric Dumazet, Frank, netdev@vger.kernel.org
Cc: Masahiro Yamada, Parthiban.Veerasooran@microchip.com,
 linux-kernel@vger.kernel.org, xiaogang.fan@motor-comm.com,
 fei.zhang@motor-comm.com, hua.sun@motor-comm.com
Subject: [PATCH net-next v3 01/14] motorcomm:yt6801: Implement mdio register
Date: Fri, 28 Feb 2025 18:01:04 +0800
Message-Id: <20250228100020.3944-2-Frank.Sae@motor-comm.com>
In-Reply-To: <20250228100020.3944-1-Frank.Sae@motor-comm.com>
References: <20250228100020.3944-1-Frank.Sae@motor-comm.com>

Implement the MDIO bus read, write and registration functions.

Signed-off-by: Frank Sae
---
 .../net/ethernet/motorcomm/yt6801/yt6801.h    | 379 +++++++
 .../ethernet/motorcomm/yt6801/yt6801_net.c    |  99 ++
 .../ethernet/motorcomm/yt6801/yt6801_type.h   | 967 ++++++++++++++++++
 3 files changed, 1445 insertions(+)
 create mode 100644 drivers/net/ethernet/motorcomm/yt6801/yt6801.h
 create mode 100644 drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c
 create mode 100644 drivers/net/ethernet/motorcomm/yt6801/yt6801_type.h

diff --git a/drivers/net/ethernet/motorcomm/yt6801/yt6801.h b/drivers/net/ethernet/motorcomm/yt6801/yt6801.h
new file mode 100644
index 000000000..eeefd1e12
--- /dev/null
+++ b/drivers/net/ethernet/motorcomm/yt6801/yt6801.h
@@ -0,0 +1,379 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/* Copyright (c) 2022 - 2024 Motorcomm Electronic Technology Co.,Ltd.
+ */
+
+#ifndef YT6801_H
+#define YT6801_H
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#ifdef CONFIG_PCI_MSI
+#include
+#endif
+
+#include "yt6801_type.h"
+
+#define FXGMAC_DRV_NAME		"yt6801"
+#define FXGMAC_DRV_DESC		"Motorcomm Gigabit Ethernet Driver"
+
+#define FXGMAC_RX_BUF_ALIGN	64
+#define FXGMAC_TX_MAX_BUF_SIZE	(0x3fff & ~(FXGMAC_RX_BUF_ALIGN - 1))
+#define FXGMAC_RX_MIN_BUF_SIZE	(ETH_FRAME_LEN + ETH_FCS_LEN + VLAN_HLEN)
+
+/* Descriptors required for maximum contiguous TSO/GSO packet */
+#define FXGMAC_TX_MAX_SPLIT	((GSO_MAX_SIZE / FXGMAC_TX_MAX_BUF_SIZE) + 1)
+
+/* Maximum possible descriptors needed for a SKB */
+#define FXGMAC_TX_MAX_DESC_NR	(MAX_SKB_FRAGS + FXGMAC_TX_MAX_SPLIT + 2)
+
+#define FXGMAC_DMA_STOP_TIMEOUT		5
+#define FXGMAC_JUMBO_PACKET_MTU		9014
+#define FXGMAC_MAX_DMA_RX_CHANNELS	4
+#define FXGMAC_MAX_DMA_TX_CHANNELS	1
+#define FXGMAC_MAX_DMA_CHANNELS \
+	(FXGMAC_MAX_DMA_RX_CHANNELS + FXGMAC_MAX_DMA_TX_CHANNELS)
+
+struct fxgmac_ring_buf {
+	struct sk_buff *skb;
+	dma_addr_t skb_dma;
+	unsigned int skb_len;
+};
+
+/* Common Tx and Rx DMA hardware descriptor */
+struct fxgmac_dma_desc {
+	__le32 desc0;
+	__le32 desc1;
+	__le32 desc2;
+	__le32 desc3;
+};
+
+/* Page allocation related values */
+struct fxgmac_page_alloc {
+	struct page *pages;
+	unsigned int pages_len;
+	unsigned int pages_offset;
+	dma_addr_t pages_dma;
+};
+
+/* Ring entry buffer data */
+struct fxgmac_buffer_data {
+	struct fxgmac_page_alloc pa;
+	struct fxgmac_page_alloc pa_unmap;
+
+	dma_addr_t dma_base;
+	unsigned long dma_off;
+	unsigned int dma_len;
+};
+
+struct fxgmac_tx_desc_data {
+	unsigned int packets;		/* BQL packet count */
+	unsigned int bytes;		/* BQL byte count */
+};
+
+struct fxgmac_rx_desc_data {
+	struct fxgmac_buffer_data hdr;	/* Header locations */
+	struct fxgmac_buffer_data buf;	/* Payload locations */
+	unsigned short hdr_len;		/* Length of received header */
+	unsigned short len;		/* Length of received packet */
+};
+
+struct fxgmac_pkt_info {
+	struct sk_buff *skb;
+#define ATTR_TX_CSUM_ENABLE_POS		0
+#define ATTR_TX_CSUM_ENABLE_LEN		1
+#define ATTR_TX_TSO_ENABLE_POS		1
+#define ATTR_TX_TSO_ENABLE_LEN		1
+#define ATTR_TX_VLAN_CTAG_POS		2
+#define ATTR_TX_VLAN_CTAG_LEN		1
+#define ATTR_TX_PTP_POS			3
+#define ATTR_TX_PTP_LEN			1
+#define ATTR_RX_CSUM_DONE_POS		0
+#define ATTR_RX_CSUM_DONE_LEN		1
+#define ATTR_RX_VLAN_CTAG_POS		1
+#define ATTR_RX_VLAN_CTAG_LEN		1
+#define ATTR_RX_INCOMPLETE_POS		2
+#define ATTR_RX_INCOMPLETE_LEN		1
+#define ATTR_RX_CONTEXT_NEXT_POS	3
+#define ATTR_RX_CONTEXT_NEXT_LEN	1
+#define ATTR_RX_CONTEXT_POS		4
+#define ATTR_RX_CONTEXT_LEN		1
+#define ATTR_RX_RX_TSTAMP_POS		5
+#define ATTR_RX_RX_TSTAMP_LEN		1
+#define ATTR_RX_RSS_HASH_POS		6
+#define ATTR_RX_RSS_HASH_LEN		1
+	unsigned int attr;
+#define ERRORS_RX_CRC_POS	2
+#define ERRORS_RX_CRC_LEN	1
+#define ERRORS_RX_FRAME_POS	3
+#define ERRORS_RX_FRAME_LEN	1
+#define ERRORS_RX_LENGTH_POS	0
+#define ERRORS_RX_LENGTH_LEN	1
+#define ERRORS_RX_OVERRUN_POS	1
+#define ERRORS_RX_OVERRUN_LEN	1
+	unsigned int errors;
+	unsigned int desc_count;	/* descriptors needed for this packet */
+	unsigned int length;
+	unsigned int tx_packets;
+	unsigned int tx_bytes;
+
+	unsigned int header_len;
+	unsigned int tcp_header_len;
+	unsigned int tcp_payload_len;
+	unsigned short mss;
+	unsigned short vlan_ctag;
+
+	u64 rx_tstamp;
+	u32 rss_hash;
+	enum pkt_hash_types rss_hash_type;
+};
+
+struct fxgmac_desc_data {
+	struct fxgmac_dma_desc *dma_desc; /* Virtual address of descriptor */
+	dma_addr_t dma_desc_addr;	  /* DMA address of descriptor */
+	struct sk_buff *skb;		  /* Virtual address of SKB */
+	dma_addr_t skb_dma;		  /* DMA address of SKB data */
+	unsigned int skb_dma_len;	  /* Length of SKB DMA area */
+
+	/* Tx/Rx -related data */
+	struct fxgmac_tx_desc_data tx;
+	struct fxgmac_rx_desc_data rx;
+
+	unsigned int mapped_as_page;
+};
+
+struct fxgmac_ring {
+	struct fxgmac_pkt_info pkt_info; /* packet related information */
+
+	/* Virtual/DMA addresses of DMA descriptor list */
+	struct fxgmac_dma_desc *dma_desc_head;
+	dma_addr_t dma_desc_head_addr;
+	unsigned int dma_desc_count;
+
+	/* Array of descriptor data corresponding the DMA descriptor
+	 * (always use the FXGMAC_GET_DESC_DATA macro to access this data)
+	 */
+	struct fxgmac_desc_data *desc_data_head;
+
+	/* Page allocation for RX buffers */
+	struct fxgmac_page_alloc rx_hdr_pa;
+	struct fxgmac_page_alloc rx_buf_pa;
+
+	/* Ring index values
+	 *  cur   - Tx: index of descriptor to be used for current transfer
+	 *          Rx: index of descriptor to check for packet availability
+	 *  dirty - Tx: index of descriptor to check for transfer complete
+	 *          Rx: index of descriptor to check for buffer reallocation
+	 */
+	unsigned int cur;
+	unsigned int dirty;
+
+	struct {
+		unsigned int xmit_more;
+		unsigned int queue_stopped;
+		unsigned short cur_mss;
+		unsigned short cur_vlan_ctag;
+	} tx;
+} ____cacheline_aligned;
+
+struct fxgmac_channel {
+	char name[16];
+
+	/* Address of private data area for device */
+	struct fxgmac_pdata *priv;
+
+	/* Queue index and base address of queue's DMA registers */
+	unsigned int queue_index;
+
+	/* Per channel interrupt irq number */
+	u32 dma_irq_rx;
+	char dma_irq_rx_name[IFNAMSIZ + 32];
+	u32 dma_irq_tx;
+	char dma_irq_tx_name[IFNAMSIZ + 32];
+
+	/* Netdev related settings */
+	struct napi_struct napi_tx;
+	struct napi_struct napi_rx;
+
+	void __iomem *dma_regs;
+	struct fxgmac_ring *tx_ring;
+	struct fxgmac_ring *rx_ring;
+} ____cacheline_aligned;
+
+/* This structure contains flags that indicate what hardware features
+ * or configurations are present in the device.
+ */
+struct fxgmac_hw_features {
+	unsigned int version;		/* HW Version */
+
+	/* HW Feature Register0 */
+	unsigned int phyifsel;		/* PHY interface support */
+	unsigned int vlhash;		/* VLAN Hash Filter */
+	unsigned int sma;		/* SMA(MDIO) Interface */
+	unsigned int rwk;		/* PMT remote wake-up packet */
+	unsigned int mgk;		/* PMT magic packet */
+	unsigned int mmc;		/* RMON module */
+	unsigned int aoe;		/* ARP Offload */
+	unsigned int ts;		/* IEEE 1588-2008 Advanced Timestamp */
+	unsigned int eee;		/* Energy Efficient Ethernet */
+	unsigned int tx_coe;		/* Tx Checksum Offload */
+	unsigned int rx_coe;		/* Rx Checksum Offload */
+	unsigned int addn_mac;		/* Additional MAC Addresses */
+	unsigned int ts_src;		/* Timestamp Source */
+	unsigned int sa_vlan_ins;	/* Source Address or VLAN Insertion */
+
+	/* HW Feature Register1 */
+	unsigned int rx_fifo_size;	/* MTL Receive FIFO Size */
+	unsigned int tx_fifo_size;	/* MTL Transmit FIFO Size */
+	unsigned int adv_ts_hi;		/* Advance Timestamping High Word */
+	unsigned int dma_width;		/* DMA width */
+	unsigned int dcb;		/* DCB Feature */
+	unsigned int sph;		/* Split Header Feature */
+	unsigned int tso;		/* TCP Segmentation Offload */
+	unsigned int dma_debug;		/* DMA Debug Registers */
+	unsigned int rss;		/* Receive Side Scaling */
+	unsigned int tc_cnt;		/* Number of Traffic Classes */
+	unsigned int avsel;		/* AV Feature Enable */
+	unsigned int ravsel;		/* Rx Side Only AV Feature Enable */
+	unsigned int hash_table_size;	/* Hash Table Size */
+	unsigned int l3l4_filter_num;	/* Number of L3-L4 Filters */
+
+	/* HW Feature Register2 */
+	unsigned int rx_q_cnt;		/* Number of MTL Receive Queues */
+	unsigned int tx_q_cnt;		/* Number of MTL Transmit Queues */
+	unsigned int rx_ch_cnt;		/* Number of DMA Receive Channels */
+	unsigned int tx_ch_cnt;		/* Number of DMA Transmit Channels */
+	unsigned int pps_out_num;	/* Number of PPS outputs */
+	unsigned int aux_snap_num;	/* Number of Aux snapshot inputs */
+
+	u32 hwfr3;			/* HW Feature Register3 */
+};
+
+struct fxgmac_resources {
+	void __iomem *addr;
+	int irq;
+};
+
+enum fxgmac_dev_state {
+	FXGMAC_DEV_OPEN    = 0x0,
+	FXGMAC_DEV_CLOSE   = 0x1,
+	FXGMAC_DEV_STOP    = 0x2,
+	FXGMAC_DEV_START   = 0x3,
+	FXGMAC_DEV_SUSPEND = 0x4,
+	FXGMAC_DEV_RESUME  = 0x5,
+	FXGMAC_DEV_PROBE   = 0xFF,
+};
+
+struct fxgmac_pdata {
+	struct net_device *netdev;
+	struct device *dev;
+	struct phy_device *phydev;
+
+	struct fxgmac_hw_features hw_feat;	/* Hardware features */
+	void __iomem *hw_addr;			/* Registers base */
+
+	/* Rings for Tx/Rx on a DMA channel */
+	struct fxgmac_channel *channel_head;
+	unsigned int channel_count;
+	unsigned int rx_ring_count;
+	unsigned int rx_desc_count;
+	unsigned int rx_q_count;
+#define FXGMAC_TX_1_RING	1
+#define FXGMAC_TX_1_Q		1
+	unsigned int tx_desc_count;
+
+	unsigned long sysclk_rate;	/* Device clocks */
+	unsigned int pblx8;		/* Tx/Rx common settings */
+
+	/* Tx settings */
+	unsigned int tx_sf_mode;
+	unsigned int tx_threshold;
+	unsigned int tx_pbl;
+	unsigned int tx_osp_mode;
+	unsigned int tx_hang_restart_queuing;
+
+	/* Rx settings */
+	unsigned int rx_sf_mode;
+	unsigned int rx_threshold;
+	unsigned int rx_pbl;
+
+	/* Tx coalescing settings */
+	unsigned int tx_usecs;
+	unsigned int tx_frames;
+
+	/* Rx coalescing settings */
+	unsigned int rx_riwt;
+	unsigned int rx_usecs;
+	unsigned int rx_frames;
+
+	/* Flow control settings */
+	unsigned int tx_pause;
+	unsigned int rx_pause;
+
+	unsigned int mtu;
+	unsigned int rx_buf_size;	/* Current Rx buffer size */
+
+	/* Device interrupt */
+	int dev_irq;
+	unsigned int per_channel_irq;
+	u32 channel_irq[FXGMAC_MAX_DMA_CHANNELS];
+	struct msix_entry *msix_entries;
+#define INT_FLAG_INTERRUPT_POS		0
+#define INT_FLAG_INTERRUPT_LEN		5
+#define INT_FLAG_MSI_POS		1
+#define INT_FLAG_MSI_LEN		1
+#define INT_FLAG_MSIX_POS		3
+#define INT_FLAG_MSIX_LEN		1
+#define INT_FLAG_LEGACY_POS		4
+#define INT_FLAG_LEGACY_LEN		1
+#define INT_FLAG_RX_NAPI_POS		18
+#define INT_FLAG_RX_NAPI_LEN		4
+#define INT_FLAG_PER_RX_NAPI_LEN	1
+#define INT_FLAG_RX_IRQ_POS		22
+#define INT_FLAG_RX_IRQ_LEN		4
+#define INT_FLAG_PER_RX_IRQ_LEN		1
+#define INT_FLAG_TX_NAPI_POS		26
+#define INT_FLAG_TX_NAPI_LEN		1
+#define INT_FLAG_TX_IRQ_POS		27
+#define INT_FLAG_TX_IRQ_LEN		1
+#define INT_FLAG_LEGACY_NAPI_POS	30
+#define INT_FLAG_LEGACY_NAPI_LEN	1
+#define INT_FLAG_LEGACY_IRQ_POS		31
+#define INT_FLAG_LEGACY_IRQ_LEN		1
+	u32 int_flag;			/* interrupt flag */
+
+	/* Netdev related settings */
+	unsigned char mac_addr[ETH_ALEN];
+	netdev_features_t netdev_features;
+	struct napi_struct napi;
+
+	int mac_speed;
+	int mac_duplex;
+
+	u32 msg_enable;
+	u32 reg_nonstick[(MSI_PBA - GLOBAL_CTRL0) >> 2];
+
+	struct work_struct restart_work;
+	enum fxgmac_dev_state dev_state;
+#define FXGMAC_POWER_STATE_DOWN	0
+#define FXGMAC_POWER_STATE_UP	1
+	unsigned long powerstate;
+	struct mutex mutex;		/* Driver lock */
+
+	char drv_name[32];
+	char drv_ver[32];
+};
+
+extern int fxgmac_net_powerup(struct fxgmac_pdata *priv);
+extern int fxgmac_net_powerdown(struct fxgmac_pdata *priv);
+extern int fxgmac_drv_probe(struct device *dev, struct fxgmac_resources *res);
+extern void fxgmac_phy_reset(struct fxgmac_pdata *priv);
+
+#endif /* YT6801_H */
diff --git a/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c b/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c
new file mode 100644
index 000000000..7cf4d1581
--- /dev/null
+++ b/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c
@@ -0,0 +1,99 @@
+// SPDX-License-Identifier: GPL-2.0+
+/* Copyright (c) 2022 - 2024 Motorcomm Electronic Technology Co.,Ltd.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "yt6801.h"
+
+#define PHY_WR_CONFIG(reg_offset)	(0x8000205 + ((reg_offset) * 0x10000))
+static int fxgmac_phy_write_reg(struct fxgmac_pdata *priv, u32 reg_id, u32 data)
+{
+	u32 val;
+	int ret;
+
+	FXGMAC_MAC_IO_WR(priv, MAC_MDIO_DATA, data);
+	FXGMAC_MAC_IO_WR(priv, MAC_MDIO_ADDRESS, PHY_WR_CONFIG(reg_id));
+	ret = read_poll_timeout_atomic(FXGMAC_MAC_IO_RD, val,
+				       !FXGMAC_GET_BITS(val, MAC_MDIO_ADDR, BUSY),
+				       10, 250, false, priv, MAC_MDIO_ADDRESS);
+	if (ret == -ETIMEDOUT) {
+		yt_err(priv, "%s err, id:%x ctrl:0x%08x, data:0x%08x\n",
+		       __func__, reg_id, PHY_WR_CONFIG(reg_id), data);
+		return ret;
+	}
+
+	return ret;
+}
+
+#define PHY_RD_CONFIG(reg_offset)	(0x800020d + ((reg_offset) * 0x10000))
+static int fxgmac_phy_read_reg(struct fxgmac_pdata *priv, u32 reg_id)
+{
+	u32 val;
+	int ret;
+
+	FXGMAC_MAC_IO_WR(priv, MAC_MDIO_ADDRESS, PHY_RD_CONFIG(reg_id));
+	ret = read_poll_timeout_atomic(FXGMAC_MAC_IO_RD, val,
+				       !FXGMAC_GET_BITS(val, MAC_MDIO_ADDR, BUSY),
+				       10, 250, false, priv, MAC_MDIO_ADDRESS);
+	if (ret == -ETIMEDOUT) {
+		yt_err(priv, "%s err, id:%x, ctrl:0x%08x, val:0x%08x.\n",
+		       __func__, reg_id, PHY_RD_CONFIG(reg_id), val);
+		return ret;
+	}
+
+	return FXGMAC_MAC_IO_RD(priv, MAC_MDIO_DATA);	/* Read data */
+}
+
+static int fxgmac_mdio_write_reg(struct mii_bus *mii_bus, int phyaddr,
+				 int phyreg, u16 val)
+{
+	if (phyaddr > 0)
+		return -ENODEV;
+
+	return fxgmac_phy_write_reg(mii_bus->priv, phyreg, val);
+}
+
+static int fxgmac_mdio_read_reg(struct mii_bus *mii_bus, int phyaddr,
+				int phyreg)
+{
+	if (phyaddr > 0)
+		return -ENODEV;
+
+	return fxgmac_phy_read_reg(mii_bus->priv, phyreg);
+}
+
+static int fxgmac_mdio_register(struct fxgmac_pdata *priv)
+{
+	struct pci_dev *pdev = to_pci_dev(priv->dev);
+	struct phy_device *phydev;
+	struct mii_bus *new_bus;
+	int ret;
+
+	new_bus = devm_mdiobus_alloc(&pdev->dev);
+	if (!new_bus)
+		return -ENOMEM;
+
+	new_bus->name = "yt6801";
+	new_bus->priv = priv;
+	new_bus->parent = &pdev->dev;
+	new_bus->read = fxgmac_mdio_read_reg;
+	new_bus->write = fxgmac_mdio_write_reg;
+	snprintf(new_bus->id, MII_BUS_ID_SIZE, "yt6801-%x-%x",
+		 pci_domain_nr(pdev->bus), pci_dev_id(pdev));
+
+	ret = devm_mdiobus_register(&pdev->dev, new_bus);
+	if (ret < 0)
+		return ret;
+
+	phydev = mdiobus_get_phy(new_bus, 0);
+	if (!phydev)
+		return -ENODEV;
+
+	priv->phydev = phydev;
+	return 0;
+}
diff --git a/drivers/net/ethernet/motorcomm/yt6801/yt6801_type.h b/drivers/net/ethernet/motorcomm/yt6801/yt6801_type.h
new file mode 100644
index 000000000..7cc558551
--- /dev/null
+++ b/drivers/net/ethernet/motorcomm/yt6801/yt6801_type.h
@@ -0,0 +1,967 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/* Copyright (c) 2022 - 2024 Motorcomm Electronic Technology Co.,Ltd. */
+
+#ifndef YT6801_TYPE_H
+#define YT6801_TYPE_H
+
+#include
+#include
+#include
+#include
+
+/**************** Other configuration register. *********************/
+#define GLOBAL_CTRL0	0x1000
+
+#define EPHY_CTRL	0x1004
+#define EPHY_CTRL_RESET_POS		0
+#define EPHY_CTRL_RESET_LEN		1
+#define EPHY_CTRL_STA_LINKUP_POS	1
+#define EPHY_CTRL_STA_LINKUP_LEN	1
+#define EPHY_CTRL_STA_DUPLEX_POS	2
+#define EPHY_CTRL_STA_DUPLEX_LEN	1
+#define EPHY_CTRL_STA_SPEED_POS		3
+#define EPHY_CTRL_STA_SPEED_LEN		2
+
+#define OOB_WOL_CTRL	0x1010
+#define OOB_WOL_CTRL_DIS_POS	0
+#define OOB_WOL_CTRL_DIS_LEN	1
+
+/* MAC management registers bit positions and sizes */
+#define MGMT_INT_CTRL0	0x1100
+#define MGMT_INT_CTRL0_INT_STATUS_POS		0
+#define MGMT_INT_CTRL0_INT_STATUS_LEN		16
+#define MGMT_INT_CTRL0_INT_STATUS_RX		0x000f
+#define MGMT_INT_CTRL0_INT_STATUS_TX		0x0010
+#define MGMT_INT_CTRL0_INT_STATUS_MISC		0x0020
+#define MGMT_INT_CTRL0_INT_STATUS_RXTXMISC	0x003f
+#define MGMT_INT_CTRL0_INT_STATUS_MASK		0xffff
+#define MGMT_INT_CTRL0_INT_MASK_POS		16
+#define MGMT_INT_CTRL0_INT_MASK_LEN		16
+#define MGMT_INT_CTRL0_INT_MASK_RXCH		0x000f
+#define MGMT_INT_CTRL0_INT_MASK_TXCH		0x0010
+#define
MGMT_INT_CTRL0_INT_MASK_EX_PMT 0xf7ff +#define MGMT_INT_CTRL0_INT_MASK_DISABLE 0xf000 +#define MGMT_INT_CTRL0_INT_MASK_MASK 0xffff + +/* Interrupt Moderation */ +#define INT_MOD 0x1108 +#define INT_MOD_RX_POS 0 +#define INT_MOD_RX_LEN 12 +#define INT_MOD_200_US 200 +#define INT_MOD_TX_POS 16 +#define INT_MOD_TX_LEN 12 + +/* LTR_CTRL3, LTR latency message, only for System IDLE Start. */ +#define LTR_IDLE_ENTER 0x113c +#define LTR_IDLE_ENTER_ENTER_POS 0 +#define LTR_IDLE_ENTER_ENTER_LEN 10 +#define LTR_IDLE_ENTER_900_US 900 +#define LTR_IDLE_ENTER_SCALE_POS 10 +#define LTR_IDLE_ENTER_SCALE_LEN 5 +#define LTR_IDLE_ENTER_SCALE_1_NS 0 +#define LTR_IDLE_ENTER_SCALE_32_NS 1 +#define LTR_IDLE_ENTER_SCALE_1024_NS 2 +#define LTR_IDLE_ENTER_SCALE_32768_NS 3 +#define LTR_IDLE_ENTER_SCALE_1048576_NS 4 +#define LTR_IDLE_ENTER_SCALE_33554432_NS 5 +#define LTR_IDLE_ENTER_REQUIRE_POS 15 +#define LTR_IDLE_ENTER_REQUIRE_LEN 1 +#define LTR_IDLE_ENTER_REQUIRE 1 + +/* LTR_CTRL4, LTR latency message, only for System IDLE End. 
*/ +#define LTR_IDLE_EXIT 0x1140 +#define LTR_IDLE_EXIT_EXIT_POS 0 +#define LTR_IDLE_EXIT_EXIT_LEN 10 +#define LTR_IDLE_EXIT_171_US 171 +#define LTR_IDLE_EXIT_SCALE_POS 10 +#define LTR_IDLE_EXIT_SCALE_LEN 5 +#define LTR_IDLE_EXIT_SCALE 2 +#define LTR_IDLE_EXIT_REQUIRE_POS 15 +#define LTR_IDLE_EXIT_REQUIRE_LEN 1 +#define LTR_IDLE_EXIT_REQUIRE 1 + +#define MSIX_TBL_MASK 0x120c + +/* msi table */ +#define MSI_ID_RXQ0 0 +#define MSI_ID_RXQ1 1 +#define MSI_ID_RXQ2 2 +#define MSI_ID_RXQ3 3 +#define MSI_ID_TXQ0 4 +#define MSIX_TBL_MAX_NUM 5 + +#define MSI_PBA 0x1300 + +#define EFUSE_OP_CTRL_0 0x1500 +#define EFUSE_OP_MODE_POS 0 +#define EFUSE_OP_MODE_LEN 2 +#define EFUSE_OP_MODE_ROW_WRITE 0x0 +#define EFUSE_OP_MODE_ROW_READ 0x1 +#define EFUSE_OP_MODE_AUTO_LOAD 0x2 +#define EFUSE_OP_MODE_READ_BLANK 0x3 +#define EFUSE_OP_START_POS 2 +#define EFUSE_OP_START_LEN 1 +#define EFUSE_OP_ADDR_POS 8 +#define EFUSE_OP_ADDR_LEN 8 +#define EFUSE_OP_WR_DATA_POS 16 +#define EFUSE_OP_WR_DATA_LEN 8 + +#define EFUSE_OP_CTRL_1 0x1504 +#define EFUSE_OP_DONE_POS 1 +#define EFUSE_OP_DONE_LEN 1 +#define EFUSE_OP_PGM_PASS_POS 2 +#define EFUSE_OP_PGM_PASS_LEN 1 +#define EFUSE_OP_BIST_ERR_CNT_POS 8 +#define EFUSE_OP_BIST_ERR_CNT_LEN 8 +#define EFUSE_OP_BIST_ERR_ADDR_POS 16 +#define EFUSE_OP_BIST_ERR_ADDR_LEN 8 +#define EFUSE_OP_RD_DATA_POS 24 +#define EFUSE_OP_RD_DATA_LEN 8 + +/* MAC addr can be configured through effuse */ +#define MACA0LR_FROM_EFUSE 0x1520 +#define MACA0HR_FROM_EFUSE 0x1524 + +#define SYS_RESET 0x152c +#define SYS_RESET_RESET_POS 31 +#define SYS_RESET_RESET_LEN 1 + +#define PCIE_SERDES_PLL 0x199c +#define PCIE_SERDES_PLL_AUTOOFF_POS 0 +#define PCIE_SERDES_PLL_AUTOOFF_LEN 1 + +/**************** GMAC register. 
*********************/ +/* MAC register offsets */ +#define MAC_OFFSET 0x2000 +#define MAC_CR 0x0000 +#define MAC_ECR 0x0004 +#define MAC_PFR 0x0008 +#define MAC_WTR 0x000c +#define MAC_HTR0 0x0010 +#define MAC_VLANTR 0x0050 +#define MAC_VLANHTR 0x0058 +#define MAC_VLANIR 0x0060 +#define MAC_IVLANIR 0x0064 +#define MAC_Q0TFCR 0x0070 +#define MAC_RFCR 0x0090 +#define MAC_RQC0R 0x00a0 +#define MAC_RQC1R 0x00a4 +#define MAC_RQC2R 0x00a8 +#define MAC_RQC3R 0x00ac +#define MAC_ISR 0x00b0 +#define MAC_IER 0x00b4 +#define MAC_TX_RX_STA 0x00b8 +#define MAC_PMT_STA 0x00c0 +#define MAC_RWK_PAC 0x00c4 +#define MAC_LPI_STA 0x00d0 +#define MAC_LPI_CONTROL 0x00d4 +#define MAC_LPI_TIMER 0x00d8 +#define MAC_MS_TIC_COUNTER 0x00dc +#define MAC_AN_CR 0x00e0 +#define MAC_AN_SR 0x00e4 +#define MAC_AN_ADV 0x00e8 +#define MAC_AN_LPA 0x00ec +#define MAC_AN_EXP 0x00f0 +#define MAC_PHYIF_STA 0x00f8 +#define MAC_VR 0x0110 +#define MAC_DBG_STA 0x0114 +#define MAC_HWF0R 0x011c +#define MAC_HWF1R 0x0120 +#define MAC_HWF2R 0x0124 +#define MAC_HWF3R 0x0128 +#define MAC_MDIO_ADDRESS 0x0200 +#define MAC_MDIO_DATA 0x0204 +#define MAC_GPIOCR 0x0208 +#define MAC_GPIO_SR 0x020c +#define MAC_ARP_PROTO_ADDR 0x0210 +#define MAC_CSR_SW_CTRL 0x0230 +#define MAC_MACA0HR 0x0300 +#define MAC_MACA0LR 0x0304 +#define MAC_MACA1HR 0x0308 +#define MAC_MACA1LR 0x030c + +#define MAC_QTFCR_INC 4 +#define MAC_MACA_INC 4 +#define MAC_HTR_INC 4 +#define MAC_RQC2_INC 4 +#define MAC_RQC2_Q_PER_REG 4 + +/* MAC register entry bit positions and sizes */ +#define MAC_HWF0R_ADDMACADRSEL_POS 18 +#define MAC_HWF0R_ADDMACADRSEL_LEN 5 +#define MAC_HWF0R_ARPOFFSEL_POS 9 +#define MAC_HWF0R_ARPOFFSEL_LEN 1 +#define MAC_HWF0R_EEESEL_POS 13 +#define MAC_HWF0R_EEESEL_LEN 1 +#define MAC_HWF0R_ACTPHYIFSEL_POS 28 +#define MAC_HWF0R_ACTPHYIFSEL_LEN 3 +#define MAC_HWF0R_MGKSEL_POS 7 +#define MAC_HWF0R_MGKSEL_LEN 1 +#define MAC_HWF0R_MMCSEL_POS 8 +#define MAC_HWF0R_MMCSEL_LEN 1 +#define MAC_HWF0R_RWKSEL_POS 6 +#define MAC_HWF0R_RWKSEL_LEN 1 
+#define MAC_HWF0R_RXCOESEL_POS 16 +#define MAC_HWF0R_RXCOESEL_LEN 1 +#define MAC_HWF0R_SAVLANINS_POS 27 +#define MAC_HWF0R_SAVLANINS_LEN 1 +#define MAC_HWF0R_SMASEL_POS 5 +#define MAC_HWF0R_SMASEL_LEN 1 +#define MAC_HWF0R_TSSEL_POS 12 +#define MAC_HWF0R_TSSEL_LEN 1 +#define MAC_HWF0R_TSSTSSEL_POS 25 +#define MAC_HWF0R_TSSTSSEL_LEN 2 +#define MAC_HWF0R_TXCOESEL_POS 14 +#define MAC_HWF0R_TXCOESEL_LEN 1 +#define MAC_HWF0R_VLHASH_POS 4 +#define MAC_HWF0R_VLHASH_LEN 1 +#define MAC_HWF1R_ADDR64_POS 14 +#define MAC_HWF1R_ADDR64_LEN 2 +#define MAC_HWF1R_ADVTHWORD_POS 13 +#define MAC_HWF1R_ADVTHWORD_LEN 1 +#define MAC_HWF1R_DBGMEMA_POS 19 +#define MAC_HWF1R_DBGMEMA_LEN 1 +#define MAC_HWF1R_DCBEN_POS 16 +#define MAC_HWF1R_DCBEN_LEN 1 +#define MAC_HWF1R_HASHTBLSZ_POS 24 +#define MAC_HWF1R_HASHTBLSZ_LEN 2 +#define MAC_HWF1R_L3L4FNUM_POS 27 +#define MAC_HWF1R_L3L4FNUM_LEN 4 +#define MAC_HWF1R_RAVSEL_POS 21 +#define MAC_HWF1R_RAVSEL_LEN 1 +#define MAC_HWF1R_AVSEL_POS 20 +#define MAC_HWF1R_AVSEL_LEN 1 +#define MAC_HWF1R_RXFIFOSIZE_POS 0 +#define MAC_HWF1R_RXFIFOSIZE_LEN 5 +#define MAC_HWF1R_SPHEN_POS 17 +#define MAC_HWF1R_SPHEN_LEN 1 +#define MAC_HWF1R_TSOEN_POS 18 +#define MAC_HWF1R_TSOEN_LEN 1 +#define MAC_HWF1R_TXFIFOSIZE_POS 6 +#define MAC_HWF1R_TXFIFOSIZE_LEN 5 +#define MAC_HWF2R_AUXSNAPNUM_POS 28 +#define MAC_HWF2R_AUXSNAPNUM_LEN 3 +#define MAC_HWF2R_PPSOUTNUM_POS 24 +#define MAC_HWF2R_PPSOUTNUM_LEN 3 +#define MAC_HWF2R_RXCHCNT_POS 12 +#define MAC_HWF2R_RXCHCNT_LEN 4 +#define MAC_HWF2R_RXQCNT_POS 0 +#define MAC_HWF2R_RXQCNT_LEN 4 +#define MAC_HWF2R_TXCHCNT_POS 18 +#define MAC_HWF2R_TXCHCNT_LEN 4 +#define MAC_HWF2R_TXQCNT_POS 6 +#define MAC_HWF2R_TXQCNT_LEN 4 +#define MAC_IER_TSIE_POS 12 +#define MAC_IER_TSIE_LEN 1 + +#define MAC_ISR_PHYIF_STA_POS 0 +#define MAC_ISR_PHYIF_STA_LEN 1 +#define MAC_ISR_AN_SR_POS 1 +#define MAC_ISR_AN_SR_LEN 3 +#define MAC_ISR_PMT_STA_POS 4 +#define MAC_ISR_PMT_STA_LEN 1 +#define MAC_ISR_LPI_STA_POS 5 +#define MAC_ISR_LPI_STA_LEN 1 +#define 
MAC_ISR_MMC_STA_POS 8 +#define MAC_ISR_MMC_STA_LEN 1 +#define MAC_ISR_RX_MMC_STA_POS 9 +#define MAC_ISR_RX_MMC_STA_LEN 1 +#define MAC_ISR_TX_MMC_STA_POS 10 +#define MAC_ISR_TX_MMC_STA_LEN 1 +#define MAC_ISR_IPC_RXINT_POS 11 +#define MAC_ISR_IPC_RXINT_LEN 1 +#define MAC_ISR_TSIS_POS 12 +#define MAC_ISR_TSIS_LEN 1 +#define MAC_ISR_TX_RX_STA_POS 13 +#define MAC_ISR_TX_RX_STA_LEN 2 +#define MAC_ISR_GPIO_SR_POS 15 +#define MAC_ISR_GPIO_SR_LEN 11 + +#define MAC_MACA1HR_AE_POS 31 +#define MAC_MACA1HR_AE_LEN 1 +#define MAC_PFR_HMC_POS 2 +#define MAC_PFR_HMC_LEN 1 +#define MAC_PFR_HPF_POS 10 +#define MAC_PFR_HPF_LEN 1 +#define MAC_PFR_PM_POS 4 /* Pass all Multicast. */ +#define MAC_PFR_PM_LEN 1 +#define MAC_PFR_DBF_POS 5 /* Disable Broadcast Packets. */ +#define MAC_PFR_DBF_LEN 1 +#define MAC_PFR_HUC_POS 1 /* Hash Unicast Mode. */ +#define MAC_PFR_HUC_LEN 1 +#define MAC_PFR_PR_POS 0 /* Promiscuous Mode. */ +#define MAC_PFR_PR_LEN 1 +#define MAC_PFR_VTFE_POS 16 +#define MAC_PFR_VTFE_LEN 1 +#define MAC_Q0TFCR_PT_POS 16 +#define MAC_Q0TFCR_PT_LEN 16 +#define MAC_Q0TFCR_TFE_POS 1 +#define MAC_Q0TFCR_TFE_LEN 1 +#define MAC_CR_ARPEN_POS 31 +#define MAC_CR_ARPEN_LEN 1 +#define MAC_CR_ACS_POS 20 +#define MAC_CR_ACS_LEN 1 +#define MAC_CR_CST_POS 21 +#define MAC_CR_CST_LEN 1 +#define MAC_CR_IPC_POS 27 +#define MAC_CR_IPC_LEN 1 +#define MAC_CR_JE_POS 16 +#define MAC_CR_JE_LEN 1 +#define MAC_CR_LM_POS 12 +#define MAC_CR_LM_LEN 1 +#define MAC_CR_RE_POS 0 +#define MAC_CR_RE_LEN 1 +#define MAC_CR_PS_POS 15 +#define MAC_CR_PS_LEN 1 +#define MAC_CR_FES_POS 14 +#define MAC_CR_FES_LEN 1 +#define MAC_CR_DM_POS 13 +#define MAC_CR_DM_LEN 1 +#define MAC_CR_TE_POS 1 +#define MAC_CR_TE_LEN 1 +#define MAC_ECR_DCRCC_POS 16 +#define MAC_ECR_DCRCC_LEN 1 +#define MAC_ECR_HDSMS_POS 20 +#define MAC_ECR_HDSMS_LEN 3 +#define MAC_ECR_HDSMS_64B 0 +#define MAC_ECR_HDSMS_128B 1 +#define MAC_ECR_HDSMS_256B 2 +#define MAC_ECR_HDSMS_512B 3 +#define MAC_ECR_HDSMS_1023B 4 +#define MAC_RFCR_PFCE_POS 8 +#define 
MAC_RFCR_PFCE_LEN 1 +#define MAC_RFCR_RFE_POS 0 +#define MAC_RFCR_RFE_LEN 1 +#define MAC_RFCR_UP_POS 1 +#define MAC_RFCR_UP_LEN 1 +#define MAC_RQC0R_RXQ0EN_POS 0 +#define MAC_RQC0R_RXQ0EN_LEN 2 +#define MAC_LPIIE_POS 5 +#define MAC_LPIIE_LEN 1 +#define MAC_LPIATE_POS 20 +#define MAC_LPIATE_LEN 1 +#define MAC_LPITXA_POS 19 +#define MAC_LPITXA_LEN 1 +#define MAC_PLS_POS 17 +#define MAC_PLS_LEN 1 +#define MAC_LPIEN_POS 16 +#define MAC_LPIEN_LEN 1 +#define MAC_LPI_ENTRY_TIMER 8 +#define MAC_LPIET_POS 3 +#define MAC_LPIET_LEN 17 +#define MAC_TWT_TIMER 0x10 +#define MAC_TWT_POS 0 +#define MAC_TWT_LEN 16 +#define MAC_LST_TIMER 2 +#define MAC_LST_POS 16 +#define MAC_LST_LEN 10 +#define MAC_MS_TIC 24 +#define MAC_MS_TIC_POS 0 +#define MAC_MS_TIC_LEN 12 + +#define MAC_MDIO_ADDR_BUSY_POS 0 +#define MAC_MDIO_ADDR_BUSY_LEN 1 +#define MAC_MDIO_ADDR_GOC_POS 2 +#define MAC_MDIO_ADDR_GOC_LEN 2 +#define MAC_MDIO_ADDR_GB_POS 0 +#define MAC_MDIO_ADDR_GB_LEN 1 +#define MAC_MDIO_DATA_RA_POS 16 +#define MAC_MDIO_DATA_RA_LEN 16 +#define MAC_MDIO_DATA_GD_POS 0 +#define MAC_MDIO_DATA_GD_LEN 16 + +/* bit definitions for PMT and WOL */ +#define MAC_PMT_STA_PWRDWN_POS 0 +#define MAC_PMT_STA_PWRDWN_LEN 1 +#define MAC_PMT_STA_MGKPKTEN_POS 1 +#define MAC_PMT_STA_MGKPKTEN_LEN 1 +#define MAC_PMT_STA_RWKPKTEN_POS 2 +#define MAC_PMT_STA_RWKPKTEN_LEN 1 +#define MAC_PMT_STA_MGKPRCVD_POS 5 +#define MAC_PMT_STA_MGKPRCVD_LEN 1 +#define MAC_PMT_STA_RWKPRCVD_POS 6 +#define MAC_PMT_STA_RWKPRCVD_LEN 1 +#define MAC_PMT_STA_GLBLUCAST_POS 9 +#define MAC_PMT_STA_GLBLUCAST_LEN 1 +#define MAC_PMT_STA_RWKPTR_POS 24 +#define MAC_PMT_STA_RWKPTR_LEN 4 +#define MAC_PMT_STA_RWKFILTERST_POS 31 +#define MAC_PMT_STA_RWKFILTERST_LEN 1 + +/* MMC register offsets */ +#define MMC_CR 0x0700 +#define MMC_RISR 0x0704 +#define MMC_TISR 0x0708 +#define MMC_RIER 0x070c +#define MMC_TIER 0x0710 +#define MMC_IPC_RXINT_MASK 0x0800 +#define MMC_IPC_RXINT 0x0808 + +#define MMC_CR_CR_POS 0 +#define MMC_CR_CR_LEN 1 +#define MMC_CR_CSR_POS 1 
+#define MMC_CR_CSR_LEN 1 +#define MMC_CR_ROR_POS 2 +#define MMC_CR_ROR_LEN 1 +#define MMC_CR_MCF_POS 3 +#define MMC_CR_MCF_LEN 1 +#define MMC_RIER_ALL_INTERRUPTS_POS 0 +#define MMC_RIER_ALL_INTERRUPTS_LEN 28 +#define MMC_TIER_ALL_INTERRUPTS_POS 0 +#define MMC_TIER_ALL_INTERRUPTS_LEN 28 + +/* MTL register offsets */ +#define MTL_OMR 0x0c00 +#define MTL_FDCR 0x0c08 +#define MTL_FDSR 0x0c0c +#define MTL_FDDR 0x0c10 +#define MTL_INT_SR 0x0c20 +#define MTL_RQDCM0R 0x0c30 +#define MTL_ECC_INT_SR 0x0ccc + +#define MTL_RQDCM_INC 4 +#define MTL_RQDCM_Q_PER_REG 4 + +/* MTL register entry bit positions and sizes */ +#define MTL_OMR_ETSALG_POS 5 +#define MTL_OMR_ETSALG_LEN 2 +#define MTL_OMR_RAA_POS 2 +#define MTL_OMR_RAA_LEN 1 + +/* MTL queue register offsets */ +#define MTL_Q_BASE 0x0d00 +#define MTL_Q_INC 0x40 +#define MTL_Q_INT_CTL_SR 0x0d2c + +#define MTL_Q_TQOMR 0x00 +#define MTL_Q_TQUR 0x04 +#define MTL_Q_RQOMR 0x30 +#define MTL_Q_RQMPOCR 0x34 +#define MTL_Q_RQDR 0x38 +#define MTL_Q_RQCR 0x3c +#define MTL_Q_IER 0x2c +#define MTL_Q_ISR 0x2c +#define MTL_TXQ_DEG 0x08 /* transmit debug */ + +/* MTL queue register entry bit positions and sizes */ +#define MTL_Q_RQDR_PRXQ_POS 16 +#define MTL_Q_RQDR_PRXQ_LEN 14 +#define MTL_Q_RQDR_RXQSTS_POS 4 +#define MTL_Q_RQDR_RXQSTS_LEN 2 +#define MTL_Q_RQOMR_RFA_POS 8 +#define MTL_Q_RQOMR_RFA_LEN 6 +#define MTL_Q_RQOMR_RFD_POS 14 +#define MTL_Q_RQOMR_RFD_LEN 6 +#define MTL_Q_RQOMR_EHFC_POS 7 +#define MTL_Q_RQOMR_EHFC_LEN 1 +#define MTL_Q_RQOMR_RQS_POS 20 +#define MTL_Q_RQOMR_RQS_LEN 9 +#define MTL_Q_RQOMR_RSF_POS 5 +#define MTL_Q_RQOMR_RSF_LEN 1 +#define MTL_Q_RQOMR_FEP_POS 4 +#define MTL_Q_RQOMR_FEP_LEN 1 +#define MTL_Q_RQOMR_FUP_POS 3 +#define MTL_Q_RQOMR_FUP_LEN 1 +#define MTL_Q_RQOMR_RTC_POS 0 +#define MTL_Q_RQOMR_RTC_LEN 2 +#define MTL_Q_TQOMR_FTQ_POS 0 +#define MTL_Q_TQOMR_FTQ_LEN 1 +#define MTL_Q_TQOMR_TQS_POS 16 +#define MTL_Q_TQOMR_TQS_LEN 7 +#define MTL_Q_TQOMR_TSF_POS 1 +#define MTL_Q_TQOMR_TSF_LEN 1 +#define 
MTL_Q_TQOMR_TTC_POS 4 +#define MTL_Q_TQOMR_TTC_LEN 3 +#define MTL_Q_TQOMR_TXQEN_POS 2 +#define MTL_Q_TQOMR_TXQEN_LEN 2 + +/* MTL queue register value */ +#define MTL_RSF_DISABLE 0x00 +#define MTL_RSF_ENABLE 0x01 +#define MTL_TSF_DISABLE 0x00 +#define MTL_TSF_ENABLE 0x01 +#define MTL_FEP_DISABLE 0x00 +#define MTL_FEP_ENABLE 0x01 + +#define MTL_RX_THRESHOLD_64 0x00 +#define MTL_RX_THRESHOLD_32 0x01 +#define MTL_RX_THRESHOLD_96 0x02 +#define MTL_RX_THRESHOLD_128 0x03 +#define MTL_TX_THRESHOLD_32 0x00 +#define MTL_TX_THRESHOLD_64 0x01 +#define MTL_TX_THRESHOLD_96 0x02 +#define MTL_TX_THRESHOLD_128 0x03 +#define MTL_TX_THRESHOLD_192 0x04 +#define MTL_TX_THRESHOLD_256 0x05 +#define MTL_TX_THRESHOLD_384 0x06 +#define MTL_TX_THRESHOLD_512 0x07 + +#define MTL_ETSALG_WRR 0x00 +#define MTL_ETSALG_WFQ 0x01 +#define MTL_ETSALG_DWRR 0x02 +#define MTL_ETSALG_SP 0x03 + +#define MTL_RAA_SP 0x00 +#define MTL_RAA_WSP 0x01 + +#define MTL_Q_DISABLED 0x00 +#define MTL_Q_EN_IF_AV 0x01 +#define MTL_Q_ENABLED 0x02 + +#define MTL_RQDCM0R_Q0MDMACH 0x0 +#define MTL_RQDCM0R_Q1MDMACH 0x00000100 +#define MTL_RQDCM0R_Q2MDMACH 0x00020000 +#define MTL_RQDCM0R_Q3MDMACH 0x03000000 +#define MTL_RQDCM1R_Q4MDMACH 0x00000004 +#define MTL_RQDCM1R_Q5MDMACH 0x00000500 +#define MTL_RQDCM1R_Q6MDMACH 0x00060000 +#define MTL_RQDCM1R_Q7MDMACH 0x07000000 +#define MTL_RQDCM2R_Q8MDMACH 0x00000008 +#define MTL_RQDCM2R_Q9MDMACH 0x00000900 +#define MTL_RQDCM2R_Q10MDMACH 0x000A0000 +#define MTL_RQDCM2R_Q11MDMACH 0x0B000000 + +#define MTL_RQDCM0R_Q0DDMACH 0x10 +#define MTL_RQDCM0R_Q1DDMACH 0x00001000 +#define MTL_RQDCM0R_Q2DDMACH 0x00100000 +#define MTL_RQDCM0R_Q3DDMACH 0x10000000 +#define MTL_RQDCM1R_Q4DDMACH 0x00000010 +#define MTL_RQDCM1R_Q5DDMACH 0x00001000 +#define MTL_RQDCM1R_Q6DDMACH 0x00100000 +#define MTL_RQDCM1R_Q7DDMACH 0x10000000 + +/* MTL traffic class register offsets */ +#define MTL_TC_BASE MTL_Q_BASE +#define MTL_TC_INC MTL_Q_INC + +#define MTL_TC_TQDR 0x08 +#define MTL_TC_ETSCR 0x10 +#define 
MTL_TC_ETSSR 0x14 +#define MTL_TC_QWR 0x18 + +#define MTL_TC_TQDR_TRCSTS_POS 1 +#define MTL_TC_TQDR_TRCSTS_LEN 2 +#define MTL_TC_TQDR_TXQSTS_POS 4 +#define MTL_TC_TQDR_TXQSTS_LEN 1 + +/* MTL traffic class register entry bit positions and sizes */ +#define MTL_TC_ETSCR_TSA_POS 0 +#define MTL_TC_ETSCR_TSA_LEN 2 +#define MTL_TC_QWR_QW_POS 0 +#define MTL_TC_QWR_QW_LEN 21 + +/* MTL traffic class register value */ +#define MTL_TSA_SP 0x00 +#define MTL_TSA_ETS 0x02 + +/* DMA register offsets */ +#define DMA_MR 0x1000 +#define DMA_SBMR 0x1004 +#define DMA_ISR 0x1008 +#define DMA_DSR0 0x100c +#define DMA_DSR1 0x1010 +#define DMA_DSR2 0x1014 +#define DMA_AXIARCR 0x1020 +#define DMA_AXIAWCR 0x1024 +#define DMA_AXIAWRCR 0x1028 +#define DMA_SAFE_ISR 0x1080 +#define DMA_ECC_IE 0x1084 +#define DMA_ECC_INT_SR 0x1088 + +/* DMA register entry bit positions and sizes */ +#define DMA_ISR_MACIS_POS 17 +#define DMA_ISR_MACIS_LEN 1 +#define DMA_ISR_MTLIS_POS 16 +#define DMA_ISR_MTLIS_LEN 1 +#define DMA_MR_SWR_POS 0 +#define DMA_MR_SWR_LEN 1 +#define DMA_MR_TXPR_POS 11 +#define DMA_MR_TXPR_LEN 1 +#define DMA_MR_INTM_POS 16 +#define DMA_MR_INTM_LEN 2 +#define DMA_MR_QUREAD_POS 19 +#define DMA_MR_QUREAD_LEN 1 +#define DMA_MR_TNDF_POS 20 +#define DMA_MR_TNDF_LEN 2 +#define DMA_MR_RNDF_POS 22 +#define DMA_MR_RNDF_LEN 2 + +#define DMA_SBMR_EN_LPI_POS 31 +#define DMA_SBMR_EN_LPI_LEN 1 +#define DMA_SBMR_LPI_XIT_PKT_POS 30 +#define DMA_SBMR_LPI_XIT_PKT_LEN 1 +#define DMA_SBMR_WR_OSR_LMT_POS 24 +#define DMA_SBMR_WR_OSR_LMT_LEN 6 +#define DMA_SBMR_RD_OSR_LMT_POS 16 +#define DMA_SBMR_RD_OSR_LMT_LEN 8 +#define DMA_SBMR_AAL_POS 12 +#define DMA_SBMR_AAL_LEN 1 +#define DMA_SBMR_EAME_POS 11 +#define DMA_SBMR_EAME_LEN 1 +#define DMA_SBMR_AALE_POS 10 +#define DMA_SBMR_AALE_LEN 1 +#define DMA_SBMR_BLEN_4_POS 1 +#define DMA_SBMR_BLEN_4_LEN 1 +#define DMA_SBMR_BLEN_8_POS 2 +#define DMA_SBMR_BLEN_8_LEN 1 +#define DMA_SBMR_BLEN_16_POS 3 +#define DMA_SBMR_BLEN_16_LEN 1 +#define DMA_SBMR_BLEN_32_POS 4 +#define 
DMA_SBMR_BLEN_32_LEN 1 +#define DMA_SBMR_BLEN_64_POS 5 +#define DMA_SBMR_BLEN_64_LEN 1 +#define DMA_SBMR_BLEN_128_POS 6 +#define DMA_SBMR_BLEN_128_LEN 1 +#define DMA_SBMR_BLEN_256_POS 7 +#define DMA_SBMR_BLEN_256_LEN 1 +#define DMA_SBMR_FB_POS 0 +#define DMA_SBMR_FB_LEN 1 + +/* DMA register values */ +#define DMA_DSR_RPS_LEN 4 +#define DMA_DSR_TPS_LEN 4 +#define DMA_DSR_Q_LEN (DMA_DSR_RPS_LEN + DMA_DSR_TPS_LEN) +#define DMA_DSR0_TPS_START 12 +#define DMA_DSRX_FIRST_QUEUE 3 +#define DMA_DSRX_INC 4 +#define DMA_DSRX_QPR 4 +#define DMA_DSRX_TPS_START 4 +#define DMA_TPS_STOPPED 0x00 +#define DMA_TPS_SUSPENDED 0x06 + +/* DMA channel register offsets */ +#define DMA_CH_BASE 0x1100 +#define DMA_CH_INC 0x80 + +#define DMA_CH_CR 0x00 +#define DMA_CH_TCR 0x04 +#define DMA_CH_RCR 0x08 +#define DMA_CH_TDLR_HI 0x10 +#define DMA_CH_TDLR_LO 0x14 +#define DMA_CH_RDLR_HI 0x18 +#define DMA_CH_RDLR_LO 0x1c +#define DMA_CH_TDTR_LO 0x20 +#define DMA_CH_RDTR_LO 0x28 +#define DMA_CH_TDRLR 0x2c +#define DMA_CH_RDRLR 0x30 +#define DMA_CH_IER 0x34 +#define DMA_CH_RIWT 0x38 +#define DMA_CH_CATDR_LO 0x44 +#define DMA_CH_CARDR_LO 0x4c +#define DMA_CH_CATBR_HI 0x50 +#define DMA_CH_CATBR_LO 0x54 +#define DMA_CH_CARBR_HI 0x58 +#define DMA_CH_CARBR_LO 0x5c +#define DMA_CH_SR 0x60 + +/* DMA channel register entry bit positions and sizes */ +#define DMA_CH_CR_PBLX8_POS 16 +#define DMA_CH_CR_PBLX8_LEN 1 +#define DMA_CH_CR_SPH_POS 24 +#define DMA_CH_CR_SPH_LEN 1 +#define DMA_CH_IER_AIE_POS 14 +#define DMA_CH_IER_AIE_LEN 1 +#define DMA_CH_IER_FBEE_POS 12 +#define DMA_CH_IER_FBEE_LEN 1 +#define DMA_CH_IER_NIE_POS 15 +#define DMA_CH_IER_NIE_LEN 1 +#define DMA_CH_IER_RBUE_POS 7 +#define DMA_CH_IER_RBUE_LEN 1 +#define DMA_CH_IER_RIE_POS 6 +#define DMA_CH_IER_RIE_LEN 1 +#define DMA_CH_IER_RSE_POS 8 +#define DMA_CH_IER_RSE_LEN 1 +#define DMA_CH_IER_TBUE_POS 2 +#define DMA_CH_IER_TBUE_LEN 1 +#define DMA_CH_IER_TIE_POS 0 +#define DMA_CH_IER_TIE_LEN 1 +#define DMA_CH_IER_TXSE_POS 1 +#define DMA_CH_IER_TXSE_LEN 
1 +#define DMA_CH_RCR_PBL_POS 16 +#define DMA_CH_RCR_PBL_LEN 6 +#define DMA_CH_RCR_RBSZ_POS 1 +#define DMA_CH_RCR_RBSZ_LEN 14 +#define DMA_CH_RCR_SR_POS 0 +#define DMA_CH_RCR_SR_LEN 1 +#define DMA_CH_RIWT_RWT_POS 0 +#define DMA_CH_RIWT_RWT_LEN 8 +#define DMA_CH_SR_FBE_POS 12 +#define DMA_CH_SR_FBE_LEN 1 +#define DMA_CH_SR_RBU_POS 7 +#define DMA_CH_SR_RBU_LEN 1 +#define DMA_CH_SR_RI_POS 6 +#define DMA_CH_SR_RI_LEN 1 +#define DMA_CH_SR_RPS_POS 8 +#define DMA_CH_SR_RPS_LEN 1 +#define DMA_CH_SR_TBU_POS 2 +#define DMA_CH_SR_TBU_LEN 1 +#define DMA_CH_SR_TI_POS 0 +#define DMA_CH_SR_TI_LEN 1 +#define DMA_CH_SR_TPS_POS 1 +#define DMA_CH_SR_TPS_LEN 1 +#define DMA_CH_TCR_OSP_POS 4 +#define DMA_CH_TCR_OSP_LEN 1 +#define DMA_CH_TCR_PBL_POS 16 +#define DMA_CH_TCR_PBL_LEN 6 +#define DMA_CH_TCR_ST_POS 0 +#define DMA_CH_TCR_ST_LEN 1 +#define DMA_CH_TCR_TSE_POS 12 +#define DMA_CH_TCR_TSE_LEN 1 + +/* DMA channel register values */ +#define DMA_OSP_DISABLE 0x00 +#define DMA_OSP_ENABLE 0x01 +#define DMA_PBL_1 1 +#define DMA_PBL_2 2 +#define DMA_PBL_4 4 +#define DMA_PBL_8 8 +#define DMA_PBL_16 16 +#define DMA_PBL_32 32 +#define DMA_PBL_64 64 +#define DMA_PBL_128 128 +#define DMA_PBL_256 256 +#define DMA_PBL_X8_DISABLE 0x00 +#define DMA_PBL_X8_ENABLE 0x01 + +/* Descriptor/Packet entry bit positions and sizes */ + +#define RX_NORMAL_DESC0_OVT_POS 0 /* Outer VLAN Tag */ +#define RX_NORMAL_DESC0_OVT_LEN 16 +#define RX_NORMAL_DESC2_HL_POS 0 /* L3/L4 Header Length */ +#define RX_NORMAL_DESC2_HL_LEN 10 +#define RX_NORMAL_DESC3_OWN_POS 31 /* Own Bit */ +#define RX_NORMAL_DESC3_OWN_LEN 1 +#define RX_NORMAL_DESC3_INTE_POS 30 +#define RX_NORMAL_DESC3_INTE_LEN 1 +#define RX_NORMAL_DESC3_FD_POS 29 /* First Descriptor */ +#define RX_NORMAL_DESC3_FD_LEN 1 +#define RX_NORMAL_DESC3_LD_POS 28 /* Last Descriptor */ +#define RX_NORMAL_DESC3_LD_LEN 1 +#define RX_NORMAL_DESC3_BUF2V_POS 25 /* Receive Status RDES2 Valid */ +#define RX_NORMAL_DESC3_BUF2V_LEN 1 +#define RX_NORMAL_DESC3_BUF1V_POS 24 /* Receive 
Status RDES1 Valid */ +#define RX_NORMAL_DESC3_BUF1V_LEN 1 +#define RX_NORMAL_DESC3_ETLT_POS 16 /* Length/Type Field */ +#define RX_NORMAL_DESC3_ETLT_LEN 3 +#define RX_NORMAL_DESC3_ES_POS 15 /* Error Summary */ +#define RX_NORMAL_DESC3_ES_LEN 1 +#define RX_NORMAL_DESC3_PL_POS 0 /* Packet Length */ +#define RX_NORMAL_DESC3_PL_LEN 15 + +#define RX_NORMAL_DESC0_WB_IVT_POS 16 /* Inner VLAN Tag. */ +#define RX_NORMAL_DESC0_WB_IVT_LEN 16 +#define RX_NORMAL_DESC0_WB_OVT_POS 0 /* Outer VLAN Tag. */ +#define RX_NORMAL_DESC0_WB_OVT_LEN 16 +#define RX_NORMAL_DESC1_WB_IPCE_POS 7 /* IP Payload Error. */ +#define RX_NORMAL_DESC1_WB_IPCE_LEN 1 +#define RX_NORMAL_DESC1_WB_IPV6_POS 5 /* IPV6 Header Present. */ +#define RX_NORMAL_DESC1_WB_IPV6_LEN 1 +#define RX_NORMAL_DESC1_WB_IPV4_POS 4 /* IPV4 Header Present. */ +#define RX_NORMAL_DESC1_WB_IPV4_LEN 1 +#define RX_NORMAL_DESC1_WB_IPHE_POS 3 /* IP Header Error. */ +#define RX_NORMAL_DESC1_WB_IPHE_LEN 1 +#define RX_NORMAL_DESC1_WB_PT_POS 0 /* Payload Type */ +#define RX_NORMAL_DESC1_WB_PT_LEN 3 + +#define RX_NORMAL_DESC2_WB_HF_POS 18 /* Hash Filter Status. */ +#define RX_NORMAL_DESC2_WB_HF_LEN 1 +#define RX_NORMAL_DESC2_WB_DAF_POS 17 /* DA Filter Fail */ +#define RX_NORMAL_DESC2_WB_DAF_LEN 1 +#define RX_NORMAL_DESC2_WB_RAPARSER_POS 11 /* Parse error */ +#define RX_NORMAL_DESC2_WB_RAPARSER_LEN 3 + +#define TX_CONTEXT_DESC2_IVLTV_POS 16 /* Inner VLAN Tag. */ +#define TX_CONTEXT_DESC2_IVLTV_LEN 16 +#define TX_CONTEXT_DESC2_MSS_POS 0 /* Maximum Segment Size */ +#define TX_CONTEXT_DESC2_MSS_LEN 14 +#define TX_CONTEXT_DESC3_CTXT_POS 30 /* Context Type */ +#define TX_CONTEXT_DESC3_CTXT_LEN 1 +#define TX_CONTEXT_DESC3_TCMSSV_POS 26 /* Timestamp correct or MSS Valid */ +#define TX_CONTEXT_DESC3_TCMSSV_LEN 1 +#define TX_CONTEXT_DESC3_IVTIR_POS 18 /* Inner VLAN Tag Insert/Replace */ +#define TX_CONTEXT_DESC3_IVTIR_LEN 2 +#define TX_CONTEXT_DESC3_IVTIR_INSERT 2 +#define TX_CONTEXT_DESC3_IVLTV_POS 17 /* Inner VLAN TAG valid. 
*/ +#define TX_CONTEXT_DESC3_IVLTV_LEN 1 +#define TX_CONTEXT_DESC3_VLTV_POS 16 /* Inner VLAN Tag Valid */ +#define TX_CONTEXT_DESC3_VLTV_LEN 1 +#define TX_CONTEXT_DESC3_VT_POS 0 /* VLAN Tag */ +#define TX_CONTEXT_DESC3_VT_LEN 16 + +#define TX_NORMAL_DESC2_IC_POS 31 /* Interrupt on Completion. */ +#define TX_NORMAL_DESC2_IC_LEN 1 +#define TX_NORMAL_DESC2_TTSE_POS 30 /* Transmit Timestamp Enable */ +#define TX_NORMAL_DESC2_TTSE_LEN 1 +#define TX_NORMAL_DESC2_VTIR_POS 14 /* VLAN Tag Insertion/Replacement */ +#define TX_NORMAL_DESC2_VTIR_LEN 2 +#define TX_NORMAL_DESC2_VLAN_INSERT 0x2 +#define TX_NORMAL_DESC2_HL_B1L_POS 0 /* Header Length or Buffer 1 Length */ +#define TX_NORMAL_DESC2_HL_B1L_LEN 14 + +#define TX_NORMAL_DESC3_OWN_POS 31 /* Own Bit */ +#define TX_NORMAL_DESC3_OWN_LEN 1 +#define TX_NORMAL_DESC3_CTXT_POS 30 /* Context Type */ +#define TX_NORMAL_DESC3_CTXT_LEN 1 +#define TX_NORMAL_DESC3_FD_POS 29 /* First Descriptor */ +#define TX_NORMAL_DESC3_FD_LEN 1 +#define TX_NORMAL_DESC3_LD_POS 28 /* Last Descriptor */ +#define TX_NORMAL_DESC3_LD_LEN 1 +#define TX_NORMAL_DESC3_CPC_POS 26 /* CRC Pad Control */ +#define TX_NORMAL_DESC3_CPC_LEN 2 +#define TX_NORMAL_DESC3_TCPHDRLEN_POS 19 /* TCP/UDP Header Length. 
*/ +#define TX_NORMAL_DESC3_TCPHDRLEN_LEN 4 +#define TX_NORMAL_DESC3_TSE_POS 18 /* TCP Segmentation Enable */ +#define TX_NORMAL_DESC3_TSE_LEN 1 +#define TX_NORMAL_DESC3_CIC_POS 16 /* Checksum Insertion Control */ +#define TX_NORMAL_DESC3_CIC_LEN 2 +#define TX_NORMAL_DESC3_FL_POS 0 /* Frame Length */ +#define TX_NORMAL_DESC3_FL_LEN 15 +#define TX_NORMAL_DESC3_TCPPL_POS 0 /* TCP Packet Length.*/ +#define TX_NORMAL_DESC3_TCPPL_LEN 18 + +/* Bit getting and setting macros + * The get macro will extract the current bit field value from within + * the variable + * + * The set macro will clear the current bit field value within the + * variable and then set the bit field of the variable to the + * specified value + */ +#define GET_BITS(_var, _index, _width) \ + (((_var) >> (_index)) & ((0x1U << (_width)) - 1)) + +#define SET_BITS(_var, _index, _width, _val) \ + do { \ + (_var) &= ~(((0x1U << (_width)) - 1) << (_index)); \ + (_var) |= (((_val) & ((0x1U << (_width)) - 1)) << (_index)); \ + } while (0) + +#define GET_BITS_LE(_var, _index, _width) \ + ((le32_to_cpu((_var)) >> (_index)) & ((0x1U << (_width)) - 1)) + +#define SET_BITS_LE(_var, _index, _width, _val) \ + do { \ + (_var) &= \ + cpu_to_le32(~(((0x1U << (_width)) - 1) << (_index))); \ + (_var) |= cpu_to_le32( \ + (((_val) & ((0x1U << (_width)) - 1)) << (_index))); \ + } while (0) + +/* Bit getting and setting macros based on register fields + * The get macro uses the bit field definitions formed using the input + * names to extract the current bit field value from within the + * variable + * + * The set macro uses the bit field definitions formed using the input + * names to set the bit field of the variable to the specified value + */ +#define FXGMAC_GET_BITS(_var, _prefix, _field) \ + GET_BITS((_var), _prefix##_##_field##_POS, _prefix##_##_field##_LEN) + +#define FXGMAC_SET_BITS(_var, _prefix, _field, _val) \ + SET_BITS((_var), _prefix##_##_field##_POS, _prefix##_##_field##_LEN, \ + (_val)) + +#define 
FXGMAC_GET_BITS_LE(_var, _prefix, _field) \ + GET_BITS_LE((_var), _prefix##_##_field##_POS, _prefix##_##_field##_LEN) + +#define FXGMAC_SET_BITS_LE(_var, _prefix, _field, _val) \ + SET_BITS_LE((_var), _prefix##_##_field##_POS, \ + _prefix##_##_field##_LEN, (_val)) + +/* Macros for reading or writing registers + * The ioread macros will get bit fields or full values using the + * register definitions formed using the input names + * + * The iowrite macros will set bit fields or full values using the + * register definitions formed using the input names + */ +#define FXGMAC_IO_RD(_pdata, _reg) ioread32(((_pdata)->hw_addr) + (_reg)) + +#define FXGMAC_IO_RD_BITS(_pdata, _reg, _field) \ + GET_BITS(FXGMAC_IO_RD((_pdata), _reg), _reg##_##_field##_POS, \ + _reg##_##_field##_LEN) + +#define FXGMAC_IO_WR(_pdata, _reg, _val) \ + iowrite32((_val), ((_pdata)->hw_addr) + (_reg)) + +#define FXGMAC_IO_WR_BITS(_pdata, _reg, _field, _val) \ + do { \ + u32 reg_val = FXGMAC_IO_RD((_pdata), _reg); \ + SET_BITS(reg_val, _reg##_##_field##_POS, \ + _reg##_##_field##_LEN, (_val)); \ + FXGMAC_IO_WR((_pdata), _reg, reg_val); \ + } while (0) + +/* Macros for reading or writing MAC registers + * Similar to the standard read and write macros except that + * MAC_OFFSET must be added to the base register value.
+ */ +#define FXGMAC_MAC_IO_RD(_pdata, _reg) \ + ioread32(((_pdata)->hw_addr) + MAC_OFFSET + (_reg)) + +#define FXGMAC_MAC_IO_RD_BITS(_pdata, _reg, _field) \ + GET_BITS(FXGMAC_MAC_IO_RD((_pdata), _reg), _reg##_##_field##_POS, \ + _reg##_##_field##_LEN) + +#define FXGMAC_MAC_IO_WR(_pdata, _reg, _val) \ + iowrite32((_val), ((_pdata)->hw_addr) + MAC_OFFSET + (_reg)) + +#define FXGMAC_MAC_IO_WR_BITS(_pdata, _reg, _field, _val) \ + do { \ + u32 reg_val = FXGMAC_MAC_IO_RD((_pdata), _reg); \ + SET_BITS(reg_val, _reg##_##_field##_POS, \ + _reg##_##_field##_LEN, (_val)); \ + FXGMAC_MAC_IO_WR((_pdata), _reg, reg_val); \ + } while (0) + +/* Macros for reading or writing MTL queue or traffic class registers + * Similar to the standard read and write macros except that the + * base register value is calculated by the queue or traffic class number + */ +#define FXGMAC_MTL_IO_RD(_pdata, _n, _reg) \ + ioread32(((_pdata)->hw_addr) + MAC_OFFSET + MTL_Q_BASE + \ + ((_n) * MTL_Q_INC) + (_reg)) + +#define FXGMAC_MTL_IO_RD_BITS(_pdata, _n, _reg, _field) \ + GET_BITS(FXGMAC_MTL_IO_RD((_pdata), (_n), (_reg)), \ + _reg##_##_field##_POS, _reg##_##_field##_LEN) + +#define FXGMAC_MTL_IO_WR(_pdata, _n, _reg, _val) \ + iowrite32((_val), ((_pdata)->hw_addr) + MAC_OFFSET + MTL_Q_BASE + \ + ((_n) * MTL_Q_INC) + (_reg)) + +#define FXGMAC_MTL_IO_WR_BITS(_pdata, _n, _reg, _field, _val) \ + do { \ + u32 reg_val = FXGMAC_MTL_IO_RD((_pdata), (_n), _reg); \ + SET_BITS(reg_val, _reg##_##_field##_POS, \ + _reg##_##_field##_LEN, (_val)); \ + FXGMAC_MTL_IO_WR((_pdata), (_n), _reg, reg_val); \ + } while (0) + +/* Macros for reading or writing DMA channel registers + * Similar to the standard read and write macros except that the + * base register value is obtained from the ring + */ +#define FXGMAC_DMA_IO_RD(_channel, _reg) \ + ioread32(((_channel)->dma_regs) + (_reg)) + +#define FXGMAC_DMA_IO_RD_BITS(_channel, _reg, _field) \ + GET_BITS(FXGMAC_DMA_IO_RD((_channel), _reg), _reg##_##_field##_POS, \ + 
_reg##_##_field##_LEN) + +#define FXGMAC_DMA_IO_WR(_channel, _reg, _val) \ + iowrite32((_val), ((_channel)->dma_regs) + (_reg)) + +#define FXGMAC_DMA_IO_WR_BITS(_channel, _reg, _field, _val) \ + do { \ + u32 reg_val = FXGMAC_DMA_IO_RD((_channel), _reg); \ + SET_BITS(reg_val, _reg##_##_field##_POS, \ + _reg##_##_field##_LEN, (_val)); \ + FXGMAC_DMA_IO_WR((_channel), _reg, reg_val); \ + } while (0) + +#define yt_err(priv, fmt, arg...) dev_err((priv)->dev, fmt, ##arg) +#define yt_dbg(priv, fmt, arg...) dev_dbg((priv)->dev, fmt, ##arg) + +#endif /* YT6801_TYPE_H */

From patchwork Fri Feb 28 10:00:08 2025
X-Patchwork-Submitter: Frank Sae
X-Patchwork-Id: 13996100
X-Patchwork-Delegate: kuba@kernel.org
Subject: [PATCH net-next v3 02/14] motorcomm:yt6801: Add support for a pci table in this module
Date: Fri, 28 Feb 2025 18:00:08 +0800
Message-Id: <20250228100020.3944-3-Frank.Sae@motor-comm.com>
In-Reply-To: <20250228100020.3944-1-Frank.Sae@motor-comm.com>
References: <20250228100020.3944-1-Frank.Sae@motor-comm.com>

Add support for a PCI table in this module, and implement the pci_driver callbacks to initialize, remove, and shut down the driver. Implement the fxgmac_drv_probe function to init interrupts and register the mdio bus and netdev.
Signed-off-by: Frank Sae --- .../ethernet/motorcomm/yt6801/yt6801_net.c | 111 ++++++++++++++++++ .../ethernet/motorcomm/yt6801/yt6801_pci.c | 104 ++++++++++++++++ 2 files changed, 215 insertions(+) create mode 100644 drivers/net/ethernet/motorcomm/yt6801/yt6801_pci.c diff --git a/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c b/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c index 7cf4d1581..c54550cd4 100644 --- a/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c +++ b/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c @@ -97,3 +97,114 @@ static int fxgmac_mdio_register(struct fxgmac_pdata *priv) priv->phydev = phydev; return 0; } + +static void fxgmac_phy_release(struct fxgmac_pdata *priv) +{ + FXGMAC_IO_WR_BITS(priv, EPHY_CTRL, RESET, 1); + fsleep(100); +} + +void fxgmac_phy_reset(struct fxgmac_pdata *priv) +{ + FXGMAC_IO_WR_BITS(priv, EPHY_CTRL, RESET, 0); + fsleep(1500); +} + +#ifdef CONFIG_PCI_MSI +static void fxgmac_init_interrupt_scheme(struct fxgmac_pdata *priv) +{ + struct pci_dev *pdev = to_pci_dev(priv->dev); + int req_vectors = FXGMAC_MAX_DMA_CHANNELS; + + /* Since we have FXGMAC_MAX_DMA_CHANNELS channels, we must + * ensure the number of CPU cores is sufficient; otherwise, roll back to legacy.
+ */ + if (num_online_cpus() < FXGMAC_MAX_DMA_CHANNELS - 1) + goto enable_msi_interrupt; + + priv->msix_entries = + kcalloc(req_vectors, sizeof(struct msix_entry), GFP_KERNEL); + if (!priv->msix_entries) + goto enable_msi_interrupt; + + for (u32 i = 0; i < req_vectors; i++) + priv->msix_entries[i].entry = i; + + if (pci_enable_msix_exact(pdev, priv->msix_entries, req_vectors) < 0) { + /* Roll back to msi */ + kfree(priv->msix_entries); + priv->msix_entries = NULL; + yt_err(priv, "enable MSIx err, clear msix entries.\n"); + goto enable_msi_interrupt; + } + + FXGMAC_SET_BITS(priv->int_flag, INT_FLAG, INTERRUPT, BIT(INT_FLAG_MSIX_POS)); + priv->per_channel_irq = 1; + return; + +enable_msi_interrupt: + if (pci_enable_msi(pdev) < 0) { + FXGMAC_SET_BITS(priv->int_flag, INT_FLAG, INTERRUPT, BIT(INT_FLAG_LEGACY_POS)); + yt_err(priv, "MSI err, rollback to LEGACY.\n"); + } else { + FXGMAC_SET_BITS(priv->int_flag, INT_FLAG, INTERRUPT, BIT(INT_FLAG_MSI_POS)); + priv->dev_irq = pdev->irq; + } +} +#endif + +int fxgmac_drv_probe(struct device *dev, struct fxgmac_resources *res) +{ + struct fxgmac_pdata *priv; + struct net_device *netdev; + int ret; + + netdev = alloc_etherdev_mq(sizeof(struct fxgmac_pdata), + FXGMAC_MAX_DMA_RX_CHANNELS); + if (!netdev) + return -ENOMEM; + + SET_NETDEV_DEV(netdev, dev); + priv = netdev_priv(netdev); + + priv->dev = dev; + priv->netdev = netdev; + priv->dev_irq = res->irq; + priv->hw_addr = res->addr; + priv->msg_enable = NETIF_MSG_DRV; + priv->dev_state = FXGMAC_DEV_PROBE; + + /* Default to legacy interrupt */ + FXGMAC_SET_BITS(priv->int_flag, INT_FLAG, INTERRUPT, BIT(INT_FLAG_LEGACY_POS)); + pci_set_drvdata(to_pci_dev(priv->dev), priv); + + if (IS_ENABLED(CONFIG_PCI_MSI)) + fxgmac_init_interrupt_scheme(priv); + + ret = fxgmac_init(priv, true); + if (ret < 0) { + yt_err(priv, "fxgmac_init err:%d\n", ret); + goto err_free_netdev; + } + + fxgmac_phy_reset(priv); + fxgmac_phy_release(priv); + ret = fxgmac_mdio_register(priv); + if (ret < 0) { + 
yt_err(priv, "fxgmac_mdio_register err:%d\n", ret); + goto err_free_netdev; + } + + netif_carrier_off(netdev); + ret = register_netdev(netdev); + if (ret) { + yt_err(priv, "register_netdev err:%d\n", ret); + goto err_free_netdev; + } + + return 0; + +err_free_netdev: + free_netdev(netdev); + return ret; +} diff --git a/drivers/net/ethernet/motorcomm/yt6801/yt6801_pci.c b/drivers/net/ethernet/motorcomm/yt6801/yt6801_pci.c new file mode 100644 index 000000000..1b80ae15a --- /dev/null +++ b/drivers/net/ethernet/motorcomm/yt6801/yt6801_pci.c @@ -0,0 +1,104 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* Copyright (c) 2022 - 2024 Motorcomm Electronic Technology Co.,Ltd. + * + * Below is a simplified block diagram of YT6801 chip and its relevant + * interfaces. + * || + * ********************++********************** + * * | PCIE Endpoint | * + * * +---------------+ * + * * | GMAC | * + * * +--++--+ * + * * |**| * + * * GMII --> |**| <-- MDIO * + * * +-++--+ * + * * | Integrated PHY | YT8531S * + * * +-++-+ * + * ********************||******************* ** + */ + +#include +#include + +#ifdef CONFIG_PCI_MSI +#include +#endif + +#include "yt6801.h" + +static int fxgmac_probe(struct pci_dev *pcidev, const struct pci_device_id *id) +{ + struct device *dev = &pcidev->dev; + struct fxgmac_resources res; + int i, ret; + + ret = pcim_enable_device(pcidev); + if (ret) { + dev_err(dev, "%s pcim_enable_device err:%d\n", __func__, ret); + return ret; + } + + for (i = 0; i < PCI_STD_NUM_BARS; i++) { + if (pci_resource_len(pcidev, i) == 0) + continue; + + ret = pcim_iomap_regions(pcidev, BIT(i), FXGMAC_DRV_NAME); + if (ret) { + dev_err(dev, "%s, pcim_iomap_regions err:%d\n", + __func__, ret); + return ret; + } + break; + } + + pci_set_master(pcidev); + + memset(&res, 0, sizeof(res)); + res.irq = pcidev->irq; + res.addr = pcim_iomap_table(pcidev)[i]; + + return fxgmac_drv_probe(&pcidev->dev, &res); +} + +static void fxgmac_remove(struct pci_dev *pcidev) +{ + struct fxgmac_pdata *priv = 
dev_get_drvdata(&pcidev->dev); + struct net_device *netdev = priv->netdev; + struct device *dev = &pcidev->dev; + + unregister_netdev(netdev); + fxgmac_phy_reset(priv); + free_netdev(netdev); + + if (IS_ENABLED(CONFIG_PCI_MSI) && + FXGMAC_GET_BITS(priv->int_flag, INT_FLAG, MSIX)) { + pci_disable_msix(pcidev); + kfree(priv->msix_entries); + priv->msix_entries = NULL; + } + + dev_dbg(dev, "%s has been removed\n", netdev->name); +} + +#define MOTORCOMM_PCI_ID 0x1f0a +#define YT6801_PCI_DEVICE_ID 0x6801 + +static const struct pci_device_id fxgmac_pci_tbl[] = { + { PCI_DEVICE(MOTORCOMM_PCI_ID, YT6801_PCI_DEVICE_ID) }, + { 0 } +}; + +MODULE_DEVICE_TABLE(pci, fxgmac_pci_tbl); + +static struct pci_driver fxgmac_pci_driver = { + .name = FXGMAC_DRV_NAME, + .id_table = fxgmac_pci_tbl, + .probe = fxgmac_probe, + .remove = fxgmac_remove, +}; + +module_pci_driver(fxgmac_pci_driver); + +MODULE_AUTHOR("Motorcomm Electronic Tech. Co., Ltd."); +MODULE_DESCRIPTION(FXGMAC_DRV_DESC); +MODULE_LICENSE("GPL");

From patchwork Fri Feb 28 10:00:09 2025
X-Patchwork-Submitter: Frank Sae
X-Patchwork-Id: 13996149
X-Patchwork-Delegate: kuba@kernel.org
Subject: [PATCH net-next v3 03/14] motorcomm:yt6801: Implement pci_driver shutdown
Date: Fri, 28 Feb 2025 18:00:09 +0800
Message-Id: <20250228100020.3944-4-Frank.Sae@motor-comm.com>
In-Reply-To: <20250228100020.3944-1-Frank.Sae@motor-comm.com>
References: <20250228100020.3944-1-Frank.Sae@motor-comm.com>

Implement the pci_driver shutdown function to shut down this driver. Implement the fxgmac_net_powerdown function to stop tx, disable tx, disable rx, config powerdown, free rx data and free tx data.
Signed-off-by: Frank Sae --- .../ethernet/motorcomm/yt6801/yt6801_desc.c | 50 +++ .../ethernet/motorcomm/yt6801/yt6801_desc.h | 35 ++ .../ethernet/motorcomm/yt6801/yt6801_net.c | 301 ++++++++++++++++++ .../ethernet/motorcomm/yt6801/yt6801_pci.c | 24 ++ 4 files changed, 410 insertions(+) create mode 100644 drivers/net/ethernet/motorcomm/yt6801/yt6801_desc.c create mode 100644 drivers/net/ethernet/motorcomm/yt6801/yt6801_desc.h diff --git a/drivers/net/ethernet/motorcomm/yt6801/yt6801_desc.c b/drivers/net/ethernet/motorcomm/yt6801/yt6801_desc.c new file mode 100644 index 000000000..3ff5eff11 --- /dev/null +++ b/drivers/net/ethernet/motorcomm/yt6801/yt6801_desc.c @@ -0,0 +1,50 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* Copyright (c) 2022 - 2024 Motorcomm Electronic Technology Co.,Ltd. */ + +#include "yt6801.h" +#include "yt6801_desc.h" + +void fxgmac_desc_data_unmap(struct fxgmac_pdata *priv, + struct fxgmac_desc_data *desc_data) +{ + if (desc_data->skb_dma) { + if (desc_data->mapped_as_page) { + dma_unmap_page(priv->dev, desc_data->skb_dma, + desc_data->skb_dma_len, DMA_TO_DEVICE); + } else { + dma_unmap_single(priv->dev, desc_data->skb_dma, + desc_data->skb_dma_len, DMA_TO_DEVICE); + } + desc_data->skb_dma = 0; + desc_data->skb_dma_len = 0; + } + + if (desc_data->skb) { + dev_kfree_skb_any(desc_data->skb); + desc_data->skb = NULL; + } + + if (desc_data->rx.hdr.pa.pages) + put_page(desc_data->rx.hdr.pa.pages); + + if (desc_data->rx.hdr.pa_unmap.pages) { + dma_unmap_page(priv->dev, desc_data->rx.hdr.pa_unmap.pages_dma, + desc_data->rx.hdr.pa_unmap.pages_len, + DMA_FROM_DEVICE); + put_page(desc_data->rx.hdr.pa_unmap.pages); + } + + if (desc_data->rx.buf.pa.pages) + put_page(desc_data->rx.buf.pa.pages); + + if (desc_data->rx.buf.pa_unmap.pages) { + dma_unmap_page(priv->dev, desc_data->rx.buf.pa_unmap.pages_dma, + desc_data->rx.buf.pa_unmap.pages_len, + DMA_FROM_DEVICE); + put_page(desc_data->rx.buf.pa_unmap.pages); + } + memset(&desc_data->tx, 0, 
sizeof(desc_data->tx)); + memset(&desc_data->rx, 0, sizeof(desc_data->rx)); + + desc_data->mapped_as_page = 0; +} diff --git a/drivers/net/ethernet/motorcomm/yt6801/yt6801_desc.h b/drivers/net/ethernet/motorcomm/yt6801/yt6801_desc.h new file mode 100644 index 000000000..b238f20be --- /dev/null +++ b/drivers/net/ethernet/motorcomm/yt6801/yt6801_desc.h @@ -0,0 +1,35 @@ +/* SPDX-License-Identifier: GPL-2.0+ */ +/* Copyright (c) 2022 - 2024 Motorcomm Electronic Technology Co.,Ltd. */ + +#ifndef YT6801_DESC_H +#define YT6801_DESC_H + +#define FXGMAC_TX_DESC_CNT 256 +#define FXGMAC_TX_DESC_MIN_FREE (FXGMAC_TX_DESC_CNT >> 3) +#define FXGMAC_TX_DESC_MAX_PROC (FXGMAC_TX_DESC_CNT >> 1) +#define FXGMAC_RX_DESC_CNT 1024 +#define FXGMAC_RX_DESC_MAX_DIRTY (FXGMAC_RX_DESC_CNT >> 3) + +#define FXGMAC_GET_DESC_DATA(ring, idx) ((ring)->desc_data_head + (idx)) +#define FXGMAC_GET_ENTRY(x, size) (((x) + 1) & ((size) - 1)) + +void fxgmac_desc_tx_reset(struct fxgmac_desc_data *desc_data); +void fxgmac_desc_rx_reset(struct fxgmac_desc_data *desc_data); +void fxgmac_desc_data_unmap(struct fxgmac_pdata *priv, + struct fxgmac_desc_data *desc_data); + +int fxgmac_channels_rings_alloc(struct fxgmac_pdata *priv); +void fxgmac_channels_rings_free(struct fxgmac_pdata *priv); +int fxgmac_tx_skb_map(struct fxgmac_channel *channel, struct sk_buff *skb); +int fxgmac_rx_buffe_map(struct fxgmac_pdata *priv, struct fxgmac_ring *ring, + struct fxgmac_desc_data *desc_data); +void fxgmac_dump_tx_desc(struct fxgmac_pdata *priv, struct fxgmac_ring *ring, + unsigned int idx, unsigned int count, + unsigned int flag); +void fxgmac_dump_rx_desc(struct fxgmac_pdata *priv, struct fxgmac_ring *ring, + unsigned int idx); + +int fxgmac_is_tx_complete(struct fxgmac_dma_desc *dma_desc); +int fxgmac_is_last_desc(struct fxgmac_dma_desc *dma_desc); + +#endif /* YT6801_DESC_H */ diff --git a/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c b/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c index c54550cd4..7d557f6b0 
100644 --- a/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c +++ b/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c @@ -9,6 +9,7 @@ #include #include "yt6801.h" +#include "yt6801_desc.h" #define PHY_WR_CONFIG(reg_offset) (0x8000205 + ((reg_offset) * 0x10000)) static int fxgmac_phy_write_reg(struct fxgmac_pdata *priv, u32 reg_id, u32 data) @@ -98,6 +99,229 @@ static int fxgmac_mdio_register(struct fxgmac_pdata *priv) return 0; } +static void fxgmac_disable_mgm_irq(struct fxgmac_pdata *priv) +{ + FXGMAC_IO_WR_BITS(priv, MGMT_INT_CTRL0, INT_MASK, + MGMT_INT_CTRL0_INT_MASK_MASK); +} + +static void napi_disable_del(struct fxgmac_pdata *priv, struct napi_struct *n, + u32 flag_pos) +{ + napi_disable(n); + netif_napi_del(n); + SET_BITS(priv->int_flag, flag_pos, 1, 0); /* set flag_pos bit to 0 */ +} + +static void fxgmac_napi_disable(struct fxgmac_pdata *priv) +{ + u32 rx = FXGMAC_GET_BITS(priv->int_flag, INT_FLAG, RX_NAPI); + struct fxgmac_channel *channel = priv->channel_head; + + if (!priv->per_channel_irq) { + if (!FXGMAC_GET_BITS(priv->int_flag, INT_FLAG, LEGACY_NAPI)) + return; + + napi_disable_del(priv, &priv->napi, + INT_FLAG_LEGACY_NAPI_POS); + return; + } + + if (FXGMAC_GET_BITS(priv->int_flag, INT_FLAG, TX_NAPI)) + napi_disable_del(priv, &channel->napi_tx, + INT_FLAG_TX_NAPI_POS); + + for (u32 i = 0; i < priv->channel_count; i++, channel++) + if (GET_BITS(rx, i, INT_FLAG_PER_RX_NAPI_LEN)) + napi_disable_del(priv, &channel->napi_rx, + INT_FLAG_RX_NAPI_POS + i); +} + +static void fxgmac_free_irqs(struct fxgmac_pdata *priv) +{ + u32 i, rx = FXGMAC_GET_BITS(priv->int_flag, INT_FLAG, RX_IRQ); + struct fxgmac_channel *channel = priv->channel_head; + + if (!FXGMAC_GET_BITS(priv->int_flag, INT_FLAG, MSIX) && + FXGMAC_GET_BITS(priv->int_flag, INT_FLAG, LEGACY_IRQ)) { + devm_free_irq(priv->dev, priv->dev_irq, priv); + FXGMAC_SET_BITS(priv->int_flag, INT_FLAG, LEGACY_IRQ, 0); + } + + if (!priv->per_channel_irq) + return; + + if (FXGMAC_GET_BITS(priv->int_flag, 
INT_FLAG, TX_IRQ)) { + FXGMAC_SET_BITS(priv->int_flag, INT_FLAG, TX_IRQ, 0); + devm_free_irq(priv->dev, channel->dma_irq_tx, channel); + } + + for (i = 0; i < priv->channel_count; i++, channel++) { + if (GET_BITS(rx, i, INT_FLAG_PER_RX_IRQ_LEN)) { + SET_BITS(priv->int_flag, INT_FLAG_RX_IRQ_POS + i, + INT_FLAG_PER_RX_IRQ_LEN, 0); + devm_free_irq(priv->dev, channel->dma_irq_rx, channel); + } + } +} + +static void fxgmac_free_tx_data(struct fxgmac_pdata *priv) +{ + struct fxgmac_channel *channel = priv->channel_head; + struct fxgmac_ring *ring; + + for (u32 i = 0; i < priv->channel_count; i++, channel++) { + ring = channel->tx_ring; + if (!ring) + break; + + for (u32 j = 0; j < ring->dma_desc_count; j++) + fxgmac_desc_data_unmap(priv, + FXGMAC_GET_DESC_DATA(ring, j)); + } +} + +static void fxgmac_free_rx_data(struct fxgmac_pdata *priv) +{ + struct fxgmac_channel *channel = priv->channel_head; + struct fxgmac_ring *ring; + + for (u32 i = 0; i < priv->channel_count; i++, channel++) { + ring = channel->rx_ring; + if (!ring) + break; + + for (u32 j = 0; j < ring->dma_desc_count; j++) + fxgmac_desc_data_unmap(priv, + FXGMAC_GET_DESC_DATA(ring, j)); + } +} + +static void fxgmac_prepare_tx_stop(struct fxgmac_pdata *priv, + struct fxgmac_channel *channel) +{ + unsigned int tx_q_idx, tx_status; + unsigned int tx_dsr, tx_pos; + unsigned long tx_timeout; + + /* Calculate the status register to read and the position within */ + if (channel->queue_index < DMA_DSRX_FIRST_QUEUE) { + tx_dsr = DMA_DSR0; + tx_pos = (channel->queue_index * DMA_DSR_Q_LEN) + + DMA_DSR0_TPS_START; + } else { + tx_q_idx = channel->queue_index - DMA_DSRX_FIRST_QUEUE; + + tx_dsr = DMA_DSR1 + ((tx_q_idx / DMA_DSRX_QPR) * DMA_DSRX_INC); + tx_pos = ((tx_q_idx % DMA_DSRX_QPR) * DMA_DSR_Q_LEN) + + DMA_DSRX_TPS_START; + } + + /* The Tx engine cannot be stopped if it is actively processing + * descriptors. Wait for the Tx engine to enter the stopped or + * suspended state. 
+ */ + tx_timeout = jiffies + (FXGMAC_DMA_STOP_TIMEOUT * HZ); + + while (time_before(jiffies, tx_timeout)) { + tx_status = FXGMAC_MAC_IO_RD(priv, tx_dsr); + tx_status = GET_BITS(tx_status, tx_pos, DMA_DSR_TPS_LEN); + if (tx_status == DMA_TPS_STOPPED || + tx_status == DMA_TPS_SUSPENDED) + break; + + fsleep(500); + } + + if (!time_before(jiffies, tx_timeout)) + yt_err(priv, + "timed out waiting for Tx DMA channel %u to stop\n", + channel->queue_index); +} + +static void fxgmac_disable_tx(struct fxgmac_pdata *priv) +{ + struct fxgmac_channel *channel = priv->channel_head; + + /* Prepare for Tx DMA channel stop */ + fxgmac_prepare_tx_stop(priv, channel); + + FXGMAC_MAC_IO_WR_BITS(priv, MAC_CR, TE, 0);/* Disable MAC Tx */ + + /* Disable Tx queue */ + FXGMAC_MTL_IO_WR_BITS(priv, 0, MTL_Q_TQOMR, TXQEN, MTL_Q_DISABLED); + + /* Disable Tx DMA channel */ + FXGMAC_DMA_IO_WR_BITS(channel, DMA_CH_TCR, ST, 0); +} + +static void fxgmac_prepare_rx_stop(struct fxgmac_pdata *priv, + unsigned int queue) +{ + unsigned int rx_status, rx_q, rx_q_sts; + unsigned long rx_timeout; + + /* The Rx engine cannot be stopped if it is actively processing + * packets. Wait for the Rx queue to empty the Rx fifo. 
+ */ + rx_timeout = jiffies + (FXGMAC_DMA_STOP_TIMEOUT * HZ); + + while (time_before(jiffies, rx_timeout)) { + rx_status = FXGMAC_MTL_IO_RD(priv, queue, MTL_Q_RQDR); + rx_q = FXGMAC_GET_BITS(rx_status, MTL_Q_RQDR, PRXQ); + rx_q_sts = FXGMAC_GET_BITS(rx_status, MTL_Q_RQDR, RXQSTS); + if (rx_q == 0 && rx_q_sts == 0) + break; + + fsleep(500); + } + + if (!time_before(jiffies, rx_timeout)) + yt_err(priv, "timed out waiting for Rx queue %u to empty\n", + queue); +} + +static void fxgmac_disable_rx(struct fxgmac_pdata *priv) +{ + struct fxgmac_channel *channel = priv->channel_head; + u32 i; + + /* Disable MAC Rx */ + FXGMAC_MAC_IO_WR_BITS(priv, MAC_CR, CST, 0); + FXGMAC_MAC_IO_WR_BITS(priv, MAC_CR, ACS, 0); + FXGMAC_MAC_IO_WR_BITS(priv, MAC_CR, RE, 0); + + /* Prepare for Rx DMA channel stop */ + for (i = 0; i < priv->rx_q_count; i++) + fxgmac_prepare_rx_stop(priv, i); + + FXGMAC_MAC_IO_WR(priv, MAC_RQC0R, 0); /* Disable each Rx queue */ + + /* Disable each Rx DMA channel */ + for (i = 0; i < priv->channel_count; i++, channel++) + FXGMAC_DMA_IO_WR_BITS(channel, DMA_CH_RCR, SR, 0); +} + +/** + * fxgmac_set_oob_wol - disable or enable the OOB WoL ctrl function + * @priv: driver private struct + * @en: 1 to enable, 0 to disable + * + * Description: After OOB_WOL is enabled from efuse, the MAC will loop-check + * the PHY status, which sometimes leads to a panic. So disable it at + * power-up and enable it at power-down. 
+ */ +static void fxgmac_set_oob_wol(struct fxgmac_pdata *priv, unsigned int en) +{ + FXGMAC_IO_WR_BITS(priv, OOB_WOL_CTRL, DIS, !en);/* DIS = 1 disables OOB WoL */ +} + +static void fxgmac_pre_powerdown(struct fxgmac_pdata *priv) +{ + fxgmac_set_oob_wol(priv, 1); + fsleep(2000); +} + static void fxgmac_phy_release(struct fxgmac_pdata *priv) { FXGMAC_IO_WR_BITS(priv, EPHY_CTRL, RESET, 1); fsleep(100); } @@ -110,6 +334,83 @@ void fxgmac_phy_reset(struct fxgmac_pdata *priv) fsleep(1500); } +static void fxgmac_disable_msix_irqs(struct fxgmac_pdata *priv) +{ + for (u32 intid = 0; intid < MSIX_TBL_MAX_NUM; intid++) + fxgmac_disable_msix_one_irq(priv, intid); +} + +static void fxgmac_stop(struct fxgmac_pdata *priv) +{ + struct net_device *netdev = priv->netdev; + struct netdev_queue *txq; + + if (priv->dev_state != FXGMAC_DEV_START) + return; + + priv->dev_state = FXGMAC_DEV_STOP; + + if (priv->per_channel_irq) + fxgmac_disable_msix_irqs(priv); + else + fxgmac_disable_mgm_irq(priv); + + netif_carrier_off(netdev); + netif_tx_stop_all_queues(netdev); + fxgmac_disable_tx(priv); + fxgmac_disable_rx(priv); + fxgmac_free_irqs(priv); + fxgmac_napi_disable(priv); + phy_stop(priv->phydev); + + txq = netdev_get_tx_queue(netdev, priv->channel_head->queue_index); + netdev_tx_reset_queue(txq); +} + +static void fxgmac_config_powerdown(struct fxgmac_pdata *priv) +{ + FXGMAC_MAC_IO_WR_BITS(priv, MAC_CR, RE, 1); /* Enable MAC Rx */ + FXGMAC_MAC_IO_WR_BITS(priv, MAC_CR, TE, 1); /* Enable MAC Tx */ + + /* Set GMAC power down */ + FXGMAC_MAC_IO_WR_BITS(priv, MAC_PMT_STA, PWRDWN, 1); +} + +int fxgmac_net_powerdown(struct fxgmac_pdata *priv) +{ + struct net_device *netdev = priv->netdev; + + /* Signal that we are down to the interrupt handler */ + if (__test_and_set_bit(FXGMAC_POWER_STATE_DOWN, &priv->powerstate)) + return 0; /* do nothing if already down */ + + __clear_bit(FXGMAC_POWER_STATE_UP, &priv->powerstate); + netif_tx_stop_all_queues(netdev); /* Shut off incoming Tx traffic */ + + /* Call carrier off 
first to avoid false dev_watchdog timeouts */ + netif_carrier_off(netdev); + netif_tx_disable(netdev); + fxgmac_disable_rx(priv); + + /* synchronize_rcu() needed for pending XDP buffers to drain */ + synchronize_rcu(); + + fxgmac_stop(priv); + fxgmac_pre_powerdown(priv); + + if (!test_bit(FXGMAC_POWER_STATE_DOWN, &priv->powerstate)) + yt_err(priv, + "fxgmac powerstate is %lu when configuring power down.\n", + priv->powerstate); + + /* Set MAC to low-power mode */ + fxgmac_config_powerdown(priv); + fxgmac_free_tx_data(priv); + fxgmac_free_rx_data(priv); + + return 0; +} + #ifdef CONFIG_PCI_MSI static void fxgmac_init_interrupt_scheme(struct fxgmac_pdata *priv) { diff --git a/drivers/net/ethernet/motorcomm/yt6801/yt6801_pci.c b/drivers/net/ethernet/motorcomm/yt6801/yt6801_pci.c index 1b80ae15a..fba01e393 100644 --- a/drivers/net/ethernet/motorcomm/yt6801/yt6801_pci.c +++ b/drivers/net/ethernet/motorcomm/yt6801/yt6801_pci.c @@ -80,6 +80,29 @@ static void fxgmac_remove(struct pci_dev *pcidev) dev_dbg(dev, "%s has been removed\n", netdev->name); } +static void __fxgmac_shutdown(struct pci_dev *pcidev) +{ + struct fxgmac_pdata *priv = dev_get_drvdata(&pcidev->dev); + struct net_device *netdev = priv->netdev; + + rtnl_lock(); + fxgmac_net_powerdown(priv); + netif_device_detach(netdev); + rtnl_unlock(); +} + +static void fxgmac_shutdown(struct pci_dev *pcidev) +{ + struct fxgmac_pdata *priv = dev_get_drvdata(&pcidev->dev); + + mutex_lock(&priv->mutex); + __fxgmac_shutdown(pcidev); + if (system_state == SYSTEM_POWER_OFF) { + pci_wake_from_d3(pcidev, false); + pci_set_power_state(pcidev, PCI_D3hot); + } + mutex_unlock(&priv->mutex); +} #define MOTORCOMM_PCI_ID 0x1f0a #define YT6801_PCI_DEVICE_ID 0x6801 @@ -95,6 +118,7 @@ static struct pci_driver fxgmac_pci_driver = { .id_table = fxgmac_pci_tbl, .probe = fxgmac_probe, .remove = fxgmac_remove, + .shutdown = fxgmac_shutdown, }; module_pci_driver(fxgmac_pci_driver); From patchwork Fri Feb 28 10:01:11 2025 Content-Type: text/plain; 
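fxgmac_prepare_tx_stop() and fxgmac_prepare_rx_stop() in patch 01 above share one idiom: poll a status register until the wanted state appears or a deadline passes, sleeping between reads. A hedged userspace sketch of that pattern — the callback stands in for the driver's MMIO reads and the attempt counter for the jiffies deadline; none of these names belong to the driver:

```c
#include <assert.h>
#include <stdbool.h>

/* Poll until read_status() returns 'wanted' or max_polls attempts elapse.
 * In the driver the loop bound is a jiffies deadline and each iteration
 * sleeps with fsleep(500); a simple attempt counter stands in here.
 */
static bool poll_for_state(unsigned int (*read_status)(void *ctx), void *ctx,
			   unsigned int wanted, unsigned int max_polls)
{
	for (unsigned int i = 0; i < max_polls; i++) {
		if (read_status(ctx) == wanted)
			return true;
		/* fsleep(500) would go here in kernel context */
	}
	return false; /* caller logs "timed out waiting for ..." */
}

/* Fake "register" for demonstration: reads 1 (busy) a few times, then 0. */
static unsigned int fake_status(void *ctx)
{
	unsigned int *reads_left = ctx;

	return (*reads_left)-- ? 1 : 0;
}
```

The early `break`/return on success keeps the common case fast, while the bounded loop guarantees the stop path cannot hang if the DMA engine never reaches the stopped or suspended state.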
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Frank Sae X-Patchwork-Id: 13996155 X-Patchwork-Delegate: kuba@kernel.org Received: by smtp.aliyun-inc.com; Fri, 28 Feb 2025 18:00:34 +0800 From: Frank Sae To: Jakub Kicinski , Paolo Abeni , Andrew Lunn , Heiner Kallweit , Russell King , "David S . 
Miller" , Eric Dumazet , Frank , netdev@vger.kernel.org Cc: Masahiro Yamada , Parthiban.Veerasooran@microchip.com, linux-kernel@vger.kernel.org, xiaogang.fan@motor-comm.com, fei.zhang@motor-comm.com, hua.sun@motor-comm.com Subject: [PATCH net-next v3 04/14] motorcomm:yt6801: Implement the fxgmac_init function Date: Fri, 28 Feb 2025 18:01:11 +0800 Message-Id: <20250228100020.3944-5-Frank.Sae@motor-comm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20250228100020.3944-1-Frank.Sae@motor-comm.com> References: <20250228100020.3944-1-Frank.Sae@motor-comm.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Implement fxgmac_init to initialize hardware settings, including setting function pointers, default configuration data, irq, base_addr, MAC address, DMA mask, device operations and device features. Implement the fxgmac_read_mac_addr function to read the MAC address from efuse. Signed-off-by: Frank Sae --- .../ethernet/motorcomm/yt6801/yt6801_net.c | 423 ++++++++++++++++++ 1 file changed, 423 insertions(+) diff --git a/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c b/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c index 7d557f6b0..350510174 100644 --- a/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c +++ b/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c @@ -302,6 +302,12 @@ static void fxgmac_disable_rx(struct fxgmac_pdata *priv) FXGMAC_DMA_IO_WR_BITS(channel, DMA_CH_RCR, SR, 0); } +static void fxgmac_default_speed_duplex_config(struct fxgmac_pdata *priv) +{ + priv->mac_duplex = DUPLEX_FULL; + priv->mac_speed = SPEED_1000; +} + /** * fxgmac_set_oob_wol - disable or enable the OOB WoL ctrl function * @priv: driver private struct @@ -322,12 +328,30 @@ static void fxgmac_pre_powerdown(struct fxgmac_pdata *priv) fsleep(2000); } +static void fxgmac_restore_nonstick_reg(struct fxgmac_pdata *priv) +{ + for (u32 i = GLOBAL_CTRL0; i < MSI_PBA; i += 4) + 
FXGMAC_IO_WR(priv, i, + priv->reg_nonstick[(i - GLOBAL_CTRL0) >> 2]); +} + static void fxgmac_phy_release(struct fxgmac_pdata *priv) { FXGMAC_IO_WR_BITS(priv, EPHY_CTRL, RESET, 1); fsleep(100); } +static void fxgmac_hw_exit(struct fxgmac_pdata *priv) +{ + /* Reset the chip; this resets the trigger circuit and reloads the efuse patch */ + FXGMAC_IO_WR_BITS(priv, SYS_RESET, RESET, 1); + fsleep(9000); + + fxgmac_phy_release(priv); + + /* Reset will clear nonstick registers. */ + fxgmac_restore_nonstick_reg(priv); +} void fxgmac_phy_reset(struct fxgmac_pdata *priv) { FXGMAC_IO_WR_BITS(priv, EPHY_CTRL, RESET, 0); @@ -411,6 +435,405 @@ int fxgmac_net_powerdown(struct fxgmac_pdata *priv) return 0; } +#define EFUSE_FISRT_UPDATE_ADDR 255 +#define EFUSE_SECOND_UPDATE_ADDR 209 +#define EFUSE_MAX_ENTRY 39 +#define EFUSE_PATCH_ADDR_START 0 +#define EFUSE_PATCH_DATA_START 2 +#define EFUSE_PATCH_SIZE 6 +#define EFUSE_REGION_A_B_LENGTH 18 + +static bool fxgmac_efuse_read_data(struct fxgmac_pdata *priv, u32 offset, + u8 *value) +{ + u32 val = 0, wait = 1000; + bool ret = false; + + FXGMAC_SET_BITS(val, EFUSE_OP, ADDR, offset); + FXGMAC_SET_BITS(val, EFUSE_OP, START, 1); + FXGMAC_SET_BITS(val, EFUSE_OP, MODE, EFUSE_OP_MODE_ROW_READ); + FXGMAC_IO_WR(priv, EFUSE_OP_CTRL_0, val); + + while (wait--) { + fsleep(20); + val = FXGMAC_IO_RD(priv, EFUSE_OP_CTRL_1); + if (FXGMAC_GET_BITS(val, EFUSE_OP, DONE)) { + ret = true; + break; + } + } + + if (!ret) { + yt_err(priv, "Failed to read efuse Byte%d\n", offset); + return ret; + } + + if (value) + *value = FXGMAC_GET_BITS(val, EFUSE_OP, RD_DATA) & 0xff; + + return ret; +} + +static bool fxgmac_efuse_read_index_patch(struct fxgmac_pdata *priv, u8 index, + u32 *offset, u32 *value) +{ + u8 tmp[EFUSE_PATCH_SIZE - EFUSE_PATCH_DATA_START]; + u32 addr, i; + bool ret; + + if (index >= EFUSE_MAX_ENTRY) { + yt_err(priv, "Reading efuse out of range, index %d\n", index); + return false; + } + + for (i = EFUSE_PATCH_ADDR_START; i < EFUSE_PATCH_DATA_START; i++) { + 
addr = EFUSE_REGION_A_B_LENGTH + index * EFUSE_PATCH_SIZE + i; + ret = fxgmac_efuse_read_data(priv, addr, + tmp + i - EFUSE_PATCH_ADDR_START); + if (!ret) { + yt_err(priv, "Failed to read efuse Byte%d\n", addr); + return ret; + } + } + /* tmp[0] is the low 8-bit data, tmp[1] is the high 8-bit data */ + if (offset) + *offset = tmp[0] | (tmp[1] << 8); + + for (i = EFUSE_PATCH_DATA_START; i < EFUSE_PATCH_SIZE; i++) { + addr = EFUSE_REGION_A_B_LENGTH + index * EFUSE_PATCH_SIZE + i; + ret = fxgmac_efuse_read_data(priv, addr, + tmp + i - EFUSE_PATCH_DATA_START); + if (!ret) { + yt_err(priv, "Failed to read efuse Byte%d\n", addr); + return ret; + } + } + /* tmp[0] is the lowest 8-bit data, tmp[1] the next 8-bit data, + * ... tmp[3] is the highest 8-bit data + */ + if (value) + *value = tmp[0] | (tmp[1] << 8) | (tmp[2] << 16) | + (tmp[3] << 24); + + return ret; +} + +static bool fxgmac_efuse_read_mac_subsys(struct fxgmac_pdata *priv, + u8 *mac_addr, u32 *subsys, u32 *revid) +{ + u32 machr = 0, maclr = 0, offset = 0, val = 0; + + for (u8 index = 0; index < EFUSE_MAX_ENTRY; index++) { + if (!fxgmac_efuse_read_index_patch(priv, index, &offset, &val)) + return false; + + if (offset == 0x00) + break; /* Reached the blank area. */ + if (offset == MACA0LR_FROM_EFUSE) + maclr = val; + if (offset == MACA0HR_FROM_EFUSE) + machr = val; + if (offset == PCI_REVISION_ID && revid) + *revid = val; + if (offset == PCI_SUBSYSTEM_VENDOR_ID && subsys) + *subsys = val; + } + + if (mac_addr) { + mac_addr[5] = (u8)(maclr & 0xFF); + mac_addr[4] = (u8)((maclr >> 8) & 0xFF); + mac_addr[3] = (u8)((maclr >> 16) & 0xFF); + mac_addr[2] = (u8)((maclr >> 24) & 0xFF); + mac_addr[1] = (u8)(machr & 0xFF); + mac_addr[0] = (u8)((machr >> 8) & 0xFF); + } + + return true; +} + +static int fxgmac_read_mac_addr(struct fxgmac_pdata *priv) +{ + u8 default_addr[ETH_ALEN] = { 0, 0x55, 0x7b, 0xb5, 0x7d, 0xf7 }; + struct net_device *netdev = priv->netdev; + int ret; + + /* If the efuse has a MAC address, use it; if not, use the static MAC address. 
*/ + ret = fxgmac_efuse_read_mac_subsys(priv, priv->mac_addr, NULL, NULL); + if (!ret) + return -1; + + if (is_zero_ether_addr(priv->mac_addr)) + /* Use a static mac address for test */ + memcpy(priv->mac_addr, default_addr, netdev->addr_len); + + return 0; +} + +static void fxgmac_default_config(struct fxgmac_pdata *priv) +{ + priv->sysclk_rate = 125000000; /* System clock is 125 MHz */ + priv->tx_threshold = MTL_TX_THRESHOLD_128; + priv->rx_threshold = MTL_RX_THRESHOLD_128; + priv->tx_osp_mode = DMA_OSP_ENABLE; + priv->tx_sf_mode = MTL_TSF_ENABLE; + priv->rx_sf_mode = MTL_RSF_ENABLE; + priv->pblx8 = DMA_PBL_X8_ENABLE; + priv->tx_pbl = DMA_PBL_16; + priv->rx_pbl = DMA_PBL_4; + priv->tx_pause = 1; /* Enable tx pause */ + priv->rx_pause = 1; /* Enable rx pause */ + + fxgmac_default_speed_duplex_config(priv); +} + +static void fxgmac_get_all_hw_features(struct fxgmac_pdata *priv) +{ + struct fxgmac_hw_features *hw_feat = &priv->hw_feat; + unsigned int mac_hfr0, mac_hfr1, mac_hfr2, mac_hfr3; + + mac_hfr0 = FXGMAC_MAC_IO_RD(priv, MAC_HWF0R); + mac_hfr1 = FXGMAC_MAC_IO_RD(priv, MAC_HWF1R); + mac_hfr2 = FXGMAC_MAC_IO_RD(priv, MAC_HWF2R); + mac_hfr3 = FXGMAC_MAC_IO_RD(priv, MAC_HWF3R); + memset(hw_feat, 0, sizeof(*hw_feat)); + hw_feat->version = FXGMAC_MAC_IO_RD(priv, MAC_VR); + + /* Hardware feature register 0 */ + hw_feat->phyifsel = FXGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, ACTPHYIFSEL); + hw_feat->vlhash = FXGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, VLHASH); + hw_feat->sma = FXGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, SMASEL); + hw_feat->rwk = FXGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, RWKSEL); + hw_feat->mgk = FXGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, MGKSEL); + hw_feat->mmc = FXGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, MMCSEL); + hw_feat->aoe = FXGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, ARPOFFSEL); + hw_feat->ts = FXGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, TSSEL); + hw_feat->eee = FXGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, EEESEL); + hw_feat->tx_coe = FXGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, TXCOESEL); + hw_feat->rx_coe = 
FXGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, RXCOESEL); + hw_feat->addn_mac = FXGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, ADDMACADRSEL); + hw_feat->ts_src = FXGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, TSSTSSEL); + hw_feat->sa_vlan_ins = FXGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, SAVLANINS); + + /* Hardware feature register 1 */ + hw_feat->rx_fifo_size = + FXGMAC_GET_BITS(mac_hfr1, MAC_HWF1R, RXFIFOSIZE); + hw_feat->tx_fifo_size = + FXGMAC_GET_BITS(mac_hfr1, MAC_HWF1R, TXFIFOSIZE); + hw_feat->adv_ts_hi = FXGMAC_GET_BITS(mac_hfr1, MAC_HWF1R, ADVTHWORD); + hw_feat->dma_width = FXGMAC_GET_BITS(mac_hfr1, MAC_HWF1R, ADDR64); + hw_feat->dcb = FXGMAC_GET_BITS(mac_hfr1, MAC_HWF1R, DCBEN); + hw_feat->sph = FXGMAC_GET_BITS(mac_hfr1, MAC_HWF1R, SPHEN); + hw_feat->tso = FXGMAC_GET_BITS(mac_hfr1, MAC_HWF1R, TSOEN); + hw_feat->dma_debug = FXGMAC_GET_BITS(mac_hfr1, MAC_HWF1R, DBGMEMA); + hw_feat->avsel = FXGMAC_GET_BITS(mac_hfr1, MAC_HWF1R, RAVSEL); + hw_feat->ravsel = FXGMAC_GET_BITS(mac_hfr1, MAC_HWF1R, RAVSEL); + hw_feat->hash_table_size = + FXGMAC_GET_BITS(mac_hfr1, MAC_HWF1R, HASHTBLSZ); + hw_feat->l3l4_filter_num = + FXGMAC_GET_BITS(mac_hfr1, MAC_HWF1R, L3L4FNUM); + hw_feat->tx_q_cnt = FXGMAC_GET_BITS(mac_hfr2, MAC_HWF2R, TXQCNT); + hw_feat->rx_ch_cnt = FXGMAC_GET_BITS(mac_hfr2, MAC_HWF2R, RXCHCNT); + hw_feat->tx_ch_cnt = FXGMAC_GET_BITS(mac_hfr2, MAC_HWF2R, TXCHCNT); + hw_feat->pps_out_num = FXGMAC_GET_BITS(mac_hfr2, MAC_HWF2R, PPSOUTNUM); + hw_feat->aux_snap_num = + FXGMAC_GET_BITS(mac_hfr2, MAC_HWF2R, AUXSNAPNUM); + + /* Translate the Hash Table size into actual number */ + switch (hw_feat->hash_table_size) { + case 0: + break; + case 1: + hw_feat->hash_table_size = 64; + break; + case 2: + hw_feat->hash_table_size = 128; + break; + case 3: + hw_feat->hash_table_size = 256; + break; + } + + /* Translate the address width setting into actual number */ + switch (hw_feat->dma_width) { + case 0: + hw_feat->dma_width = 32; + break; + case 1: + hw_feat->dma_width = 40; + break; + case 2: + 
hw_feat->dma_width = 48; + break; + default: + hw_feat->dma_width = 32; + } + + /* The queue and channel counts are zero based, so increment them + * to get the actual numbers + */ + hw_feat->tx_q_cnt++; + hw_feat->rx_ch_cnt++; + hw_feat->tx_ch_cnt++; + + /* The HW implements one Rx FIFO and 4 DMA channels, but the software + * sees 4 logical queues, so hardcode 4 queues. + */ + hw_feat->rx_q_cnt = 4; + + hw_feat->hwfr3 = mac_hfr3; +} + +static unsigned int fxgmac_usec_to_riwt(struct fxgmac_pdata *priv, + unsigned int usec) +{ + /* Convert the input usec value to the watchdog timer value. Each + * watchdog timer unit is equivalent to 256 clock cycles. + * Calculate the required value as: + * (usec * (system_clock_hz / 10^6)) / 256 + */ + return (usec * (priv->sysclk_rate / 1000000)) / 256; +} + +static void fxgmac_save_nonstick_reg(struct fxgmac_pdata *priv) +{ + for (u32 i = GLOBAL_CTRL0; i < MSI_PBA; i += 4) { + priv->reg_nonstick[(i - GLOBAL_CTRL0) >> 2] = + FXGMAC_IO_RD(priv, i); + } +} + +static int fxgmac_init(struct fxgmac_pdata *priv, bool save_private_reg) +{ + struct net_device *netdev = priv->netdev; + int ret; + + fxgmac_default_config(priv); /* Set default configuration data */ + netdev->irq = priv->dev_irq; + netdev->base_addr = (unsigned long)priv->hw_addr; + + ret = fxgmac_read_mac_addr(priv); + if (ret) { + yt_err(priv, "fxgmac_read_mac_addr err:%d\n", ret); + return ret; + } + eth_hw_addr_set(netdev, priv->mac_addr); + + if (save_private_reg) + fxgmac_save_nonstick_reg(priv); + + fxgmac_hw_exit(priv); /* Reset here to get hw features correctly */ + fxgmac_get_all_hw_features(priv); + + /* Set the DMA mask */ + ret = dma_set_mask_and_coherent(priv->dev, + DMA_BIT_MASK(priv->hw_feat.dma_width)); + if (ret) { + ret = dma_set_mask_and_coherent(priv->dev, DMA_BIT_MASK(32)); + if (ret) { + yt_err(priv, "No usable DMA configuration, aborting\n"); + return ret; + } + } + + if (FXGMAC_GET_BITS(priv->int_flag, INT_FLAG, LEGACY)) { + /* We should disable MSI and MSI-X here 
when we use the legacy + * interrupt, for two reasons: + * 1. Exit will restore the MSI and MSI-X config registers, + * which may enable them. + * 2. When a driver that uses MSI-X by default is built into the OS, + * removed with rmmod, and a driver that uses the legacy + * interrupt is then installed, MSI-X may be enabled again by + * default after waking from S4 on some platforms, such as UOS. + */ + pci_disable_msi(to_pci_dev(priv->dev)); + pci_disable_msix(to_pci_dev(priv->dev)); + } + + BUILD_BUG_ON_NOT_POWER_OF_2(FXGMAC_TX_DESC_CNT); + priv->tx_desc_count = FXGMAC_TX_DESC_CNT; + BUILD_BUG_ON_NOT_POWER_OF_2(FXGMAC_RX_DESC_CNT); + priv->rx_desc_count = FXGMAC_RX_DESC_CNT; + + ret = netif_set_real_num_tx_queues(netdev, FXGMAC_TX_1_Q); + if (ret) { + yt_err(priv, "error setting real tx queue count\n"); + return ret; + } + + priv->rx_ring_count = min_t(unsigned int, + netif_get_num_default_rss_queues(), + priv->hw_feat.rx_ch_cnt); + priv->rx_ring_count = min_t(unsigned int, priv->rx_ring_count, + priv->hw_feat.rx_q_cnt); + priv->rx_q_count = priv->rx_ring_count; + ret = netif_set_real_num_rx_queues(netdev, priv->rx_q_count); + if (ret) { + yt_err(priv, "error setting real rx queue count\n"); + return ret; + } + + priv->channel_count = + max_t(unsigned int, FXGMAC_TX_1_RING, priv->rx_ring_count); + + netdev->min_mtu = ETH_MIN_MTU; + netdev->max_mtu = + FXGMAC_JUMBO_PACKET_MTU + (ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN); + + netdev->netdev_ops = fxgmac_get_netdev_ops();/* Set device operations */ + + /* Set device features */ + if (priv->hw_feat.tso) { + netdev->hw_features = NETIF_F_TSO; + netdev->hw_features |= NETIF_F_TSO6; + netdev->hw_features |= NETIF_F_SG; + netdev->hw_features |= NETIF_F_IP_CSUM; + netdev->hw_features |= NETIF_F_IPV6_CSUM; + } else if (priv->hw_feat.tx_coe) { + netdev->hw_features = NETIF_F_IP_CSUM; + netdev->hw_features |= NETIF_F_IPV6_CSUM; + } + + if (priv->hw_feat.rx_coe) { + 
netdev->hw_features |= NETIF_F_RXCSUM; + netdev->hw_features |= NETIF_F_GRO; + } + + netdev->hw_features |= NETIF_F_RXHASH; + netdev->vlan_features |= netdev->hw_features; + netdev->hw_features |= NETIF_F_HW_VLAN_CTAG_RX; + + if (priv->hw_feat.sa_vlan_ins) + netdev->hw_features |= NETIF_F_HW_VLAN_CTAG_TX; + + netdev->features |= netdev->hw_features; + priv->netdev_features = netdev->features; + + netdev->priv_flags |= IFF_UNICAST_FLT; + netdev->watchdog_timeo = msecs_to_jiffies(5000); + +#define NIC_MAX_TCP_OFFLOAD_SIZE 7300 + netif_set_tso_max_size(netdev, NIC_MAX_TCP_OFFLOAD_SIZE); + +/* Default coalescing parameters */ +#define FXGMAC_INIT_DMA_TX_USECS INT_MOD_200_US +#define FXGMAC_INIT_DMA_TX_FRAMES 25 +#define FXGMAC_INIT_DMA_RX_USECS INT_MOD_200_US +#define FXGMAC_INIT_DMA_RX_FRAMES 25 + + /* Tx coalesce parameters initialization */ + priv->tx_usecs = FXGMAC_INIT_DMA_TX_USECS; + priv->tx_frames = FXGMAC_INIT_DMA_TX_FRAMES; + + /* Rx coalesce parameters initialization */ + priv->rx_riwt = fxgmac_usec_to_riwt(priv, FXGMAC_INIT_DMA_RX_USECS); + priv->rx_usecs = FXGMAC_INIT_DMA_RX_USECS; + priv->rx_frames = FXGMAC_INIT_DMA_RX_FRAMES; + + return 0; +} + #ifdef CONFIG_PCI_MSI static void fxgmac_init_interrupt_scheme(struct fxgmac_pdata *priv) { From patchwork Fri Feb 28 10:00:11 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Frank Sae X-Patchwork-Id: 13996102 X-Patchwork-Delegate: kuba@kernel.org Received: from out28-98.mail.aliyun.com (out28-98.mail.aliyun.com [115.124.28.98]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F271E25FA06; Fri, 28 Feb 2025 10:00:44 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=115.124.28.98 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740736848; cv=none; 
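The fxgmac_usec_to_riwt() conversion in patch 04 maps a microsecond value into Rx interrupt watchdog units of 256 system-clock cycles. A standalone version of the arithmetic, using the 125 MHz rate that the driver's fxgmac_default_config() assigns to sysclk_rate:

```c
#include <assert.h>

/* usec -> RIWT units; one unit is 256 system-clock cycles:
 * riwt = (usec * (sysclk_hz / 10^6)) / 256
 */
static unsigned int usec_to_riwt(unsigned int usec, unsigned int sysclk_hz)
{
	return (usec * (sysclk_hz / 1000000)) / 256;
}
```

With the driver defaults (200 us, 125 MHz) this yields 97 watchdog units; the integer division simply truncates the fractional remainder.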
Received: by smtp.aliyun-inc.com; Fri, 28 Feb 2025 18:00:35 +0800 From: Frank Sae To: Jakub Kicinski , Paolo Abeni , Andrew Lunn , Heiner Kallweit , Russell King , "David S . 
Miller" , Eric Dumazet , Frank , netdev@vger.kernel.org Cc: Masahiro Yamada , Parthiban.Veerasooran@microchip.com, linux-kernel@vger.kernel.org, xiaogang.fan@motor-comm.com, fei.zhang@motor-comm.com, hua.sun@motor-comm.com Subject: [PATCH net-next v3 05/14] motorcomm:yt6801: Implement the .ndo_open function Date: Fri, 28 Feb 2025 18:00:11 +0800 Message-Id: <20250228100020.3944-6-Frank.Sae@motor-comm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20250228100020.3944-1-Frank.Sae@motor-comm.com> References: <20250228100020.3944-1-Frank.Sae@motor-comm.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Implement the .ndo_open function to calculate the Rx buffer size and allocate the channels and rings. Signed-off-by: Frank Sae --- .../ethernet/motorcomm/yt6801/yt6801_desc.c | 223 ++++++++++++++++++ .../ethernet/motorcomm/yt6801/yt6801_net.c | 90 +++++++ 2 files changed, 313 insertions(+) diff --git a/drivers/net/ethernet/motorcomm/yt6801/yt6801_desc.c b/drivers/net/ethernet/motorcomm/yt6801/yt6801_desc.c index 3ff5eff11..74a0bec45 100644 --- a/drivers/net/ethernet/motorcomm/yt6801/yt6801_desc.c +++ b/drivers/net/ethernet/motorcomm/yt6801/yt6801_desc.c @@ -48,3 +48,226 @@ void fxgmac_desc_data_unmap(struct fxgmac_pdata *priv, desc_data->mapped_as_page = 0; } + +static int fxgmac_ring_init(struct fxgmac_pdata *priv, struct fxgmac_ring *ring, + unsigned int dma_desc_count) +{ + /* Descriptors */ + ring->dma_desc_count = dma_desc_count; + ring->dma_desc_head = + dma_alloc_coherent(priv->dev, (sizeof(struct fxgmac_dma_desc) * + dma_desc_count), + &ring->dma_desc_head_addr, GFP_KERNEL); + if (!ring->dma_desc_head) + return -ENOMEM; + + /* Array of descriptor data */ + ring->desc_data_head = kcalloc(dma_desc_count, + sizeof(struct fxgmac_desc_data), + GFP_KERNEL); + if (!ring->desc_data_head) + return -ENOMEM; + + return 0; +} + +static void fxgmac_ring_free(struct 
fxgmac_pdata *priv, + struct fxgmac_ring *ring) +{ + if (!ring) + return; + + if (ring->desc_data_head) { + for (u32 i = 0; i < ring->dma_desc_count; i++) + fxgmac_desc_data_unmap(priv, + FXGMAC_GET_DESC_DATA(ring, i)); + + kfree(ring->desc_data_head); + ring->desc_data_head = NULL; + } + + if (ring->rx_hdr_pa.pages) { + dma_unmap_page(priv->dev, ring->rx_hdr_pa.pages_dma, + ring->rx_hdr_pa.pages_len, DMA_FROM_DEVICE); + put_page(ring->rx_hdr_pa.pages); + + ring->rx_hdr_pa.pages = NULL; + ring->rx_hdr_pa.pages_len = 0; + ring->rx_hdr_pa.pages_offset = 0; + ring->rx_hdr_pa.pages_dma = 0; + } + + if (ring->rx_buf_pa.pages) { + dma_unmap_page(priv->dev, ring->rx_buf_pa.pages_dma, + ring->rx_buf_pa.pages_len, DMA_FROM_DEVICE); + put_page(ring->rx_buf_pa.pages); + + ring->rx_buf_pa.pages = NULL; + ring->rx_buf_pa.pages_len = 0; + ring->rx_buf_pa.pages_offset = 0; + ring->rx_buf_pa.pages_dma = 0; + } + if (ring->dma_desc_head) { + dma_free_coherent(priv->dev, (sizeof(struct fxgmac_dma_desc) * + ring->dma_desc_count), ring->dma_desc_head, + ring->dma_desc_head_addr); + ring->dma_desc_head = NULL; + } +} + +static void fxgmac_rings_free(struct fxgmac_pdata *priv) +{ + struct fxgmac_channel *channel = priv->channel_head; + + fxgmac_ring_free(priv, channel->tx_ring); + + for (u32 i = 0; i < priv->channel_count; i++, channel++) + fxgmac_ring_free(priv, channel->rx_ring); +} + +static int fxgmac_rings_alloc(struct fxgmac_pdata *priv) +{ + struct fxgmac_channel *channel = priv->channel_head; + int ret; + + ret = fxgmac_ring_init(priv, channel->tx_ring, priv->tx_desc_count); + if (ret < 0) { + yt_err(priv, "error initializing Tx ring"); + goto err_init_ring; + } + + for (u32 i = 0; i < priv->channel_count; i++, channel++) { + ret = fxgmac_ring_init(priv, channel->rx_ring, + priv->rx_desc_count); + if (ret < 0) { + yt_err(priv, "error initializing Rx ring\n"); + goto err_init_ring; + } + } + return 0; + +err_init_ring: + fxgmac_rings_free(priv); + return ret; +} + +static void 
fxgmac_channels_free(struct fxgmac_pdata *priv) +{ + struct fxgmac_channel *channel = priv->channel_head; + + kfree(channel->tx_ring); + channel->tx_ring = NULL; + + kfree(channel->rx_ring); + channel->rx_ring = NULL; + + kfree(channel); + priv->channel_head = NULL; +} + +void fxgmac_channels_rings_free(struct fxgmac_pdata *priv) +{ + fxgmac_rings_free(priv); + fxgmac_channels_free(priv); +} + +#ifdef CONFIG_PCI_MSI +static void fxgmac_set_msix_tx_irq(struct fxgmac_pdata *priv, + struct fxgmac_channel *channel, u32 i) +{ + if (i != 0) /*only one tx*/ + return; + + priv->channel_irq[FXGMAC_MAX_DMA_RX_CHANNELS] = + priv->msix_entries[FXGMAC_MAX_DMA_RX_CHANNELS].vector; + channel->dma_irq_tx = priv->channel_irq[FXGMAC_MAX_DMA_RX_CHANNELS]; +} +#endif + +static int fxgmac_channels_alloc(struct fxgmac_pdata *priv) +{ + struct fxgmac_channel *channel_head, *channel; + struct fxgmac_ring *tx_ring, *rx_ring; + int ret = -ENOMEM; + + channel_head = kcalloc(priv->channel_count, + sizeof(struct fxgmac_channel), GFP_KERNEL); + + if (!channel_head) + return ret; + + tx_ring = kcalloc(FXGMAC_TX_1_RING, sizeof(struct fxgmac_ring), + GFP_KERNEL); + if (!tx_ring) + goto err_tx_ring; + + rx_ring = kcalloc(priv->rx_ring_count, sizeof(struct fxgmac_ring), + GFP_KERNEL); + if (!rx_ring) + goto err_rx_ring; + + channel = channel_head; + for (u32 i = 0; i < priv->channel_count; i++, channel++) { + snprintf(channel->name, sizeof(channel->name), "channel-%u", i); + channel->priv = priv; + channel->queue_index = i; + channel->dma_regs = (priv)->hw_addr + MAC_OFFSET + DMA_CH_BASE + + (DMA_CH_INC * i); + + if (priv->per_channel_irq) { + priv->channel_irq[i] = priv->msix_entries[i].vector; + + if (IS_ENABLED(CONFIG_PCI_MSI)) + fxgmac_set_msix_tx_irq(priv, channel, i); + + /* Get the per DMA rx interrupt */ + ret = priv->channel_irq[i]; + if (ret < 0) { + yt_err(priv, "get_irq %u err\n", i + 1); + goto err_irq; + } + + channel->dma_irq_rx = ret; + } + + if (i < FXGMAC_TX_1_RING) + 
channel->tx_ring = tx_ring++; + + if (i < priv->rx_ring_count) + channel->rx_ring = rx_ring++; + } + + priv->channel_head = channel_head; + return 0; + +err_irq: + kfree(rx_ring); + +err_rx_ring: + kfree(tx_ring); + +err_tx_ring: + kfree(channel_head); + + yt_err(priv, "%s err:%d\n", __func__, ret); + return ret; +} + +int fxgmac_channels_rings_alloc(struct fxgmac_pdata *priv) +{ + int ret; + + ret = fxgmac_channels_alloc(priv); + if (ret < 0) + goto err_alloc; + + ret = fxgmac_rings_alloc(priv); + if (ret < 0) + goto err_alloc; + + return 0; + +err_alloc: + fxgmac_channels_rings_free(priv); + return ret; +} diff --git a/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c b/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c index 350510174..c5e02c497 100644 --- a/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c +++ b/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c @@ -11,6 +11,8 @@ #include "yt6801.h" #include "yt6801_desc.h" +const struct net_device_ops *fxgmac_get_netdev_ops(void); + #define PHY_WR_CONFIG(reg_offset) (0x8000205 + ((reg_offset) * 0x10000)) static int fxgmac_phy_write_reg(struct fxgmac_pdata *priv, u32 reg_id, u32 data) { @@ -391,6 +393,32 @@ static void fxgmac_stop(struct fxgmac_pdata *priv) netdev_tx_reset_queue(txq); } +static void fxgmac_restart(struct fxgmac_pdata *priv) +{ + int ret; + + /* If not running, "restart" will happen on open */ + if (!netif_running(priv->netdev) && priv->dev_state != FXGMAC_DEV_START) + return; + + mutex_lock(&priv->mutex); + fxgmac_stop(priv); + fxgmac_free_tx_data(priv); + fxgmac_free_rx_data(priv); + ret = fxgmac_start(priv); + if (ret < 0) + yt_err(priv, "%s err, ret = %d.\n", __func__, ret); + + mutex_unlock(&priv->mutex); +} + +static void fxgmac_restart_work(struct work_struct *work) +{ + rtnl_lock(); + fxgmac_restart(container_of(work, struct fxgmac_pdata, restart_work)); + rtnl_unlock(); +} + static void fxgmac_config_powerdown(struct fxgmac_pdata *priv) { FXGMAC_MAC_IO_WR_BITS(priv, MAC_CR, RE, 1); /* 
Enable MAC Rx */ @@ -435,6 +463,59 @@ int fxgmac_net_powerdown(struct fxgmac_pdata *priv) return 0; } +static int fxgmac_calc_rx_buf_size(struct fxgmac_pdata *priv, unsigned int mtu) +{ + u32 rx_buf_size, max_mtu = FXGMAC_JUMBO_PACKET_MTU - ETH_HLEN; + + if (mtu > max_mtu) { + yt_err(priv, "MTU exceeds maximum supported value\n"); + return -EINVAL; + } + + rx_buf_size = mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN; + rx_buf_size = + clamp_val(rx_buf_size, FXGMAC_RX_MIN_BUF_SIZE, PAGE_SIZE * 4); + + rx_buf_size = (rx_buf_size + FXGMAC_RX_BUF_ALIGN - 1) & + ~(FXGMAC_RX_BUF_ALIGN - 1); + + return rx_buf_size; +} + +static int fxgmac_open(struct net_device *netdev) +{ + struct fxgmac_pdata *priv = netdev_priv(netdev); + int ret; + + mutex_lock(&priv->mutex); + priv->dev_state = FXGMAC_DEV_OPEN; + + /* Calculate the Rx buffer size before allocating rings */ + ret = fxgmac_calc_rx_buf_size(priv, netdev->mtu); + if (ret < 0) + goto unlock; + + priv->rx_buf_size = ret; + ret = fxgmac_channels_rings_alloc(priv); + if (ret < 0) + goto unlock; + + INIT_WORK(&priv->restart_work, fxgmac_restart_work); + ret = fxgmac_start(priv); + if (ret < 0) + goto err_channels_and_rings; + + mutex_unlock(&priv->mutex); + return 0; + +err_channels_and_rings: + fxgmac_channels_rings_free(priv); + yt_err(priv, "%s, channel alloc err\n", __func__); +unlock: + mutex_unlock(&priv->mutex); + return ret; +} + #define EFUSE_FISRT_UPDATE_ADDR 255 #define EFUSE_SECOND_UPDATE_ADDR 209 #define EFUSE_MAX_ENTRY 39 @@ -932,3 +1013,12 @@ int fxgmac_drv_probe(struct device *dev, struct fxgmac_resources *res) free_netdev(netdev); return ret; } + +static const struct net_device_ops fxgmac_netdev_ops = { + .ndo_open = fxgmac_open, +}; + +const struct net_device_ops *fxgmac_get_netdev_ops(void) +{ + return &fxgmac_netdev_ops; +} From patchwork Fri Feb 28 10:00:12 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Frank Sae X-Patchwork-Id: 13996135 
X-Patchwork-Delegate: kuba@kernel.org From: Frank Sae To: Jakub Kicinski , Paolo Abeni , Andrew Lunn , Heiner Kallweit , Russell King , "David S . 
Miller" , Eric Dumazet , Frank , netdev@vger.kernel.org Cc: Masahiro Yamada , Parthiban.Veerasooran@microchip.com, linux-kernel@vger.kernel.org, xiaogang.fan@motor-comm.com, fei.zhang@motor-comm.com, hua.sun@motor-comm.com Subject: [PATCH net-next v3 06/14] motorcomm:yt6801: Implement the fxgmac_start function Date: Fri, 28 Feb 2025 18:00:12 +0800 Message-Id: <20250228100020.3944-7-Frank.Sae@motor-comm.com> In-Reply-To: <20250228100020.3944-1-Frank.Sae@motor-comm.com> References: <20250228100020.3944-1-Frank.Sae@motor-comm.com> X-Patchwork-Delegate: kuba@kernel.org Implement the fxgmac_start function to connect the PHY and to enable NAPI, the PHY, and the MSI-X interrupts. Signed-off-by: Frank Sae --- .../ethernet/motorcomm/yt6801/yt6801_net.c | 363 ++++++++++++++++++ 1 file changed, 363 insertions(+) diff --git a/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c b/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c index c5e02c497..1918cb550 100644 --- a/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c +++ b/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c @@ -12,6 +12,7 @@ #include "yt6801_desc.h" const struct net_device_ops *fxgmac_get_netdev_ops(void); +static void fxgmac_napi_enable(struct fxgmac_pdata *priv); #define PHY_WR_CONFIG(reg_offset) (0x8000205 + ((reg_offset) * 0x10000)) static int fxgmac_phy_write_reg(struct fxgmac_pdata *priv, u32 reg_id, u32 data) @@ -101,6 +102,11 @@ static int fxgmac_mdio_register(struct fxgmac_pdata *priv) return 0; } +static void fxgmac_enable_msix_one_irq(struct fxgmac_pdata *priv, u32 int_id) +{ + FXGMAC_IO_WR(priv, MSIX_TBL_MASK + int_id * 16, 0); +} + static void fxgmac_disable_mgm_irq(struct fxgmac_pdata *priv) { FXGMAC_IO_WR_BITS(priv, MGMT_INT_CTRL0, INT_MASK, @@ -167,6 +173,73 @@ static void fxgmac_free_irqs(struct fxgmac_pdata *priv) } } +static int fxgmac_request_irqs(struct fxgmac_pdata 
*priv) +{ + u32 rx, i = 0, msi = FXGMAC_GET_BITS(priv->int_flag, INT_FLAG, MSI); + struct fxgmac_channel *channel = priv->channel_head; + struct net_device *netdev = priv->netdev; + int ret; + + if (!FXGMAC_GET_BITS(priv->int_flag, INT_FLAG, MSIX) && + !FXGMAC_GET_BITS(priv->int_flag, INT_FLAG, LEGACY_IRQ)) { + FXGMAC_SET_BITS(priv->int_flag, INT_FLAG, LEGACY_IRQ, 1); + ret = devm_request_irq(priv->dev, priv->dev_irq, fxgmac_isr, + msi ? 0 : IRQF_SHARED, netdev->name, + priv); + if (ret) { + yt_err(priv, "requesting irq:%d ,err:%d\n", + priv->dev_irq, ret); + return ret; + } + } + + if (!priv->per_channel_irq) + return 0; + + if (!FXGMAC_GET_BITS(priv->int_flag, INT_FLAG, TX_IRQ)) { + snprintf(channel->dma_irq_tx_name, + sizeof(channel->dma_irq_tx_name) - 1, + "%s-ch%d-Tx-%u", netdev_name(netdev), 0, + channel->queue_index); + FXGMAC_SET_BITS(priv->int_flag, INT_FLAG, TX_IRQ, 1); + ret = devm_request_irq(priv->dev, channel->dma_irq_tx, + fxgmac_dma_isr, 0, + channel->dma_irq_tx_name, channel); + if (ret) { + yt_err(priv, "requesting tx irq:%d ,err:%d\n", + channel->dma_irq_tx, ret); + goto err_irq; + } + } + + rx = FXGMAC_GET_BITS(priv->int_flag, INT_FLAG, RX_IRQ); + for (i = 0; i < priv->channel_count; i++, channel++) { + snprintf(channel->dma_irq_rx_name, + sizeof(channel->dma_irq_rx_name) - 1, "%s-ch%d-Rx-%u", + netdev_name(netdev), i, channel->queue_index); + + if (!GET_BITS(rx, i, INT_FLAG_PER_RX_IRQ_LEN)) { + SET_BITS(priv->int_flag, INT_FLAG_RX_IRQ_POS + i, + INT_FLAG_PER_RX_IRQ_LEN, 1); + ret = devm_request_irq(priv->dev, channel->dma_irq_rx, + fxgmac_dma_isr, 0, + channel->dma_irq_rx_name, + channel); + if (ret) { + yt_err(priv, "requesting rx irq:%d ,err:%d\n", + channel->dma_irq_rx, ret); + goto err_irq; + } + } + } + + return 0; + +err_irq: + fxgmac_free_irqs(priv); + return ret; +} + static void fxgmac_free_tx_data(struct fxgmac_pdata *priv) { struct fxgmac_channel *channel = priv->channel_head; @@ -199,6 +272,19 @@ static void 
fxgmac_free_rx_data(struct fxgmac_pdata *priv) } } +static void fxgmac_enable_tx(struct fxgmac_pdata *priv) +{ + struct fxgmac_channel *channel = priv->channel_head; + + /* Enable Tx DMA channel */ + FXGMAC_DMA_IO_WR_BITS(channel, DMA_CH_TCR, ST, 1); + + /* Enable Tx queue */ + FXGMAC_MTL_IO_WR_BITS(priv, 0, MTL_Q_TQOMR, TXQEN, MTL_Q_ENABLED); + /* Enable MAC Tx */ + FXGMAC_MAC_IO_WR_BITS(priv, MAC_CR, TE, 1); +} + static void fxgmac_prepare_tx_stop(struct fxgmac_pdata *priv, struct fxgmac_channel *channel) { @@ -257,6 +343,27 @@ static void fxgmac_disable_tx(struct fxgmac_pdata *priv) FXGMAC_DMA_IO_WR_BITS(channel, DMA_CH_TCR, ST, 0); } +static void fxgmac_enable_rx(struct fxgmac_pdata *priv) +{ + struct fxgmac_channel *channel = priv->channel_head; + u32 val = 0, i; + + /* Enable each Rx DMA channel */ + for (i = 0; i < priv->channel_count; i++, channel++) + FXGMAC_DMA_IO_WR_BITS(channel, DMA_CH_RCR, SR, 1); + + /* Enable each Rx queue */ + for (i = 0; i < priv->rx_q_count; i++) + val |= (0x02 << (i << 1)); + + FXGMAC_MAC_IO_WR(priv, MAC_RQC0R, val); + + /* Enable MAC Rx */ + FXGMAC_MAC_IO_WR_BITS(priv, MAC_CR, CST, 1); + FXGMAC_MAC_IO_WR_BITS(priv, MAC_CR, ACS, 1); + FXGMAC_MAC_IO_WR_BITS(priv, MAC_CR, RE, 1); +} + static void fxgmac_prepare_rx_stop(struct fxgmac_pdata *priv, unsigned int queue) { @@ -310,6 +417,147 @@ static void fxgmac_default_speed_duplex_config(struct fxgmac_pdata *priv) priv->mac_speed = SPEED_1000; } +static void fxgmac_config_mac_speed(struct fxgmac_pdata *priv) +{ + if (priv->mac_duplex == DUPLEX_UNKNOWN && + priv->mac_speed == SPEED_UNKNOWN) + fxgmac_default_speed_duplex_config(priv); + + FXGMAC_MAC_IO_WR_BITS(priv, MAC_CR, DM, priv->mac_duplex); + + switch (priv->mac_speed) { + case SPEED_1000: + FXGMAC_MAC_IO_WR_BITS(priv, MAC_CR, PS, 0); + FXGMAC_MAC_IO_WR_BITS(priv, MAC_CR, FES, 0); + break; + case SPEED_100: + FXGMAC_MAC_IO_WR_BITS(priv, MAC_CR, PS, 1); + FXGMAC_MAC_IO_WR_BITS(priv, MAC_CR, FES, 1); + break; + case SPEED_10: + 
FXGMAC_MAC_IO_WR_BITS(priv, MAC_CR, PS, 1); + FXGMAC_MAC_IO_WR_BITS(priv, MAC_CR, FES, 0); + break; + default: + WARN_ON(1); + break; + } +} + +static void fxgmac_phylink_handler(struct net_device *ndev) +{ + struct fxgmac_pdata *priv = netdev_priv(ndev); + + priv->mac_speed = priv->phydev->speed; + priv->mac_duplex = priv->phydev->duplex; + + if (priv->phydev->link) { + fxgmac_config_mac_speed(priv); + fxgmac_enable_rx(priv); + fxgmac_enable_tx(priv); + if (netif_running(priv->netdev)) + netif_tx_wake_all_queues(priv->netdev); + } else { + netif_tx_stop_all_queues(priv->netdev); + fxgmac_disable_rx(priv); + fxgmac_disable_tx(priv); + } + + phy_print_status(priv->phydev); +} + +static int fxgmac_phy_connect(struct fxgmac_pdata *priv) +{ + struct phy_device *phydev = priv->phydev; + int ret; + + priv->phydev->irq = PHY_POLL; + ret = phy_connect_direct(priv->netdev, phydev, fxgmac_phylink_handler, + PHY_INTERFACE_MODE_INTERNAL); + if (ret) + return ret; + + phy_support_asym_pause(phydev); + priv->phydev->mac_managed_pm = 1; + phy_attached_info(phydev); + + return 0; +} + +static void fxgmac_enable_msix_irqs(struct fxgmac_pdata *priv) +{ + for (u32 intid = 0; intid < MSIX_TBL_MAX_NUM; intid++) + fxgmac_enable_msix_one_irq(priv, intid); +} + +static void fxgmac_enable_dma_interrupts(struct fxgmac_pdata *priv) +{ + struct fxgmac_channel *channel = priv->channel_head; + u32 ch_sr; + + for (u32 i = 0; i < priv->channel_count; i++, channel++) { + /* Clear all the interrupts which are set */ + ch_sr = FXGMAC_DMA_IO_RD(channel, DMA_CH_SR); + FXGMAC_DMA_IO_WR(channel, DMA_CH_SR, ch_sr); + + ch_sr = 0; + /* Enable Normal Interrupt Summary Enable and Fatal Bus Error + * Enable interrupts. 
+ */ + FXGMAC_SET_BITS(ch_sr, DMA_CH_IER, NIE, 1); + FXGMAC_SET_BITS(ch_sr, DMA_CH_IER, FBEE, 1); + + /* only one tx, enable Transmit Interrupt Enable interrupts */ + if (i == 0 && channel->tx_ring) + FXGMAC_SET_BITS(ch_sr, DMA_CH_IER, TIE, 1); + + if (channel->rx_ring) { + /* Enable Receive Buffer Unavailable Enable and Receive + * Interrupt Enable interrupts. + */ + FXGMAC_SET_BITS(ch_sr, DMA_CH_IER, RBUE, 1); + FXGMAC_SET_BITS(ch_sr, DMA_CH_IER, RIE, 1); + } + + FXGMAC_DMA_IO_WR(channel, DMA_CH_IER, ch_sr); + } +} + +static void fxgmac_dismiss_all_int(struct fxgmac_pdata *priv) +{ + struct fxgmac_channel *channel = priv->channel_head; + u32 i; + + /* Clear all the interrupts which are set */ + for (i = 0; i < priv->channel_count; i++, channel++) + FXGMAC_DMA_IO_WR(channel, DMA_CH_SR, + FXGMAC_DMA_IO_RD(channel, DMA_CH_SR)); + + for (i = 0; i < priv->hw_feat.rx_q_cnt; i++) + FXGMAC_MTL_IO_WR(priv, i, MTL_Q_ISR, + FXGMAC_MTL_IO_RD(priv, i, MTL_Q_ISR)); + + FXGMAC_MAC_IO_RD(priv, MAC_ISR); /* Clear all MAC interrupts */ + FXGMAC_MAC_IO_RD(priv, MAC_TX_RX_STA);/* Clear tx/rx error interrupts */ + FXGMAC_MAC_IO_RD(priv, MAC_PMT_STA); + FXGMAC_MAC_IO_RD(priv, MAC_LPI_STA); + + FXGMAC_MAC_IO_WR(priv, MAC_DBG_STA, + FXGMAC_MAC_IO_RD(priv, MAC_DBG_STA)); +} + +static void fxgmac_set_interrupt_moderation(struct fxgmac_pdata *priv) +{ + FXGMAC_IO_WR_BITS(priv, INT_MOD, TX, priv->tx_usecs); + FXGMAC_IO_WR_BITS(priv, INT_MOD, RX, priv->rx_usecs); +} + +static void fxgmac_enable_mgm_irq(struct fxgmac_pdata *priv) +{ + FXGMAC_IO_WR_BITS(priv, MGMT_INT_CTRL0, INT_MASK, + MGMT_INT_CTRL0_INT_MASK_DISABLE); +} + /** * fxgmac_set_oob_wol - disable or enable oob wol crtl function * @priv: driver private struct @@ -324,6 +572,12 @@ static void fxgmac_set_oob_wol(struct fxgmac_pdata *priv, unsigned int en) FXGMAC_IO_WR_BITS(priv, OOB_WOL_CTRL, DIS, !en);/* en = 1 is disable */ } +static void fxgmac_config_powerup(struct fxgmac_pdata *priv) +{ + fxgmac_set_oob_wol(priv, 0); + 
FXGMAC_MAC_IO_WR_BITS(priv, MAC_PMT_STA, PWRDWN, 0); /* GMAC power up */ +} + static void fxgmac_pre_powerdown(struct fxgmac_pdata *priv) { fxgmac_set_oob_wol(priv, 1); @@ -354,12 +608,87 @@ static void fxgmac_hw_exit(struct fxgmac_pdata *priv) /* Reset will clear nonstick registers. */ fxgmac_restore_nonstick_reg(priv); } + +static void fxgmac_pcie_init(struct fxgmac_pdata *priv) +{ + /* snoopy + non-snoopy */ + FXGMAC_IO_WR_BITS(priv, LTR_IDLE_ENTER, REQUIRE, + LTR_IDLE_ENTER_REQUIRE); + FXGMAC_IO_WR_BITS(priv, LTR_IDLE_ENTER, SCALE, + LTR_IDLE_ENTER_SCALE_1024_NS); + FXGMAC_IO_WR_BITS(priv, LTR_IDLE_ENTER, ENTER, LTR_IDLE_ENTER_900_US); + + /* snoopy + non-snoopy */ + FXGMAC_IO_WR_BITS(priv, LTR_IDLE_EXIT, REQUIRE, LTR_IDLE_EXIT_REQUIRE); + FXGMAC_IO_WR_BITS(priv, LTR_IDLE_EXIT, SCALE, LTR_IDLE_EXIT_SCALE); + FXGMAC_IO_WR_BITS(priv, LTR_IDLE_EXIT, EXIT, LTR_IDLE_EXIT_171_US); + + FXGMAC_IO_WR_BITS(priv, PCIE_SERDES_PLL, AUTOOFF, 1); +} + void fxgmac_phy_reset(struct fxgmac_pdata *priv) { FXGMAC_IO_WR_BITS(priv, EPHY_CTRL, RESET, 0); fsleep(1500); } +static int fxgmac_start(struct fxgmac_pdata *priv) +{ + int ret; + + if (priv->dev_state != FXGMAC_DEV_OPEN && + priv->dev_state != FXGMAC_DEV_STOP && + priv->dev_state != FXGMAC_DEV_RESUME) { + return 0; + } + + if (priv->dev_state != FXGMAC_DEV_STOP) { + fxgmac_phy_reset(priv); + fxgmac_phy_release(priv); + } + + if (priv->dev_state == FXGMAC_DEV_OPEN) { + ret = fxgmac_phy_connect(priv); + if (ret < 0) + return ret; + } + + fxgmac_pcie_init(priv); + if (test_bit(FXGMAC_POWER_STATE_DOWN, &priv->powerstate)) { + yt_err(priv, "fxgmac powerstate is %lu when config power up.\n", + priv->powerstate); + } + + fxgmac_config_powerup(priv); + fxgmac_dismiss_all_int(priv); + ret = fxgmac_hw_init(priv); + if (ret < 0) { + yt_err(priv, "fxgmac hw init error.\n"); + return ret; + } + + fxgmac_napi_enable(priv); + ret = fxgmac_request_irqs(priv); + if (ret < 0) + return ret; + + /* Config interrupt to level signal */ + 
FXGMAC_MAC_IO_WR_BITS(priv, DMA_MR, INTM, 2); + FXGMAC_MAC_IO_WR_BITS(priv, DMA_MR, QUREAD, 1); + + fxgmac_enable_mgm_irq(priv); + fxgmac_set_interrupt_moderation(priv); + + if (priv->per_channel_irq) + fxgmac_enable_msix_irqs(priv); + + fxgmac_enable_dma_interrupts(priv); + priv->dev_state = FXGMAC_DEV_START; + phy_start(priv->phydev); + + return 0; +} + static void fxgmac_disable_msix_irqs(struct fxgmac_pdata *priv) { for (u32 intid = 0; intid < MSIX_TBL_MAX_NUM; intid++) @@ -1022,3 +1351,37 @@ const struct net_device_ops *fxgmac_get_netdev_ops(void) { return &fxgmac_netdev_ops; } + +static void napi_add_enable(struct fxgmac_pdata *priv, struct napi_struct *napi, + int (*poll)(struct napi_struct *, int), + u32 flag_pos) +{ + netif_napi_add(priv->netdev, napi, poll); + napi_enable(napi); + SET_BITS(priv->int_flag, flag_pos, 1, 1); /* set flag_pos bit to 1 */ +} + +static void fxgmac_napi_enable(struct fxgmac_pdata *priv) +{ + u32 rx = FXGMAC_GET_BITS(priv->int_flag, INT_FLAG, RX_NAPI); + struct fxgmac_channel *channel = priv->channel_head; + + if (!priv->per_channel_irq) { + if (FXGMAC_GET_BITS(priv->int_flag, INT_FLAG, LEGACY_NAPI)) + return; + + napi_add_enable(priv, &priv->napi, fxgmac_all_poll, + INT_FLAG_LEGACY_NAPI_POS); + return; + } + + if (!FXGMAC_GET_BITS(priv->int_flag, INT_FLAG, TX_NAPI)) + napi_add_enable(priv, &channel->napi_tx, fxgmac_one_poll_tx, + INT_FLAG_TX_NAPI_POS); + + for (u32 i = 0; i < priv->channel_count; i++, channel++) + if (!(GET_BITS(rx, i, INT_FLAG_PER_RX_NAPI_LEN))) + napi_add_enable(priv, &channel->napi_rx, + fxgmac_one_poll_rx, + INT_FLAG_RX_NAPI_POS + i); +} From patchwork Fri Feb 28 10:00:13 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Frank Sae X-Patchwork-Id: 13996147 X-Patchwork-Delegate: kuba@kernel.org Received: from out28-3.mail.aliyun.com (out28-3.mail.aliyun.com [115.124.28.3]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) 
(No client certificate requested) by smtp.subspace.kernel.org From: Frank Sae To: Jakub Kicinski , Paolo Abeni , Andrew Lunn , Heiner Kallweit , Russell King , "David S . 
Miller" , Eric Dumazet , Frank , netdev@vger.kernel.org Cc: Masahiro Yamada , Parthiban.Veerasooran@microchip.com, linux-kernel@vger.kernel.org, xiaogang.fan@motor-comm.com, fei.zhang@motor-comm.com, hua.sun@motor-comm.com Subject: [PATCH net-next v3 07/14] phy:motorcomm: Add PHY_INTERFACE_MODE_INTERNAL to support YT6801 Date: Fri, 28 Feb 2025 18:00:13 +0800 Message-Id: <20250228100020.3944-8-Frank.Sae@motor-comm.com> In-Reply-To: <20250228100020.3944-1-Frank.Sae@motor-comm.com> References: <20250228100020.3944-1-Frank.Sae@motor-comm.com> X-Patchwork-Delegate: kuba@kernel.org The YT6801 NIC integrates a YT8531S PHY, but it uses the GMII interface. Add a PHY_INTERFACE_MODE_INTERNAL case to support the YT6801. Signed-off-by: Frank Sae --- drivers/net/phy/motorcomm.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/drivers/net/phy/motorcomm.c b/drivers/net/phy/motorcomm.c index 0e91f5d1a..371acafd5 100644 --- a/drivers/net/phy/motorcomm.c +++ b/drivers/net/phy/motorcomm.c @@ -896,6 +896,12 @@ static int ytphy_rgmii_clk_delay_config(struct phy_device *phydev) val |= FIELD_PREP(YT8521_RC1R_RX_DELAY_MASK, rx_reg) | FIELD_PREP(YT8521_RC1R_GE_TX_DELAY_MASK, tx_reg); break; + case PHY_INTERFACE_MODE_INTERNAL: + if (phydev->drv->phy_id != PHY_ID_YT8531S) + return -EOPNOTSUPP; + + phydev_info(phydev, "Integrated YT8531S phy of YT6801.\n"); + return 0; default: /* do not support other modes */ return -EOPNOTSUPP; } From patchwork Fri Feb 28 10:00:14 2025 X-Patchwork-Submitter: Frank Sae X-Patchwork-Id: 13996148 X-Patchwork-Delegate: kuba@kernel.org Received: from out28-3.mail.aliyun.com (out28-3.mail.aliyun.com [115.124.28.3]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by 
smtp.subspace.kernel.org From: Frank Sae To: Jakub Kicinski , Paolo Abeni , Andrew Lunn , Heiner Kallweit , Russell King , "David S . 
Miller" , Eric Dumazet , Frank , netdev@vger.kernel.org Cc: Masahiro Yamada , Parthiban.Veerasooran@microchip.com, linux-kernel@vger.kernel.org, xiaogang.fan@motor-comm.com, fei.zhang@motor-comm.com, hua.sun@motor-comm.com Subject: [PATCH net-next v3 08/14] motorcomm:yt6801: Implement the fxgmac_hw_init function Date: Fri, 28 Feb 2025 18:00:14 +0800 Message-Id: <20250228100020.3944-9-Frank.Sae@motor-comm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20250228100020.3944-1-Frank.Sae@motor-comm.com> References: <20250228100020.3944-1-Frank.Sae@motor-comm.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Implement some hardware init functions to set default hardware settings, including PHY control, Vlan related config, RX coalescing, and other basic function control. Signed-off-by: Frank Sae --- .../ethernet/motorcomm/yt6801/yt6801_net.c | 537 ++++++++++++++++++ 1 file changed, 537 insertions(+) diff --git a/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c b/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c index 1918cb550..14c59cece 100644 --- a/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c +++ b/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c @@ -490,6 +490,319 @@ static void fxgmac_enable_msix_irqs(struct fxgmac_pdata *priv) fxgmac_enable_msix_one_irq(priv, intid); } +static void __fxgmac_set_mac_address(struct fxgmac_pdata *priv, u8 *addr) +{ + u32 mac_hi, mac_lo; + + mac_lo = (u32)addr[0] | ((u32)addr[1] << 8) | ((u32)addr[2] << 16) | + ((u32)addr[3] << 24); + + mac_hi = (u32)addr[4] | ((u32)addr[5] << 8); + + FXGMAC_MAC_IO_WR(priv, MAC_MACA0LR, mac_lo); + FXGMAC_MAC_IO_WR(priv, MAC_MACA0HR, mac_hi); +} + +static void fxgmac_config_mac_address(struct fxgmac_pdata *priv) +{ + __fxgmac_set_mac_address(priv, priv->mac_addr); + FXGMAC_MAC_IO_WR_BITS(priv, MAC_PFR, HPF, 1); + FXGMAC_MAC_IO_WR_BITS(priv, MAC_PFR, HUC, 1); + 
FXGMAC_MAC_IO_WR_BITS(priv, MAC_PFR, HMC, 1); +} + +static void fxgmac_config_crc_check_en(struct fxgmac_pdata *priv) +{ + FXGMAC_MAC_IO_WR_BITS(priv, MAC_ECR, DCRCC, 1); +} + +static void fxgmac_config_checksum_offload(struct fxgmac_pdata *priv) +{ + if (priv->netdev->features & NETIF_F_RXCSUM) + FXGMAC_MAC_IO_WR_BITS(priv, MAC_CR, IPC, 1); + else + FXGMAC_MAC_IO_WR_BITS(priv, MAC_CR, IPC, 0); +} + +static void fxgmac_set_promiscuous_mode(struct fxgmac_pdata *priv, + unsigned int enable) +{ + FXGMAC_MAC_IO_WR_BITS(priv, MAC_PFR, PR, enable); +} + +static void fxgmac_enable_rx_broadcast(struct fxgmac_pdata *priv, + unsigned int enable) +{ + FXGMAC_MAC_IO_WR_BITS(priv, MAC_PFR, DBF, enable); +} + +static void fxgmac_set_all_multicast_mode(struct fxgmac_pdata *priv, + unsigned int enable) +{ + FXGMAC_MAC_IO_WR_BITS(priv, MAC_PFR, PM, enable); +} + +static void fxgmac_config_rx_mode(struct fxgmac_pdata *priv) +{ + u32 pr_mode, am_mode, bd_mode; + + pr_mode = ((priv->netdev->flags & IFF_PROMISC) != 0); + am_mode = ((priv->netdev->flags & IFF_ALLMULTI) != 0); + bd_mode = ((priv->netdev->flags & IFF_BROADCAST) != 0); + + fxgmac_enable_rx_broadcast(priv, bd_mode); + fxgmac_set_promiscuous_mode(priv, pr_mode); + fxgmac_set_all_multicast_mode(priv, am_mode); +} + +static void fxgmac_config_tx_flow_control(struct fxgmac_pdata *priv) +{ + /* Set MTL flow control */ + for (u32 i = 0; i < priv->rx_q_count; i++) + FXGMAC_MTL_IO_WR_BITS(priv, i, MTL_Q_RQOMR, EHFC, + priv->tx_pause); + + /* Set MAC flow control */ + FXGMAC_MAC_IO_WR_BITS(priv, MAC_Q0TFCR, TFE, priv->tx_pause); + + if (priv->tx_pause == 1) /* Set pause time */ + FXGMAC_MAC_IO_WR_BITS(priv, MAC_Q0TFCR, PT, 0xffff); +} + +static void fxgmac_config_rx_flow_control(struct fxgmac_pdata *priv) +{ + FXGMAC_MAC_IO_WR_BITS(priv, MAC_RFCR, RFE, priv->rx_pause); +} + +static void fxgmac_config_rx_coalesce(struct fxgmac_pdata *priv) +{ + struct fxgmac_channel *channel = priv->channel_head; + + for (u32 i = 0; i < 
priv->channel_count; i++, channel++) { + if (!channel->rx_ring) + break; + FXGMAC_DMA_IO_WR_BITS(channel, DMA_CH_RIWT, RWT, priv->rx_riwt); + } +} + +static void fxgmac_config_rx_fep_disable(struct fxgmac_pdata *priv) +{ + /* Enable the Rx queues to forward packets with error status + * (CRC error, gmii_er, watchdog timeout, or overflow) + */ + for (u32 i = 0; i < priv->rx_q_count; i++) + FXGMAC_MTL_IO_WR_BITS(priv, i, MTL_Q_RQOMR, FEP, 1); +} + +static void fxgmac_config_rx_fup_enable(struct fxgmac_pdata *priv) +{ + for (u32 i = 0; i < priv->rx_q_count; i++) + FXGMAC_MTL_IO_WR_BITS(priv, i, MTL_Q_RQOMR, FUP, 1); +} + +static void fxgmac_config_rx_buffer_size(struct fxgmac_pdata *priv) +{ + struct fxgmac_channel *channel = priv->channel_head; + + for (u32 i = 0; i < priv->channel_count; i++, channel++) + FXGMAC_DMA_IO_WR_BITS(channel, DMA_CH_RCR, RBSZ, + priv->rx_buf_size); +} + +static void fxgmac_config_tso_mode(struct fxgmac_pdata *priv) +{ + FXGMAC_DMA_IO_WR_BITS(priv->channel_head, DMA_CH_TCR, TSE, + priv->hw_feat.tso); +} + +static void fxgmac_config_sph_mode(struct fxgmac_pdata *priv) +{ + struct fxgmac_channel *channel = priv->channel_head; + + for (u32 i = 0; i < priv->channel_count; i++, channel++) + FXGMAC_DMA_IO_WR_BITS(channel, DMA_CH_CR, SPH, 0); + + FXGMAC_MAC_IO_WR_BITS(priv, MAC_ECR, HDSMS, MAC_ECR_HDSMS_512B); +} + +static void fxgmac_config_rx_threshold(struct fxgmac_pdata *priv, + unsigned int set_val) +{ + for (u32 i = 0; i < priv->rx_q_count; i++) + FXGMAC_MTL_IO_WR_BITS(priv, i, MTL_Q_RQOMR, RTC, set_val); +} + +static void fxgmac_config_mtl_mode(struct fxgmac_pdata *priv) +{ + /* Set Tx to weighted round robin scheduling algorithm */ + FXGMAC_MAC_IO_WR_BITS(priv, MTL_OMR, ETSALG, MTL_ETSALG_WRR); + + /* Set Tx traffic classes to use WRR algorithm with equal weights */ + FXGMAC_MTL_IO_WR_BITS(priv, 0, MTL_TC_QWR, QW, 1); + + /* Set Rx to strict priority algorithm */ + FXGMAC_MAC_IO_WR_BITS(priv, MTL_OMR, RAA, MTL_RAA_SP); +} + +static void 
fxgmac_config_queue_mapping(struct fxgmac_pdata *priv) +{ + unsigned int ppq, ppq_extra, prio_queues; + unsigned int __maybe_unused prio; + unsigned int reg, val, mask; + + /* Map the 8 VLAN priority values to available MTL Rx queues */ + prio_queues = + min_t(unsigned int, IEEE_8021QAZ_MAX_TCS, priv->rx_q_count); + ppq = IEEE_8021QAZ_MAX_TCS / prio_queues; + ppq_extra = IEEE_8021QAZ_MAX_TCS % prio_queues; + + reg = MAC_RQC2R; + for (u32 i = 0, prio = 0; i < prio_queues;) { + val = 0; + mask = 0; + for (u32 j = 0; j < ppq; j++) { + mask |= (1 << prio); + prio++; + } + + if (i < ppq_extra) { + mask |= (1 << prio); + prio++; + } + + val |= (mask << ((i++ % MAC_RQC2_Q_PER_REG) << 3)); + + if ((i % MAC_RQC2_Q_PER_REG) && i != prio_queues) + continue; + + FXGMAC_MAC_IO_WR(priv, reg, val); + reg += MAC_RQC2_INC; + } + + /* Configure one to one, MTL Rx queue to DMA Rx channel mapping + * ie Q0 <--> CH0, Q1 <--> CH1 ... Q7 <--> CH7 + */ + val = FXGMAC_MAC_IO_RD(priv, MTL_RQDCM0R); + val |= (MTL_RQDCM0R_Q0MDMACH | MTL_RQDCM0R_Q1MDMACH | + MTL_RQDCM0R_Q2MDMACH | MTL_RQDCM0R_Q3MDMACH); + FXGMAC_MAC_IO_WR(priv, MTL_RQDCM0R, val); + + val = FXGMAC_MAC_IO_RD(priv, MTL_RQDCM0R + MTL_RQDCM_INC); + val |= (MTL_RQDCM1R_Q4MDMACH | MTL_RQDCM1R_Q5MDMACH | + MTL_RQDCM1R_Q6MDMACH | MTL_RQDCM1R_Q7MDMACH); + FXGMAC_MAC_IO_WR(priv, MTL_RQDCM0R + MTL_RQDCM_INC, val); +} + +static unsigned int fxgmac_calculate_per_queue_fifo(unsigned int fifo_size, + unsigned int queue_count) +{ + u32 q_fifo_size, p_fifo; + + /* Calculate the configured fifo size */ + q_fifo_size = 1 << (fifo_size + 7); + +#define FXGMAC_MAX_FIFO 81920 + /* The configured value may not be the actual amount of fifo RAM */ + q_fifo_size = min_t(unsigned int, FXGMAC_MAX_FIFO, q_fifo_size); + q_fifo_size = q_fifo_size / queue_count; + + /* Each increment in the queue fifo size represents 256 bytes of + * fifo, with 0 representing 256 bytes. Distribute the fifo equally + * between the queues. 
+ */ + p_fifo = q_fifo_size / 256; + if (p_fifo) + p_fifo--; + + return p_fifo; +} + +static void fxgmac_config_tx_fifo_size(struct fxgmac_pdata *priv) +{ + u32 fifo_size; + + fifo_size = fxgmac_calculate_per_queue_fifo(priv->hw_feat.tx_fifo_size, + FXGMAC_TX_1_Q); + FXGMAC_MTL_IO_WR_BITS(priv, 0, MTL_Q_TQOMR, TQS, fifo_size); +} + +static void fxgmac_config_rx_fifo_size(struct fxgmac_pdata *priv) +{ + u32 fifo_size; + + fifo_size = fxgmac_calculate_per_queue_fifo(priv->hw_feat.rx_fifo_size, + priv->rx_q_count); + + for (u32 i = 0; i < priv->rx_q_count; i++) + FXGMAC_MTL_IO_WR_BITS(priv, i, MTL_Q_RQOMR, RQS, fifo_size); +} + +static void fxgmac_config_flow_control_threshold(struct fxgmac_pdata *priv) +{ + for (u32 i = 0; i < priv->rx_q_count; i++) { + /* Activate flow control when less than 4k left in fifo */ + FXGMAC_MTL_IO_WR_BITS(priv, i, MTL_Q_RQOMR, RFA, 6); + /* De-activate flow control when more than 6k left in fifo */ + FXGMAC_MTL_IO_WR_BITS(priv, i, MTL_Q_RQOMR, RFD, 10); + } +} + +static void fxgmac_config_tx_threshold(struct fxgmac_pdata *priv, + unsigned int set_val) +{ + FXGMAC_MTL_IO_WR_BITS(priv, 0, MTL_Q_TQOMR, TTC, set_val); +} + +static void fxgmac_config_rsf_mode(struct fxgmac_pdata *priv, + unsigned int set_val) +{ + for (u32 i = 0; i < priv->rx_q_count; i++) + FXGMAC_MTL_IO_WR_BITS(priv, i, MTL_Q_RQOMR, RSF, set_val); +} + +static void fxgmac_config_tsf_mode(struct fxgmac_pdata *priv, + unsigned int set_val) +{ + FXGMAC_MTL_IO_WR_BITS(priv, 0, MTL_Q_TQOMR, TSF, set_val); +} + +static void fxgmac_config_osp_mode(struct fxgmac_pdata *priv) +{ + FXGMAC_DMA_IO_WR_BITS(priv->channel_head, DMA_CH_TCR, OSP, + priv->tx_osp_mode); +} + +static void fxgmac_config_pblx8(struct fxgmac_pdata *priv) +{ + struct fxgmac_channel *channel = priv->channel_head; + + for (u32 i = 0; i < priv->channel_count; i++, channel++) + FXGMAC_DMA_IO_WR_BITS(channel, DMA_CH_CR, PBLX8, priv->pblx8); +} + +static void fxgmac_config_tx_pbl_val(struct fxgmac_pdata *priv) +{ + 
FXGMAC_DMA_IO_WR_BITS(priv->channel_head, DMA_CH_TCR, PBL, + priv->tx_pbl); +} + +static void fxgmac_config_rx_pbl_val(struct fxgmac_pdata *priv) +{ + struct fxgmac_channel *channel = priv->channel_head; + + for (u32 i = 0; i < priv->channel_count; i++, channel++) + FXGMAC_DMA_IO_WR_BITS(channel, DMA_CH_RCR, PBL, priv->rx_pbl); +} + +static void fxgmac_config_mmc(struct fxgmac_pdata *priv) +{ + /* Set counters to reset on read, Reset the counters */ + FXGMAC_MAC_IO_WR_BITS(priv, MMC_CR, ROR, 1); + FXGMAC_MAC_IO_WR_BITS(priv, MMC_CR, CR, 1); + + FXGMAC_MAC_IO_WR(priv, MMC_IPC_RXINT_MASK, 0xffffffff); +} + static void fxgmac_enable_dma_interrupts(struct fxgmac_pdata *priv) { struct fxgmac_channel *channel = priv->channel_head; @@ -523,6 +836,230 @@ static void fxgmac_enable_dma_interrupts(struct fxgmac_pdata *priv) } } +static void fxgmac_enable_mtl_interrupts(struct fxgmac_pdata *priv) +{ + unsigned int mtl_q_isr; + + for (u32 i = 0; i < priv->hw_feat.rx_q_cnt; i++) { + /* Clear all the interrupts which are set */ + mtl_q_isr = FXGMAC_MTL_IO_RD(priv, i, MTL_Q_ISR); + FXGMAC_MTL_IO_WR(priv, i, MTL_Q_ISR, mtl_q_isr); + + /* No MTL interrupts to be enabled */ + FXGMAC_MTL_IO_WR(priv, i, MTL_Q_IER, 0); + } +} + +static void fxgmac_enable_mac_interrupts(struct fxgmac_pdata *priv) +{ + /* Disable Timestamp interrupt */ + FXGMAC_MAC_IO_WR_BITS(priv, MAC_IER, TSIE, 0); + + FXGMAC_MAC_IO_WR_BITS(priv, MMC_RIER, ALL_INTERRUPTS, 0); + FXGMAC_MAC_IO_WR_BITS(priv, MMC_TIER, ALL_INTERRUPTS, 0); +} + +static int fxgmac_flush_tx_queues(struct fxgmac_pdata *priv) +{ + u32 val, count = 2000; + + FXGMAC_MTL_IO_WR_BITS(priv, 0, MTL_Q_TQOMR, FTQ, 1); + do { + fsleep(20); + val = FXGMAC_MTL_IO_RD(priv, 0, MTL_Q_TQOMR); + val = FXGMAC_GET_BITS(val, MTL_Q_TQOMR, FTQ); + + } while (--count && val); + + if (val) + return -EBUSY; + + return 0; +} + +static void fxgmac_config_dma_bus(struct fxgmac_pdata *priv) +{ + u32 val = FXGMAC_MAC_IO_RD(priv, DMA_SBMR); + + /* Set enhanced addressing mode 
*/ + FXGMAC_SET_BITS(val, DMA_SBMR, EAME, 1); + + /* Out standing read/write requests */ + FXGMAC_SET_BITS(val, DMA_SBMR, RD_OSR_LMT, 0x7); + FXGMAC_SET_BITS(val, DMA_SBMR, WR_OSR_LMT, 0x7); + + /* Set the System Bus mode */ + FXGMAC_SET_BITS(val, DMA_SBMR, FB, 0); + FXGMAC_SET_BITS(val, DMA_SBMR, BLEN_4, 1); + FXGMAC_SET_BITS(val, DMA_SBMR, BLEN_8, 1); + FXGMAC_SET_BITS(val, DMA_SBMR, BLEN_16, 1); + FXGMAC_SET_BITS(val, DMA_SBMR, BLEN_32, 1); + + FXGMAC_MAC_IO_WR(priv, DMA_SBMR, val); +} + +static void fxgmac_desc_rx_channel_init(struct fxgmac_channel *channel) +{ + struct fxgmac_ring *ring = channel->rx_ring; + unsigned int start_index = ring->cur; + struct fxgmac_desc_data *desc_data; + + /* Initialize all descriptors */ + for (u32 i = 0; i < ring->dma_desc_count; i++) { + desc_data = FXGMAC_GET_DESC_DATA(ring, i); + fxgmac_desc_rx_reset(desc_data); /* Initialize Rx descriptor */ + } + + /* Update the total number of Rx descriptors */ + FXGMAC_DMA_IO_WR(channel, DMA_CH_RDRLR, ring->dma_desc_count - 1); + + /* Update the starting address of descriptor ring */ + desc_data = FXGMAC_GET_DESC_DATA(ring, start_index); + FXGMAC_DMA_IO_WR(channel, DMA_CH_RDLR_HI, + upper_32_bits(desc_data->dma_desc_addr)); + FXGMAC_DMA_IO_WR(channel, DMA_CH_RDLR_LO, + lower_32_bits(desc_data->dma_desc_addr)); + + /* Update the Rx Descriptor Tail Pointer */ + desc_data = FXGMAC_GET_DESC_DATA(ring, start_index + + ring->dma_desc_count - 1); + FXGMAC_DMA_IO_WR(channel, DMA_CH_RDTR_LO, + lower_32_bits(desc_data->dma_desc_addr)); +} + +static void fxgmac_desc_rx_init(struct fxgmac_pdata *priv) +{ + struct fxgmac_channel *channel = priv->channel_head; + struct fxgmac_desc_data *desc_data; + struct fxgmac_dma_desc *dma_desc; + dma_addr_t dma_desc_addr; + struct fxgmac_ring *ring; + + for (u32 i = 0; i < priv->channel_count; i++, channel++) { + ring = channel->rx_ring; + dma_desc = ring->dma_desc_head; + dma_desc_addr = ring->dma_desc_head_addr; + + for (u32 j = 0; j < ring->dma_desc_count; 
j++) { + desc_data = FXGMAC_GET_DESC_DATA(ring, j); + desc_data->dma_desc = dma_desc; + desc_data->dma_desc_addr = dma_desc_addr; + if (fxgmac_rx_buffe_map(priv, ring, desc_data)) + break; + + dma_desc++; + dma_desc_addr += sizeof(struct fxgmac_dma_desc); + } + + ring->cur = 0; + ring->dirty = 0; + + fxgmac_desc_rx_channel_init(channel); + } +} + +static void fxgmac_desc_tx_channel_init(struct fxgmac_channel *channel) +{ + struct fxgmac_ring *ring = channel->tx_ring; + struct fxgmac_desc_data *desc_data; + int start_index = ring->cur; + + /* Initialize all descriptors */ + for (u32 i = 0; i < ring->dma_desc_count; i++) { + desc_data = FXGMAC_GET_DESC_DATA(ring, i); + fxgmac_desc_tx_reset(desc_data); /* Initialize Tx descriptor */ + } + + /* Update the total number of Tx descriptors */ + FXGMAC_DMA_IO_WR(channel, DMA_CH_TDRLR, + channel->priv->tx_desc_count - 1); + + /* Update the starting address of descriptor ring */ + desc_data = FXGMAC_GET_DESC_DATA(ring, start_index); + FXGMAC_DMA_IO_WR(channel, DMA_CH_TDLR_HI, + upper_32_bits(desc_data->dma_desc_addr)); + FXGMAC_DMA_IO_WR(channel, DMA_CH_TDLR_LO, + lower_32_bits(desc_data->dma_desc_addr)); +} + +static void fxgmac_desc_tx_init(struct fxgmac_pdata *priv) +{ + struct fxgmac_channel *channel = priv->channel_head; + struct fxgmac_ring *ring = channel->tx_ring; + struct fxgmac_desc_data *desc_data; + struct fxgmac_dma_desc *dma_desc; + dma_addr_t dma_desc_addr; + + dma_desc = ring->dma_desc_head; + dma_desc_addr = ring->dma_desc_head_addr; + + for (u32 j = 0; j < ring->dma_desc_count; j++) { + desc_data = FXGMAC_GET_DESC_DATA(ring, j); + desc_data->dma_desc = dma_desc; + desc_data->dma_desc_addr = dma_desc_addr; + + dma_desc++; + dma_desc_addr += sizeof(struct fxgmac_dma_desc); + } + + ring->cur = 0; + ring->dirty = 0; + memset(&ring->tx, 0, sizeof(ring->tx)); + fxgmac_desc_tx_channel_init(priv->channel_head); +} + +static int fxgmac_hw_init(struct fxgmac_pdata *priv) +{ + int ret; + + ret = 
fxgmac_flush_tx_queues(priv); /* Flush Tx queues */ + if (ret < 0) { + yt_err(priv, "%s, flush tx queue err:%d\n", __func__, ret); + return ret; + } + + /* Initialize DMA related features */ + fxgmac_config_dma_bus(priv); + fxgmac_config_osp_mode(priv); + fxgmac_config_pblx8(priv); + fxgmac_config_tx_pbl_val(priv); + fxgmac_config_rx_pbl_val(priv); + fxgmac_config_rx_coalesce(priv); + fxgmac_config_rx_buffer_size(priv); + fxgmac_config_tso_mode(priv); + fxgmac_config_sph_mode(priv); + fxgmac_desc_tx_init(priv); + fxgmac_desc_rx_init(priv); + fxgmac_enable_dma_interrupts(priv); + + /* Initialize MTL related features */ + fxgmac_config_mtl_mode(priv); + fxgmac_config_queue_mapping(priv); + fxgmac_config_tsf_mode(priv, priv->tx_sf_mode); + fxgmac_config_rsf_mode(priv, priv->rx_sf_mode); + fxgmac_config_tx_threshold(priv, priv->tx_threshold); + fxgmac_config_rx_threshold(priv, priv->rx_threshold); + fxgmac_config_tx_fifo_size(priv); + fxgmac_config_rx_fifo_size(priv); + fxgmac_config_flow_control_threshold(priv); + fxgmac_config_rx_fep_disable(priv); + fxgmac_config_rx_fup_enable(priv); + fxgmac_enable_mtl_interrupts(priv); + + /* Initialize MAC related features */ + fxgmac_config_mac_address(priv); + fxgmac_config_crc_check_en(priv); + fxgmac_config_rx_mode(priv); + fxgmac_config_tx_flow_control(priv); + fxgmac_config_rx_flow_control(priv); + fxgmac_config_mac_speed(priv); + fxgmac_config_checksum_offload(priv); + fxgmac_config_mmc(priv); + fxgmac_enable_mac_interrupts(priv); + + return 0; +} + static void fxgmac_dismiss_all_int(struct fxgmac_pdata *priv) { struct fxgmac_channel *channel = priv->channel_head; From patchwork Fri Feb 28 10:01:15 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Frank Sae X-Patchwork-Id: 13996106 X-Patchwork-Delegate: kuba@kernel.org Received: from out28-148.mail.aliyun.com (out28-148.mail.aliyun.com [115.124.28.148]) (using TLSv1.2 with cipher 
ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 36A5A25CC89; Fri, 28 Feb 2025 10:01:17 +0000 (UTC) From: Frank Sae To: Jakub Kicinski , Paolo Abeni , Andrew Lunn , Heiner Kallweit , Russell King , "David S . 
Miller" , Eric Dumazet , Frank , netdev@vger.kernel.org Cc: Masahiro Yamada , Parthiban.Veerasooran@microchip.com, linux-kernel@vger.kernel.org, xiaogang.fan@motor-comm.com, fei.zhang@motor-comm.com, hua.sun@motor-comm.com Subject: [PATCH net-next v3 09/14] motorcomm:yt6801: Implement the poll functions Date: Fri, 28 Feb 2025 18:01:15 +0800 Message-Id: <20250228100020.3944-10-Frank.Sae@motor-comm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20250228100020.3944-1-Frank.Sae@motor-comm.com> References: <20250228100020.3944-1-Frank.Sae@motor-comm.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Implement the fxgmac_request_irqs to request legacy or msix irqs, msix irqs. Implement the fxgmac_create_skb to create skb for rx. Implement the fxgmac_isr function to handle legacy irq. Implement the fxgmac_dma_isr function to handle tx and rx irq. Implement the fxgmac_all_poll for legacy irq. Implement the fxgmac_one_poll_rx and fxgmac_one_poll_tx for msix irq. Implement the fxgmac_tx_poll and fxgmac_rx_poll to handle tx and rx. 
Signed-off-by: Frank Sae --- .../ethernet/motorcomm/yt6801/yt6801_desc.c | 298 +++++++++++++ .../ethernet/motorcomm/yt6801/yt6801_net.c | 397 ++++++++++++++++++ 2 files changed, 695 insertions(+) diff --git a/drivers/net/ethernet/motorcomm/yt6801/yt6801_desc.c b/drivers/net/ethernet/motorcomm/yt6801/yt6801_desc.c index 74a0bec45..a4d116f33 100644 --- a/drivers/net/ethernet/motorcomm/yt6801/yt6801_desc.c +++ b/drivers/net/ethernet/motorcomm/yt6801/yt6801_desc.c @@ -271,3 +271,301 @@ int fxgmac_channels_rings_alloc(struct fxgmac_pdata *priv) fxgmac_channels_rings_free(priv); return ret; } + +static void fxgmac_set_buffer_data(struct fxgmac_buffer_data *bd, + struct fxgmac_page_alloc *pa, + unsigned int len) +{ + get_page(pa->pages); + bd->pa = *pa; + + bd->dma_base = pa->pages_dma; + bd->dma_off = pa->pages_offset; + bd->dma_len = len; + + pa->pages_offset += len; + if ((pa->pages_offset + len) > pa->pages_len) { + /* This data descriptor is responsible for unmapping page(s) */ + bd->pa_unmap = *pa; + + /* Get a new allocation next time */ + pa->pages = NULL; + pa->pages_len = 0; + pa->pages_offset = 0; + pa->pages_dma = 0; + } +} + +static int fxgmac_alloc_pages(struct fxgmac_pdata *priv, + struct fxgmac_page_alloc *pa, gfp_t gfp, + int order) +{ + struct page *pages = NULL; + dma_addr_t pages_dma; + + /* Try to obtain pages, decreasing order if necessary */ + gfp |= __GFP_COMP | __GFP_NOWARN; + while (order >= 0) { + pages = alloc_pages(gfp, order); + if (pages) + break; + + order--; + } + + if (!pages) + return -ENOMEM; + + /* Map the pages */ + pages_dma = dma_map_page(priv->dev, pages, 0, PAGE_SIZE << order, + DMA_FROM_DEVICE); + if (dma_mapping_error(priv->dev, pages_dma)) { + put_page(pages); + return -ENOMEM; + } + + pa->pages = pages; + pa->pages_len = PAGE_SIZE << order; + pa->pages_offset = 0; + pa->pages_dma = pages_dma; + + return 0; +} + +#define FXGMAC_SKB_ALLOC_SIZE 512 + +int fxgmac_rx_buffe_map(struct fxgmac_pdata *priv, struct fxgmac_ring *ring, + 
struct fxgmac_desc_data *desc_data) +{ + int ret; + + if (!ring->rx_hdr_pa.pages) { + ret = fxgmac_alloc_pages(priv, &ring->rx_hdr_pa, GFP_ATOMIC, 0); + if (ret) + return ret; + } + /* Set up the header page info */ + fxgmac_set_buffer_data(&desc_data->rx.hdr, &ring->rx_hdr_pa, + priv->rx_buf_size); + + return 0; +} + +void fxgmac_desc_tx_reset(struct fxgmac_desc_data *desc_data) +{ + struct fxgmac_dma_desc *dma_desc = desc_data->dma_desc; + + /* Reset the Tx descriptor + * Set buffer 1 (lo) address to zero + * Set buffer 1 (hi) address to zero + * Reset all other control bits (IC, TTSE, B2L & B1L) + * Reset all other control bits (OWN, CTXT, FD, LD, CPC, CIC, etc) + */ + dma_desc->desc0 = 0; + dma_desc->desc1 = 0; + dma_desc->desc2 = 0; + dma_desc->desc3 = 0; + + /* Make sure ownership is written to the descriptor */ + dma_wmb(); +} + +void fxgmac_desc_rx_reset(struct fxgmac_desc_data *desc_data) +{ + struct fxgmac_dma_desc *dma_desc = desc_data->dma_desc; + dma_addr_t hdr_dma; + + /* Reset the Rx descriptor + * Set buffer 1 (lo) address to header dma address (lo) + * Set buffer 1 (hi) address to header dma address (hi) + * set control bits OWN and INTE + */ + hdr_dma = desc_data->rx.hdr.dma_base + desc_data->rx.hdr.dma_off; + dma_desc->desc0 = cpu_to_le32(lower_32_bits(hdr_dma)); + dma_desc->desc1 = cpu_to_le32(upper_32_bits(hdr_dma)); + dma_desc->desc2 = 0; + dma_desc->desc3 = 0; + FXGMAC_SET_BITS_LE(dma_desc->desc3, RX_NORMAL_DESC3, INTE, 1); + FXGMAC_SET_BITS_LE(dma_desc->desc3, RX_NORMAL_DESC3, BUF2V, 0); + FXGMAC_SET_BITS_LE(dma_desc->desc3, RX_NORMAL_DESC3, BUF1V, 1); + + /* Since the Rx DMA engine is likely running, make sure everything + * is written to the descriptor(s) before setting the OWN bit + * for the descriptor + */ + dma_wmb(); + + FXGMAC_SET_BITS_LE(dma_desc->desc3, RX_NORMAL_DESC3, OWN, 1); + + /* Make sure ownership is written to the descriptor */ + dma_wmb(); +} + +int fxgmac_tx_skb_map(struct fxgmac_channel *channel, struct sk_buff *skb) +{ 
+ struct fxgmac_pdata *priv = channel->priv; + struct fxgmac_ring *ring = channel->tx_ring; + unsigned int start_index, cur_index; + struct fxgmac_desc_data *desc_data; + unsigned int offset, datalen, len; + struct fxgmac_pkt_info *pkt_info; + unsigned int tso, vlan; + dma_addr_t skb_dma; + skb_frag_t *frag; + + offset = 0; + start_index = ring->cur; + cur_index = ring->cur; + pkt_info = &ring->pkt_info; + pkt_info->desc_count = 0; + pkt_info->length = 0; + + tso = FXGMAC_GET_BITS(pkt_info->attr, ATTR_TX, TSO_ENABLE); + vlan = FXGMAC_GET_BITS(pkt_info->attr, ATTR_TX, VLAN_CTAG); + + /* Save space for a context descriptor if needed */ + if ((tso && pkt_info->mss != ring->tx.cur_mss) || + (vlan && pkt_info->vlan_ctag != ring->tx.cur_vlan_ctag)) + cur_index = FXGMAC_GET_ENTRY(cur_index, ring->dma_desc_count); + + desc_data = FXGMAC_GET_DESC_DATA(ring, cur_index); + + if (tso) { + /* Map the TSO header */ + skb_dma = dma_map_single(priv->dev, skb->data, + pkt_info->header_len, DMA_TO_DEVICE); + if (dma_mapping_error(priv->dev, skb_dma)) { + yt_err(priv, "dma_map_single err\n"); + goto err_out; + } + desc_data->skb_dma = skb_dma; + desc_data->skb_dma_len = pkt_info->header_len; + + offset = pkt_info->header_len; + pkt_info->length += pkt_info->header_len; + + cur_index = FXGMAC_GET_ENTRY(cur_index, ring->dma_desc_count); + desc_data = FXGMAC_GET_DESC_DATA(ring, cur_index); + } + + /* Map the (remainder of the) packet */ + for (datalen = skb_headlen(skb) - offset; datalen;) { + len = min_t(unsigned int, datalen, FXGMAC_TX_MAX_BUF_SIZE); + skb_dma = dma_map_single(priv->dev, skb->data + offset, len, + DMA_TO_DEVICE); + if (dma_mapping_error(priv->dev, skb_dma)) { + yt_err(priv, "dma_map_single err\n"); + goto err_out; + } + desc_data->skb_dma = skb_dma; + desc_data->skb_dma_len = len; + + datalen -= len; + offset += len; + pkt_info->length += len; + + cur_index = FXGMAC_GET_ENTRY(cur_index, ring->dma_desc_count); + desc_data = FXGMAC_GET_DESC_DATA(ring, cur_index); + } + 
+ for (u32 i = 0; i < skb_shinfo(skb)->nr_frags; i++) { + frag = &skb_shinfo(skb)->frags[i]; + offset = 0; + + for (datalen = skb_frag_size(frag); datalen;) { + len = min_t(unsigned int, datalen, + FXGMAC_TX_MAX_BUF_SIZE); + skb_dma = skb_frag_dma_map(priv->dev, frag, offset, len, + DMA_TO_DEVICE); + if (dma_mapping_error(priv->dev, skb_dma)) { + yt_err(priv, "skb_frag_dma_map err\n"); + goto err_out; + } + desc_data->skb_dma = skb_dma; + desc_data->skb_dma_len = len; + desc_data->mapped_as_page = 1; + + datalen -= len; + offset += len; + pkt_info->length += len; + + cur_index = FXGMAC_GET_ENTRY(cur_index, + ring->dma_desc_count); + desc_data = FXGMAC_GET_DESC_DATA(ring, cur_index); + } + } + + /* Save the skb address in the last entry. We always have some data + * that has been mapped so desc_data is always advanced past the last + * piece of mapped data - use the entry pointed to by cur_index - 1. + */ + desc_data = FXGMAC_GET_DESC_DATA(ring, (cur_index - 1) & + (ring->dma_desc_count - 1)); + desc_data->skb = skb; + + /* Save the number of descriptor entries used */ + if (start_index <= cur_index) + pkt_info->desc_count = cur_index - start_index; + else + pkt_info->desc_count = + ring->dma_desc_count - start_index + cur_index; + + return pkt_info->desc_count; + +err_out: + while (start_index < cur_index) { + desc_data = FXGMAC_GET_DESC_DATA(ring, start_index); + start_index = + FXGMAC_GET_ENTRY(start_index, ring->dma_desc_count); + fxgmac_desc_data_unmap(priv, desc_data); + } + + return 0; +} + +void fxgmac_dump_rx_desc(struct fxgmac_pdata *priv, struct fxgmac_ring *ring, + unsigned int idx) +{ + struct fxgmac_desc_data *desc_data; + struct fxgmac_dma_desc *dma_desc; + + desc_data = FXGMAC_GET_DESC_DATA(ring, idx); + dma_desc = desc_data->dma_desc; + yt_dbg(priv, + "RX: dma_desc=%p, dma_desc_addr=%pad, RX_NORMAL_DESC[%d RX BY DEVICE] = %08x:%08x:%08x:%08x\n\n", + dma_desc, &desc_data->dma_desc_addr, idx, + le32_to_cpu(dma_desc->desc0), 
le32_to_cpu(dma_desc->desc1), + le32_to_cpu(dma_desc->desc2), le32_to_cpu(dma_desc->desc3)); +} + +void fxgmac_dump_tx_desc(struct fxgmac_pdata *priv, struct fxgmac_ring *ring, + unsigned int idx, unsigned int count, + unsigned int flag) +{ + struct fxgmac_desc_data *desc_data; + + while (count--) { + desc_data = FXGMAC_GET_DESC_DATA(ring, idx); + yt_dbg(priv, + "TX: dma_desc=%p, dma_desc_addr=%pad, TX_NORMAL_DESC[%d %s] = %08x:%08x:%08x:%08x\n", + desc_data->dma_desc, &desc_data->dma_desc_addr, idx, + (flag == 1) ? "QUEUED FOR TX" : "TX BY DEVICE", + le32_to_cpu(desc_data->dma_desc->desc0), + le32_to_cpu(desc_data->dma_desc->desc1), + le32_to_cpu(desc_data->dma_desc->desc2), + le32_to_cpu(desc_data->dma_desc->desc3)); + + idx++; + } +} + +int fxgmac_is_tx_complete(struct fxgmac_dma_desc *dma_desc) +{ + return !FXGMAC_GET_BITS_LE(dma_desc->desc3, TX_NORMAL_DESC3, OWN); +} + +int fxgmac_is_last_desc(struct fxgmac_dma_desc *dma_desc) +{ + /* Rx and Tx share LD bit, so check TDES3.LD bit */ + return FXGMAC_GET_BITS_LE(dma_desc->desc3, TX_NORMAL_DESC3, LD); +} diff --git a/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c b/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c index 14c59cece..ddfdde001 100644 --- a/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c +++ b/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c @@ -102,17 +102,69 @@ static int fxgmac_mdio_register(struct fxgmac_pdata *priv) return 0; } +static unsigned int fxgmac_desc_tx_avail(struct fxgmac_ring *ring) +{ + if (ring->dirty > ring->cur) + return ring->dirty - ring->cur; + else + return ring->dma_desc_count - ring->cur + ring->dirty; +} static void fxgmac_enable_msix_one_irq(struct fxgmac_pdata *priv, u32 int_id) { FXGMAC_IO_WR(priv, MSIX_TBL_MASK + int_id * 16, 0); } +static void fxgmac_disable_msix_one_irq(struct fxgmac_pdata *priv, u32 intid) +{ + FXGMAC_IO_WR(priv, MSIX_TBL_MASK + intid * 16, 1); +} + static void fxgmac_disable_mgm_irq(struct fxgmac_pdata *priv) { FXGMAC_IO_WR_BITS(priv, 
MGMT_INT_CTRL0, INT_MASK, MGMT_INT_CTRL0_INT_MASK_MASK); } +static irqreturn_t fxgmac_isr(int irq, void *data) +{ + struct fxgmac_pdata *priv = data; + u32 val; + + val = FXGMAC_IO_RD(priv, MGMT_INT_CTRL0); + if (!(val & MGMT_INT_CTRL0_INT_STATUS_RXTXMISC)) + return IRQ_NONE; + + /* Restart the device on a Fatal Bus Error */ + for (u32 i = 0; i < priv->channel_count; i++) { + val = FXGMAC_DMA_IO_RD(priv->channel_head + i, DMA_CH_SR); + if (FXGMAC_GET_BITS(val, DMA_CH_SR, FBE)) + schedule_work(&priv->restart_work); + } + + fxgmac_disable_mgm_irq(priv); + napi_schedule_irqoff(&priv->napi); /* Turn on polling */ + return IRQ_HANDLED; +} + +static irqreturn_t fxgmac_dma_isr(int irq, void *data) +{ + struct fxgmac_channel *channel = data; + + if (irq == channel->dma_irq_tx) { + fxgmac_disable_msix_one_irq(channel->priv, MSI_ID_TXQ0); + /* Clear Tx signal */ + FXGMAC_DMA_IO_WR(channel, DMA_CH_SR, BIT(DMA_CH_SR_TI_POS)); + napi_schedule_irqoff(&channel->napi_tx); + return IRQ_HANDLED; + } + + fxgmac_disable_msix_one_irq(channel->priv, channel->queue_index); + /* Clear Rx signal */ + FXGMAC_DMA_IO_WR(channel, DMA_CH_SR, BIT(DMA_CH_SR_RI_POS)); + napi_schedule_irqoff(&channel->napi_rx); + return IRQ_HANDLED; +} + static void napi_disable_del(struct fxgmac_pdata *priv, struct napi_struct *n, u32 flag_pos) { @@ -1880,6 +1932,30 @@ int fxgmac_drv_probe(struct device *dev, struct fxgmac_resources *res) return ret; } +static void fxgmac_dbg_pkt(struct fxgmac_pdata *priv, struct sk_buff *skb, + bool tx_rx) +{ + struct ethhdr *eth = (struct ethhdr *)skb->data; + unsigned char buffer[128]; + + yt_dbg(priv, "\n************** SKB dump ****************\n"); + yt_dbg(priv, "%s, packet of %d bytes\n", (tx_rx ? 
"TX" : "RX"), + skb->len); + yt_dbg(priv, "Dst MAC addr: %pM\n", eth->h_dest); + yt_dbg(priv, "Src MAC addr: %pM\n", eth->h_source); + yt_dbg(priv, "Protocol: %#06x\n", ntohs(eth->h_proto)); + + for (u32 i = 0; i < skb->len; i += 32) { + unsigned int len = min(skb->len - i, 32U); + + hex_dump_to_buffer(&skb->data[i], len, 32, 1, buffer, + sizeof(buffer), false); + yt_dbg(priv, " %#06x: %s\n", i, buffer); + } + + yt_dbg(priv, "\n************** SKB dump ****************\n"); +} + static const struct net_device_ops fxgmac_netdev_ops = { .ndo_open = fxgmac_open, }; @@ -1889,6 +1965,327 @@ const struct net_device_ops *fxgmac_get_netdev_ops(void) return &fxgmac_netdev_ops; } +static void fxgmac_rx_refresh(struct fxgmac_channel *channel) +{ + struct fxgmac_ring *ring = channel->rx_ring; + struct fxgmac_pdata *priv = channel->priv; + struct fxgmac_desc_data *desc_data; + + while (ring->dirty != ring->cur) { + desc_data = FXGMAC_GET_DESC_DATA(ring, ring->dirty); + + /* Reset desc_data values */ + fxgmac_desc_data_unmap(priv, desc_data); + + if (fxgmac_rx_buffe_map(priv, ring, desc_data)) + break; + + fxgmac_desc_rx_reset(desc_data); + ring->dirty = + FXGMAC_GET_ENTRY(ring->dirty, ring->dma_desc_count); + } + + /* Make sure everything is written before the register write */ + wmb(); + + /* Update the Rx Tail Pointer Register with address of + * the last cleaned entry + */ + desc_data = FXGMAC_GET_DESC_DATA(ring, (ring->dirty - 1) & + (ring->dma_desc_count - 1)); + FXGMAC_DMA_IO_WR(channel, DMA_CH_RDTR_LO, + lower_32_bits(desc_data->dma_desc_addr)); +} + +static struct sk_buff *fxgmac_create_skb(struct fxgmac_pdata *priv, + struct napi_struct *napi, + struct fxgmac_desc_data *desc_data, + unsigned int len) +{ + unsigned int copy_len; + struct sk_buff *skb; + u8 *packet; + + skb = napi_alloc_skb(napi, desc_data->rx.hdr.dma_len); + if (!skb) + return NULL; + + /* Start with the header buffer which may contain just the header + * or the header plus data + */ + 
dma_sync_single_range_for_cpu(priv->dev, desc_data->rx.hdr.dma_base, + desc_data->rx.hdr.dma_off, + desc_data->rx.hdr.dma_len, + DMA_FROM_DEVICE); + + packet = page_address(desc_data->rx.hdr.pa.pages) + + desc_data->rx.hdr.pa.pages_offset; + copy_len = min(desc_data->rx.hdr.dma_len, len); + skb_copy_to_linear_data(skb, packet, copy_len); + skb_put(skb, copy_len); + + return skb; +} + +static int fxgmac_tx_poll(struct fxgmac_channel *channel) +{ + struct fxgmac_pdata *priv = channel->priv; + unsigned int cur, tx_packets = 0, tx_bytes = 0; + struct fxgmac_ring *ring = channel->tx_ring; + struct net_device *netdev = priv->netdev; + struct fxgmac_desc_data *desc_data; + struct fxgmac_dma_desc *dma_desc; + struct netdev_queue *txq; + int processed = 0; + + /* Nothing to do if there isn't a Tx ring for this channel */ + if (!ring) + return 0; + + if (ring->cur != ring->dirty && (netif_msg_tx_done(priv))) + yt_dbg(priv, "%s, ring_cur=%d,ring_dirty=%d,qIdx=%d\n", + __func__, ring->cur, ring->dirty, channel->queue_index); + + cur = ring->cur; + + /* Be sure we get ring->cur before accessing descriptor data */ + smp_rmb(); + + txq = netdev_get_tx_queue(netdev, channel->queue_index); + while (ring->dirty != cur) { + desc_data = FXGMAC_GET_DESC_DATA(ring, ring->dirty); + dma_desc = desc_data->dma_desc; + + if (!fxgmac_is_tx_complete(dma_desc)) + break; + + /* Make sure descriptor fields are read after reading + * the OWN bit + */ + dma_rmb(); + + if (netif_msg_tx_done(priv)) + fxgmac_dump_tx_desc(priv, ring, ring->dirty, 1, 0); + + if (fxgmac_is_last_desc(dma_desc)) { + tx_packets += desc_data->tx.packets; + tx_bytes += desc_data->tx.bytes; + } + + /* Free the SKB and reset the descriptor for re-use */ + fxgmac_desc_data_unmap(priv, desc_data); + fxgmac_desc_tx_reset(desc_data); + + processed++; + ring->dirty = + FXGMAC_GET_ENTRY(ring->dirty, ring->dma_desc_count); + } + + if (!processed) + return 0; + + netdev_tx_completed_queue(txq, tx_packets, tx_bytes); + + /* Make sure 
ownership is written to the descriptor */ + smp_wmb(); + if (ring->tx.queue_stopped == 1 && + (fxgmac_desc_tx_avail(ring) > FXGMAC_TX_DESC_MIN_FREE)) { + ring->tx.queue_stopped = 0; + netif_tx_wake_queue(txq); + } + + return processed; +} + +static int fxgmac_one_poll_tx(struct napi_struct *napi, int budget) +{ + struct fxgmac_channel *channel = + container_of(napi, struct fxgmac_channel, napi_tx); + struct fxgmac_pdata *priv = channel->priv; + int ret; + + ret = fxgmac_tx_poll(channel); + if (napi_complete_done(napi, 0)) + fxgmac_enable_msix_one_irq(priv, MSI_ID_TXQ0); + + return ret; +} + +static unsigned int fxgmac_desc_rx_dirty(struct fxgmac_ring *ring) +{ + unsigned int dirty; + + if (ring->dirty <= ring->cur) + dirty = ring->cur - ring->dirty; + else + dirty = ring->dma_desc_count - ring->dirty + ring->cur; + + return dirty; +} + +static int fxgmac_rx_poll(struct fxgmac_channel *channel, int budget) +{ + struct fxgmac_pdata *priv = channel->priv; + struct fxgmac_ring *ring = channel->rx_ring; + struct net_device *netdev = priv->netdev; + u32 context_next, context, incomplete; + struct fxgmac_desc_data *desc_data; + struct fxgmac_pkt_info *pkt_info; + struct napi_struct *napi; + u32 len, max_len; + int packet_count = 0; + + struct sk_buff *skb; + + /* Nothing to do if there isn't a Rx ring for this channel */ + if (!ring) + return 0; + + napi = (priv->per_channel_irq) ? 
&channel->napi_rx : &priv->napi; + pkt_info = &ring->pkt_info; + + while (packet_count < budget) { + memset(pkt_info, 0, sizeof(*pkt_info)); + skb = NULL; + len = 0; + +read_again: + desc_data = FXGMAC_GET_DESC_DATA(ring, ring->cur); + + if (fxgmac_desc_rx_dirty(ring) > FXGMAC_RX_DESC_MAX_DIRTY) + fxgmac_rx_refresh(channel); + + if (fxgmac_dev_read(channel)) + break; + + ring->cur = FXGMAC_GET_ENTRY(ring->cur, ring->dma_desc_count); + incomplete = FXGMAC_GET_BITS(pkt_info->attr, ATTR_RX, INCOMPLETE); + context_next = FXGMAC_GET_BITS(pkt_info->attr, ATTR_RX, CONTEXT_NEXT); + context = FXGMAC_GET_BITS(pkt_info->attr, ATTR_RX, CONTEXT); + + if (incomplete || context_next) + goto read_again; + + if (pkt_info->errors) { + dev_kfree_skb(skb); + priv->netdev->stats.rx_dropped++; + yt_err(priv, "error in received packet\n"); + goto next_packet; + } + + if (!context) { + len = desc_data->rx.len; + if (len == 0) { + if (net_ratelimit()) + yt_err(priv, + "A packet of length 0 was received\n"); + priv->netdev->stats.rx_length_errors++; + priv->netdev->stats.rx_dropped++; + goto next_packet; + } + + if (len && !skb) { + skb = fxgmac_create_skb(priv, napi, desc_data, + len); + if (unlikely(!skb)) { + if (net_ratelimit()) + yt_err(priv, + "create skb err\n"); + priv->netdev->stats.rx_dropped++; + goto next_packet; + } + } + max_len = netdev->mtu + ETH_HLEN; + if (!(netdev->features & NETIF_F_HW_VLAN_CTAG_RX) && + skb->protocol == htons(ETH_P_8021Q)) + max_len += VLAN_HLEN; + + if (len > max_len) { + if (net_ratelimit()) + yt_err(priv, + "len %d larger than max size %d\n", + len, max_len); + priv->netdev->stats.rx_length_errors++; + priv->netdev->stats.rx_dropped++; + dev_kfree_skb(skb); + goto next_packet; + } + } + + if (!skb) { + priv->netdev->stats.rx_dropped++; + goto next_packet; + } + + if (netif_msg_pktdata(priv)) + fxgmac_dbg_pkt(priv, skb, false); + + skb_checksum_none_assert(skb); + if (netdev->features & NETIF_F_RXCSUM) + skb->ip_summed = CHECKSUM_UNNECESSARY; + + if 
(FXGMAC_GET_BITS(pkt_info->attr, ATTR_RX, VLAN_CTAG)) + __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), + pkt_info->vlan_ctag); + + if (FXGMAC_GET_BITS(pkt_info->attr, ATTR_RX, RSS_HASH)) + skb_set_hash(skb, pkt_info->rss_hash, + pkt_info->rss_hash_type); + + skb->dev = netdev; + skb->protocol = eth_type_trans(skb, netdev); + skb_record_rx_queue(skb, channel->queue_index); + napi_gro_receive(napi, skb); + +next_packet: + packet_count++; + priv->netdev->stats.rx_packets++; + priv->netdev->stats.rx_bytes += len; + } + + return packet_count; +} + +static int fxgmac_one_poll_rx(struct napi_struct *napi, int budget) +{ + struct fxgmac_channel *channel = + container_of(napi, struct fxgmac_channel, napi_rx); + int processed = fxgmac_rx_poll(channel, budget); + + if (processed < budget && (napi_complete_done(napi, processed))) + fxgmac_enable_msix_one_irq(channel->priv, channel->queue_index); + + return processed; +} + +static int fxgmac_all_poll(struct napi_struct *napi, int budget) +{ + struct fxgmac_channel *channel; + struct fxgmac_pdata *priv; + int processed = 0; + + priv = container_of(napi, struct fxgmac_pdata, napi); + do { + channel = priv->channel_head; + /* Only support 1 tx channel, poll ch 0. 
*/ + fxgmac_tx_poll(priv->channel_head + 0); + for (u32 i = 0; i < priv->channel_count; i++, channel++) + processed += fxgmac_rx_poll(channel, budget); + } while (false); + + /* If we processed everything, we are done */ + if (processed < budget) { + /* Turn off polling */ + if (napi_complete_done(napi, processed)) + fxgmac_enable_mgm_irq(priv); + } + + if ((processed) && (netif_msg_rx_status(priv))) + yt_dbg(priv, "%s, received : %d\n", __func__, processed); + + return processed; +} + static void napi_add_enable(struct fxgmac_pdata *priv, struct napi_struct *napi, int (*poll)(struct napi_struct *, int), u32 flag_pos)

From patchwork Fri Feb 28 10:01:23 2025 X-Patchwork-Submitter: Frank Sae X-Patchwork-Id: 13996150 X-Patchwork-Delegate: kuba@kernel.org From: Frank Sae To: Jakub Kicinski , Paolo Abeni , Andrew Lunn , Heiner Kallweit , Russell King , "David S . Miller" , Eric Dumazet , Frank , netdev@vger.kernel.org Cc: Masahiro Yamada , Parthiban.Veerasooran@microchip.com, linux-kernel@vger.kernel.org, xiaogang.fan@motor-comm.com, fei.zhang@motor-comm.com, hua.sun@motor-comm.com Subject: [PATCH net-next v3 10/14] motorcomm:yt6801: Implement .ndo_start_xmit function Date: Fri, 28 Feb 2025 18:01:23 +0800 Message-Id: <20250228100020.3944-11-Frank.Sae@motor-comm.com> In-Reply-To: <20250228100020.3944-1-Frank.Sae@motor-comm.com> References: <20250228100020.3944-1-Frank.Sae@motor-comm.com>

Implement the .ndo_start_xmit function: prepare preliminary packet info for TX, set up TSO and VLAN handling, map the tx skb, and finally call the dev_xmit function to send the data.
Signed-off-by: Frank Sae --- .../ethernet/motorcomm/yt6801/yt6801_net.c | 368 ++++++++++++++++++ 1 file changed, 368 insertions(+) diff --git a/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c b/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c index ddfdde001..74af6bcd4 100644 --- a/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c +++ b/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c @@ -102,6 +102,23 @@ static int fxgmac_mdio_register(struct fxgmac_pdata *priv) return 0; } +static void fxgmac_tx_start_xmit(struct fxgmac_channel *channel, + struct fxgmac_ring *ring) +{ + struct fxgmac_desc_data *desc_data; + + wmb(); /* Make sure everything is written before the register write */ + + /* Issue a poll command to Tx DMA by writing address + * of next immediate free descriptor + */ + desc_data = FXGMAC_GET_DESC_DATA(ring, ring->cur); + FXGMAC_DMA_IO_WR(channel, DMA_CH_TDTR_LO, + lower_32_bits(desc_data->dma_desc_addr)); + + ring->tx.xmit_more = 0; +} + static unsigned int fxgmac_desc_tx_avail(struct fxgmac_ring *ring) { if (ring->dirty > ring->cur) @@ -109,6 +126,30 @@ static unsigned int fxgmac_desc_tx_avail(struct fxgmac_ring *ring) else return ring->dma_desc_count - ring->cur + ring->dirty; } + +static netdev_tx_t fxgmac_maybe_stop_tx_queue(struct fxgmac_channel *channel, + struct fxgmac_ring *ring, + unsigned int count) +{ + struct fxgmac_pdata *priv = channel->priv; + + if (count > fxgmac_desc_tx_avail(ring)) { + yt_err(priv, "Tx queue stopped, not enough descriptors available\n"); + netif_stop_subqueue(priv->netdev, channel->queue_index); + ring->tx.queue_stopped = 1; + + /* If we haven't notified the hardware because of xmit_more + * support, tell it now + */ + if (ring->tx.xmit_more) + fxgmac_tx_start_xmit(channel, ring); + + return NETDEV_TX_BUSY; + } + + return NETDEV_TX_OK; +} + static void fxgmac_enable_msix_one_irq(struct fxgmac_pdata *priv, u32 int_id) { FXGMAC_IO_WR(priv, MSIX_TBL_MASK + int_id * 16, 0); @@ -1956,8 +1997,335 @@ static void 
fxgmac_dbg_pkt(struct fxgmac_pdata *priv, struct sk_buff *skb, yt_dbg(priv, "\n************** SKB dump ****************\n"); } +static void fxgmac_dev_xmit(struct fxgmac_channel *channel) +{ + struct fxgmac_pdata *priv = channel->priv; + struct fxgmac_ring *ring = channel->tx_ring; + unsigned int tso_context, vlan_context; + struct fxgmac_desc_data *desc_data; + struct fxgmac_dma_desc *dma_desc; + struct fxgmac_pkt_info *pkt_info; + unsigned int csum, tso, vlan; + int i, start_index = ring->cur; + int cur_index = ring->cur; + + pkt_info = &ring->pkt_info; + csum = FXGMAC_GET_BITS(pkt_info->attr, ATTR_TX, CSUM_ENABLE); + tso = FXGMAC_GET_BITS(pkt_info->attr, ATTR_TX, TSO_ENABLE); + vlan = FXGMAC_GET_BITS(pkt_info->attr, ATTR_TX, VLAN_CTAG); + + if (tso && pkt_info->mss != ring->tx.cur_mss) + tso_context = 1; + else + tso_context = 0; + + if (vlan && pkt_info->vlan_ctag != ring->tx.cur_vlan_ctag) + vlan_context = 1; + else + vlan_context = 0; + + desc_data = FXGMAC_GET_DESC_DATA(ring, cur_index); + dma_desc = desc_data->dma_desc; + + /* Create a context descriptor if this is a TSO pkt_info */ + if (tso_context) { + /* Set the MSS size */ + FXGMAC_SET_BITS_LE(dma_desc->desc2, TX_CONTEXT_DESC2, MSS, + pkt_info->mss); + + /* Mark it as a CONTEXT descriptor */ + FXGMAC_SET_BITS_LE(dma_desc->desc3, TX_CONTEXT_DESC3, CTXT, 1); + + /* Indicate this descriptor contains the MSS */ + FXGMAC_SET_BITS_LE(dma_desc->desc3, TX_CONTEXT_DESC3, TCMSSV, + 1); + + ring->tx.cur_mss = pkt_info->mss; + } + + if (vlan_context) { + /* Mark it as a CONTEXT descriptor */ + FXGMAC_SET_BITS_LE(dma_desc->desc3, TX_CONTEXT_DESC3, CTXT, 1); + + /* Set the VLAN tag */ + FXGMAC_SET_BITS_LE(dma_desc->desc3, TX_CONTEXT_DESC3, VT, + pkt_info->vlan_ctag); + + /* Indicate this descriptor contains the VLAN tag */ + FXGMAC_SET_BITS_LE(dma_desc->desc3, TX_CONTEXT_DESC3, VLTV, 1); + + ring->tx.cur_vlan_ctag = pkt_info->vlan_ctag; + } + if (tso_context || vlan_context) { + cur_index = 
FXGMAC_GET_ENTRY(cur_index, ring->dma_desc_count); + desc_data = FXGMAC_GET_DESC_DATA(ring, cur_index); + dma_desc = desc_data->dma_desc; + } + + /* Update buffer address (for TSO this is the header) */ + dma_desc->desc0 = cpu_to_le32(lower_32_bits(desc_data->skb_dma)); + dma_desc->desc1 = cpu_to_le32(upper_32_bits(desc_data->skb_dma)); + + /* Update the buffer length */ + FXGMAC_SET_BITS_LE(dma_desc->desc2, TX_NORMAL_DESC2, HL_B1L, + desc_data->skb_dma_len); + + /* VLAN tag insertion check */ + if (vlan) + FXGMAC_SET_BITS_LE(dma_desc->desc2, TX_NORMAL_DESC2, VTIR, + TX_NORMAL_DESC2_VLAN_INSERT); + + /* Timestamp enablement check */ + if (FXGMAC_GET_BITS(pkt_info->attr, ATTR_TX, PTP)) + FXGMAC_SET_BITS_LE(dma_desc->desc2, TX_NORMAL_DESC2, TTSE, 1); + + /* Mark it as First Descriptor */ + FXGMAC_SET_BITS_LE(dma_desc->desc3, TX_NORMAL_DESC3, FD, 1); + + /* Mark it as a NORMAL descriptor */ + FXGMAC_SET_BITS_LE(dma_desc->desc3, TX_NORMAL_DESC3, CTXT, 0); + + /* Set OWN bit if not the first descriptor */ + if (cur_index != start_index) + FXGMAC_SET_BITS_LE(dma_desc->desc3, TX_NORMAL_DESC3, OWN, 1); + + if (tso) { + /* Enable TSO */ + FXGMAC_SET_BITS_LE(dma_desc->desc3, TX_NORMAL_DESC3, TSE, 1); + FXGMAC_SET_BITS_LE(dma_desc->desc3, TX_NORMAL_DESC3, TCPPL, + pkt_info->tcp_payload_len); + FXGMAC_SET_BITS_LE(dma_desc->desc3, TX_NORMAL_DESC3, TCPHDRLEN, + pkt_info->tcp_header_len / 4); + } else { + /* Enable CRC and Pad Insertion */ + FXGMAC_SET_BITS_LE(dma_desc->desc3, TX_NORMAL_DESC3, CPC, 0); + + /* Enable HW CSUM */ + if (csum) + FXGMAC_SET_BITS_LE(dma_desc->desc3, TX_NORMAL_DESC3, + CIC, 0x3); + + /* Set the total length to be transmitted */ + FXGMAC_SET_BITS_LE(dma_desc->desc3, TX_NORMAL_DESC3, FL, + pkt_info->length); + } + + if (start_index <= cur_index) + i = cur_index - start_index + 1; + else + i = ring->dma_desc_count - start_index + cur_index; + + for (; i < pkt_info->desc_count; i++) { + cur_index = FXGMAC_GET_ENTRY(cur_index, ring->dma_desc_count); + 
desc_data = FXGMAC_GET_DESC_DATA(ring, cur_index); + dma_desc = desc_data->dma_desc; + + /* Update buffer address */ + dma_desc->desc0 = + cpu_to_le32(lower_32_bits(desc_data->skb_dma)); + dma_desc->desc1 = + cpu_to_le32(upper_32_bits(desc_data->skb_dma)); + + /* Update the buffer length */ + FXGMAC_SET_BITS_LE(dma_desc->desc2, TX_NORMAL_DESC2, HL_B1L, + desc_data->skb_dma_len); + + /* Set OWN bit */ + FXGMAC_SET_BITS_LE(dma_desc->desc3, TX_NORMAL_DESC3, OWN, 1); + + /* Mark it as NORMAL descriptor */ + FXGMAC_SET_BITS_LE(dma_desc->desc3, TX_NORMAL_DESC3, CTXT, 0); + + /* Enable HW CSUM */ + if (csum) + FXGMAC_SET_BITS_LE(dma_desc->desc3, TX_NORMAL_DESC3, + CIC, 0x3); + } + + /* Set LAST bit for the last descriptor */ + FXGMAC_SET_BITS_LE(dma_desc->desc3, TX_NORMAL_DESC3, LD, 1); + + FXGMAC_SET_BITS_LE(dma_desc->desc2, TX_NORMAL_DESC2, IC, 1); + + /* Save the Tx info to report back during cleanup */ + desc_data->tx.packets = pkt_info->tx_packets; + desc_data->tx.bytes = pkt_info->tx_bytes; + + /* In case the Tx DMA engine is running, make sure everything + * is written to the descriptor(s) before setting the OWN bit + * for the first descriptor + */ + dma_wmb(); + + /* Set OWN bit for the first descriptor */ + desc_data = FXGMAC_GET_DESC_DATA(ring, start_index); + dma_desc = desc_data->dma_desc; + FXGMAC_SET_BITS_LE(dma_desc->desc3, TX_NORMAL_DESC3, OWN, 1); + + if (netif_msg_tx_queued(priv)) + fxgmac_dump_tx_desc(priv, ring, start_index, + pkt_info->desc_count, 1); + + smp_wmb(); /* Make sure ownership is written to the descriptor */ + + ring->cur = FXGMAC_GET_ENTRY(cur_index, ring->dma_desc_count); + fxgmac_tx_start_xmit(channel, ring); +} + +static void fxgmac_prep_vlan(struct sk_buff *skb, + struct fxgmac_pkt_info *pkt_info) +{ + if (skb_vlan_tag_present(skb)) + pkt_info->vlan_ctag = skb_vlan_tag_get(skb); +} + +static int fxgmac_prep_tso(struct fxgmac_pdata *priv, struct sk_buff *skb, + struct fxgmac_pkt_info *pkt_info) +{ + int ret; + + if 
(!FXGMAC_GET_BITS(pkt_info->attr, ATTR_TX, TSO_ENABLE)) + return 0; + + ret = skb_cow_head(skb, 0); + if (ret) + return ret; + + pkt_info->header_len = skb_transport_offset(skb) + tcp_hdrlen(skb); + pkt_info->tcp_header_len = tcp_hdrlen(skb); + pkt_info->tcp_payload_len = skb->len - pkt_info->header_len; + pkt_info->mss = skb_shinfo(skb)->gso_size; + + /* Update the number of packets that will ultimately be transmitted + * along with the extra bytes for each extra packet + */ + pkt_info->tx_packets = skb_shinfo(skb)->gso_segs; + pkt_info->tx_bytes += (pkt_info->tx_packets - 1) * pkt_info->header_len; + + return 0; +} + +static int fxgmac_is_tso(struct sk_buff *skb) +{ + if (skb->ip_summed != CHECKSUM_PARTIAL) + return 0; + + if (!skb_is_gso(skb)) + return 0; + + return 1; +} + +static void fxgmac_prep_tx_pkt(struct fxgmac_pdata *priv, + struct fxgmac_ring *ring, struct sk_buff *skb, + struct fxgmac_pkt_info *pkt_info) +{ + u32 len, context_desc = 0; + + pkt_info->skb = skb; + pkt_info->desc_count = 0; + pkt_info->tx_packets = 1; + pkt_info->tx_bytes = skb->len; + + if (fxgmac_is_tso(skb)) { + /* TSO requires an extra descriptor if mss is different */ + if (skb_shinfo(skb)->gso_size != ring->tx.cur_mss) { + context_desc = 1; + pkt_info->desc_count++; + } + + /* TSO requires an extra descriptor for TSO header */ + pkt_info->desc_count++; + FXGMAC_SET_BITS(pkt_info->attr, ATTR_TX, TSO_ENABLE, 1); + FXGMAC_SET_BITS(pkt_info->attr, ATTR_TX, CSUM_ENABLE, 1); + } else if (skb->ip_summed == CHECKSUM_PARTIAL) { + FXGMAC_SET_BITS(pkt_info->attr, ATTR_TX, CSUM_ENABLE, 1); + } + + if (skb_vlan_tag_present(skb)) { + /* VLAN requires an extra descriptor if tag is different */ + if (skb_vlan_tag_get(skb) != ring->tx.cur_vlan_ctag) + /* We can share with the TSO context descriptor */ + if (!context_desc) + pkt_info->desc_count++; + + FXGMAC_SET_BITS(pkt_info->attr, ATTR_TX, VLAN_CTAG, 1); + } + + for (len = skb_headlen(skb); len;) { + pkt_info->desc_count++; + len -= 
min_t(unsigned int, len, FXGMAC_TX_MAX_BUF_SIZE); + } + + for (u32 i = 0; i < skb_shinfo(skb)->nr_frags; i++) + for (len = skb_frag_size(&skb_shinfo(skb)->frags[i]); len;) { + pkt_info->desc_count++; + len -= min_t(unsigned int, len, FXGMAC_TX_MAX_BUF_SIZE); + } +} + +static netdev_tx_t fxgmac_xmit(struct sk_buff *skb, struct net_device *netdev) +{ + struct fxgmac_pdata *priv = netdev_priv(netdev); + struct fxgmac_pkt_info *tx_pkt_info; + struct fxgmac_channel *channel; + struct netdev_queue *txq; + struct fxgmac_ring *ring; + int ret; + + channel = priv->channel_head + skb->queue_mapping; + txq = netdev_get_tx_queue(netdev, channel->queue_index); + ring = channel->tx_ring; + tx_pkt_info = &ring->pkt_info; + + if (skb->len == 0) { + yt_err(priv, "empty skb received from stack\n"); + dev_kfree_skb_any(skb); + return NETDEV_TX_OK; + } + + /* Prepare preliminary packet info for TX */ + memset(tx_pkt_info, 0, sizeof(*tx_pkt_info)); + fxgmac_prep_tx_pkt(priv, ring, skb, tx_pkt_info); + + /* Check that there are enough descriptors available */ + ret = fxgmac_maybe_stop_tx_queue(channel, ring, + tx_pkt_info->desc_count); + if (ret == NETDEV_TX_BUSY) + return ret; + + ret = fxgmac_prep_tso(priv, skb, tx_pkt_info); + if (ret < 0) { + yt_err(priv, "error processing TSO packet\n"); + dev_kfree_skb_any(skb); + return NETDEV_TX_OK; + } + fxgmac_prep_vlan(skb, tx_pkt_info); + + if (!fxgmac_tx_skb_map(channel, skb)) { + dev_kfree_skb_any(skb); + yt_err(priv, "xmit, map tx skb err\n"); + return NETDEV_TX_OK; + } + + /* Report on the actual number of bytes (to be) sent */ + netdev_tx_sent_queue(txq, tx_pkt_info->tx_bytes); + + /* Configure required descriptor fields for transmission */ + fxgmac_dev_xmit(channel); + + if (netif_msg_pktdata(priv)) + fxgmac_dbg_pkt(priv, skb, true); + + /* Stop the queue in advance if there may not be enough descriptors */ + fxgmac_maybe_stop_tx_queue(channel, ring, FXGMAC_TX_MAX_DESC_NR); + + return NETDEV_TX_OK; +} + static const struct 
net_device_ops fxgmac_netdev_ops = { .ndo_open = fxgmac_open, + .ndo_start_xmit = fxgmac_xmit, }; const struct net_device_ops *fxgmac_get_netdev_ops(void)

From patchwork Fri Feb 28 10:01:20 2025 X-Patchwork-Submitter: Frank Sae X-Patchwork-Id: 13996129 X-Patchwork-Delegate: kuba@kernel.org Received: by smtp.aliyun-inc.com; Fri, 28 Feb 2025 18:00:40
+0800 From: Frank Sae To: Jakub Kicinski , Paolo Abeni , Andrew Lunn , Heiner Kallweit , Russell King , "David S . Miller" , Eric Dumazet , Frank , netdev@vger.kernel.org Cc: Masahiro Yamada , Parthiban.Veerasooran@microchip.com, linux-kernel@vger.kernel.org, xiaogang.fan@motor-comm.com, fei.zhang@motor-comm.com, hua.sun@motor-comm.com Subject: [PATCH net-next v3 11/14] motorcomm:yt6801: Implement some net_device_ops function Date: Fri, 28 Feb 2025 18:01:20 +0800 Message-Id: <20250228100020.3944-12-Frank.Sae@motor-comm.com> In-Reply-To: <20250228100020.3944-1-Frank.Sae@motor-comm.com> References: <20250228100020.3944-1-Frank.Sae@motor-comm.com> X-Patchwork-Delegate: kuba@kernel.org

Implement the following net_device_ops callbacks: .ndo_stop, .ndo_tx_timeout, .ndo_validate_addr and .ndo_poll_controller.

Signed-off-by: Frank Sae --- .../ethernet/motorcomm/yt6801/yt6801_net.c | 173 ++++++++++++++++++ 1 file changed, 173 insertions(+) diff --git a/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c b/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c index 74af6bcd4..d6c1c0fd4 100644 --- a/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c +++ b/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c @@ -1475,6 +1475,68 @@ static int fxgmac_open(struct net_device *netdev) return ret; } +static int fxgmac_close(struct net_device *netdev) +{ + struct fxgmac_pdata *priv = netdev_priv(netdev); + + mutex_lock(&priv->mutex); + fxgmac_stop(priv); /* Stop the device */ + priv->dev_state = FXGMAC_DEV_CLOSE; + fxgmac_channels_rings_free(priv); /* Free the channels and rings */ + fxgmac_phy_reset(priv); + phy_disconnect(priv->phydev); + mutex_unlock(&priv->mutex); + return 0; +} + +static void fxgmac_dump_state(struct fxgmac_pdata *priv) +{ + struct fxgmac_channel *channel = priv->channel_head; + struct fxgmac_ring *ring =
&channel->tx_ring[0]; + + yt_err(priv, "Tx descriptor info:\n"); + yt_err(priv, "Tx cur = 0x%x\n", ring->cur); + yt_err(priv, "Tx dirty = 0x%x\n", ring->dirty); + yt_err(priv, "Tx dma_desc_head = %pad\n", &ring->dma_desc_head); + yt_err(priv, "Tx desc_data_head = %pad\n", &ring->desc_data_head); + + for (u32 i = 0; i < priv->channel_count; i++, channel++) { + ring = &channel->rx_ring[0]; + yt_err(priv, "Rx[%d] descriptor info:\n", i); + yt_err(priv, "Rx cur = 0x%x\n", ring->cur); + yt_err(priv, "Rx dirty = 0x%x\n", ring->dirty); + yt_err(priv, "Rx dma_desc_head = %pad\n", &ring->dma_desc_head); + yt_err(priv, "Rx desc_data_head = %pad\n", + &ring->desc_data_head); + } + + yt_err(priv, "Device Registers:\n"); + yt_err(priv, "MAC_ISR = %08x\n", FXGMAC_MAC_IO_RD(priv, MAC_ISR)); + yt_err(priv, "MAC_IER = %08x\n", FXGMAC_MAC_IO_RD(priv, MAC_IER)); + yt_err(priv, "MMC_RISR = %08x\n", FXGMAC_MAC_IO_RD(priv, MMC_RISR)); + yt_err(priv, "MMC_RIER = %08x\n", FXGMAC_MAC_IO_RD(priv, MMC_RIER)); + yt_err(priv, "MMC_TISR = %08x\n", FXGMAC_MAC_IO_RD(priv, MMC_TISR)); + yt_err(priv, "MMC_TIER = %08x\n", FXGMAC_MAC_IO_RD(priv, MMC_TIER)); + + yt_err(priv, "EPHY_CTRL = %04x\n", FXGMAC_IO_RD(priv, EPHY_CTRL)); + yt_err(priv, "MGMT_INT_CTRL0 = %04x\n", + FXGMAC_IO_RD(priv, MGMT_INT_CTRL0)); + yt_err(priv, "MSIX_TBL_MASK = %04x\n", + FXGMAC_IO_RD(priv, MSIX_TBL_MASK)); + + yt_err(priv, "Dump nonstick regs:\n"); + for (u32 i = GLOBAL_CTRL0; i < MSI_PBA; i += 4) + yt_err(priv, "[%d] = %04x\n", i / 4, FXGMAC_IO_RD(priv, i)); +} + +static void fxgmac_tx_timeout(struct net_device *netdev, unsigned int unused) +{ + struct fxgmac_pdata *priv = netdev_priv(netdev); + + fxgmac_dump_state(priv); + schedule_work(&priv->restart_work); +} + #define EFUSE_FISRT_UPDATE_ADDR 255 #define EFUSE_SECOND_UPDATE_ADDR 209 #define EFUSE_MAX_ENTRY 39 @@ -2323,9 +2385,33 @@ static netdev_tx_t fxgmac_xmit(struct sk_buff *skb, struct net_device *netdev) return NETDEV_TX_OK; } +#ifdef CONFIG_NET_POLL_CONTROLLER 
+static void fxgmac_poll_controller(struct net_device *netdev) +{ + struct fxgmac_pdata *priv = netdev_priv(netdev); + struct fxgmac_channel *channel; + + if (priv->per_channel_irq) { + channel = priv->channel_head; + for (u32 i = 0; i < priv->channel_count; i++, channel++) + fxgmac_dma_isr(channel->dma_irq_rx, channel); + } else { + disable_irq(priv->dev_irq); + fxgmac_isr(priv->dev_irq, priv); + enable_irq(priv->dev_irq); + } +} +#endif /* CONFIG_NET_POLL_CONTROLLER */ + static const struct net_device_ops fxgmac_netdev_ops = { .ndo_open = fxgmac_open, + .ndo_stop = fxgmac_close, .ndo_start_xmit = fxgmac_xmit, + .ndo_tx_timeout = fxgmac_tx_timeout, + .ndo_validate_addr = eth_validate_addr, +#ifdef CONFIG_NET_POLL_CONTROLLER + .ndo_poll_controller = fxgmac_poll_controller, +#endif }; const struct net_device_ops *fxgmac_get_netdev_ops(void) @@ -2479,6 +2565,93 @@ static int fxgmac_one_poll_tx(struct napi_struct *napi, int budget) return ret; } +static int fxgmac_dev_read(struct fxgmac_channel *channel) +{ + struct fxgmac_pdata *priv = channel->priv; + struct fxgmac_ring *ring = channel->rx_ring; + struct net_device *netdev = priv->netdev; + static unsigned int cnt_incomplete; + struct fxgmac_desc_data *desc_data; + struct fxgmac_dma_desc *dma_desc; + struct fxgmac_pkt_info *pkt_info; + u32 ipce, iphe, rxparser; + unsigned int err, etlt; + + desc_data = FXGMAC_GET_DESC_DATA(ring, ring->cur); + dma_desc = desc_data->dma_desc; + pkt_info = &ring->pkt_info; + + /* Check for data availability */ + if (FXGMAC_GET_BITS_LE(dma_desc->desc3, RX_NORMAL_DESC3, OWN)) + return 1; + + /* Make sure descriptor fields are read after reading the OWN bit */ + dma_rmb(); + + if (netif_msg_rx_status(priv)) + fxgmac_dump_rx_desc(priv, ring, ring->cur); + + /* Normal Descriptor, be sure Context Descriptor bit is off */ + FXGMAC_SET_BITS(pkt_info->attr, ATTR_RX, CONTEXT, 0); + + /* Indicate if a Context Descriptor is next */ + /* Get the header length */ + if 
(FXGMAC_GET_BITS_LE(dma_desc->desc3, RX_NORMAL_DESC3, FD)) { + desc_data->rx.hdr_len = FXGMAC_GET_BITS_LE(dma_desc->desc2, + RX_NORMAL_DESC2, HL); + } + + /* Get the pkt_info length */ + desc_data->rx.len = + FXGMAC_GET_BITS_LE(dma_desc->desc3, RX_NORMAL_DESC3, PL); + + if (!FXGMAC_GET_BITS_LE(dma_desc->desc3, RX_NORMAL_DESC3, LD)) { + /* Not all the data has been transferred for this pkt_info */ + FXGMAC_SET_BITS(pkt_info->attr, ATTR_RX, INCOMPLETE, 1); + cnt_incomplete++; + return 0; + } + + if ((cnt_incomplete) && netif_msg_rx_status(priv)) + yt_dbg(priv, "%s, rx back to normal and incomplete cnt=%u\n", + __func__, cnt_incomplete); + cnt_incomplete = 0; + + /* This is the last of the data for this pkt_info */ + FXGMAC_SET_BITS(pkt_info->attr, ATTR_RX, INCOMPLETE, 0); + + /* Set checksum done indicator as appropriate */ + if (netdev->features & NETIF_F_RXCSUM) { + ipce = FXGMAC_GET_BITS_LE(dma_desc->desc1, RX_NORMAL_DESC1_WB, + IPCE); + iphe = FXGMAC_GET_BITS_LE(dma_desc->desc1, RX_NORMAL_DESC1_WB, + IPHE); + if (!ipce && !iphe) + FXGMAC_SET_BITS(pkt_info->attr, ATTR_RX, CSUM_DONE, 1); + else + return 0; + } + + /* Check for errors (only valid in last descriptor) */ + err = FXGMAC_GET_BITS_LE(dma_desc->desc3, RX_NORMAL_DESC3, ES); + rxparser = FXGMAC_GET_BITS_LE(dma_desc->desc2, RX_NORMAL_DESC2_WB, + RAPARSER); + /* Error or incomplete parsing due to ECC error */ + if (err || rxparser == 0x7) { + FXGMAC_SET_BITS(pkt_info->errors, ERRORS_RX, FRAME, 1); + return 0; + } + + etlt = FXGMAC_GET_BITS_LE(dma_desc->desc3, RX_NORMAL_DESC3, ETLT); + if (etlt == 0x4 && (netdev->features & NETIF_F_HW_VLAN_CTAG_RX)) { + FXGMAC_SET_BITS(pkt_info->attr, ATTR_RX, VLAN_CTAG, 1); + pkt_info->vlan_ctag = FXGMAC_GET_BITS_LE(dma_desc->desc0, + RX_NORMAL_DESC0, OVT); + } + + return 0; +} + static unsigned int fxgmac_desc_rx_dirty(struct fxgmac_ring *ring) { unsigned int dirty; From patchwork Fri Feb 28 10:00:18 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Frank Sae X-Patchwork-Id: 13996105 X-Patchwork-Delegate: kuba@kernel.org Received: by smtp.aliyun-inc.com; Fri, 28 Feb 2025 18:00:40 +0800 From: Frank Sae To: Jakub Kicinski , Paolo Abeni , Andrew Lunn , Heiner Kallweit , Russell King , "David S .
Miller" , Eric Dumazet , Frank , netdev@vger.kernel.org Cc: Masahiro Yamada , Parthiban.Veerasooran@microchip.com, linux-kernel@vger.kernel.org, xiaogang.fan@motor-comm.com, fei.zhang@motor-comm.com, hua.sun@motor-comm.com Subject: [PATCH net-next v3 12/14] motorcomm:yt6801: Implement pci_driver suspend and resume Date: Fri, 28 Feb 2025 18:00:18 +0800 Message-Id: <20250228100020.3944-13-Frank.Sae@motor-comm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20250228100020.3944-1-Frank.Sae@motor-comm.com> References: <20250228100020.3944-1-Frank.Sae@motor-comm.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Implement the pci_driver suspend function to enable the device to sleep, and implement the resume function to enable the device to resume operation. Signed-off-by: Frank Sae --- .../ethernet/motorcomm/yt6801/yt6801_net.c | 14 +++++ .../ethernet/motorcomm/yt6801/yt6801_pci.c | 58 +++++++++++++++++++ 2 files changed, 72 insertions(+) diff --git a/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c b/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c index d6c1c0fd4..01df945d0 100644 --- a/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c +++ b/drivers/net/ethernet/motorcomm/yt6801/yt6801_net.c @@ -1378,6 +1378,20 @@ static void fxgmac_restart_work(struct work_struct *work) rtnl_unlock(); } +int fxgmac_net_powerup(struct fxgmac_pdata *priv) +{ + int ret; + + priv->powerstate = 0;/* clear all bits as normal now */ + ret = fxgmac_start(priv); + if (ret < 0) { + yt_err(priv, "%s: fxgmac_start ret: %d\n", __func__, ret); + return ret; + } + + return 0; +} + static void fxgmac_config_powerdown(struct fxgmac_pdata *priv) { FXGMAC_MAC_IO_WR_BITS(priv, MAC_CR, RE, 1); /* Enable MAC Rx */ diff --git a/drivers/net/ethernet/motorcomm/yt6801/yt6801_pci.c b/drivers/net/ethernet/motorcomm/yt6801/yt6801_pci.c index fba01e393..e9d2ac820 100644 --- 
a/drivers/net/ethernet/motorcomm/yt6801/yt6801_pci.c +++ b/drivers/net/ethernet/motorcomm/yt6801/yt6801_pci.c @@ -103,6 +103,59 @@ static void fxgmac_shutdown(struct pci_dev *pcidev) } mutex_unlock(&priv->mutex); } + +static int fxgmac_suspend(struct device *device) +{ + struct fxgmac_pdata *priv = dev_get_drvdata(device); + struct net_device *netdev = priv->netdev; + int ret = 0; + + mutex_lock(&priv->mutex); + if (priv->dev_state != FXGMAC_DEV_START) + goto unlock; + + if (netif_running(netdev)) + __fxgmac_shutdown(to_pci_dev(device)); + + priv->dev_state = FXGMAC_DEV_SUSPEND; +unlock: + mutex_unlock(&priv->mutex); + + return ret; +} + +static int fxgmac_resume(struct device *device) +{ + struct fxgmac_pdata *priv = dev_get_drvdata(device); + struct net_device *netdev = priv->netdev; + int ret = 0; + + mutex_lock(&priv->mutex); + if (priv->dev_state != FXGMAC_DEV_SUSPEND) + goto unlock; + + priv->dev_state = FXGMAC_DEV_RESUME; + __clear_bit(FXGMAC_POWER_STATE_DOWN, &priv->powerstate); + + rtnl_lock(); + if (netif_running(netdev)) { + ret = fxgmac_net_powerup(priv); + if (ret < 0) { + dev_err(device, "%s, fxgmac_net_powerup err:%d\n", + __func__, ret); + goto unlock; + } + } + + netif_device_attach(netdev); + rtnl_unlock(); + +unlock: + mutex_unlock(&priv->mutex); + + return ret; +} + #define MOTORCOMM_PCI_ID 0x1f0a #define YT6801_PCI_DEVICE_ID 0x6801 @@ -113,11 +166,16 @@ static const struct pci_device_id fxgmac_pci_tbl[] = { MODULE_DEVICE_TABLE(pci, fxgmac_pci_tbl); +static const struct dev_pm_ops fxgmac_pm_ops = { + SYSTEM_SLEEP_PM_OPS(fxgmac_suspend, fxgmac_resume) +}; + static struct pci_driver fxgmac_pci_driver = { .name = FXGMAC_DRV_NAME, .id_table = fxgmac_pci_tbl, .probe = fxgmac_probe, .remove = fxgmac_remove, + .driver.pm = pm_ptr(&fxgmac_pm_ops), .shutdown = fxgmac_shutdown, }; From patchwork Fri Feb 28 10:00:19 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Frank Sae 
X-Patchwork-Id: 13996103 X-Patchwork-Delegate: kuba@kernel.org Received: by smtp.aliyun-inc.com; Fri, 28 Feb 2025 18:00:41 From: Frank Sae To: Jakub Kicinski , Paolo Abeni , Andrew Lunn , Heiner Kallweit , Russell King , "David S .
Miller" , Eric Dumazet , Frank , netdev@vger.kernel.org Cc: Masahiro Yamada , Parthiban.Veerasooran@microchip.com, linux-kernel@vger.kernel.org, xiaogang.fan@motor-comm.com, fei.zhang@motor-comm.com, hua.sun@motor-comm.com Subject: [PATCH net-next v3 13/14] motorcomm:yt6801: Add makefile and Kconfig Date: Fri, 28 Feb 2025 18:00:19 +0800 Message-Id: <20250228100020.3944-14-Frank.Sae@motor-comm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20250228100020.3944-1-Frank.Sae@motor-comm.com> References: <20250228100020.3944-1-Frank.Sae@motor-comm.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Add a Makefile in the motorcomm folder to build yt6801 driver. Add the YT6801 and NET_VENDOR_MOTORCOMM entry in the Kconfig. Add the CONFIG_YT6801 entry in the Makefile. Add the motorcomm entry in the Kconfig. Add the CONFIG_NET_VENDOR_MOTORCOMM entry in the Makefile. Signed-off-by: Frank Sae --- drivers/net/ethernet/Kconfig | 1 + drivers/net/ethernet/Makefile | 1 + drivers/net/ethernet/motorcomm/Kconfig | 27 +++++++++++++++++++ drivers/net/ethernet/motorcomm/Makefile | 6 +++++ .../net/ethernet/motorcomm/yt6801/Makefile | 8 ++++++ 5 files changed, 43 insertions(+) create mode 100644 drivers/net/ethernet/motorcomm/Kconfig create mode 100644 drivers/net/ethernet/motorcomm/Makefile create mode 100644 drivers/net/ethernet/motorcomm/yt6801/Makefile diff --git a/drivers/net/ethernet/Kconfig b/drivers/net/ethernet/Kconfig index 977b42bc1..a02ef77f8 100644 --- a/drivers/net/ethernet/Kconfig +++ b/drivers/net/ethernet/Kconfig @@ -127,6 +127,7 @@ source "drivers/net/ethernet/micrel/Kconfig" source "drivers/net/ethernet/microchip/Kconfig" source "drivers/net/ethernet/mscc/Kconfig" source "drivers/net/ethernet/microsoft/Kconfig" +source "drivers/net/ethernet/motorcomm/Kconfig" source "drivers/net/ethernet/moxa/Kconfig" source "drivers/net/ethernet/myricom/Kconfig" diff --git 
a/drivers/net/ethernet/Makefile b/drivers/net/ethernet/Makefile
index 99fa180de..f1f44396f 100644
--- a/drivers/net/ethernet/Makefile
+++ b/drivers/net/ethernet/Makefile
@@ -63,6 +63,7 @@ obj-$(CONFIG_NET_VENDOR_META) += meta/
 obj-$(CONFIG_NET_VENDOR_MICREL) += micrel/
 obj-$(CONFIG_NET_VENDOR_MICROCHIP) += microchip/
 obj-$(CONFIG_NET_VENDOR_MICROSEMI) += mscc/
+obj-$(CONFIG_NET_VENDOR_MOTORCOMM) += motorcomm/
 obj-$(CONFIG_NET_VENDOR_MOXART) += moxa/
 obj-$(CONFIG_NET_VENDOR_MYRI) += myricom/
 obj-$(CONFIG_FEALNX) += fealnx.o
diff --git a/drivers/net/ethernet/motorcomm/Kconfig b/drivers/net/ethernet/motorcomm/Kconfig
new file mode 100644
index 000000000..abcc6cbcc
--- /dev/null
+++ b/drivers/net/ethernet/motorcomm/Kconfig
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Motorcomm network device configuration
+#
+
+config NET_VENDOR_MOTORCOMM
+	bool "Motorcomm devices"
+	default y
+	help
+	  If you have a network (Ethernet) device belonging to this class,
+	  say Y.
+
+	  Note that the answer to this question doesn't directly affect the
+	  kernel: saying N will just cause the configurator to skip all
+	  the questions about Motorcomm devices. If you say Y, you will be
+	  asked for your specific device in the following questions.
+
+if NET_VENDOR_MOTORCOMM
+
+config YT6801
+	tristate "Motorcomm(R) 6801 PCI-Express Gigabit Ethernet support"
+	depends on PCI && NET
+	help
+	  This driver supports Motorcomm(R) 6801 gigabit ethernet family of
+	  adapters.
+
+endif # NET_VENDOR_MOTORCOMM
diff --git a/drivers/net/ethernet/motorcomm/Makefile b/drivers/net/ethernet/motorcomm/Makefile
new file mode 100644
index 000000000..511940680
--- /dev/null
+++ b/drivers/net/ethernet/motorcomm/Makefile
@@ -0,0 +1,6 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for the Motorcomm network device drivers.
+#
+
+obj-$(CONFIG_YT6801) += yt6801/
diff --git a/drivers/net/ethernet/motorcomm/yt6801/Makefile b/drivers/net/ethernet/motorcomm/yt6801/Makefile
new file mode 100644
index 000000000..2f370d933
--- /dev/null
+++ b/drivers/net/ethernet/motorcomm/yt6801/Makefile
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2021 Motor-comm Corporation.
+#
+# Makefile for the Motorcomm(R) 6801 PCI-Express ethernet driver
+#
+
+obj-$(CONFIG_YT6801) += yt6801.o
+yt6801-objs := yt6801_desc.o yt6801_net.o yt6801_pci.o

From patchwork Fri Feb 28 10:00:20 2025
X-Patchwork-Submitter: Frank Sae
X-Patchwork-Id: 13996104
X-Patchwork-Delegate: kuba@kernel.org
From: Frank Sae
To: Jakub Kicinski, Paolo Abeni, Andrew Lunn, Heiner Kallweit,
    Russell King, "David S. Miller", Eric Dumazet, Frank,
    netdev@vger.kernel.org
Cc: Masahiro Yamada, Parthiban.Veerasooran@microchip.com,
    linux-kernel@vger.kernel.org, xiaogang.fan@motor-comm.com,
    fei.zhang@motor-comm.com, hua.sun@motor-comm.com
Subject: [PATCH net-next v3 14/14] motorcomm:yt6801: update ethernet documentation and maintainer
Date: Fri, 28 Feb 2025 18:00:20 +0800
Message-Id: <20250228100020.3944-15-Frank.Sae@motor-comm.com>
In-Reply-To: <20250228100020.3944-1-Frank.Sae@motor-comm.com>
References: <20250228100020.3944-1-Frank.Sae@motor-comm.com>

Add yt6801.rst in the ethernet/motorcomm folder and add a yt6801 entry
to index.rst. Add myself as the maintainer for the Motorcomm ethernet
driver.
Signed-off-by: Frank Sae
---
 .../device_drivers/ethernet/index.rst    |  1 +
 .../ethernet/motorcomm/yt6801.rst        | 20 +++++++++++++++++++
 MAINTAINERS                              |  8 ++++++++
 3 files changed, 29 insertions(+)
 create mode 100644 Documentation/networking/device_drivers/ethernet/motorcomm/yt6801.rst

diff --git a/Documentation/networking/device_drivers/ethernet/index.rst b/Documentation/networking/device_drivers/ethernet/index.rst
index 6fc196149..f8b88408d 100644
--- a/Documentation/networking/device_drivers/ethernet/index.rst
+++ b/Documentation/networking/device_drivers/ethernet/index.rst
@@ -46,6 +46,7 @@ Contents:
    mellanox/mlx5/index
    meta/fbnic
    microsoft/netvsc
+   motorcomm/yt6801
    neterion/s2io
    netronome/nfp
    pensando/ionic
diff --git a/Documentation/networking/device_drivers/ethernet/motorcomm/yt6801.rst b/Documentation/networking/device_drivers/ethernet/motorcomm/yt6801.rst
new file mode 100644
index 000000000..dd1e59c33
--- /dev/null
+++ b/Documentation/networking/device_drivers/ethernet/motorcomm/yt6801.rst
@@ -0,0 +1,20 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+================================================================
+Linux Base Driver for Motorcomm(R) Gigabit PCI Express Adapters
+================================================================
+
+Motorcomm Gigabit Linux driver.
+Copyright (c) 2021 - 2024 Motor-comm Co., Ltd.
+
+
+Contents
+========
+
+- Support
+
+
+Support
+=======
+If you have any problems, contact the Motorcomm support team via support@motor-comm.com
+and Cc: netdev.
diff --git a/MAINTAINERS b/MAINTAINERS
index 8019d5a97..9b1530020 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -16012,6 +16012,14 @@ F: drivers/most/
 F:	drivers/staging/most/
 F:	include/linux/most.h
 
+MOTORCOMM ETHERNET DRIVER
+M:	Frank
+L:	netdev@vger.kernel.org
+S:	Maintained
+W:	https://www.motor-comm.com/
+F:	Documentation/networking/device_drivers/ethernet/motorcomm/*
+F:	drivers/net/ethernet/motorcomm/*
+
 MOTORCOMM PHY DRIVER
 M:	Frank
 L:	netdev@vger.kernel.org