From patchwork Thu Mar 23 10:26:52 2023 X-Patchwork-Submitter: Herve Codina X-Patchwork-Id: 13185467 X-Patchwork-Delegate: kuba@kernel.org From: Herve Codina To: Herve Codina , "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Vinod Koul , Kishon Vijay Abraham I Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, linux-phy@lists.infradead.org, Christophe Leroy , Thomas Petazzoni Subject: [RFC PATCH 1/4] net: wan: Add support for QMC HDLC Date: Thu, 23 Mar 2023 11:26:52 +0100 Message-Id: <20230323102655.264115-2-herve.codina@bootlin.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230323102655.264115-1-herve.codina@bootlin.com> References: <20230323102655.264115-1-herve.codina@bootlin.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC The QMC HDLC driver provides support for HDLC using the QMC (QUICC Multichannel Controller) to transfer the HDLC data. Signed-off-by: Herve Codina --- drivers/net/wan/Kconfig | 12 + drivers/net/wan/Makefile | 1 + drivers/net/wan/fsl_qmc_hdlc.c | 408 +++++++++++++++++++++++++++++++++ 3 files changed, 421 insertions(+) create mode 100644 drivers/net/wan/fsl_qmc_hdlc.c diff --git a/drivers/net/wan/Kconfig b/drivers/net/wan/Kconfig index dcb069dde66b..8de99f4b647b 100644 --- a/drivers/net/wan/Kconfig +++ b/drivers/net/wan/Kconfig @@ -195,6 +195,18 @@ config FARSYNC To compile this driver as a module, choose M here: the module will be called farsync. +config FSL_QMC_HDLC + tristate "Freescale QMC HDLC support" + depends on HDLC + depends on CPM_QMC + help + HDLC support using the Freescale QUICC Multichannel Controller (QMC). + + To compile this driver as a module, choose M here: the + module will be called fsl_qmc_hdlc. + + If unsure, say N. 
+ config FSL_UCC_HDLC tristate "Freescale QUICC Engine HDLC support" depends on HDLC diff --git a/drivers/net/wan/Makefile b/drivers/net/wan/Makefile index 5bec8fae47f8..f338f4830626 100644 --- a/drivers/net/wan/Makefile +++ b/drivers/net/wan/Makefile @@ -23,6 +23,7 @@ obj-$(CONFIG_WANXL) += wanxl.o obj-$(CONFIG_PCI200SYN) += pci200syn.o obj-$(CONFIG_PC300TOO) += pc300too.o obj-$(CONFIG_IXP4XX_HSS) += ixp4xx_hss.o +obj-$(CONFIG_FSL_QMC_HDLC) += fsl_qmc_hdlc.o obj-$(CONFIG_FSL_UCC_HDLC) += fsl_ucc_hdlc.o obj-$(CONFIG_SLIC_DS26522) += slic_ds26522.o diff --git a/drivers/net/wan/fsl_qmc_hdlc.c b/drivers/net/wan/fsl_qmc_hdlc.c new file mode 100644 index 000000000000..f12d00c78497 --- /dev/null +++ b/drivers/net/wan/fsl_qmc_hdlc.c @@ -0,0 +1,408 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Freescale QMC HDLC Device Driver + * + * Copyright 2023 CS GROUP France + * + * Author: Herve Codina + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +struct qmc_hdlc_desc { + struct net_device *netdev; + struct sk_buff *skb; /* NULL if the descriptor is not in use */ + dma_addr_t dma_addr; + size_t dma_size; +}; + +struct qmc_hdlc { + struct device *dev; + struct qmc_chan *qmc_chan; + struct net_device *netdev; + bool is_crc32; + spinlock_t tx_lock; /* Protect tx descriptors */ + struct qmc_hdlc_desc tx_descs[8]; + unsigned int tx_out; + struct qmc_hdlc_desc rx_descs[4]; +}; + +static inline struct qmc_hdlc *netdev_to_qmc_hdlc(struct net_device *netdev) +{ + return (struct qmc_hdlc *)dev_to_hdlc(netdev)->priv; +} + +static int qmc_hdlc_recv_queue(struct qmc_hdlc *qmc_hdlc, struct qmc_hdlc_desc *desc, size_t size); + +static void qmc_hcld_recv_complete(void *context, size_t length) +{ + struct qmc_hdlc_desc *desc = context; + struct net_device *netdev = desc->netdev; + struct qmc_hdlc *qmc_hdlc = netdev_to_qmc_hdlc(desc->netdev); + int ret; + + dma_unmap_single(qmc_hdlc->dev, desc->dma_addr, desc->dma_size, DMA_FROM_DEVICE); + + 
netdev->stats.rx_packets++; + netdev->stats.rx_bytes += length; + + skb_put(desc->skb, length); + desc->skb->protocol = hdlc_type_trans(desc->skb, netdev); + netif_rx(desc->skb); + + /* Re-queue a transfer using the same descriptor */ + ret = qmc_hdlc_recv_queue(qmc_hdlc, desc, desc->dma_size); + if (ret) { + dev_err(qmc_hdlc->dev, "queue recv desc failed (%d)\n", ret); + netdev->stats.rx_errors++; + } +} + +static int qmc_hdlc_recv_queue(struct qmc_hdlc *qmc_hdlc, struct qmc_hdlc_desc *desc, size_t size) +{ + int ret; + + desc->skb = dev_alloc_skb(size); + if (!desc->skb) + return -ENOMEM; + + desc->dma_size = size; + desc->dma_addr = dma_map_single(qmc_hdlc->dev, desc->skb->data, + desc->dma_size, DMA_FROM_DEVICE); + ret = dma_mapping_error(qmc_hdlc->dev, desc->dma_addr); + if (ret) + goto free_skb; + + ret = qmc_chan_read_submit(qmc_hdlc->qmc_chan, desc->dma_addr, desc->dma_size, + qmc_hcld_recv_complete, desc); + if (ret) + goto dma_unmap; + + return 0; + +dma_unmap: + dma_unmap_single(qmc_hdlc->dev, desc->dma_addr, desc->dma_size, DMA_FROM_DEVICE); +free_skb: + kfree_skb(desc->skb); + desc->skb = NULL; + return ret; +} + +static void qmc_hdlc_xmit_complete(void *context) +{ + struct qmc_hdlc_desc *desc = context; + struct net_device *netdev = desc->netdev; + struct qmc_hdlc *qmc_hdlc = netdev_to_qmc_hdlc(netdev); + struct sk_buff *skb; + unsigned long flags; + + spin_lock_irqsave(&qmc_hdlc->tx_lock, flags); + dma_unmap_single(qmc_hdlc->dev, desc->dma_addr, desc->dma_size, DMA_TO_DEVICE); + skb = desc->skb; + desc->skb = NULL; /* Release the descriptor */ + if (netif_queue_stopped(netdev)) + netif_wake_queue(netdev); + spin_unlock_irqrestore(&qmc_hdlc->tx_lock, flags); + + netdev->stats.tx_packets++; + netdev->stats.tx_bytes += skb->len; + + dev_consume_skb_any(skb); +} + +static int qmc_hdlc_xmit_queue(struct qmc_hdlc *qmc_hdlc, struct qmc_hdlc_desc *desc) +{ + int ret; + + desc->dma_addr = dma_map_single(qmc_hdlc->dev, desc->skb->data, + desc->dma_size, 
DMA_TO_DEVICE); + ret = dma_mapping_error(qmc_hdlc->dev, desc->dma_addr); + if (ret) { + dev_err(qmc_hdlc->dev, "failed to map skb\n"); + return ret; + } + + ret = qmc_chan_write_submit(qmc_hdlc->qmc_chan, desc->dma_addr, desc->dma_size, + qmc_hdlc_xmit_complete, desc); + if (ret) { + dev_err(qmc_hdlc->dev, "qmc chan write returns %d\n", ret); + dma_unmap_single(qmc_hdlc->dev, desc->dma_addr, desc->dma_size, DMA_TO_DEVICE); + return ret; + } + + return 0; +} + +static netdev_tx_t qmc_hdlc_xmit(struct sk_buff *skb, struct net_device *netdev) +{ + struct qmc_hdlc *qmc_hdlc = netdev_to_qmc_hdlc(netdev); + struct qmc_hdlc_desc *desc; + unsigned long flags; + int ret; + + spin_lock_irqsave(&qmc_hdlc->tx_lock, flags); + desc = &qmc_hdlc->tx_descs[qmc_hdlc->tx_out]; + if (desc->skb) { + /* Should never happen. + * Previous xmit should have already stopped the queue. + */ + netif_stop_queue(netdev); + spin_unlock_irqrestore(&qmc_hdlc->tx_lock, flags); + return NETDEV_TX_BUSY; + } + spin_unlock_irqrestore(&qmc_hdlc->tx_lock, flags); + + desc->netdev = netdev; + desc->dma_size = skb->len; + desc->skb = skb; + ret = qmc_hdlc_xmit_queue(qmc_hdlc, desc); + if (ret) { + desc->skb = NULL; /* Release the descriptor */ + if (ret == -EBUSY) { + netif_stop_queue(netdev); + return NETDEV_TX_BUSY; + } + dev_kfree_skb(skb); + netdev->stats.tx_dropped++; + return NETDEV_TX_OK; + } + + qmc_hdlc->tx_out = (qmc_hdlc->tx_out + 1) % ARRAY_SIZE(qmc_hdlc->tx_descs); + + spin_lock_irqsave(&qmc_hdlc->tx_lock, flags); + if (qmc_hdlc->tx_descs[qmc_hdlc->tx_out].skb) + netif_stop_queue(netdev); + spin_unlock_irqrestore(&qmc_hdlc->tx_lock, flags); + + return NETDEV_TX_OK; +} + +static int qmc_hdlc_open(struct net_device *netdev) +{ + struct qmc_hdlc *qmc_hdlc = netdev_to_qmc_hdlc(netdev); + struct qmc_chan_param chan_param; + struct qmc_hdlc_desc *desc; + int ret; + int i; + + ret = hdlc_open(netdev); + if (ret) + return ret; + + chan_param.mode = QMC_HDLC; + /* HDLC_MAX_MRU + 4 for the CRC + * 
HDLC_MAX_MRU + 4 + 8 for the CRC and some extra space needed by the QMC + */ + chan_param.hdlc.max_rx_buf_size = HDLC_MAX_MRU + 4 + 8; + chan_param.hdlc.max_rx_frame_size = HDLC_MAX_MRU + 4; + chan_param.hdlc.is_crc32 = qmc_hdlc->is_crc32; + ret = qmc_chan_set_param(qmc_hdlc->qmc_chan, &chan_param); + if (ret) { + dev_err(qmc_hdlc->dev, "failed to set param (%d)\n", ret); + goto hdlc_close; + } + + /* Queue as many recv descriptors as possible */ + for (i = 0; i < ARRAY_SIZE(qmc_hdlc->rx_descs); i++) { + desc = &qmc_hdlc->rx_descs[i]; + + desc->netdev = netdev; + ret = qmc_hdlc_recv_queue(qmc_hdlc, desc, chan_param.hdlc.max_rx_buf_size); + if (ret) { + if (ret == -EBUSY && i != 0) + break; /* We used all the QMC channel capacity */ + goto free_desc; + } + } + + ret = qmc_chan_start(qmc_hdlc->qmc_chan, QMC_CHAN_ALL); + if (ret) { + dev_err(qmc_hdlc->dev, "qmc chan start failed (%d)\n", ret); + goto free_desc; + } + + netif_start_queue(netdev); + + return 0; + +free_desc: + qmc_chan_reset(qmc_hdlc->qmc_chan, QMC_CHAN_ALL); + for (i = 0; i < ARRAY_SIZE(qmc_hdlc->rx_descs); i++) { + desc = &qmc_hdlc->rx_descs[i]; + if (!desc->skb) + continue; + dma_unmap_single(qmc_hdlc->dev, desc->dma_addr, desc->dma_size, + DMA_FROM_DEVICE); + kfree_skb(desc->skb); + desc->skb = NULL; + } +hdlc_close: + hdlc_close(netdev); + return ret; +} + +static int qmc_hdlc_close(struct net_device *netdev) +{ + struct qmc_hdlc *qmc_hdlc = netdev_to_qmc_hdlc(netdev); + struct qmc_hdlc_desc *desc; + int i; + + netif_stop_queue(netdev); + + qmc_chan_stop(qmc_hdlc->qmc_chan, QMC_CHAN_ALL); + qmc_chan_reset(qmc_hdlc->qmc_chan, QMC_CHAN_ALL); + + for (i = 0; i < ARRAY_SIZE(qmc_hdlc->tx_descs); i++) { + desc = &qmc_hdlc->tx_descs[i]; + if (!desc->skb) + continue; + dma_unmap_single(qmc_hdlc->dev, desc->dma_addr, desc->dma_size, + DMA_TO_DEVICE); + kfree_skb(desc->skb); + desc->skb = NULL; + } + + for (i = 0; i < ARRAY_SIZE(qmc_hdlc->rx_descs); i++) { + desc = &qmc_hdlc->rx_descs[i]; + if (!desc->skb) +
continue; + dma_unmap_single(qmc_hdlc->dev, desc->dma_addr, desc->dma_size, + DMA_FROM_DEVICE); + kfree_skb(desc->skb); + desc->skb = NULL; + } + + hdlc_close(netdev); + return 0; +} + +static int qmc_hdlc_attach(struct net_device *netdev, unsigned short encoding, + unsigned short parity) +{ + struct qmc_hdlc *qmc_hdlc = netdev_to_qmc_hdlc(netdev); + + if (encoding != ENCODING_NRZ) + return -EINVAL; + + switch (parity) { + case PARITY_CRC16_PR1_CCITT: + qmc_hdlc->is_crc32 = false; + break; + case PARITY_CRC32_PR1_CCITT: + qmc_hdlc->is_crc32 = true; + break; + default: + dev_err(qmc_hdlc->dev, "unsupported parity %u\n", parity); + return -EINVAL; + } + + return 0; +} + +static const struct net_device_ops qmc_hdlc_netdev_ops = { + .ndo_open = qmc_hdlc_open, + .ndo_stop = qmc_hdlc_close, + .ndo_start_xmit = hdlc_start_xmit, + .ndo_siocwandev = hdlc_ioctl, +}; + +static int qmc_hdlc_probe(struct platform_device *pdev) +{ + struct device_node *np = pdev->dev.of_node; + struct qmc_hdlc *qmc_hdlc; + struct qmc_chan_info info; + hdlc_device *hdlc; + int ret; + + qmc_hdlc = devm_kzalloc(&pdev->dev, sizeof(*qmc_hdlc), GFP_KERNEL); + if (!qmc_hdlc) + return -ENOMEM; + + qmc_hdlc->dev = &pdev->dev; + spin_lock_init(&qmc_hdlc->tx_lock); + + qmc_hdlc->qmc_chan = devm_qmc_chan_get_byphandle(qmc_hdlc->dev, np, "fsl,qmc-chan"); + if (IS_ERR(qmc_hdlc->qmc_chan)) { + ret = PTR_ERR(qmc_hdlc->qmc_chan); + return dev_err_probe(qmc_hdlc->dev, ret, "get QMC channel failed\n"); + } + + ret = qmc_chan_get_info(qmc_hdlc->qmc_chan, &info); + if (ret) { + dev_err(qmc_hdlc->dev, "get QMC channel info failed %d\n", ret); + return ret; + } + dev_info(qmc_hdlc->dev, "QMC channel mode %d, nb_tx_ts %u, nb_rx_ts %u\n", + info.mode, info.nb_tx_ts, info.nb_rx_ts); + + if (info.mode != QMC_HDLC) { + dev_err(qmc_hdlc->dev, "QMC chan mode %d is not QMC_HDLC\n", + info.mode); + return -EINVAL; + } + + qmc_hdlc->netdev = alloc_hdlcdev(qmc_hdlc); + if (!qmc_hdlc->netdev) { + dev_err(qmc_hdlc->dev, "failed to 
alloc hdlc dev\n"); + return -ENOMEM; + } + + hdlc = dev_to_hdlc(qmc_hdlc->netdev); + hdlc->attach = qmc_hdlc_attach; + hdlc->xmit = qmc_hdlc_xmit; + SET_NETDEV_DEV(qmc_hdlc->netdev, qmc_hdlc->dev); + qmc_hdlc->netdev->tx_queue_len = ARRAY_SIZE(qmc_hdlc->tx_descs); + qmc_hdlc->netdev->netdev_ops = &qmc_hdlc_netdev_ops; + ret = register_hdlc_device(qmc_hdlc->netdev); + if (ret) { + dev_err(qmc_hdlc->dev, "failed to register hdlc device (%d)\n", ret); + goto free_netdev; + } + + platform_set_drvdata(pdev, qmc_hdlc); + + dev_info(qmc_hdlc->dev, "probed\n"); + + return 0; + +free_netdev: + free_netdev(qmc_hdlc->netdev); + return ret; +} + +static int qmc_hdlc_remove(struct platform_device *pdev) +{ + struct qmc_hdlc *qmc_hdlc = platform_get_drvdata(pdev); + + unregister_hdlc_device(qmc_hdlc->netdev); + free_netdev(qmc_hdlc->netdev); + + return 0; +} + +static const struct of_device_id qmc_hdlc_id_table[] = { + { .compatible = "fsl,qmc-hdlc" }, + {} /* sentinel */ +}; +MODULE_DEVICE_TABLE(of, qmc_hdlc_id_table); + +static struct platform_driver qmc_hdlc_driver = { + .driver = { + .name = "fsl-qmc-hdlc", + .of_match_table = qmc_hdlc_id_table, + }, + .probe = qmc_hdlc_probe, + .remove = qmc_hdlc_remove, +}; +module_platform_driver(qmc_hdlc_driver); + +MODULE_AUTHOR("Herve Codina "); +MODULE_DESCRIPTION("QMC HDLC driver"); +MODULE_LICENSE("GPL"); From patchwork Thu Mar 23 10:26:53 2023 X-Patchwork-Submitter: Herve Codina X-Patchwork-Id: 13185465 From: Herve Codina To: Herve Codina , "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Vinod Koul , Kishon Vijay Abraham I Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, linux-phy@lists.infradead.org, Christophe Leroy , Thomas Petazzoni Subject: [RFC PATCH 2/4] phy: Extend API to support 'status' get and notification Date: Thu, 23 Mar 2023 11:26:53 +0100 Message-Id: <20230323102655.264115-3-herve.codina@bootlin.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230323102655.264115-1-herve.codina@bootlin.com> References: <20230323102655.264115-1-herve.codina@bootlin.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-State: RFC The PHY API provides functions to control and pass information from the PHY consumer to the PHY provider. There is no way for the consumer to get direct information from the PHY or to be notified by the PHY. To fill this hole, two API functions are provided: - phy_get_status() This function can be used to get a "status" from the PHY. It is built in the same way as the configure() function. The information present in the retrieved status depends on the PHY's phy_mode, which allows getting a "status" suited to the kind of PHY. - phy_atomic_notifier_(un)register() These functions can be used to register/unregister an atomic notifier block. The only event available at this time is PHY_EVENT_STATUS, whose purpose is to signal changes in the status available through phy_get_status(). A new kind of PHY is added: PHY_MODE_BASIC. It represents a basic PHY offering a basic status that contains a link state indication. With the new API, a link state indication can be retrieved using phy_get_status() and link state changes can be notified.
Signed-off-by: Herve Codina --- drivers/phy/phy-core.c | 88 ++++++++++++++++++++++++++++++++++ include/linux/phy/phy-basic.h | 27 +++++++++++ include/linux/phy/phy.h | 89 ++++++++++++++++++++++++++++++++++- 3 files changed, 203 insertions(+), 1 deletion(-) create mode 100644 include/linux/phy/phy-basic.h diff --git a/drivers/phy/phy-core.c b/drivers/phy/phy-core.c index 9951efc03eaa..c7b568b99dce 100644 --- a/drivers/phy/phy-core.c +++ b/drivers/phy/phy-core.c @@ -551,6 +551,94 @@ int phy_validate(struct phy *phy, enum phy_mode mode, int submode, } EXPORT_SYMBOL_GPL(phy_validate); +/** + * phy_get_status() - Gets the phy status + * @phy: the phy returned by phy_get() + * @status: the status to retrieve + * + * Used to get the PHY status. phy_init() must have been called + * on the phy. The status will be retrieved from the current phy mode, + * that can be changed using phy_set_mode(). + * + * Return: %0 if successful, a negative error code otherwise + */ +int phy_get_status(struct phy *phy, union phy_status *status) +{ + int ret; + + if (!phy) + return -EINVAL; + + if (!phy->ops->get_status) + return -EOPNOTSUPP; + + mutex_lock(&phy->mutex); + ret = phy->ops->get_status(phy, status); + mutex_unlock(&phy->mutex); + + return ret; +} +EXPORT_SYMBOL_GPL(phy_get_status); + +/** + * phy_atomic_notifier_register() - Registers an atomic notifier + * @phy: the phy returned by phy_get() + * @nb: the notifier block to register + * + * Used to register a notifier block on PHY events. phy_init() must have + * been called on the phy. + * The notifier function given in the notifier_block must not sleep. 
+ * The available PHY events are present in enum phy_event + * + * Return: %0 if successful, a negative error code otherwise + */ +int phy_atomic_notifier_register(struct phy *phy, struct notifier_block *nb) +{ + int ret; + + if (!phy) + return -EINVAL; + + if (!phy->ops->atomic_notifier_register || + !phy->ops->atomic_notifier_unregister) + return -EOPNOTSUPP; + + mutex_lock(&phy->mutex); + ret = phy->ops->atomic_notifier_register(phy, nb); + mutex_unlock(&phy->mutex); + + return ret; +} +EXPORT_SYMBOL_GPL(phy_atomic_notifier_register); + +/** + * phy_atomic_notifier_unregister() - Unregisters an atomic notifier + * @phy: the phy returned by phy_get() + * @nb: the notifier block to unregister + * + * Used to unregister a notifier block. phy_init() must have + * been called on the phy. + * + * Return: %0 if successful, a negative error code otherwise + */ +int phy_atomic_notifier_unregister(struct phy *phy, struct notifier_block *nb) +{ + int ret; + + if (!phy) + return -EINVAL; + + if (!phy->ops->atomic_notifier_unregister) + return -EOPNOTSUPP; + + mutex_lock(&phy->mutex); + ret = phy->ops->atomic_notifier_unregister(phy, nb); + mutex_unlock(&phy->mutex); + + return ret; +} +EXPORT_SYMBOL_GPL(phy_atomic_notifier_unregister); + /** * _of_phy_get() - lookup and obtain a reference to a phy by phandle * @np: device_node for which to get the phy diff --git a/include/linux/phy/phy-basic.h b/include/linux/phy/phy-basic.h new file mode 100644 index 000000000000..95668c610c78 --- /dev/null +++ b/include/linux/phy/phy-basic.h @@ -0,0 +1,27 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright 2023 CS GROUP France + * + * Author: Herve Codina + */ + +#ifndef __PHY_BASIC_H_ +#define __PHY_BASIC_H_ + +#include + +/** + * struct phy_status_basic - Basic PHY status + * + * This structure is used to represent the status of a Basic phy. + */ +struct phy_status_basic { + /** + * @link_is_on: + * + * Link state: true when the link is on, false when it is off.
+ */ + bool link_is_on; +}; + +#endif /* __PHY_BASIC_H_ */ diff --git a/include/linux/phy/phy.h b/include/linux/phy/phy.h index 3a570bc59fc7..40370d41012b 100644 --- a/include/linux/phy/phy.h +++ b/include/linux/phy/phy.h @@ -16,6 +16,7 @@ #include #include +#include #include #include #include @@ -42,7 +43,8 @@ enum phy_mode { PHY_MODE_MIPI_DPHY, PHY_MODE_SATA, PHY_MODE_LVDS, - PHY_MODE_DP + PHY_MODE_DP, + PHY_MODE_BASIC, }; enum phy_media { @@ -67,6 +69,22 @@ union phy_configure_opts { struct phy_configure_opts_lvds lvds; }; +/** + * union phy_status - Opaque generic phy status + * + * @basic: Status available for phys supporting the Basic phy mode. + */ +union phy_status { + struct phy_status_basic basic; +}; + +/** + * phy_event - event available for notification + */ +enum phy_event { + PHY_EVENT_STATUS, /* Event notified on phy_status changes */ +}; + /** * struct phy_ops - set of function pointers for performing phy operations * @init: operation to be performed for initializing phy @@ -120,6 +138,45 @@ struct phy_ops { */ int (*validate)(struct phy *phy, enum phy_mode mode, int submode, union phy_configure_opts *opts); + + /** + * @get_status: + * + * Optional. + * + * Used to get the PHY status. phy_init() must have + * been called on the phy. + * + * Returns: 0 if successful, a negative error code otherwise + */ + int (*get_status)(struct phy *phy, union phy_status *status); + + /** + * @atomic_notifier_register: + * + * Optional. + * + * Used to register a notifier block on PHY events. phy_init() must have + * been called on the phy. + * The notifier function given in the notifier_block must not sleep. + * The available PHY events are present in enum phy_event + * + * Returns: 0 if successful, a negative error code otherwise + */ + int (*atomic_notifier_register)(struct phy *phy, struct notifier_block *nb); + + /** + * @atomic_notifier_unregister: + * + * Mandatory if @atomic_notifier_register is set. + * + * Used to unregister a notifier block on PHY events.
phy_init() must have + * been called on the phy. + * + * Returns: 0 if successful, a negative error code otherwise + */ + int (*atomic_notifier_unregister)(struct phy *phy, struct notifier_block *nb); + int (*reset)(struct phy *phy); int (*calibrate)(struct phy *phy); void (*release)(struct phy *phy); @@ -234,6 +291,10 @@ int phy_set_speed(struct phy *phy, int speed); int phy_configure(struct phy *phy, union phy_configure_opts *opts); int phy_validate(struct phy *phy, enum phy_mode mode, int submode, union phy_configure_opts *opts); +int phy_get_status(struct phy *phy, union phy_status *status); +int phy_atomic_notifier_register(struct phy *phy, struct notifier_block *nb); +int phy_atomic_notifier_unregister(struct phy *phy, struct notifier_block *nb); + static inline enum phy_mode phy_get_mode(struct phy *phy) { @@ -412,6 +473,32 @@ static inline int phy_validate(struct phy *phy, enum phy_mode mode, int submode, return -ENOSYS; } +static inline int phy_get_status(struct phy *phy, union phy_status *status) +{ + if (!phy) + return 0; + + return -ENOSYS; +} + +static inline int phy_atomic_notifier_register(struct phy *phy, + struct notifier_block *nb) +{ + if (!phy) + return 0; + + return -ENOSYS; +} + +static inline int phy_atomic_notifier_unregister(struct phy *phy, + struct notifier_block *nb) +{ + if (!phy) + return 0; + + return -ENOSYS; +} + static inline int phy_get_bus_width(struct phy *phy) { return -ENOSYS; From patchwork Thu Mar 23 10:26:54 2023 X-Patchwork-Submitter: Herve Codina X-Patchwork-Id: 13185468 X-Patchwork-Delegate: kuba@kernel.org From: Herve Codina To: Herve Codina , "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Vinod Koul , Kishon Vijay Abraham I Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, linux-phy@lists.infradead.org, Christophe Leroy , Thomas Petazzoni Subject: [RFC PATCH 3/4] net: wan: fsl_qmc_hdlc: Add PHY support Date: Thu, 23 Mar 2023 11:26:54 +0100 Message-Id: <20230323102655.264115-4-herve.codina@bootlin.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230323102655.264115-1-herve.codina@bootlin.com> References: <20230323102655.264115-1-herve.codina@bootlin.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC Add PHY support in the fsl_qmc_hdlc driver in order to be able to signal carrier changes to the network stack based on the PHY status. Signed-off-by: Herve Codina --- drivers/net/wan/fsl_qmc_hdlc.c | 152 ++++++++++++++++++++++++++++++++- 1 file changed, 151 insertions(+), 1 deletion(-) diff --git a/drivers/net/wan/fsl_qmc_hdlc.c b/drivers/net/wan/fsl_qmc_hdlc.c index f12d00c78497..edea0f678ffe 100644 --- a/drivers/net/wan/fsl_qmc_hdlc.c +++ b/drivers/net/wan/fsl_qmc_hdlc.c @@ -12,6 +12,7 @@ #include #include #include +#include #include #include #include @@ -27,6 +28,11 @@ struct qmc_hdlc { struct device *dev; struct qmc_chan *qmc_chan; struct net_device *netdev; + struct phy *phy; + spinlock_t carrier_lock; /* Protect carrier detection */ + struct notifier_block nb; + bool is_phy_notifier; + struct delayed_work phy_poll_task; bool is_crc32; spinlock_t tx_lock; /* Protect tx descriptors */ struct qmc_hdlc_desc tx_descs[8]; @@ -39,6 +45,126 @@ static inline struct qmc_hdlc *netdev_to_qmc_hdlc(struct net_device *netdev) return (struct qmc_hdlc *)dev_to_hdlc(netdev)->priv; } +static int qmc_hdlc_phy_set_carrier(struct qmc_hdlc *qmc_hdlc) +{ + union phy_status phy_status; + unsigned long flags; + int ret; + + if (!qmc_hdlc->phy) + return 0; + + spin_lock_irqsave(&qmc_hdlc->carrier_lock, flags); + 
+ ret = phy_get_status(qmc_hdlc->phy, &phy_status); + if (ret) { + dev_err(qmc_hdlc->dev, "get PHY status failed (%d)\n", ret); + goto end; + } + if (phy_status.basic.link_is_on) + netif_carrier_on(qmc_hdlc->netdev); + else + netif_carrier_off(qmc_hdlc->netdev); + +end: + spin_unlock_irqrestore(&qmc_hdlc->carrier_lock, flags); + return ret; +} + +static int qmc_hdlc_phy_notifier(struct notifier_block *nb, unsigned long action, + void *data) +{ + struct qmc_hdlc *qmc_hdlc = container_of(nb, struct qmc_hdlc, nb); + int ret; + + if (action != PHY_EVENT_STATUS) + return NOTIFY_DONE; + + ret = qmc_hdlc_phy_set_carrier(qmc_hdlc); + return ret ? NOTIFY_DONE : NOTIFY_OK; +} + +static void qmc_hdlc_phy_poll_task(struct work_struct *work) +{ + struct qmc_hdlc *qmc_hdlc = container_of(work, struct qmc_hdlc, phy_poll_task.work); + int ret; + + ret = qmc_hdlc_phy_set_carrier(qmc_hdlc); + if (ret) { + /* Should not happen. + * On error, force carrier on and stop scheduling this task. + */ + dev_err(qmc_hdlc->dev, "set carrier failed (%d) -> force carrier on\n", + ret); + netif_carrier_on(qmc_hdlc->netdev); + return; + } + + /* Re-schedule task in 1 sec */ + queue_delayed_work(system_power_efficient_wq, &qmc_hdlc->phy_poll_task, 1 * HZ); +} + +static int qmc_hdlc_phy_init(struct qmc_hdlc *qmc_hdlc) +{ + union phy_status phy_status; + int ret; + + if (!qmc_hdlc->phy) + return 0; + + ret = phy_init(qmc_hdlc->phy); + if (ret) { + dev_err(qmc_hdlc->dev, "PHY init failed (%d)\n", ret); + return ret; + } + + ret = phy_power_on(qmc_hdlc->phy); + if (ret) { + dev_err(qmc_hdlc->dev, "PHY power-on failed (%d)\n", ret); + goto phy_exit; + } + + /* Be sure that get_status is supported */ + ret = phy_get_status(qmc_hdlc->phy, &phy_status); + if (ret) { + dev_err(qmc_hdlc->dev, "get PHY status failed (%d)\n", ret); + goto phy_power_off; + } + + qmc_hdlc->nb.notifier_call = qmc_hdlc_phy_notifier; + ret = phy_atomic_notifier_register(qmc_hdlc->phy, &qmc_hdlc->nb); + if (ret) { +
qmc_hdlc->is_phy_notifier = false; + + /* Cannot register a PHY notifier -> Use polling */ + INIT_DELAYED_WORK(&qmc_hdlc->phy_poll_task, qmc_hdlc_phy_poll_task); + queue_delayed_work(system_power_efficient_wq, &qmc_hdlc->phy_poll_task, 1 * HZ); + } else { + qmc_hdlc->is_phy_notifier = true; + } + + return 0; + +phy_power_off: + phy_power_off(qmc_hdlc->phy); +phy_exit: + phy_exit(qmc_hdlc->phy); + return ret; +} + +static void qmc_hdlc_phy_exit(struct qmc_hdlc *qmc_hdlc) +{ + if (!qmc_hdlc->phy) + return; + + if (qmc_hdlc->is_phy_notifier) + phy_atomic_notifier_unregister(qmc_hdlc->phy, &qmc_hdlc->nb); + else + cancel_delayed_work_sync(&qmc_hdlc->phy_poll_task); + phy_power_off(qmc_hdlc->phy); + phy_exit(qmc_hdlc->phy); +} + static int qmc_hdlc_recv_queue(struct qmc_hdlc *qmc_hdlc, struct qmc_hdlc_desc *desc, size_t size); static void qmc_hcld_recv_complete(void *context, size_t length) @@ -192,10 +318,17 @@ static int qmc_hdlc_open(struct net_device *netdev) int ret; int i; - ret = hdlc_open(netdev); + ret = qmc_hdlc_phy_init(qmc_hdlc); if (ret) return ret; + ret = hdlc_open(netdev); + if (ret) + goto phy_exit; + + /* Update carrier */ + qmc_hdlc_phy_set_carrier(qmc_hdlc); + chan_param.mode = QMC_HDLC; /* HDLC_MAX_MRU + 4 for the CRC * HDLC_MAX_MRU + 4 + 8 for the CRC and some extraspace needed by the QMC @@ -245,6 +378,8 @@ static int qmc_hdlc_open(struct net_device *netdev) } hdlc_close: hdlc_close(netdev); +phy_exit: + qmc_hdlc_phy_exit(qmc_hdlc); return ret; } @@ -280,6 +415,7 @@ static int qmc_hdlc_close(struct net_device *netdev) } hdlc_close(netdev); + qmc_hdlc_phy_exit(qmc_hdlc); return 0; } @@ -318,6 +454,7 @@ static int qmc_hdlc_probe(struct platform_device *pdev) struct device_node *np = pdev->dev.of_node; struct qmc_hdlc *qmc_hdlc; struct qmc_chan_info info; + enum phy_mode phy_mode; hdlc_device *hdlc; int ret; @@ -327,6 +464,7 @@ static int qmc_hdlc_probe(struct platform_device *pdev) qmc_hdlc->dev = &pdev->dev; spin_lock_init(&qmc_hdlc->tx_lock); + 
spin_lock_init(&qmc_hdlc->carrier_lock); qmc_hdlc->qmc_chan = devm_qmc_chan_get_byphandle(qmc_hdlc->dev, np, "fsl,qmc-chan"); if (IS_ERR(qmc_hdlc->qmc_chan)) { @@ -348,6 +486,18 @@ static int qmc_hdlc_probe(struct platform_device *pdev) return -EINVAL; } + qmc_hdlc->phy = devm_of_phy_optional_get(qmc_hdlc->dev, np, NULL); + if (IS_ERR(qmc_hdlc->phy)) + return PTR_ERR(qmc_hdlc->phy); + if (qmc_hdlc->phy) { + phy_mode = phy_get_mode(qmc_hdlc->phy); + if (phy_mode != PHY_MODE_BASIC) { + dev_err(qmc_hdlc->dev, "Unsupported PHY mode (%d)\n", + phy_mode); + return -EINVAL; + } + } + qmc_hdlc->netdev = alloc_hdlcdev(qmc_hdlc); + if (!qmc_hdlc->netdev) { + dev_err(qmc_hdlc->dev, "failed to alloc hdlc dev\n"); From patchwork Thu Mar 23 10:26:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herve Codina X-Patchwork-Id: 13185466 
From: Herve Codina To: Herve Codina , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Vinod Koul , Kishon Vijay Abraham I Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, linux-phy@lists.infradead.org, Christophe Leroy , Thomas Petazzoni Subject: [RFC PATCH 4/4] phy: lantiq: Add PEF2256 PHY support Date: Thu, 23 Mar 2023 11:26:55 +0100 Message-Id: <20230323102655.264115-5-herve.codina@bootlin.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230323102655.264115-1-herve.codina@bootlin.com> References: <20230323102655.264115-1-herve.codina@bootlin.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-State: RFC The Lantiq PEF2256 is a framer and line interface component designed to fulfill all required interfacing between an analog E1/T1/J1 line and the digital PCM system highway/H.100 bus. The PHY support exposes the PEF2256 as a generic PHY; a PHY consumer can then use the PHY API to retrieve the E1 line carrier status. 
Signed-off-by: Herve Codina <herve.codina@bootlin.com> --- drivers/phy/lantiq/Kconfig | 15 +++ drivers/phy/lantiq/Makefile | 1 + drivers/phy/lantiq/phy-lantiq-pef2256.c | 131 ++++++++++++++++++++++++ 3 files changed, 147 insertions(+) create mode 100644 drivers/phy/lantiq/phy-lantiq-pef2256.c diff --git a/drivers/phy/lantiq/Kconfig b/drivers/phy/lantiq/Kconfig index c4df9709d53f..c87881255458 100644 --- a/drivers/phy/lantiq/Kconfig +++ b/drivers/phy/lantiq/Kconfig @@ -2,6 +2,21 @@ # # Phy drivers for Lantiq / Intel platforms # +config PHY_LANTIQ_PEF2256 + tristate "Lantiq PEF2256 PHY" + depends on MFD_PEF2256 + select GENERIC_PHY + help + Enable support for the Lantiq PEF2256 (FALC56) PHY. + The PEF2256 is a framer and line interface between an analog E1/T1/J1 + line and a digital PCM bus. + This PHY support allows the PEF2256 to be used as a generic PHY. + + To compile this driver as a module, choose M here: the + module will be called phy-lantiq-pef2256. + + If unsure, say N. + config PHY_LANTIQ_VRX200_PCIE tristate "Lantiq VRX200/ARX300 PCIe PHY" depends on SOC_TYPE_XWAY || COMPILE_TEST diff --git a/drivers/phy/lantiq/Makefile b/drivers/phy/lantiq/Makefile index 7c14eb24ab73..6e501d865620 100644 --- a/drivers/phy/lantiq/Makefile +++ b/drivers/phy/lantiq/Makefile @@ -1,3 +1,4 @@ # SPDX-License-Identifier: GPL-2.0-only +obj-$(CONFIG_PHY_LANTIQ_PEF2256) += phy-lantiq-pef2256.o obj-$(CONFIG_PHY_LANTIQ_RCU_USB2) += phy-lantiq-rcu-usb2.o obj-$(CONFIG_PHY_LANTIQ_VRX200_PCIE) += phy-lantiq-vrx200-pcie.o diff --git a/drivers/phy/lantiq/phy-lantiq-pef2256.c b/drivers/phy/lantiq/phy-lantiq-pef2256.c new file mode 100644 index 000000000000..1a1a4f66c102 --- /dev/null +++ b/drivers/phy/lantiq/phy-lantiq-pef2256.c @@ -0,0 +1,131 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * PEF2256 phy support + * + * Copyright 2023 CS GROUP France + * + * Author: Herve Codina <herve.codina@bootlin.com> + */ + +#include <linux/mfd/pef2256.h> +#include <linux/module.h> +#include <linux/notifier.h> +#include <linux/of.h> +#include <linux/phy/phy.h> +#include <linux/platform_device.h> + +struct pef2256_phy { + struct phy *phy; + struct pef2256 *pef2256; + struct device 
*dev; + struct atomic_notifier_head event_notifier_list; + struct notifier_block nb; +}; + +static int pef2256_carrier_notifier(struct notifier_block *nb, unsigned long action, + void *data) +{ + struct pef2256_phy *pef2256 = container_of(nb, struct pef2256_phy, nb); + + switch (action) { + case PEF2256_EVENT_CARRIER: + return atomic_notifier_call_chain(&pef2256->event_notifier_list, + PHY_EVENT_STATUS, + NULL); + default: + break; + } + + return NOTIFY_DONE; +} + +static int pef2256_phy_atomic_notifier_register(struct phy *phy, struct notifier_block *nb) +{ + struct pef2256_phy *pef2256 = phy_get_drvdata(phy); + + return atomic_notifier_chain_register(&pef2256->event_notifier_list, nb); +} + +static int pef2256_phy_atomic_notifier_unregister(struct phy *phy, struct notifier_block *nb) +{ + struct pef2256_phy *pef2256 = phy_get_drvdata(phy); + + return atomic_notifier_chain_unregister(&pef2256->event_notifier_list, nb); +} + +static int pef2256_phy_init(struct phy *phy) +{ + struct pef2256_phy *pef2256 = phy_get_drvdata(phy); + + ATOMIC_INIT_NOTIFIER_HEAD(&pef2256->event_notifier_list); + + pef2256->nb.notifier_call = pef2256_carrier_notifier; + return pef2256_register_event_notifier(pef2256->pef2256, &pef2256->nb); +} + +static int pef2256_phy_exit(struct phy *phy) +{ + struct pef2256_phy *pef2256 = phy_get_drvdata(phy); + + return pef2256_unregister_event_notifier(pef2256->pef2256, &pef2256->nb); +} + +static int pef2256_phy_get_status(struct phy *phy, union phy_status *status) +{ + struct pef2256_phy *pef2256 = phy_get_drvdata(phy); + + status->basic.link_is_on = pef2256_get_carrier(pef2256->pef2256); + return 0; +} + +static const struct phy_ops pef2256_phy_ops = { + .owner = THIS_MODULE, + .init = pef2256_phy_init, + .exit = pef2256_phy_exit, + .get_status = pef2256_phy_get_status, + .atomic_notifier_register = pef2256_phy_atomic_notifier_register, + .atomic_notifier_unregister = pef2256_phy_atomic_notifier_unregister, +}; + +static int 
pef2256_phy_probe(struct platform_device *pdev) +{ + struct phy_provider *provider; + struct pef2256_phy *pef2256; + + pef2256 = devm_kzalloc(&pdev->dev, sizeof(*pef2256), GFP_KERNEL); + if (!pef2256) + return -ENOMEM; + + pef2256->dev = &pdev->dev; + pef2256->pef2256 = dev_get_drvdata(pef2256->dev->parent); + + pef2256->phy = devm_phy_create(pef2256->dev, NULL, &pef2256_phy_ops); + if (IS_ERR(pef2256->phy)) + return PTR_ERR(pef2256->phy); + + phy_set_drvdata(pef2256->phy, pef2256); + pef2256->phy->attrs.mode = PHY_MODE_BASIC; + + provider = devm_of_phy_provider_register(pef2256->dev, of_phy_simple_xlate); + + return PTR_ERR_OR_ZERO(provider); +} + +static const struct of_device_id pef2256_phy_of_match[] = { + { .compatible = "lantiq,pef2256-phy" }, + {} /* sentinel */ +}; +MODULE_DEVICE_TABLE(of, pef2256_phy_of_match); + +static struct platform_driver pef2256_phy_driver = { + .driver = { + .name = "lantiq-pef2256-phy", + .of_match_table = pef2256_phy_of_match, + }, + .probe = pef2256_phy_probe, +}; +module_platform_driver(pef2256_phy_driver); + +MODULE_AUTHOR("Herve Codina <herve.codina@bootlin.com>"); +MODULE_DESCRIPTION("PEF2256 PHY driver"); +MODULE_LICENSE("GPL");