From patchwork Wed Jan 15 10:22:44 2025
X-Patchwork-Submitter: Xin Tian
X-Patchwork-Id: 13940204
X-Patchwork-Delegate: kuba@kernel.org
Message-Id: <20250115102242.3541496-2-tianx@yunsilicon.com>
In-Reply-To: <20250115102242.3541496-1-tianx@yunsilicon.com>
References: <20250115102242.3541496-1-tianx@yunsilicon.com>
Date: Wed, 15 Jan 2025 18:22:44 +0800
From: "Xin Tian"
X-Mailing-List: netdev@vger.kernel.org
X-Mailer: git-send-email 2.25.1
Subject: [PATCH v3 01/14] net-next/yunsilicon: Add xsc driver basic framework

Add the basic framework for the Yunsilicon xsc driver, consisting of
the xsc_pci PCI driver and the xsc_eth Ethernet driver.

Co-developed-by: Honggang Wei
Signed-off-by: Honggang Wei
Co-developed-by: Lei Yan
Signed-off-by: Lei Yan
Signed-off-by: Xin Tian
---
 drivers/net/ethernet/Kconfig | 1 +
 drivers/net/ethernet/Makefile | 1 +
drivers/net/ethernet/yunsilicon/Kconfig | 26 ++ drivers/net/ethernet/yunsilicon/Makefile | 8 + .../ethernet/yunsilicon/xsc/common/xsc_core.h | 53 ++++ .../net/ethernet/yunsilicon/xsc/net/Kconfig | 17 ++ .../net/ethernet/yunsilicon/xsc/net/Makefile | 9 + .../net/ethernet/yunsilicon/xsc/pci/Kconfig | 16 ++ .../net/ethernet/yunsilicon/xsc/pci/Makefile | 9 + .../net/ethernet/yunsilicon/xsc/pci/main.c | 251 ++++++++++++++++++ 10 files changed, 391 insertions(+) create mode 100644 drivers/net/ethernet/yunsilicon/Kconfig create mode 100644 drivers/net/ethernet/yunsilicon/Makefile create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/Kconfig create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/Makefile create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/Makefile create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/main.c diff --git a/drivers/net/ethernet/Kconfig b/drivers/net/ethernet/Kconfig index 0baac25db..aa6016597 100644 --- a/drivers/net/ethernet/Kconfig +++ b/drivers/net/ethernet/Kconfig @@ -82,6 +82,7 @@ source "drivers/net/ethernet/i825xx/Kconfig" source "drivers/net/ethernet/ibm/Kconfig" source "drivers/net/ethernet/intel/Kconfig" source "drivers/net/ethernet/xscale/Kconfig" +source "drivers/net/ethernet/yunsilicon/Kconfig" config JME tristate "JMicron(R) PCI-Express Gigabit Ethernet support" diff --git a/drivers/net/ethernet/Makefile b/drivers/net/ethernet/Makefile index c03203439..c16c34d4b 100644 --- a/drivers/net/ethernet/Makefile +++ b/drivers/net/ethernet/Makefile @@ -51,6 +51,7 @@ obj-$(CONFIG_NET_VENDOR_INTEL) += intel/ obj-$(CONFIG_NET_VENDOR_I825XX) += i825xx/ obj-$(CONFIG_NET_VENDOR_MICROSOFT) += microsoft/ obj-$(CONFIG_NET_VENDOR_XSCALE) += xscale/ +obj-$(CONFIG_NET_VENDOR_YUNSILICON) += yunsilicon/ obj-$(CONFIG_JME) += jme.o obj-$(CONFIG_KORINA) += korina.o obj-$(CONFIG_LANTIQ_ETOP) += lantiq_etop.o diff --git a/drivers/net/ethernet/yunsilicon/Kconfig b/drivers/net/ethernet/yunsilicon/Kconfig new file mode 100644 index 000000000..ff57fedf8 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/Kconfig @@ -0,0 +1,26 @@ +# SPDX-License-Identifier: GPL-2.0 +# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. +# All rights reserved. +# Yunsilicon driver configuration +# + +config NET_VENDOR_YUNSILICON + bool "Yunsilicon devices" + default y + depends on PCI + depends on ARM64 || X86_64 + help + If you have a network (Ethernet) device belonging to this class, + say Y. + + Note that the answer to this question doesn't directly affect the + kernel: saying N will just cause the configurator to skip all + the questions about Yunsilicon cards. If you say Y, you will be asked + for your specific card in the following questions. + +if NET_VENDOR_YUNSILICON + +source "drivers/net/ethernet/yunsilicon/xsc/net/Kconfig" +source "drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig" + +endif # NET_VENDOR_YUNSILICON diff --git a/drivers/net/ethernet/yunsilicon/Makefile b/drivers/net/ethernet/yunsilicon/Makefile new file mode 100644 index 000000000..6fc8259a7 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/Makefile @@ -0,0 +1,8 @@ +# SPDX-License-Identifier: GPL-2.0 +# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. +# All rights reserved. +# Makefile for the Yunsilicon device drivers. 
+#
+
+# obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc/net/
+obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc/pci/
\ No newline at end of file
diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
new file mode 100644
index 000000000..2c4e8e731
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#ifndef __XSC_CORE_H
+#define __XSC_CORE_H
+
+#include <linux/mutex.h>
+#include <linux/pci.h>
+
+#define XSC_PCI_VENDOR_ID 0x1f67
+
+#define XSC_MC_PF_DEV_ID 0x1011
+#define XSC_MC_VF_DEV_ID 0x1012
+#define XSC_MC_PF_DEV_ID_DIAMOND 0x1021
+
+#define XSC_MF_HOST_PF_DEV_ID 0x1051
+#define XSC_MF_HOST_VF_DEV_ID 0x1052
+#define XSC_MF_SOC_PF_DEV_ID 0x1053
+
+#define XSC_MS_PF_DEV_ID 0x1111
+#define XSC_MS_VF_DEV_ID 0x1112
+
+#define XSC_MV_HOST_PF_DEV_ID 0x1151
+#define XSC_MV_HOST_VF_DEV_ID 0x1152
+#define XSC_MV_SOC_PF_DEV_ID 0x1153
+
+struct xsc_dev_resource {
+	struct mutex alloc_mutex;	/* protect buffer allocation according to numa node */
+};
+
+enum xsc_pci_state {
+	XSC_PCI_STATE_DISABLED,
+	XSC_PCI_STATE_ENABLED,
+};
+
+struct xsc_core_device {
+	struct pci_dev *pdev;
+	struct device *device;
+	struct xsc_dev_resource *dev_res;
+	int numa_node;
+
+	void __iomem *bar;
+	int bar_num;
+
+	struct mutex pci_state_mutex;	/* protect pci_state */
+	enum xsc_pci_state pci_state;
+	struct mutex intf_state_mutex;	/* protect intf_state */
+	unsigned long intf_state;
+};
+
+#endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/Kconfig b/drivers/net/ethernet/yunsilicon/xsc/net/Kconfig
new file mode 100644
index 000000000..de743487e
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/Kconfig
@@ -0,0 +1,17 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+# All rights reserved.
+# Yunsilicon driver configuration
+#
+
+config YUNSILICON_XSC_ETH
+	tristate "Yunsilicon XSC ethernet driver"
+	default n
+	depends on YUNSILICON_XSC_PCI
+	depends on NET
+	help
+	  This driver provides ethernet support for
+	  Yunsilicon XSC devices.
+
+	  To compile this driver as a module, choose M here. The module
+	  will be called xsc_eth.
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile
new file mode 100644
index 000000000..2811433af
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+# All rights reserved.
+
+ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc
+
+obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc_eth.o
+
+xsc_eth-y := main.o
\ No newline at end of file
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig b/drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig
new file mode 100644
index 000000000..2b6d79905
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+# All rights reserved.
+# Yunsilicon PCI configuration
+#
+
+config YUNSILICON_XSC_PCI
+	tristate "Yunsilicon XSC PCI driver"
+	default n
+	select PAGE_POOL
+	help
+	  This driver is common for Yunsilicon XSC
+	  ethernet and RDMA drivers.
+
+	  To compile this driver as a module, choose M here. The module
+	  will be called xsc_pci.
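Taken together, the three Kconfig fragments above gate the build as follows: NET_VENDOR_YUNSILICON only unhides the vendor menu, while YUNSILICON_XSC_PCI and YUNSILICON_XSC_ETH enable the actual modules. A minimal .config fragment for building the driver as modules might look like this (illustrative only; it assumes an ARM64 or x86-64 base config with PCI and NET already enabled, as the dependencies above require):

CONFIG_NET_VENDOR_YUNSILICON=y
CONFIG_YUNSILICON_XSC_PCI=m
CONFIG_YUNSILICON_XSC_ETH=m

Note that at this point in the series only xsc_pci.ko is actually built: the xsc/net/ objects are still commented out in drivers/net/ethernet/yunsilicon/Makefile and are presumably wired up later in the series.
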
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
new file mode 100644
index 000000000..709270df8
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+# All rights reserved.
+
+ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc
+
+obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o
+
+xsc_pci-y := main.o
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
new file mode 100644
index 000000000..4859be58f
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
@@ -0,0 +1,251 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#include "common/xsc_core.h"
+
+static const struct pci_device_id xsc_pci_id_table[] = {
+	{ PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MC_PF_DEV_ID) },
+	{ PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MC_PF_DEV_ID_DIAMOND) },
+	{ PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MF_HOST_PF_DEV_ID) },
+	{ PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MF_SOC_PF_DEV_ID) },
+	{ PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MS_PF_DEV_ID) },
+	{ PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MV_HOST_PF_DEV_ID) },
+	{ PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MV_SOC_PF_DEV_ID) },
+	{ 0 }
+};
+
+static int set_dma_caps(struct pci_dev *pdev)
+{
+	int err;
+
+	err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
+	if (err)
+		err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+	else
+		err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
+
+	if (!err)
+		dma_set_max_seg_size(&pdev->dev, SZ_2G);
+
+	return err;
+}
+
+static int xsc_pci_enable_device(struct xsc_core_device *xdev)
+{
+	struct pci_dev *pdev = xdev->pdev;
+	int err = 0;
+
+	mutex_lock(&xdev->pci_state_mutex);
+	if (xdev->pci_state == XSC_PCI_STATE_DISABLED) {
+		err = pci_enable_device(pdev);
+		if (!err)
+			xdev->pci_state = XSC_PCI_STATE_ENABLED;
+	}
+	mutex_unlock(&xdev->pci_state_mutex);
+
+	return err;
+}
+
+static void xsc_pci_disable_device(struct xsc_core_device *xdev)
+{
+	struct pci_dev *pdev = xdev->pdev;
+
+	mutex_lock(&xdev->pci_state_mutex);
+	if (xdev->pci_state == XSC_PCI_STATE_ENABLED) {
+		pci_disable_device(pdev);
+		xdev->pci_state = XSC_PCI_STATE_DISABLED;
+	}
+	mutex_unlock(&xdev->pci_state_mutex);
+}
+
+static int xsc_pci_init(struct xsc_core_device *xdev, const struct pci_device_id *id)
+{
+	struct pci_dev *pdev = xdev->pdev;
+	void __iomem *bar_base;
+	int bar_num = 0;
+	int err;
+
+	xdev->numa_node = dev_to_node(&pdev->dev);
+
+	err = xsc_pci_enable_device(xdev);
+	if (err) {
+		pci_err(pdev, "failed to enable PCI device: err=%d\n", err);
+		goto err_ret;
+	}
+
+	err = pci_request_region(pdev, bar_num, KBUILD_MODNAME);
+	if (err) {
+		pci_err(pdev, "failed to request %s pci_region=%d: err=%d\n",
+			KBUILD_MODNAME, bar_num, err);
+		goto err_disable;
+	}
+
+	pci_set_master(pdev);
+
+	err = set_dma_caps(pdev);
+	if (err) {
+		pci_err(pdev, "failed to set DMA capabilities mask: err=%d\n", err);
+		goto err_clr_master;
+	}
+
+	bar_base = pci_ioremap_bar(pdev, bar_num);
+	if (!bar_base) {
+		pci_err(pdev, "failed to ioremap %s bar%d\n", KBUILD_MODNAME, bar_num);
+		err = -ENOMEM;
+		goto err_clr_master;
+	}
+
+	err = pci_save_state(pdev);
+	if (err) {
+		pci_err(pdev, "pci_save_state failed: err=%d\n", err);
+		goto err_io_unmap;
+	}
+
+	xdev->bar_num = bar_num;
+	xdev->bar = bar_base;
+
+	return 0;
+
+err_io_unmap:
+	pci_iounmap(pdev, bar_base);
+err_clr_master:
+	pci_clear_master(pdev);
+	pci_release_region(pdev, bar_num);
+err_disable:
+	xsc_pci_disable_device(xdev);
+err_ret:
+	return err;
+}
+
+static void xsc_pci_fini(struct xsc_core_device *xdev)
+{
+	struct pci_dev *pdev = xdev->pdev;
+
+	if (xdev->bar)
+		pci_iounmap(pdev, xdev->bar);
+	pci_clear_master(pdev);
+	pci_release_region(pdev, xdev->bar_num);
+	xsc_pci_disable_device(xdev);
+}
+
+static int xsc_dev_res_init(struct xsc_core_device *xdev)
+{
+	struct xsc_dev_resource *dev_res;
+
+	dev_res = kvzalloc(sizeof(*dev_res), GFP_KERNEL);
+	if (!dev_res)
+		return -ENOMEM;
+
+	xdev->dev_res = dev_res;
+	mutex_init(&dev_res->alloc_mutex);
+
+	return 0;
+}
+
+static void xsc_dev_res_cleanup(struct xsc_core_device *xdev)
+{
+	kvfree(xdev->dev_res);
+}
+
+static int xsc_core_dev_init(struct xsc_core_device *xdev)
+{
+	int err;
+
+	mutex_init(&xdev->pci_state_mutex);
+	mutex_init(&xdev->intf_state_mutex);
+
+	err = xsc_dev_res_init(xdev);
+	if (err) {
+		pci_err(xdev->pdev, "xsc dev res init failed %d\n", err);
+		goto out;
+	}
+
+	return 0;
+out:
+	return err;
+}
+
+static void xsc_core_dev_cleanup(struct xsc_core_device *xdev)
+{
+	xsc_dev_res_cleanup(xdev);
+}
+
+static int xsc_pci_probe(struct pci_dev *pci_dev,
+			 const struct pci_device_id *id)
+{
+	struct xsc_core_device *xdev;
+	int err;
+
+	xdev = kzalloc(sizeof(*xdev), GFP_KERNEL);
+	if (!xdev)
+		return -ENOMEM;
+
+	xdev->pdev = pci_dev;
+	xdev->device = &pci_dev->dev;
+
+	pci_set_drvdata(pci_dev, xdev);
+	err = xsc_pci_init(xdev, id);
+	if (err) {
+		pci_err(pci_dev, "xsc_pci_init failed %d\n", err);
+		goto err_unset_pci_drvdata;
+	}
+
+	err = xsc_core_dev_init(xdev);
+	if (err) {
+		pci_err(pci_dev, "xsc_core_dev_init failed %d\n", err);
+		goto err_pci_fini;
+	}
+
+	return 0;
+err_pci_fini:
+	xsc_pci_fini(xdev);
+err_unset_pci_drvdata:
+	pci_set_drvdata(pci_dev, NULL);
+	kfree(xdev);
+
+	return err;
+}
+
+static void xsc_pci_remove(struct pci_dev *pci_dev)
+{
+	struct xsc_core_device *xdev = pci_get_drvdata(pci_dev);
+
+	xsc_core_dev_cleanup(xdev);
+	xsc_pci_fini(xdev);
+	pci_set_drvdata(pci_dev, NULL);
+	kfree(xdev);
+}
+
+static struct pci_driver xsc_pci_driver = {
+	.name		= "xsc-pci",
+	.id_table	= xsc_pci_id_table,
+	.probe		= xsc_pci_probe,
+	.remove		= xsc_pci_remove,
+};
+
+static int __init xsc_init(void)
+{
+	int err;
+
+	err = pci_register_driver(&xsc_pci_driver);
+	if (err) {
+		pr_err("failed to register pci driver\n");
+		goto out;
+	}
+	return 0;
+
+out:
+	return err;
+}
+
+static void __exit xsc_fini(void)
+{
+	pci_unregister_driver(&xsc_pci_driver);
+}
+
+module_init(xsc_init);
+module_exit(xsc_fini);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Yunsilicon XSC PCI driver");
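A note on the structure of the probe path above, since the rest of the series builds on it: xsc_pci_probe() allocates one xsc_core_device per PCI function, publishes it with pci_set_drvdata(), and unwinds strictly in reverse order on failure, mirroring xsc_pci_remove(). Sub-drivers (such as the Ethernet and RDMA users mentioned in the Kconfig help text) can then recover the shared core device from the struct pci_dev. A minimal, purely illustrative consumer is sketched below; xsc_attach_example() is not part of the series, and the "up" bit is a placeholder for the interface-state flags a later patch introduces:

/* Illustrative sketch only: recover the core device published by
 * xsc_pci_probe() and flip an interface-state bit under the mutex
 * that xsc_core.h documents as guarding intf_state.
 */
static int xsc_attach_example(struct pci_dev *pdev)
{
	struct xsc_core_device *xdev = pci_get_drvdata(pdev);

	if (!xdev)
		return -ENODEV;

	mutex_lock(&xdev->intf_state_mutex);
	set_bit(0, &xdev->intf_state);	/* placeholder "up" bit */
	mutex_unlock(&xdev->intf_state_mutex);

	return 0;
}
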
From patchwork Wed Jan 15 10:22:46 2025
X-Patchwork-Submitter: Xin Tian
X-Patchwork-Id: 13940216
X-Patchwork-Delegate: kuba@kernel.org
Message-Id: <20250115102245.3541496-3-tianx@yunsilicon.com>
In-Reply-To: <20250115102242.3541496-1-tianx@yunsilicon.com>
References: <20250115102242.3541496-1-tianx@yunsilicon.com>
Date: Wed, 15 Jan 2025 18:22:46 +0800
From: "Xin Tian"
X-Mailing-List: netdev@vger.kernel.org
X-Mailer: git-send-email 2.25.1
Subject: [PATCH v3 02/14] net-next/yunsilicon: Enable CMDQ

Enable the command queue (cmdq) to support driver-firmware
communication. Most hardware control will be performed through cmdq
commands.
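The flow, in outline: a caller builds a mailbox message whose first bytes carry a struct xsc_inbox_hdr, the submission path reserves a command slot, fills the 64-byte struct xsc_cmd_layout descriptor at the current producer index, and rings the request producer-index doorbell register; the firmware's response is matched back to the waiter through the slot index and token. A condensed sketch of the submission step, using the names from cmdq.c below (error handling and signature calculation elided):

	struct xsc_cmd_layout *lay = get_inst(cmd, cmd->cmd_pid);

	memcpy(lay->in, ent->in->first.data, sizeof(lay->in));
	lay->inlen = cpu_to_be32(ent->in->len);
	lay->type = XSC_PCI_CMD_XPORT;
	lay->token = ent->token;	/* checked against the response */
	lay->idx = ent->idx;		/* slot used to complete the waiter */

	/* the descriptor must be globally visible before the doorbell */
	wmb();
	cmd->cmd_pid = (cmd->cmd_pid + 1) % (1 << cmd->log_sz);
	writel(cmd->cmd_pid, REG_ADDR(xdev, cmd->reg.req_pid_addr));
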
Co-developed-by: Honggang Wei Signed-off-by: Honggang Wei Co-developed-by: Lei Yan Signed-off-by: Lei Yan Signed-off-by: Xin Tian --- .../yunsilicon/xsc/common/xsc_auto_hw.h | 94 + .../ethernet/yunsilicon/xsc/common/xsc_cmd.h | 632 +++++++ .../ethernet/yunsilicon/xsc/common/xsc_cmdq.h | 215 +++ .../ethernet/yunsilicon/xsc/common/xsc_core.h | 13 + .../yunsilicon/xsc/common/xsc_driver.h | 25 + .../net/ethernet/yunsilicon/xsc/pci/Makefile | 2 +- .../net/ethernet/yunsilicon/xsc/pci/cmdq.c | 1555 +++++++++++++++++ .../net/ethernet/yunsilicon/xsc/pci/main.c | 81 + 8 files changed, 2616 insertions(+), 1 deletion(-) create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_auto_hw.h create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmd.h create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmdq.h create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_driver.h create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/cmdq.c diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_auto_hw.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_auto_hw.h new file mode 100644 index 000000000..03c781de8 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_auto_hw.h @@ -0,0 +1,94 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + */ + +#ifndef __XSC_HW_H +#define __XSC_HW_H + +//hif_irq_csr_defines.h +#define HIF_IRQ_TBL2IRQ_TBL_RD_DONE_INT_MSIX_REG_ADDR 0xa1100070 + +//hif_cpm_csr_defines.h +#define HIF_CPM_LOCK_GET_REG_ADDR 0xa0000104 +#define HIF_CPM_LOCK_PUT_REG_ADDR 0xa0000108 +#define HIF_CPM_LOCK_AVAIL_REG_ADDR 0xa000010c +#define HIF_CPM_IDA_DATA_MEM_ADDR 0xa0000800 +#define HIF_CPM_IDA_CMD_REG_ADDR 0xa0000020 +#define HIF_CPM_IDA_ADDR_REG_ADDR 0xa0000080 +#define HIF_CPM_IDA_BUSY_REG_ADDR 0xa0000100 +#define HIF_CPM_IDA_CMD_REG_IDA_IDX_WIDTH 5 +#define HIF_CPM_IDA_CMD_REG_IDA_LEN_WIDTH 4 +#define HIF_CPM_IDA_CMD_REG_IDA_R0W1_WIDTH 1 +#define HIF_CPM_LOCK_GET_REG_LOCK_VLD_SHIFT 5 +#define HIF_CPM_LOCK_GET_REG_LOCK_IDX_MASK 0x1f +#define HIF_CPM_IDA_ADDR_REG_STRIDE 0x4 +#define HIF_CPM_CHIP_VERSION_H_REG_ADDR 0xa0000010 + +//mmc_csr_defines.h +#define MMC_MPT_TBL_MEM_DEPTH 32768 +#define MMC_MTT_TBL_MEM_DEPTH 262144 +#define MMC_MPT_TBL_MEM_WIDTH 256 +#define MMC_MTT_TBL_MEM_WIDTH 64 +#define MMC_MPT_TBL_MEM_ADDR 0xa4100000 +#define MMC_MTT_TBL_MEM_ADDR 0xa4200000 + +//clsf_dma_csr_defines.h +#define CLSF_DMA_DMA_UL_BUSY_REG_ADDR 0xa6010048 +#define CLSF_DMA_DMA_DL_DONE_REG_ADDR 0xa60100d0 +#define CLSF_DMA_DMA_DL_SUCCESS_REG_ADDR 0xa60100c0 +#define CLSF_DMA_ERR_CODE_CLR_REG_ADDR 0xa60100d4 +#define CLSF_DMA_DMA_RD_TABLE_ID_REG_DMA_RD_TBL_ID_MASK 0x7f +#define CLSF_DMA_DMA_RD_TABLE_ID_REG_ADDR 0xa6010020 +#define CLSF_DMA_DMA_RD_ADDR_REG_DMA_RD_BURST_NUM_SHIFT 16 +#define CLSF_DMA_DMA_RD_ADDR_REG_ADDR 0xa6010024 +#define CLSF_DMA_INDRW_RD_START_REG_ADDR 0xa6010028 + +//hif_tbl_csr_defines.h +#define HIF_TBL_TBL_DL_BUSY_REG_ADDR 0xa1060030 +#define HIF_TBL_TBL_DL_REQ_REG_TBL_DL_LEN_SHIFT 12 +#define HIF_TBL_TBL_DL_REQ_REG_TBL_DL_HOST_ID_SHIFT 11 +#define HIF_TBL_TBL_DL_REQ_REG_ADDR 0xa1060020 +#define HIF_TBL_TBL_DL_ADDR_L_REG_TBL_DL_ADDR_L_MASK 0xffffffff +#define HIF_TBL_TBL_DL_ADDR_L_REG_ADDR 0xa1060024 +#define HIF_TBL_TBL_DL_ADDR_H_REG_TBL_DL_ADDR_H_MASK 0xffffffff +#define HIF_TBL_TBL_DL_ADDR_H_REG_ADDR 0xa1060028 +#define HIF_TBL_TBL_DL_START_REG_ADDR 0xa106002c +#define HIF_TBL_TBL_UL_REQ_REG_TBL_UL_HOST_ID_SHIFT 11 +#define 
HIF_TBL_TBL_UL_REQ_REG_ADDR 0xa106007c +#define HIF_TBL_TBL_UL_ADDR_L_REG_TBL_UL_ADDR_L_MASK 0xffffffff +#define HIF_TBL_TBL_UL_ADDR_L_REG_ADDR 0xa1060080 +#define HIF_TBL_TBL_UL_ADDR_H_REG_TBL_UL_ADDR_H_MASK 0xffffffff +#define HIF_TBL_TBL_UL_ADDR_H_REG_ADDR 0xa1060084 +#define HIF_TBL_TBL_UL_START_REG_ADDR 0xa1060088 +#define HIF_TBL_MSG_RDY_REG_ADDR 0xa1060044 + +//hif_cmdqm_csr_defines.h +#define HIF_CMDQM_HOST_REQ_PID_MEM_ADDR 0xa1026000 +#define HIF_CMDQM_HOST_REQ_CID_MEM_ADDR 0xa1028000 +#define HIF_CMDQM_HOST_RSP_PID_MEM_ADDR 0xa102e000 +#define HIF_CMDQM_HOST_RSP_CID_MEM_ADDR 0xa1030000 +#define HIF_CMDQM_HOST_REQ_BUF_BASE_H_ADDR_MEM_ADDR 0xa1022000 +#define HIF_CMDQM_HOST_REQ_BUF_BASE_L_ADDR_MEM_ADDR 0xa1024000 +#define HIF_CMDQM_HOST_RSP_BUF_BASE_H_ADDR_MEM_ADDR 0xa102a000 +#define HIF_CMDQM_HOST_RSP_BUF_BASE_L_ADDR_MEM_ADDR 0xa102c000 +#define HIF_CMDQM_VECTOR_ID_MEM_ADDR 0xa1034000 +#define HIF_CMDQM_Q_ELEMENT_SZ_REG_ADDR 0xa1020020 +#define HIF_CMDQM_HOST_Q_DEPTH_REG_ADDR 0xa1020028 +#define HIF_CMDQM_HOST_VF_ERR_STS_MEM_ADDR 0xa1032000 + +//PSV use +//hif_irq_csr_defines.h +#define HIF_IRQ_CONTROL_TBL_MEM_ADDR 0xa1102000 +#define HIF_IRQ_INT_DB_REG_ADDR 0xa11000b4 +#define HIF_IRQ_CFG_VECTOR_TABLE_BUSY_REG_ADDR 0xa1100114 +#define HIF_IRQ_CFG_VECTOR_TABLE_ADDR_REG_ADDR 0xa11000f0 +#define HIF_IRQ_CFG_VECTOR_TABLE_CMD_REG_ADDR 0xa11000ec +#define HIF_IRQ_CFG_VECTOR_TABLE_MSG_LADDR_REG_ADDR 0xa11000f4 +#define HIF_IRQ_CFG_VECTOR_TABLE_MSG_UADDR_REG_ADDR 0xa11000f8 +#define HIF_IRQ_CFG_VECTOR_TABLE_MSG_DATA_REG_ADDR 0xa11000fc +#define HIF_IRQ_CFG_VECTOR_TABLE_CTRL_REG_ADDR 0xa1100100 +#define HIF_IRQ_CFG_VECTOR_TABLE_START_REG_ADDR 0xa11000e8 + +#endif /* __XSC_HW_H */ diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmd.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmd.h new file mode 100644 index 000000000..dbd5a3ae4 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmd.h @@ -0,0 +1,632 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. 
+ */ + +#ifndef __XSC_CMD_H +#define __XSC_CMD_H + +#define XSC_CMDQ_VERSION 0x32 + +#define XSC_BOARD_SN_LEN 32 + +enum { + XSC_CMD_STAT_OK = 0x0, + XSC_CMD_STAT_INT_ERR = 0x1, + XSC_CMD_STAT_BAD_OP_ERR = 0x2, + XSC_CMD_STAT_BAD_PARAM_ERR = 0x3, + XSC_CMD_STAT_BAD_SYS_STATE_ERR = 0x4, + XSC_CMD_STAT_BAD_RES_ERR = 0x5, + XSC_CMD_STAT_RES_BUSY = 0x6, + XSC_CMD_STAT_LIM_ERR = 0x8, + XSC_CMD_STAT_BAD_RES_STATE_ERR = 0x9, + XSC_CMD_STAT_IX_ERR = 0xa, + XSC_CMD_STAT_NO_RES_ERR = 0xf, + XSC_CMD_STAT_BAD_QP_STATE_ERR = 0x10, + XSC_CMD_STAT_BAD_PKT_ERR = 0x30, + XSC_CMD_STAT_BAD_SIZE_OUTS_CQES_ERR = 0x40, + XSC_CMD_STAT_BAD_INP_LEN_ERR = 0x50, + XSC_CMD_STAT_BAD_OUTP_LEN_ERR = 0x51, +}; + +enum { + XSC_CMD_OP_QUERY_HCA_CAP = 0x100, + XSC_CMD_OP_QUERY_CMDQ_VERSION = 0x10a, + XSC_CMD_OP_FUNCTION_RESET = 0x10c, + XSC_CMD_OP_DUMMY = 0x10d, + XSC_CMD_OP_QUERY_GUID = 0x113, + XSC_CMD_OP_ACTIVATE_HW_CONFIG = 0x114, + + XSC_CMD_OP_CREATE_EQ = 0x301, + XSC_CMD_OP_DESTROY_EQ = 0x302, + + XSC_CMD_OP_CREATE_CQ = 0x400, + XSC_CMD_OP_DESTROY_CQ = 0x401, + + XSC_CMD_OP_CREATE_QP = 0x500, + XSC_CMD_OP_DESTROY_QP = 0x501, + XSC_CMD_OP_RST2INIT_QP = 0x502, + XSC_CMD_OP_INIT2RTR_QP = 0x503, + XSC_CMD_OP_RTR2RTS_QP = 0x504, + XSC_CMD_OP_RTS2RTS_QP = 0x505, + XSC_CMD_OP_SQERR2RTS_QP = 0x506, + XSC_CMD_OP_2ERR_QP = 0x507, + XSC_CMD_OP_RTS2SQD_QP = 0x508, + XSC_CMD_OP_SQD2RTS_QP = 0x509, + XSC_CMD_OP_2RST_QP = 0x50a, + XSC_CMD_OP_INIT2INIT_QP = 0x50e, + XSC_CMD_OP_CREATE_MULTI_QP = 0x515, + + XSC_CMD_OP_MODIFY_RAW_QP = 0x81f, + + XSC_CMD_OP_ENABLE_NIC_HCA = 0x810, + XSC_CMD_OP_DISABLE_NIC_HCA = 0x811, + + XSC_CMD_OP_QUERY_VPORT_STATE = 0x822, + XSC_CMD_OP_MODIFY_VPORT_STATE = 0x823, + XSC_CMD_OP_QUERY_EVENT_TYPE = 0x831, + + XSC_CMD_OP_ENABLE_MSIX = 0x850, + + XSC_CMD_OP_SET_MTU = 0x1100, + XSC_CMD_OP_QUERY_ETH_MAC = 0X1101, + + XSC_CMD_OP_SET_PORT_ADMIN_STATUS = 0x1801, + + XSC_CMD_OP_MAX +}; + +enum xsc_dma_direct { + XSC_DMA_DIR_TO_MAC, + XSC_DMA_DIR_READ, + XSC_DMA_DIR_WRITE, + XSC_DMA_DIR_LOOPBACK, + XSC_DMA_DIR_MAX +}; + +/* hw feature bitmap, 32bit */ +enum xsc_hw_feature_flag { + XSC_HW_RDMA_SUPPORT = BIT(0), + XSC_HW_PFC_PRIO_STATISTIC_SUPPORT = BIT(1), + XSC_HW_THIRD_FEATURE = BIT(2), + XSC_HW_PFC_STALL_STATS_SUPPORT = BIT(3), + XSC_HW_RDMA_CM_SUPPORT = BIT(5), + + XSC_HW_LAST_FEATURE = BIT(31) +}; + +struct xsc_inbox_hdr { + __be16 opcode; + u8 rsvd[4]; + __be16 ver; // cmd version +}; + +struct xsc_outbox_hdr { + u8 status; + u8 rsvd[5]; + __be16 ver; +}; + +/*CQ mbox*/ +struct xsc_cq_context { + __be16 eqn; // event queue number + __be16 pa_num; // physical address count in the ctx + __be16 glb_func_id; + u8 log_cq_sz; + u8 cq_type; +}; + +struct xsc_create_cq_mbox_in { + struct xsc_inbox_hdr hdr; + struct xsc_cq_context ctx; + __be64 pas[]; // physical address list +}; + +struct xsc_create_cq_mbox_out { + struct xsc_outbox_hdr hdr; + __be32 cqn; // completion queue number + u8 rsvd[4]; +}; + +struct xsc_destroy_cq_mbox_in { + struct xsc_inbox_hdr hdr; + __be32 cqn; + u8 rsvd[4]; +}; + +struct xsc_destroy_cq_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd[8]; +}; + +/*QP mbox*/ +struct xsc_create_qp_request { + __be16 input_qpn; + __be16 pa_num; + u8 qp_type; + u8 log_sq_sz; + u8 log_rq_sz; + u8 dma_direct; + __be32 pdn; // protect domain number + __be16 cqn_send; + __be16 cqn_recv; + __be16 glb_funcid; + /*rsvd, the old logic_port */ + u8 rsvd[2]; + __be64 pas[]; +}; + +struct xsc_create_qp_mbox_in { + struct xsc_inbox_hdr hdr; + struct xsc_create_qp_request req; +}; + +struct xsc_create_qp_mbox_out { + 
struct xsc_outbox_hdr hdr; + __be32 qpn; // queue pair number + u8 rsvd[4]; +}; + +struct xsc_destroy_qp_mbox_in { + struct xsc_inbox_hdr hdr; + __be32 qpn; + u8 rsvd[4]; +}; + +struct xsc_destroy_qp_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd[8]; +}; + +struct xsc_qp_context { + __be32 remote_qpn; + __be32 cqn_send; + __be32 cqn_recv; + __be32 next_send_psn; + __be32 next_recv_psn; + __be32 pdn; + __be16 src_udp_port; + __be16 path_id; + u8 mtu_mode; + u8 lag_sel; + u8 lag_sel_en; + u8 retry_cnt; + u8 rnr_retry; + u8 dscp; + u8 state; + u8 hop_limit; + u8 dmac[6]; + u8 smac[6]; + __be32 dip[4]; + __be32 sip[4]; + __be16 ip_type; + __be16 grp_id; + u8 vlan_valid; + u8 dci_cfi_prio_sl; + __be16 vlan_id; + u8 qp_out_port; + u8 pcie_no; + __be16 lag_id; + __be16 func_id; + __be16 rsvd; +}; + +struct xsc_modify_qp_mbox_in { + struct xsc_inbox_hdr hdr; + __be32 qpn; + struct xsc_qp_context ctx; + u8 no_need_wait; +}; + +struct xsc_modify_qp_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd[8]; +}; + +struct xsc_create_multiqp_mbox_in { + struct xsc_inbox_hdr hdr; + __be16 qp_num; + u8 qp_type; + u8 rsvd; + __be32 req_len; + u8 data[]; +}; + +struct xsc_create_multiqp_mbox_out { + struct xsc_outbox_hdr hdr; + __be32 qpn_base; +}; + +/* MSIX TABLE mbox */ +struct xsc_msix_table_info_mbox_in { + struct xsc_inbox_hdr hdr; + __be16 index; + u8 rsvd[6]; +}; + +struct xsc_msix_table_info_mbox_out { + struct xsc_outbox_hdr hdr; + __be32 addr_lo; + __be32 addr_hi; + __be32 data; +}; + +/*EQ mbox*/ +struct xsc_eq_context { + __be16 vecidx; + __be16 pa_num; + u8 log_eq_sz; + __be16 glb_func_id; + u8 is_async_eq; + u8 rsvd; +}; + +struct xsc_create_eq_mbox_in { + struct xsc_inbox_hdr hdr; + struct xsc_eq_context ctx; + __be64 pas[]; +}; + +struct xsc_create_eq_mbox_out { + struct xsc_outbox_hdr hdr; + __be32 eqn; + u8 rsvd[4]; +}; + +struct xsc_destroy_eq_mbox_in { + struct xsc_inbox_hdr hdr; + __be32 eqn; + u8 rsvd[4]; +}; + +struct xsc_destroy_eq_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd[8]; +}; + +struct xsc_query_eq_mbox_in { + struct xsc_inbox_hdr hdr; + u8 rsvd0[3]; + u8 eqn; + u8 rsvd1[4]; +}; + +struct xsc_query_eq_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd[8]; + struct xsc_eq_context ctx; +}; + +struct xsc_query_cq_mbox_in { + struct xsc_inbox_hdr hdr; + __be32 cqn; + u8 rsvd0[4]; +}; + +struct xsc_query_cq_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd0[8]; + struct xsc_cq_context ctx; + u8 rsvd6[16]; + __be64 pas[]; +}; + +struct xsc_cmd_query_cmdq_ver_mbox_in { + struct xsc_inbox_hdr hdr; + u8 rsvd[8]; +}; + +struct xsc_cmd_query_cmdq_ver_mbox_out { + struct xsc_outbox_hdr hdr; + __be16 cmdq_ver; + u8 rsvd[6]; +}; + +struct xsc_cmd_dummy_mbox_in { + struct xsc_inbox_hdr hdr; + u8 rsvd[8]; +}; + +struct xsc_cmd_dummy_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd[8]; +}; + +struct xsc_fw_version { + u8 fw_version_major; + u8 fw_version_minor; + __be16 fw_version_patch; + __be32 fw_version_tweak; + u8 fw_version_extra_flag; + u8 rsvd[7]; +}; + +struct xsc_hca_cap { + u8 rsvd1[12]; + u8 send_seg_num; + u8 send_wqe_shift; + u8 recv_seg_num; + u8 recv_wqe_shift; + u8 log_max_srq_sz; + u8 log_max_qp_sz; + u8 log_max_mtt; + u8 log_max_qp; + u8 log_max_strq_sz; + u8 log_max_srqs; + u8 rsvd4[2]; + u8 log_max_tso; + u8 log_max_cq_sz; + u8 rsvd6; + u8 log_max_cq; + u8 log_max_eq_sz; + u8 log_max_mkey; + u8 log_max_msix; + u8 log_max_eq; + u8 max_indirection; + u8 log_max_mrw_sz; + u8 log_max_bsf_list_sz; + u8 log_max_klm_list_sz; + u8 rsvd_8_0; + u8 log_max_ra_req_dc; + u8 
rsvd_8_1; + u8 log_max_ra_res_dc; + u8 rsvd9; + u8 log_max_ra_req_qp; + u8 log_max_qp_depth; + u8 log_max_ra_res_qp; + __be16 max_vfs; + __be16 raweth_qp_id_end; + __be16 raw_tpe_qp_num; + __be16 max_qp_count; + __be16 raweth_qp_id_base; + u8 rsvd13; + u8 local_ca_ack_delay; + u8 max_num_eqs; + u8 num_ports; + u8 log_max_msg; + u8 mac_port; + __be16 raweth_rss_qp_id_base; + __be16 stat_rate_support; + u8 rsvd16[2]; + __be64 flags; + u8 rsvd17; + u8 uar_sz; + u8 rsvd18; + u8 log_pg_sz; + __be16 bf_log_bf_reg_size; + __be16 msix_base; + __be16 msix_num; + __be16 max_desc_sz_sq; + u8 rsvd20[2]; + __be16 max_desc_sz_rq; + u8 rsvd21[2]; + __be16 max_desc_sz_sq_dc; + u8 rsvd22[4]; + __be16 max_qp_mcg; + u8 rsvd23; + u8 log_max_mcg; + u8 rsvd24; + u8 log_max_pd; + u8 rsvd25; + u8 log_max_xrcd; + u8 rsvd26[40]; + __be32 uar_page_sz; + u8 rsvd27[8]; + __be32 hw_feature_flag;/*enum xsc_hw_feature_flag*/ + __be16 pf0_vf_funcid_base; + __be16 pf0_vf_funcid_top; + __be16 pf1_vf_funcid_base; + __be16 pf1_vf_funcid_top; + __be16 pcie0_pf_funcid_base; + __be16 pcie0_pf_funcid_top; + __be16 pcie1_pf_funcid_base; + __be16 pcie1_pf_funcid_top; + u8 log_msx_atomic_size_qp; + u8 pcie_host; + u8 rsvd28; + u8 log_msx_atomic_size_dc; + u8 board_sn[XSC_BOARD_SN_LEN]; + u8 max_tc; + u8 mac_bit; + __be16 funcid_to_logic_port; + u8 rsvd29[6]; + u8 nif_port_num; + u8 reg_mr_via_cmdq; + __be32 hca_core_clock; + __be32 max_rwq_indirection_tables;/*rss_caps*/ + __be32 max_rwq_indirection_table_size;/*rss_caps*/ + __be32 chip_ver_h; + __be32 chip_ver_m; + __be32 chip_ver_l; + __be32 hotfix_num; + __be32 feature_flag; + __be32 rx_pkt_len_max; + __be32 glb_func_id; + __be64 tx_db; + __be64 rx_db; + __be64 complete_db; + __be64 complete_reg; + __be64 event_db; + __be32 qp_rate_limit_min; + __be32 qp_rate_limit_max; + struct xsc_fw_version fw_ver; + u8 lag_logic_port_ofst; +}; + +struct xsc_cmd_query_hca_cap_mbox_in { + struct xsc_inbox_hdr hdr; + __be16 cpu_num; + u8 rsvd[6]; +}; + +struct xsc_cmd_query_hca_cap_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd0[8]; + struct xsc_hca_cap hca_cap; +}; + +struct xsc_query_vport_state_out { + struct xsc_outbox_hdr hdr; + u8 admin_state:4; + u8 state:4; +}; + +struct xsc_query_vport_state_in { + struct xsc_inbox_hdr hdr; + u32 other_vport:1; + u32 vport_number:16; + u32 rsvd0:15; +}; + +enum { + XSC_CMD_EVENT_RESP_CHANGE_LINK = BIT(0), + XSC_CMD_EVENT_RESP_TEMP_WARN = BIT(1), + XSC_CMD_EVENT_RESP_OVER_TEMP_PROTECTION = BIT(2) +}; + +struct xsc_event_resp { + u8 resp_cmd_type; +}; + +struct xsc_event_query_type_mbox_in { + struct xsc_inbox_hdr hdr; + u8 rsvd[2]; +}; + +struct xsc_event_query_type_mbox_out { + struct xsc_outbox_hdr hdr; + struct xsc_event_resp ctx; +}; + +struct xsc_modify_raw_qp_request { + u16 qpn; + u16 lag_id; + u16 func_id; + u8 dma_direct; + u8 prio; + u8 qp_out_port; + u8 rsvd[7]; +}; + +struct xsc_modify_raw_qp_mbox_in { + struct xsc_inbox_hdr hdr; + u8 pcie_no; + u8 rsvd[7]; + struct xsc_modify_raw_qp_request req; +}; + +struct xsc_modify_raw_qp_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd[8]; +}; + +struct xsc_set_mtu_mbox_in { + struct xsc_inbox_hdr hdr; + __be16 mtu; + __be16 rx_buf_sz_min; + u8 mac_port; + u8 rsvd; +}; + +struct xsc_set_mtu_mbox_out { + struct xsc_outbox_hdr hdr; +}; + +struct xsc_query_eth_mac_mbox_in { + struct xsc_inbox_hdr hdr; + u8 index; +}; + +struct xsc_query_eth_mac_mbox_out { + struct xsc_outbox_hdr hdr; + u8 mac[6]; +}; + +enum { + XSC_TBM_CAP_HASH_PPH = 0, + XSC_TBM_CAP_RSS, + XSC_TBM_CAP_PP_BYPASS, + 
XSC_TBM_CAP_PCT_DROP_CONFIG, +}; + +struct xsc_nic_attr { + __be16 caps; + __be16 caps_mask; + u8 mac_addr[6]; +}; + +struct xsc_rss_attr { + u8 rss_en; + u8 hfunc; + __be16 rqn_base; + __be16 rqn_num; + __be32 hash_tmpl; +}; + +struct xsc_cmd_enable_nic_hca_mbox_in { + struct xsc_inbox_hdr hdr; + struct xsc_nic_attr nic; + struct xsc_rss_attr rss; +}; + +struct xsc_cmd_enable_nic_hca_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd0[2]; +}; + +struct xsc_nic_dis_attr { + __be16 caps; +}; + +struct xsc_cmd_disable_nic_hca_mbox_in { + struct xsc_inbox_hdr hdr; + struct xsc_nic_dis_attr nic; +}; + +struct xsc_cmd_disable_nic_hca_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd0[4]; +}; + +struct xsc_function_reset_mbox_in { + struct xsc_inbox_hdr hdr; + __be16 glb_func_id; + u8 rsvd[6]; +}; + +struct xsc_function_reset_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd[8]; +}; + +struct xsc_cmd_query_guid_mbox_in { + struct xsc_inbox_hdr hdr; + u8 rsvd[8]; +}; + +struct xsc_cmd_query_guid_mbox_out { + struct xsc_outbox_hdr hdr; + __be64 guid; +}; + +struct xsc_cmd_activate_hw_config_mbox_in { + struct xsc_inbox_hdr hdr; + u8 rsvd[8]; +}; + +struct xsc_cmd_activate_hw_config_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd[8]; +}; + +struct xsc_event_set_port_admin_status_mbox_in { + struct xsc_inbox_hdr hdr; + u16 admin_status; +}; + +struct xsc_event_set_port_admin_status_mbox_out { + struct xsc_outbox_hdr hdr; + u32 status; +}; + +#endif /* __XSC_CMD_H */ diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmdq.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmdq.h new file mode 100644 index 000000000..6ca6aae52 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmdq.h @@ -0,0 +1,215 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + */ + +#ifndef __XSC_CMDQ_H +#define __XSC_CMDQ_H + +#include "common/xsc_cmd.h" + +enum { + XSC_CMD_TIMEOUT_MSEC = 10 * 1000, + XSC_CMD_WQ_MAX_NAME = 32, +}; + +enum { + XSC_CMD_DATA, /* print command payload only */ + XSC_CMD_TIME, /* print command execution time */ +}; + +enum { + XSC_MAX_COMMANDS = 32, + XSC_CMD_DATA_BLOCK_SIZE = 512, + XSC_PCI_CMD_XPORT = 7, +}; + +struct xsc_cmd_prot_block { + u8 data[XSC_CMD_DATA_BLOCK_SIZE]; + u8 rsvd0[48]; + __be64 next; + __be32 block_num; + u8 owner_status; //init to 0, dma user should change this val to 1 + u8 token; + u8 ctrl_sig; + u8 sig; +}; + +struct cache_ent { + /* protect block chain allocations + */ + spinlock_t lock; + struct list_head head; +}; + +struct cmd_msg_cache { + struct cache_ent large; + struct cache_ent med; + +}; + +#define CMD_FIRST_SIZE 8 +struct xsc_cmd_first { + __be32 data[CMD_FIRST_SIZE]; +}; + +struct xsc_cmd_mailbox { + void *buf; + dma_addr_t dma; + struct xsc_cmd_mailbox *next; +}; + +struct xsc_cmd_msg { + struct list_head list; + struct cache_ent *cache; + u32 len; + struct xsc_cmd_first first; + struct xsc_cmd_mailbox *next; +}; + +#define RSP_FIRST_SIZE 14 +struct xsc_rsp_first { + __be32 data[RSP_FIRST_SIZE]; //can be larger, xsc_rsp_layout +}; + +struct xsc_rsp_msg { + struct list_head list; + struct cache_ent *cache; + u32 len; + struct xsc_rsp_first first; + struct xsc_cmd_mailbox *next; +}; + +typedef void (*xsc_cmd_cbk_t)(int status, void *context); + +//hw will use this for some records(e.g. 
vf_id) +struct cmdq_rsv { + u16 vf_id; + u8 rsv[2]; +}; + +//related with hw, won't change +#define CMDQ_ENTRY_SIZE 64 + +struct xsc_cmd_layout { + struct cmdq_rsv rsv0; + __be32 inlen; + __be64 in_ptr; + __be32 in[CMD_FIRST_SIZE]; + __be64 out_ptr; + __be32 outlen; + u8 token; + u8 sig; + u8 idx; + u8 type: 7; + u8 owner_bit: 1; //rsv for hw, arm will check this bit to make sure mem written +}; + +struct xsc_rsp_layout { + struct cmdq_rsv rsv0; + __be32 out[RSP_FIRST_SIZE]; + u8 token; + u8 sig; + u8 idx; + u8 type: 7; + u8 owner_bit: 1; //rsv for hw, driver will check this bit to make sure mem written +}; + +struct xsc_cmd_work_ent { + struct xsc_cmd_msg *in; + struct xsc_rsp_msg *out; + int idx; + struct completion done; + struct xsc_cmd *cmd; + struct work_struct work; + struct xsc_cmd_layout *lay; + struct xsc_rsp_layout *rsp_lay; + int ret; + u8 status; + u8 token; + struct timespec64 ts1; + struct timespec64 ts2; +}; + +struct xsc_cmd_debug { + struct dentry *dbg_root; + struct dentry *dbg_in; + struct dentry *dbg_out; + struct dentry *dbg_outlen; + struct dentry *dbg_status; + struct dentry *dbg_run; + void *in_msg; + void *out_msg; + u8 status; + u16 inlen; + u16 outlen; +}; + +struct xsc_cmd_stats { + u64 sum; + u64 n; + struct dentry *root; + struct dentry *avg; + struct dentry *count; + /* protect command average calculations */ + spinlock_t lock; +}; + +struct xsc_cmd_reg { + u32 req_pid_addr; + u32 req_cid_addr; + u32 rsp_pid_addr; + u32 rsp_cid_addr; + u32 req_buf_h_addr; + u32 req_buf_l_addr; + u32 rsp_buf_h_addr; + u32 rsp_buf_l_addr; + u32 msix_vec_addr; + u32 element_sz_addr; + u32 q_depth_addr; + u32 interrupt_stat_addr; +}; + +enum xsc_cmd_status { + XSC_CMD_STATUS_NORMAL, + XSC_CMD_STATUS_TIMEDOUT, +}; + +struct xsc_cmd { + struct xsc_cmd_reg reg; + void *cmd_buf; + void *cq_buf; + dma_addr_t dma; + dma_addr_t cq_dma; + u16 cmd_pid; + u16 cq_cid; + u8 owner_bit; + u8 cmdif_rev; + u8 log_sz; + u8 log_stride; + int max_reg_cmds; + int events; + u32 __iomem *vector; + + spinlock_t alloc_lock; /* protect command queue allocations */ + spinlock_t token_lock; /* protect token allocations */ + spinlock_t doorbell_lock; /* protect cmdq req pid doorbell */ + u8 token; + unsigned long bitmask; + char wq_name[XSC_CMD_WQ_MAX_NAME]; + struct workqueue_struct *wq; + struct task_struct *cq_task; + struct semaphore sem; + int mode; + struct xsc_cmd_work_ent *ent_arr[XSC_MAX_COMMANDS]; + struct dma_pool *pool; + struct xsc_cmd_debug dbg; + struct cmd_msg_cache cache; + int checksum_disabled; + struct xsc_cmd_stats stats[XSC_CMD_OP_MAX]; + unsigned int irqn; + u8 ownerbit_learned; + u8 cmd_status; +}; + +#endif diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h index 2c4e8e731..3b4b77948 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h @@ -8,6 +8,7 @@ #include #include +#include "common/xsc_cmdq.h" #define XSC_PCI_VENDOR_ID 0x1f67 @@ -26,6 +27,11 @@ #define XSC_MV_HOST_VF_DEV_ID 0x1152 #define XSC_MV_SOC_PF_DEV_ID 0x1153 +#define REG_ADDR(dev, offset) \ + (((dev)->bar) + ((offset) - 0xA0000000)) + +#define REG_WIDTH_TO_STRIDE(width) ((width) / 8) + struct xsc_dev_resource { struct mutex alloc_mutex; /* protect buffer alocation according to numa node */ }; @@ -35,6 +41,10 @@ enum xsc_pci_state { XSC_PCI_STATE_ENABLED, }; +enum xsc_interface_state { + XSC_INTERFACE_STATE_UP = BIT(0), +}; + struct xsc_core_device { struct pci_dev *pdev; 
struct device *device; @@ -44,6 +54,9 @@ struct xsc_core_device { void __iomem *bar; int bar_num; + struct xsc_cmd cmd; + u16 cmdq_ver; + struct mutex pci_state_mutex; /* protect pci_state */ enum xsc_pci_state pci_state; struct mutex intf_state_mutex; /* protect intf_state */ diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_driver.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_driver.h new file mode 100644 index 000000000..72b2df6c9 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_driver.h @@ -0,0 +1,25 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + */ + +#ifndef __XSC_DRIVER_H +#define __XSC_DRIVER_H + +#include "common/xsc_core.h" +#include "common/xsc_cmd.h" + +int xsc_cmd_init(struct xsc_core_device *xdev); +void xsc_cmd_cleanup(struct xsc_core_device *xdev); +void xsc_cmd_use_events(struct xsc_core_device *xdev); +void xsc_cmd_use_polling(struct xsc_core_device *xdev); +int xsc_cmd_err_handler(struct xsc_core_device *xdev); +void xsc_cmd_resp_handler(struct xsc_core_device *xdev); +int xsc_cmd_status_to_err(struct xsc_outbox_hdr *hdr); +int xsc_cmd_exec(struct xsc_core_device *xdev, void *in, int in_size, void *out, + int out_size); +int xsc_cmd_version_check(struct xsc_core_device *xdev); +const char *xsc_command_str(int command); + +#endif + diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile index 709270df8..5e0f0a205 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile @@ -6,4 +6,4 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o -xsc_pci-y := main.o +xsc_pci-y := main.o cmdq.o diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/cmdq.c b/drivers/net/ethernet/yunsilicon/xsc/pci/cmdq.c new file mode 100644 index 000000000..028970151 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/cmdq.c @@ -0,0 +1,1555 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* + * Copyright (c) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + * Copyright (c) 2013-2016, Mellanox Technologies. All rights reserved. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "common/xsc_driver.h" +#include "common/xsc_cmd.h" +#include "common/xsc_auto_hw.h" +#include "common/xsc_core.h" + +enum { + CMD_IF_REV = 3, +}; + +enum { + CMD_MODE_POLLING, + CMD_MODE_EVENTS +}; + +enum { + NUM_LONG_LISTS = 2, + NUM_MED_LISTS = 64, + LONG_LIST_SIZE = (2ULL * 1024 * 1024 * 1024 / PAGE_SIZE) * 8 + 16 + + XSC_CMD_DATA_BLOCK_SIZE, + MED_LIST_SIZE = 16 + XSC_CMD_DATA_BLOCK_SIZE, +}; + +enum { + XSC_CMD_DELIVERY_STAT_OK = 0x0, + XSC_CMD_DELIVERY_STAT_SIGNAT_ERR = 0x1, + XSC_CMD_DELIVERY_STAT_TOK_ERR = 0x2, + XSC_CMD_DELIVERY_STAT_BAD_BLK_NUM_ERR = 0x3, + XSC_CMD_DELIVERY_STAT_OUT_PTR_ALIGN_ERR = 0x4, + XSC_CMD_DELIVERY_STAT_IN_PTR_ALIGN_ERR = 0x5, + XSC_CMD_DELIVERY_STAT_FW_ERR = 0x6, + XSC_CMD_DELIVERY_STAT_IN_LENGTH_ERR = 0x7, + XSC_CMD_DELIVERY_STAT_OUT_LENGTH_ERR = 0x8, + XSC_CMD_DELIVERY_STAT_RES_FLD_NOT_CLR_ERR = 0x9, + XSC_CMD_DELIVERY_STAT_CMD_DESCR_ERR = 0x10, +}; + +static struct xsc_cmd_work_ent *alloc_cmd(struct xsc_cmd *cmd, + struct xsc_cmd_msg *in, + struct xsc_rsp_msg *out) +{ + struct xsc_cmd_work_ent *ent; + + ent = kzalloc(sizeof(*ent), GFP_KERNEL); + if (!ent) + return ERR_PTR(-ENOMEM); + + ent->in = in; + ent->out = out; + ent->cmd = cmd; + + return ent; +} + +static u8 alloc_token(struct xsc_cmd *cmd) +{ + u8 token; + + spin_lock(&cmd->token_lock); + token = cmd->token++ % 255 + 1; + spin_unlock(&cmd->token_lock); + + return token; +} + +static int alloc_ent(struct xsc_cmd *cmd) +{ + unsigned long flags; + int ret; + + spin_lock_irqsave(&cmd->alloc_lock, flags); + ret = find_first_bit(&cmd->bitmask, cmd->max_reg_cmds); + if (ret < cmd->max_reg_cmds) + clear_bit(ret, &cmd->bitmask); + spin_unlock_irqrestore(&cmd->alloc_lock, flags); + + return ret < cmd->max_reg_cmds ? 
ret : -ENOMEM; +} + +static void free_ent(struct xsc_cmd *cmd, int idx) +{ + unsigned long flags; + + spin_lock_irqsave(&cmd->alloc_lock, flags); + set_bit(idx, &cmd->bitmask); + spin_unlock_irqrestore(&cmd->alloc_lock, flags); +} + +static struct xsc_cmd_layout *get_inst(struct xsc_cmd *cmd, int idx) +{ + return cmd->cmd_buf + (idx << cmd->log_stride); +} + +static struct xsc_rsp_layout *get_cq_inst(struct xsc_cmd *cmd, int idx) +{ + return cmd->cq_buf + (idx << cmd->log_stride); +} + +static u8 xor8_buf(void *buf, int len) +{ + u8 *ptr = buf; + u8 sum = 0; + int i; + + for (i = 0; i < len; i++) + sum ^= ptr[i]; + + return sum; +} + +static int verify_block_sig(struct xsc_cmd_prot_block *block) +{ + if (xor8_buf(block->rsvd0, sizeof(*block) - sizeof(block->data) - 1) != 0xff) + return -EINVAL; + + if (xor8_buf(block, sizeof(*block)) != 0xff) + return -EINVAL; + + return 0; +} + +static void calc_block_sig(struct xsc_cmd_prot_block *block, u8 token) +{ + block->token = token; + block->ctrl_sig = ~xor8_buf(block->rsvd0, sizeof(*block) - sizeof(block->data) - 2); + block->sig = ~xor8_buf(block, sizeof(*block) - 1); +} + +static void calc_chain_sig(struct xsc_cmd_mailbox *head, u8 token) +{ + struct xsc_cmd_mailbox *next = head; + + while (next) { + calc_block_sig(next->buf, token); + next = next->next; + } +} + +static void set_signature(struct xsc_cmd_work_ent *ent) +{ + ent->lay->sig = ~xor8_buf(ent->lay, sizeof(*ent->lay)); + calc_chain_sig(ent->in->next, ent->token); + calc_chain_sig(ent->out->next, ent->token); +} + +static void free_cmd(struct xsc_cmd_work_ent *ent) +{ + kfree(ent); +} + +static int verify_signature(struct xsc_cmd_work_ent *ent) +{ + struct xsc_cmd_mailbox *next = ent->out->next; + int err; + u8 sig; + + sig = xor8_buf(ent->rsp_lay, sizeof(*ent->rsp_lay)); + if (sig != 0xff) + return -EINVAL; + + while (next) { + err = verify_block_sig(next->buf); + if (err) + return err; + + next = next->next; + } + + return 0; +} + +const char *xsc_command_str(int command) +{ + switch (command) { + case XSC_CMD_OP_QUERY_HCA_CAP: + return "QUERY_HCA_CAP"; + + case XSC_CMD_OP_QUERY_CMDQ_VERSION: + return "QUERY_CMDQ_VERSION"; + + case XSC_CMD_OP_FUNCTION_RESET: + return "FUNCTION_RESET"; + + case XSC_CMD_OP_DUMMY: + return "DUMMY_CMD"; + + case XSC_CMD_OP_CREATE_EQ: + return "CREATE_EQ"; + + case XSC_CMD_OP_DESTROY_EQ: + return "DESTROY_EQ"; + + case XSC_CMD_OP_CREATE_CQ: + return "CREATE_CQ"; + + case XSC_CMD_OP_DESTROY_CQ: + return "DESTROY_CQ"; + + case XSC_CMD_OP_CREATE_QP: + return "CREATE_QP"; + + case XSC_CMD_OP_DESTROY_QP: + return "DESTROY_QP"; + + case XSC_CMD_OP_RST2INIT_QP: + return "RST2INIT_QP"; + + case XSC_CMD_OP_INIT2RTR_QP: + return "INIT2RTR_QP"; + + case XSC_CMD_OP_RTR2RTS_QP: + return "RTR2RTS_QP"; + + case XSC_CMD_OP_RTS2RTS_QP: + return "RTS2RTS_QP"; + + case XSC_CMD_OP_SQERR2RTS_QP: + return "SQERR2RTS_QP"; + + case XSC_CMD_OP_2ERR_QP: + return "2ERR_QP"; + + case XSC_CMD_OP_RTS2SQD_QP: + return "RTS2SQD_QP"; + + case XSC_CMD_OP_SQD2RTS_QP: + return "SQD2RTS_QP"; + + case XSC_CMD_OP_2RST_QP: + return "2RST_QP"; + + case XSC_CMD_OP_INIT2INIT_QP: + return "INIT2INIT_QP"; + + case XSC_CMD_OP_MODIFY_RAW_QP: + return "MODIFY_RAW_QP"; + + case XSC_CMD_OP_ENABLE_NIC_HCA: + return "ENABLE_NIC_HCA"; + + case XSC_CMD_OP_DISABLE_NIC_HCA: + return "DISABLE_NIC_HCA"; + + case XSC_CMD_OP_QUERY_VPORT_STATE: + return "QUERY_VPORT_STATE"; + + case XSC_CMD_OP_MODIFY_VPORT_STATE: + return "MODIFY_VPORT_STATE"; + + case XSC_CMD_OP_QUERY_EVENT_TYPE: + return "QUERY_EVENT_TYPE"; + + 
case XSC_CMD_OP_ENABLE_MSIX: + return "ENABLE_MSIX"; + + case XSC_CMD_OP_SET_MTU: + return "SET_MTU"; + + case XSC_CMD_OP_QUERY_ETH_MAC: + return "QUERY_ETH_MAC"; + + default: return "unknown command opcode"; + } +} + +static void cmd_work_handler(struct work_struct *work) +{ + struct xsc_cmd_work_ent *ent = container_of(work, struct xsc_cmd_work_ent, work); + struct xsc_cmd *cmd = ent->cmd; + struct xsc_core_device *xdev = container_of(cmd, struct xsc_core_device, cmd); + struct xsc_cmd_layout *lay; + struct semaphore *sem; + unsigned long flags; + + sem = &cmd->sem; + down(sem); + ent->idx = alloc_ent(cmd); + if (ent->idx < 0) { + pci_err(xdev->pdev, "failed to allocate command entry\n"); + up(sem); + return; + } + + ent->token = alloc_token(cmd); + cmd->ent_arr[ent->idx] = ent; + + spin_lock_irqsave(&cmd->doorbell_lock, flags); + lay = get_inst(cmd, cmd->cmd_pid); + ent->lay = lay; + memset(lay, 0, sizeof(*lay)); + memcpy(lay->in, ent->in->first.data, sizeof(lay->in)); + if (ent->in->next) + lay->in_ptr = cpu_to_be64(ent->in->next->dma); + lay->inlen = cpu_to_be32(ent->in->len); + if (ent->out->next) + lay->out_ptr = cpu_to_be64(ent->out->next->dma); + lay->outlen = cpu_to_be32(ent->out->len); + lay->type = XSC_PCI_CMD_XPORT; + lay->token = ent->token; + lay->idx = ent->idx; + if (!cmd->checksum_disabled) + set_signature(ent); + else + lay->sig = 0xff; + + ktime_get_ts64(&ent->ts1); + + /* ring doorbell after the descriptor is valid */ + wmb(); + + cmd->cmd_pid = (cmd->cmd_pid + 1) % (1 << cmd->log_sz); + writel(cmd->cmd_pid, REG_ADDR(xdev, cmd->reg.req_pid_addr)); + spin_unlock_irqrestore(&cmd->doorbell_lock, flags); +} + +static const char *deliv_status_to_str(u8 status) +{ + switch (status) { + case XSC_CMD_DELIVERY_STAT_OK: + return "no errors"; + case XSC_CMD_DELIVERY_STAT_SIGNAT_ERR: + return "signature error"; + case XSC_CMD_DELIVERY_STAT_TOK_ERR: + return "token error"; + case XSC_CMD_DELIVERY_STAT_BAD_BLK_NUM_ERR: + return "bad block number"; + case XSC_CMD_DELIVERY_STAT_OUT_PTR_ALIGN_ERR: + return "output pointer not aligned to block size"; + case XSC_CMD_DELIVERY_STAT_IN_PTR_ALIGN_ERR: + return "input pointer not aligned to block size"; + case XSC_CMD_DELIVERY_STAT_FW_ERR: + return "firmware internal error"; + case XSC_CMD_DELIVERY_STAT_IN_LENGTH_ERR: + return "command input length error"; + case XSC_CMD_DELIVERY_STAT_OUT_LENGTH_ERR: + return "command output length error"; + case XSC_CMD_DELIVERY_STAT_RES_FLD_NOT_CLR_ERR: + return "reserved fields not cleared"; + case XSC_CMD_DELIVERY_STAT_CMD_DESCR_ERR: + return "bad command descriptor type"; + default: + return "unknown status code"; + } +} + +static u16 msg_to_opcode(struct xsc_cmd_msg *in) +{ + struct xsc_inbox_hdr *hdr = (struct xsc_inbox_hdr *)(in->first.data); + + return be16_to_cpu(hdr->opcode); +} + +static int wait_func(struct xsc_core_device *xdev, struct xsc_cmd_work_ent *ent) +{ + unsigned long timeout = msecs_to_jiffies(XSC_CMD_TIMEOUT_MSEC); + int err; + struct xsc_cmd *cmd = &xdev->cmd; + + if (!wait_for_completion_timeout(&ent->done, timeout)) + err = -ETIMEDOUT; + else + err = ent->ret; + + if (err == -ETIMEDOUT) { + cmd->cmd_status = XSC_CMD_STATUS_TIMEDOUT; + pci_err(xdev->pdev, "wait for %s(0x%x) response timeout!\n", + xsc_command_str(msg_to_opcode(ent->in)), + msg_to_opcode(ent->in)); + } else if (err) { + pci_err(xdev->pdev, "err %d, delivery status %s(%d)\n", err, + deliv_status_to_str(ent->status), ent->status); + } + + return err; +} + +/* Notes: + * 1. Callback functions may not sleep + * 2. 
page queue commands do not support asynchronous completion
+ */
+static int xsc_cmd_invoke(struct xsc_core_device *xdev, struct xsc_cmd_msg *in,
+			  struct xsc_rsp_msg *out, u8 *status)
+{
+	struct xsc_cmd *cmd = &xdev->cmd;
+	struct xsc_cmd_work_ent *ent;
+	ktime_t t1, t2, delta;
+	struct xsc_cmd_stats *stats;
+	int err = 0;
+	s64 ds;
+	u16 op;
+	struct semaphore *sem;
+
+	ent = alloc_cmd(cmd, in, out);
+	if (IS_ERR(ent))
+		return PTR_ERR(ent);
+
+	init_completion(&ent->done);
+	INIT_WORK(&ent->work, cmd_work_handler);
+	if (!queue_work(cmd->wq, &ent->work)) {
+		pci_err(xdev->pdev, "failed to queue work\n");
+		err = -ENOMEM;
+		goto out_free;
+	}
+
+	err = wait_func(xdev, ent);
+	if (err == -ETIMEDOUT)
+		goto out;
+	t1 = timespec64_to_ktime(ent->ts1);
+	t2 = timespec64_to_ktime(ent->ts2);
+	delta = ktime_sub(t2, t1);
+	ds = ktime_to_ns(delta);
+	op = be16_to_cpu(((struct xsc_inbox_hdr *)in->first.data)->opcode);
+	if (op < ARRAY_SIZE(cmd->stats)) {
+		stats = &cmd->stats[op];
+		spin_lock(&stats->lock);
+		stats->sum += ds;
+		++stats->n;
+		spin_unlock(&stats->lock);
+	}
+	*status = ent->status;
+	free_cmd(ent);
+
+	return err;
+
+out:
+	sem = &cmd->sem;
+	up(sem);
+out_free:
+	free_cmd(ent);
+	return err;
+}
+
+static int xsc_copy_to_cmd_msg(struct xsc_cmd_msg *to, void *from, int size)
+{
+	struct xsc_cmd_prot_block *block;
+	struct xsc_cmd_mailbox *next;
+	int copy;
+
+	if (!to || !from)
+		return -ENOMEM;
+
+	copy = min_t(int, size, sizeof(to->first.data));
+	memcpy(to->first.data, from, copy);
+	size -= copy;
+	from += copy;
+
+	next = to->next;
+	while (size) {
+		if (!next) {
+			/* this is a BUG */
+			return -ENOMEM;
+		}
+
+		copy = min_t(int, size, XSC_CMD_DATA_BLOCK_SIZE);
+		block = next->buf;
+		memcpy(block->data, from, copy);
+		block->owner_status = 0;
+		from += copy;
+		size -= copy;
+		next = next->next;
+	}
+
+	return 0;
+}
+
+static int xsc_copy_from_rsp_msg(void *to, struct xsc_rsp_msg *from, int size)
+{
+	struct xsc_cmd_prot_block *block;
+	struct xsc_cmd_mailbox *next;
+	int copy;
+
+	if (!to || !from)
+		return -ENOMEM;
+
+	copy = min_t(int, size, sizeof(from->first.data));
+	memcpy(to, from->first.data, copy);
+	size -= copy;
+	to += copy;
+
+	next = from->next;
+	while (size) {
+		if (!next) {
+			/* this is a BUG */
+			return -ENOMEM;
+		}
+
+		copy = min_t(int, size, XSC_CMD_DATA_BLOCK_SIZE);
+		block = next->buf;
+		if (!block->owner_status)
+			pr_err("block ownership check failed\n");
+
+		memcpy(to, block->data, copy);
+		to += copy;
+		size -= copy;
+		next = next->next;
+	}
+
+	return 0;
+}
+
+static struct xsc_cmd_mailbox *alloc_cmd_box(struct xsc_core_device *xdev,
+					     gfp_t flags)
+{
+	struct xsc_cmd_mailbox *mailbox;
+
+	mailbox = kmalloc(sizeof(*mailbox), flags);
+	if (!mailbox)
+		return ERR_PTR(-ENOMEM);
+
+	mailbox->buf = dma_pool_alloc(xdev->cmd.pool, flags,
+				      &mailbox->dma);
+	if (!mailbox->buf) {
+		kfree(mailbox);
+		return ERR_PTR(-ENOMEM);
+	}
+	memset(mailbox->buf, 0, sizeof(struct xsc_cmd_prot_block));
+	mailbox->next = NULL;
+
+	return mailbox;
+}
+
+static void free_cmd_box(struct xsc_core_device *xdev,
+			 struct xsc_cmd_mailbox *mailbox)
+{
+	dma_pool_free(xdev->cmd.pool, mailbox->buf, mailbox->dma);
+
+	kfree(mailbox);
+}
+
+static struct xsc_cmd_msg *xsc_alloc_cmd_msg(struct xsc_core_device *xdev,
+					     gfp_t flags, int size)
+{
+	struct xsc_cmd_mailbox *tmp, *head = NULL;
+	struct xsc_cmd_prot_block *block;
+	struct xsc_cmd_msg *msg;
+	int blen;
+	int err;
+	int n;
+	int i;
+
+	msg = kzalloc(sizeof(*msg), GFP_KERNEL);
+	if (!msg)
+		return ERR_PTR(-ENOMEM);
+
+	blen = size -
min_t(int, sizeof(msg->first.data), size); + n = (blen + XSC_CMD_DATA_BLOCK_SIZE - 1) / XSC_CMD_DATA_BLOCK_SIZE; + + for (i = 0; i < n; i++) { + tmp = alloc_cmd_box(xdev, flags); + if (IS_ERR(tmp)) { + pci_err(xdev->pdev, "failed allocating block\n"); + err = PTR_ERR(tmp); + goto err_alloc; + } + + block = tmp->buf; + tmp->next = head; + block->next = cpu_to_be64(tmp->next ? tmp->next->dma : 0); + block->block_num = cpu_to_be32(n - i - 1); + head = tmp; + } + msg->next = head; + msg->len = size; + return msg; + +err_alloc: + while (head) { + tmp = head->next; + free_cmd_box(xdev, head); + head = tmp; + } + kfree(msg); + + return ERR_PTR(err); +} + +static void xsc_free_cmd_msg(struct xsc_core_device *xdev, + struct xsc_cmd_msg *msg) +{ + struct xsc_cmd_mailbox *head = msg->next; + struct xsc_cmd_mailbox *next; + + while (head) { + next = head->next; + free_cmd_box(xdev, head); + head = next; + } + kfree(msg); +} + +static struct xsc_rsp_msg *xsc_alloc_rsp_msg(struct xsc_core_device *xdev, + gfp_t flags, int size) +{ + struct xsc_cmd_mailbox *tmp, *head = NULL; + struct xsc_cmd_prot_block *block; + struct xsc_rsp_msg *msg; + int blen; + int err; + int n; + int i; + + msg = kzalloc(sizeof(*msg), GFP_KERNEL); + if (!msg) + return ERR_PTR(-ENOMEM); + + blen = size - min_t(int, sizeof(msg->first.data), size); + n = (blen + XSC_CMD_DATA_BLOCK_SIZE - 1) / XSC_CMD_DATA_BLOCK_SIZE; + + for (i = 0; i < n; i++) { + tmp = alloc_cmd_box(xdev, flags); + if (IS_ERR(tmp)) { + pci_err(xdev->pdev, "failed allocating block\n"); + err = PTR_ERR(tmp); + goto err_alloc; + } + + block = tmp->buf; + tmp->next = head; + block->next = cpu_to_be64(tmp->next ? tmp->next->dma : 0); + block->block_num = cpu_to_be32(n - i - 1); + head = tmp; + } + msg->next = head; + msg->len = size; + return msg; + +err_alloc: + while (head) { + tmp = head->next; + free_cmd_box(xdev, head); + head = tmp; + } + kfree(msg); + + return ERR_PTR(err); +} + +static void xsc_free_rsp_msg(struct xsc_core_device *xdev, + struct xsc_rsp_msg *msg) +{ + struct xsc_cmd_mailbox *head = msg->next; + struct xsc_cmd_mailbox *next; + + while (head) { + next = head->next; + free_cmd_box(xdev, head); + head = next; + } + kfree(msg); +} + +static void set_wqname(struct xsc_core_device *xdev) +{ + struct xsc_cmd *cmd = &xdev->cmd; + + snprintf(cmd->wq_name, sizeof(cmd->wq_name), "xsc_cmd_%s", + dev_name(&xdev->pdev->dev)); +} + +void xsc_cmd_use_events(struct xsc_core_device *xdev) +{ + struct xsc_cmd *cmd = &xdev->cmd; + int i; + + for (i = 0; i < cmd->max_reg_cmds; i++) + down(&cmd->sem); + + flush_workqueue(cmd->wq); + + cmd->mode = CMD_MODE_EVENTS; + + while (cmd->cmd_pid != cmd->cq_cid) + msleep(20); + kthread_stop(cmd->cq_task); + cmd->cq_task = NULL; + + for (i = 0; i < cmd->max_reg_cmds; i++) + up(&cmd->sem); +} + +static int cmd_cq_polling(void *data); +void xsc_cmd_use_polling(struct xsc_core_device *xdev) +{ + struct xsc_cmd *cmd = &xdev->cmd; + int i; + + for (i = 0; i < cmd->max_reg_cmds; i++) + down(&cmd->sem); + + flush_workqueue(cmd->wq); + cmd->mode = CMD_MODE_POLLING; + cmd->cq_task = kthread_create(cmd_cq_polling, (void *)xdev, "xsc_cmd_cq_polling"); + if (cmd->cq_task) + wake_up_process(cmd->cq_task); + + for (i = 0; i < cmd->max_reg_cmds; i++) + up(&cmd->sem); +} + +static int status_to_err(u8 status) +{ + return status ? 
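+	/* Placeholder mapping: any nonzero status collapses to -1 here
+	 * (note the TBD); a richer translation of the outbox header status
+	 * into an errno lives in xsc_cmd_status_to_err() at the end of
+	 * this file.
+	 */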
-1 : 0; /* TBD more meaningful codes */ +} + +static struct xsc_cmd_msg *alloc_msg(struct xsc_core_device *xdev, int in_size) +{ + struct xsc_cmd_msg *msg = ERR_PTR(-ENOMEM); + struct xsc_cmd *cmd = &xdev->cmd; + struct cache_ent *ent = NULL; + + if (in_size > MED_LIST_SIZE && in_size <= LONG_LIST_SIZE) + ent = &cmd->cache.large; + else if (in_size > 16 && in_size <= MED_LIST_SIZE) + ent = &cmd->cache.med; + + if (ent) { + spin_lock(&ent->lock); + if (!list_empty(&ent->head)) { + msg = list_entry(ent->head.next, typeof(*msg), list); + /* For cached lists, we must explicitly state what is + * the real size + */ + msg->len = in_size; + list_del(&msg->list); + } + spin_unlock(&ent->lock); + } + + if (IS_ERR(msg)) + msg = xsc_alloc_cmd_msg(xdev, GFP_KERNEL, in_size); + + return msg; +} + +static void free_msg(struct xsc_core_device *xdev, struct xsc_cmd_msg *msg) +{ + if (msg->cache) { + spin_lock(&msg->cache->lock); + list_add_tail(&msg->list, &msg->cache->head); + spin_unlock(&msg->cache->lock); + } else { + xsc_free_cmd_msg(xdev, msg); + } +} + +static int dummy_work(struct xsc_core_device *xdev, struct xsc_cmd_msg *in, + struct xsc_rsp_msg *out, u16 dummy_cnt, u16 dummy_start_pid) +{ + struct xsc_cmd *cmd = &xdev->cmd; + struct xsc_cmd_work_ent **dummy_ent_arr; + struct xsc_cmd_layout *lay; + struct semaphore *sem; + int err = 0; + u16 i; + u16 free_cnt = 0; + u16 temp_pid = dummy_start_pid; + + sem = &cmd->sem; + + dummy_ent_arr = kcalloc(dummy_cnt, sizeof(struct xsc_cmd_work_ent *), GFP_KERNEL); + if (!dummy_ent_arr) { + err = -ENOMEM; + goto alloc_ent_arr_err; + } + + for (i = 0; i < dummy_cnt; i++) { + dummy_ent_arr[i] = alloc_cmd(cmd, in, out); + if (IS_ERR(dummy_ent_arr[i])) { + pci_err(xdev->pdev, "failed to alloc cmd buffer\n"); + err = -ENOMEM; + free_cnt = i; + goto alloc_ent_err; + } + + down(sem); + + dummy_ent_arr[i]->idx = alloc_ent(cmd); + if (dummy_ent_arr[i]->idx < 0) { + pci_err(xdev->pdev, "failed to allocate command entry\n"); + err = -1; + free_cnt = i; + goto get_cmd_ent_idx_err; + } + dummy_ent_arr[i]->token = alloc_token(cmd); + cmd->ent_arr[dummy_ent_arr[i]->idx] = dummy_ent_arr[i]; + init_completion(&dummy_ent_arr[i]->done); + + lay = get_inst(cmd, temp_pid); + dummy_ent_arr[i]->lay = lay; + memset(lay, 0, sizeof(*lay)); + memcpy(lay->in, dummy_ent_arr[i]->in->first.data, sizeof(dummy_ent_arr[i]->in)); + lay->inlen = cpu_to_be32(dummy_ent_arr[i]->in->len); + lay->outlen = cpu_to_be32(dummy_ent_arr[i]->out->len); + lay->type = XSC_PCI_CMD_XPORT; + lay->token = dummy_ent_arr[i]->token; + lay->idx = dummy_ent_arr[i]->idx; + if (!cmd->checksum_disabled) + set_signature(dummy_ent_arr[i]); + else + lay->sig = 0xff; + temp_pid = (temp_pid + 1) % (1 << cmd->log_sz); + } + + /* ring doorbell after the descriptor is valid */ + wmb(); + writel(cmd->cmd_pid, REG_ADDR(xdev, cmd->reg.req_pid_addr)); + if (readl(REG_ADDR(xdev, cmd->reg.interrupt_stat_addr)) != 0) + writel(0xF, REG_ADDR(xdev, cmd->reg.interrupt_stat_addr)); + + if (wait_for_completion_timeout(&dummy_ent_arr[dummy_cnt - 1]->done, + msecs_to_jiffies(3000)) == 0) { + pci_err(xdev->pdev, "dummy_cmd %d ent timeout, cmdq fail\n", dummy_cnt - 1); + err = -ETIMEDOUT; + } + + for (i = 0; i < dummy_cnt; i++) + free_cmd(dummy_ent_arr[i]); + + kfree(dummy_ent_arr); + return err; + +get_cmd_ent_idx_err: + free_cmd(dummy_ent_arr[free_cnt]); + up(sem); +alloc_ent_err: + for (i = 0; i < free_cnt; i++) { + free_ent(cmd, dummy_ent_arr[i]->idx); + up(sem); + free_cmd(dummy_ent_arr[i]); + } + kfree(dummy_ent_arr); 
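+	/* Unwind sketch: an entry that fails at alloc_ent() owns only its
+	 * buffers plus the semaphore count it took, while entries
+	 * [0, free_cnt) each also hold a queue slot that must go back via
+	 * free_ent() before their buffers are freed.
+	 */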
+alloc_ent_arr_err: + return err; +} + +static int xsc_dummy_cmd_exec(struct xsc_core_device *xdev, void *in, int in_size, void *out, + int out_size, u16 dmmy_cnt, u16 dummy_start) +{ + struct xsc_cmd_msg *inb; + struct xsc_rsp_msg *outb; + int err; + + inb = alloc_msg(xdev, in_size); + if (IS_ERR(inb)) { + err = PTR_ERR(inb); + return err; + } + + err = xsc_copy_to_cmd_msg(inb, in, in_size); + if (err) { + pci_err(xdev->pdev, "err %d\n", err); + goto out_in; + } + + outb = xsc_alloc_rsp_msg(xdev, GFP_KERNEL, out_size); + if (IS_ERR(outb)) { + err = PTR_ERR(outb); + goto out_in; + } + + err = dummy_work(xdev, inb, outb, dmmy_cnt, dummy_start); + + if (err) + goto out_out; + + err = xsc_copy_from_rsp_msg(out, outb, out_size); + +out_out: + xsc_free_rsp_msg(xdev, outb); + +out_in: + free_msg(xdev, inb); + return err; +} + +static int xsc_send_dummy_cmd(struct xsc_core_device *xdev, u16 gap, u16 dummy_start) +{ + struct xsc_cmd_dummy_mbox_out *out; + struct xsc_cmd_dummy_mbox_in in; + int err; + + out = kzalloc(sizeof(*out), GFP_KERNEL); + if (!out) { + err = -ENOMEM; + goto no_mem_out; + } + + memset(&in, 0, sizeof(in)); + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_DUMMY); + + err = xsc_dummy_cmd_exec(xdev, &in, sizeof(in), out, sizeof(*out), gap, dummy_start); + if (err) + goto out_out; + + if (out->hdr.status) { + err = xsc_cmd_status_to_err(&out->hdr); + goto out_out; + } + +out_out: + kfree(out); +no_mem_out: + return err; +} + +static int request_pid_cid_mismatch_restore(struct xsc_core_device *xdev) +{ + struct xsc_cmd *cmd = &xdev->cmd; + u16 req_pid, req_cid; + u16 gap; + + int err; + + req_pid = readl(REG_ADDR(xdev, cmd->reg.req_pid_addr)); + req_cid = readl(REG_ADDR(xdev, cmd->reg.req_cid_addr)); + if (req_pid >= (1 << cmd->log_sz) || req_cid >= (1 << cmd->log_sz)) { + pci_err(xdev->pdev, + "req_pid %d, req_cid %d, out of normal range!!! max value is %d\n", + req_pid, req_cid, (1 << cmd->log_sz)); + return -1; + } + + if (req_pid == req_cid) + return 0; + + gap = (req_pid > req_cid) ? 
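+	/* Worked example with assumed values: log_sz = 5 gives a 32-entry
+	 * ring; req_pid = 3 with req_cid = 30 means the producer wrapped,
+	 * so gap = 32 + 3 - 30 = 5 dummy commands are posted starting at
+	 * req_cid to realign the consumer with the producer.
+	 */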
(req_pid - req_cid) : ((1 << cmd->log_sz) + req_pid - req_cid); + + err = xsc_send_dummy_cmd(xdev, gap, req_cid); + if (err) { + pci_err(xdev->pdev, "Send dummy cmd failed\n"); + goto send_dummy_fail; + } + +send_dummy_fail: + return err; +} + +static int _xsc_cmd_exec(struct xsc_core_device *xdev, void *in, int in_size, void *out, + int out_size) +{ + struct xsc_cmd_msg *inb; + struct xsc_rsp_msg *outb; + int err; + u8 status = 0; + struct xsc_cmd *cmd = &xdev->cmd; + + if (cmd->cmd_status == XSC_CMD_STATUS_TIMEDOUT) + return -ETIMEDOUT; + + inb = alloc_msg(xdev, in_size); + if (IS_ERR(inb)) { + err = PTR_ERR(inb); + return err; + } + + err = xsc_copy_to_cmd_msg(inb, in, in_size); + if (err) { + pci_err(xdev->pdev, "copy to cmd_msg err %d\n", err); + goto out_in; + } + + outb = xsc_alloc_rsp_msg(xdev, GFP_KERNEL, out_size); + if (IS_ERR(outb)) { + err = PTR_ERR(outb); + goto out_in; + } + + err = xsc_cmd_invoke(xdev, inb, outb, &status); + if (err) + goto out_out; + + if (status) { + pci_err(xdev->pdev, "opcode:%#x, err %d, status %d\n", + msg_to_opcode(inb), err, status); + err = status_to_err(status); + goto out_out; + } + + err = xsc_copy_from_rsp_msg(out, outb, out_size); + +out_out: + xsc_free_rsp_msg(xdev, outb); + +out_in: + free_msg(xdev, inb); + return err; +} + +int xsc_cmd_exec(struct xsc_core_device *xdev, void *in, int in_size, void *out, + int out_size) +{ + struct xsc_inbox_hdr *hdr = (struct xsc_inbox_hdr *)in; + + hdr->ver = 0; + if (hdr->ver != 0) { + pci_err(xdev->pdev, "recv an unexpected cmd ver = %d, opcode = %d\n", + be16_to_cpu(hdr->ver), be16_to_cpu(hdr->opcode)); + WARN_ON(hdr->ver != 0); + } + + return _xsc_cmd_exec(xdev, in, in_size, out, out_size); +} +EXPORT_SYMBOL(xsc_cmd_exec); + +static void destroy_msg_cache(struct xsc_core_device *xdev) +{ + struct xsc_cmd *cmd = &xdev->cmd; + struct xsc_cmd_msg *msg; + struct xsc_cmd_msg *n; + + list_for_each_entry_safe(msg, n, &cmd->cache.large.head, list) { + list_del(&msg->list); + xsc_free_cmd_msg(xdev, msg); + } + + list_for_each_entry_safe(msg, n, &cmd->cache.med.head, list) { + list_del(&msg->list); + xsc_free_cmd_msg(xdev, msg); + } +} + +static int create_msg_cache(struct xsc_core_device *xdev) +{ + struct xsc_cmd *cmd = &xdev->cmd; + struct xsc_cmd_msg *msg; + int err; + int i; + + spin_lock_init(&cmd->cache.large.lock); + INIT_LIST_HEAD(&cmd->cache.large.head); + spin_lock_init(&cmd->cache.med.lock); + INIT_LIST_HEAD(&cmd->cache.med.head); + + for (i = 0; i < NUM_LONG_LISTS; i++) { + msg = xsc_alloc_cmd_msg(xdev, GFP_KERNEL, LONG_LIST_SIZE); + if (IS_ERR(msg)) { + err = PTR_ERR(msg); + goto ex_err; + } + msg->cache = &cmd->cache.large; + list_add_tail(&msg->list, &cmd->cache.large.head); + } + + for (i = 0; i < NUM_MED_LISTS; i++) { + msg = xsc_alloc_cmd_msg(xdev, GFP_KERNEL, MED_LIST_SIZE); + if (IS_ERR(msg)) { + err = PTR_ERR(msg); + goto ex_err; + } + msg->cache = &cmd->cache.med; + list_add_tail(&msg->list, &cmd->cache.med.head); + } + + return 0; + +ex_err: + destroy_msg_cache(xdev); + return err; +} + +static void xsc_cmd_comp_handler(struct xsc_core_device *xdev, u8 idx, struct xsc_rsp_layout *rsp) +{ + struct xsc_cmd *cmd = &xdev->cmd; + struct xsc_cmd_work_ent *ent; + struct xsc_inbox_hdr *hdr; + + if (idx > cmd->max_reg_cmds || (cmd->bitmask & (1 << idx))) { + pci_err(xdev->pdev, "idx[%d] exceed max cmds, or has no relative request.\n", idx); + return; + } + ent = cmd->ent_arr[idx]; + ent->rsp_lay = rsp; + ktime_get_ts64(&ent->ts2); + + memcpy(ent->out->first.data, ent->rsp_lay->out, 
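+	/* Completion sketch: the inline part of the response is copied
+	 * back here; any overflow was already DMA-ed by the device into
+	 * the mailbox chain published via lay->out_ptr. After the optional
+	 * signature check, free_ent() recycles the slot and complete()
+	 * wakes the waiter sleeping in wait_func().
+	 */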
sizeof(ent->rsp_lay->out)); + if (!cmd->checksum_disabled) + ent->ret = verify_signature(ent); + else + ent->ret = 0; + ent->status = 0; + + hdr = (struct xsc_inbox_hdr *)ent->in->first.data; + free_ent(cmd, ent->idx); + complete(&ent->done); + up(&cmd->sem); +} + +static int cmd_cq_polling(void *data) +{ + struct xsc_core_device *xdev = data; + struct xsc_cmd *cmd = &xdev->cmd; + struct xsc_rsp_layout *rsp; + u32 cq_pid; + + while (!kthread_should_stop()) { + if (need_resched()) + schedule(); + cq_pid = readl(REG_ADDR(xdev, cmd->reg.rsp_pid_addr)); + if (cmd->cq_cid == cq_pid) { + mdelay(3); + continue; + } + + //get cqe + rsp = get_cq_inst(cmd, cmd->cq_cid); + if (!cmd->ownerbit_learned) { + cmd->ownerbit_learned = 1; + cmd->owner_bit = rsp->owner_bit; + } + if (cmd->owner_bit != rsp->owner_bit) { + //hw update cq doorbell but buf may not ready + pci_err(xdev->pdev, "hw update cq doorbell but buf not ready %u %u\n", + cmd->cq_cid, cq_pid); + continue; + } + + xsc_cmd_comp_handler(xdev, rsp->idx, rsp); + + cmd->cq_cid = (cmd->cq_cid + 1) % (1 << cmd->log_sz); + + writel(cmd->cq_cid, REG_ADDR(xdev, cmd->reg.rsp_cid_addr)); + if (cmd->cq_cid == 0) + cmd->owner_bit = !cmd->owner_bit; + } + return 0; +} + +int xsc_cmd_err_handler(struct xsc_core_device *xdev) +{ + union interrupt_stat { + struct { + u32 hw_read_req_err:1; + u32 hw_write_req_err:1; + u32 req_pid_err:1; + u32 rsp_cid_err:1; + }; + u32 raw; + } stat; + int err = 0; + int retry = 0; + + stat.raw = readl(REG_ADDR(xdev, xdev->cmd.reg.interrupt_stat_addr)); + while (stat.raw != 0) { + err++; + if (stat.hw_read_req_err) { + retry = 1; + stat.hw_read_req_err = 0; + pci_err(xdev->pdev, "hw report read req from host failed!\n"); + } else if (stat.hw_write_req_err) { + retry = 1; + stat.hw_write_req_err = 0; + pci_err(xdev->pdev, "hw report write req to fw failed!\n"); + } else if (stat.req_pid_err) { + stat.req_pid_err = 0; + pci_err(xdev->pdev, "hw report unexpected req pid!\n"); + } else if (stat.rsp_cid_err) { + stat.rsp_cid_err = 0; + pci_err(xdev->pdev, "hw report unexpected rsp cid!\n"); + } else { + stat.raw = 0; + pci_err(xdev->pdev, "ignore unknown interrupt!\n"); + } + } + + if (retry) + writel(xdev->cmd.cmd_pid, REG_ADDR(xdev, xdev->cmd.reg.req_pid_addr)); + + if (err) + writel(0xf, REG_ADDR(xdev, xdev->cmd.reg.interrupt_stat_addr)); + + return err; +} + +void xsc_cmd_resp_handler(struct xsc_core_device *xdev) +{ + struct xsc_cmd *cmd = &xdev->cmd; + struct xsc_rsp_layout *rsp; + u32 cq_pid; + const int budget = 32; + int count = 0; + + while (count < budget) { + cq_pid = readl(REG_ADDR(xdev, cmd->reg.rsp_pid_addr)); + if (cq_pid == cmd->cq_cid) + return; + + rsp = get_cq_inst(cmd, cmd->cq_cid); + if (!cmd->ownerbit_learned) { + cmd->ownerbit_learned = 1; + cmd->owner_bit = rsp->owner_bit; + } + if (cmd->owner_bit != rsp->owner_bit) { + pci_err(xdev->pdev, "hw update cq doorbell but buf not ready %u %u\n", + cmd->cq_cid, cq_pid); + return; + } + + xsc_cmd_comp_handler(xdev, rsp->idx, rsp); + + cmd->cq_cid = (cmd->cq_cid + 1) % (1 << cmd->log_sz); + writel(cmd->cq_cid, REG_ADDR(xdev, cmd->reg.rsp_cid_addr)); + if (cmd->cq_cid == 0) + cmd->owner_bit = !cmd->owner_bit; + + count++; + } +} + +static void xsc_cmd_handle_rsp_before_reload +(struct xsc_cmd *cmd, struct xsc_core_device *xdev) +{ + u32 rsp_pid, rsp_cid; + + rsp_pid = readl(REG_ADDR(xdev, cmd->reg.rsp_pid_addr)); + rsp_cid = readl(REG_ADDR(xdev, cmd->reg.rsp_cid_addr)); + if (rsp_pid == rsp_cid) + return; + + cmd->cq_cid = rsp_pid; + + writel(cmd->cq_cid, 
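+	/* Reload sketch: a previous driver instance may have left
+	 * unconsumed responses (rsp_pid != rsp_cid), so the consumer index
+	 * is snapped to rsp_pid and written back, discarding stale CQEs;
+	 * the caller clears ownerbit_learned so the first CQE afterwards
+	 * re-teaches the expected owner_bit.
+	 */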
REG_ADDR(xdev, cmd->reg.rsp_cid_addr)); +} + +int xsc_cmd_init(struct xsc_core_device *xdev) +{ + int size = sizeof(struct xsc_cmd_prot_block); + int align = roundup_pow_of_two(size); + struct xsc_cmd *cmd = &xdev->cmd; + u32 cmd_h, cmd_l; + u32 err_stat; + int err; + int i; + + //sriov need adapt for this process. + //now there is 544 cmdq resource, soc using from id 514 + cmd->reg.req_pid_addr = HIF_CMDQM_HOST_REQ_PID_MEM_ADDR; + cmd->reg.req_cid_addr = HIF_CMDQM_HOST_REQ_CID_MEM_ADDR; + cmd->reg.rsp_pid_addr = HIF_CMDQM_HOST_RSP_PID_MEM_ADDR; + cmd->reg.rsp_cid_addr = HIF_CMDQM_HOST_RSP_CID_MEM_ADDR; + cmd->reg.req_buf_h_addr = HIF_CMDQM_HOST_REQ_BUF_BASE_H_ADDR_MEM_ADDR; + cmd->reg.req_buf_l_addr = HIF_CMDQM_HOST_REQ_BUF_BASE_L_ADDR_MEM_ADDR; + cmd->reg.rsp_buf_h_addr = HIF_CMDQM_HOST_RSP_BUF_BASE_H_ADDR_MEM_ADDR; + cmd->reg.rsp_buf_l_addr = HIF_CMDQM_HOST_RSP_BUF_BASE_L_ADDR_MEM_ADDR; + cmd->reg.msix_vec_addr = HIF_CMDQM_VECTOR_ID_MEM_ADDR; + cmd->reg.element_sz_addr = HIF_CMDQM_Q_ELEMENT_SZ_REG_ADDR; + cmd->reg.q_depth_addr = HIF_CMDQM_HOST_Q_DEPTH_REG_ADDR; + cmd->reg.interrupt_stat_addr = HIF_CMDQM_HOST_VF_ERR_STS_MEM_ADDR; + + cmd->pool = dma_pool_create("xsc_cmd", &xdev->pdev->dev, size, align, 0); + + if (!cmd->pool) + return -ENOMEM; + + cmd->cmd_buf = (void *)__get_free_pages(GFP_ATOMIC, 0); + if (!cmd->cmd_buf) { + err = -ENOMEM; + goto err_free_pool; + } + cmd->cq_buf = (void *)__get_free_pages(GFP_ATOMIC, 0); + if (!cmd->cq_buf) { + err = -ENOMEM; + goto err_free_cmd; + } + + cmd->dma = dma_map_single(&xdev->pdev->dev, cmd->cmd_buf, PAGE_SIZE, + DMA_BIDIRECTIONAL); + if (dma_mapping_error(&xdev->pdev->dev, cmd->dma)) { + err = -ENOMEM; + goto err_free; + } + + cmd->cq_dma = dma_map_single(&xdev->pdev->dev, cmd->cq_buf, PAGE_SIZE, + DMA_BIDIRECTIONAL); + if (dma_mapping_error(&xdev->pdev->dev, cmd->cq_dma)) { + err = -ENOMEM; + goto err_map_cmd; + } + + cmd->cmd_pid = readl(REG_ADDR(xdev, cmd->reg.req_pid_addr)); + cmd->cq_cid = readl(REG_ADDR(xdev, cmd->reg.rsp_cid_addr)); + cmd->ownerbit_learned = 0; + + xsc_cmd_handle_rsp_before_reload(cmd, xdev); + +#define ELEMENT_SIZE_LOG 6 //64B +#define Q_DEPTH_LOG 5 //32 + + cmd->log_sz = Q_DEPTH_LOG; + cmd->log_stride = readl(REG_ADDR(xdev, cmd->reg.element_sz_addr)); + writel(1 << cmd->log_sz, REG_ADDR(xdev, cmd->reg.q_depth_addr)); + if (cmd->log_stride != ELEMENT_SIZE_LOG) { + dev_err(&xdev->pdev->dev, "firmware failed to init cmdq, log_stride=(%d, %d)\n", + cmd->log_stride, ELEMENT_SIZE_LOG); + err = -ENODEV; + goto err_map; + } + + if (1 << cmd->log_sz > XSC_MAX_COMMANDS) { + dev_err(&xdev->pdev->dev, "firmware reports too many outstanding commands %d\n", + 1 << cmd->log_sz); + err = -EINVAL; + goto err_map; + } + + if (cmd->log_sz + cmd->log_stride > PAGE_SHIFT) { + dev_err(&xdev->pdev->dev, "command queue size overflow\n"); + err = -EINVAL; + goto err_map; + } + + cmd->checksum_disabled = 1; + cmd->max_reg_cmds = (1 << cmd->log_sz) - 1; + cmd->bitmask = (1 << cmd->max_reg_cmds) - 1; + + spin_lock_init(&cmd->alloc_lock); + spin_lock_init(&cmd->token_lock); + spin_lock_init(&cmd->doorbell_lock); + for (i = 0; i < ARRAY_SIZE(cmd->stats); i++) + spin_lock_init(&cmd->stats[i].lock); + + sema_init(&cmd->sem, cmd->max_reg_cmds); + + cmd_h = (u32)((u64)(cmd->dma) >> 32); + cmd_l = (u32)(cmd->dma); + if (cmd_l & 0xfff) { + dev_err(&xdev->pdev->dev, "invalid command queue address\n"); + err = -ENOMEM; + goto err_map; + } + + writel(cmd_h, REG_ADDR(xdev, cmd->reg.req_buf_h_addr)); + writel(cmd_l, REG_ADDR(xdev, 
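+	/* Address-split sketch with an assumed value: the 64-bit ring DMA
+	 * address must be 4 KiB aligned (the cmd_l & 0xfff check), e.g.
+	 * dma = 0x0000001234567000 programs cmd_h = 0x12 and
+	 * cmd_l = 0x34567000 into the high/low registers.
+	 */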
cmd->reg.req_buf_l_addr)); + + cmd_h = (u32)((u64)(cmd->cq_dma) >> 32); + cmd_l = (u32)(cmd->cq_dma); + if (cmd_l & 0xfff) { + dev_err(&xdev->pdev->dev, "invalid command queue address\n"); + err = -ENOMEM; + goto err_map; + } + writel(cmd_h, REG_ADDR(xdev, cmd->reg.rsp_buf_h_addr)); + writel(cmd_l, REG_ADDR(xdev, cmd->reg.rsp_buf_l_addr)); + + /* Make sure firmware sees the complete address before we proceed */ + wmb(); + + cmd->mode = CMD_MODE_POLLING; + cmd->cmd_status = XSC_CMD_STATUS_NORMAL; + + err = create_msg_cache(xdev); + if (err) { + dev_err(&xdev->pdev->dev, "failed to create command cache\n"); + goto err_map; + } + + set_wqname(xdev); + cmd->wq = create_singlethread_workqueue(cmd->wq_name); + if (!cmd->wq) { + dev_err(&xdev->pdev->dev, "failed to create command workqueue\n"); + err = -ENOMEM; + goto err_cache; + } + + cmd->cq_task = kthread_create(cmd_cq_polling, (void *)xdev, "xsc_cmd_cq_polling"); + if (!cmd->cq_task) { + dev_err(&xdev->pdev->dev, "failed to create cq task\n"); + err = -ENOMEM; + goto err_wq; + } + wake_up_process(cmd->cq_task); + + err = request_pid_cid_mismatch_restore(xdev); + if (err) { + dev_err(&xdev->pdev->dev, "request pid,cid wrong, restore failed\n"); + goto err_req_restore; + } + + // clear abnormal state to avoid the impact of previous error + err_stat = readl(REG_ADDR(xdev, xdev->cmd.reg.interrupt_stat_addr)); + if (err_stat) { + pci_err(xdev->pdev, "err_stat 0x%x when init, clear it\n", err_stat); + writel(0xf, REG_ADDR(xdev, xdev->cmd.reg.interrupt_stat_addr)); + } + + return 0; + +err_req_restore: + kthread_stop(cmd->cq_task); + +err_wq: + destroy_workqueue(cmd->wq); + +err_cache: + destroy_msg_cache(xdev); + +err_map: + dma_unmap_single(&xdev->pdev->dev, cmd->cq_dma, PAGE_SIZE, + DMA_BIDIRECTIONAL); + +err_map_cmd: + dma_unmap_single(&xdev->pdev->dev, cmd->dma, PAGE_SIZE, + DMA_BIDIRECTIONAL); +err_free: + free_pages((unsigned long)cmd->cq_buf, 0); + +err_free_cmd: + free_pages((unsigned long)cmd->cmd_buf, 0); + +err_free_pool: + dma_pool_destroy(cmd->pool); + + return err; +} + +void xsc_cmd_cleanup(struct xsc_core_device *xdev) +{ + struct xsc_cmd *cmd = &xdev->cmd; + + destroy_workqueue(cmd->wq); + if (cmd->cq_task) + kthread_stop(cmd->cq_task); + destroy_msg_cache(xdev); + dma_unmap_single(&xdev->pdev->dev, cmd->dma, PAGE_SIZE, + DMA_BIDIRECTIONAL); + free_pages((unsigned long)cmd->cq_buf, 0); + dma_unmap_single(&xdev->pdev->dev, cmd->cq_dma, PAGE_SIZE, + DMA_BIDIRECTIONAL); + free_pages((unsigned long)cmd->cmd_buf, 0); + dma_pool_destroy(cmd->pool); +} + +int xsc_cmd_version_check(struct xsc_core_device *xdev) +{ + struct xsc_cmd_query_cmdq_ver_mbox_out *out; + struct xsc_cmd_query_cmdq_ver_mbox_in in; + + int err; + + out = kzalloc(sizeof(*out), GFP_KERNEL); + if (!out) { + err = -ENOMEM; + goto no_mem_out; + } + + memset(&in, 0, sizeof(in)); + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_QUERY_CMDQ_VERSION); + + err = xsc_cmd_exec(xdev, &in, sizeof(in), out, sizeof(*out)); + if (err) + goto out_out; + + if (out->hdr.status) { + err = xsc_cmd_status_to_err(&out->hdr); + goto out_out; + } + + if (be16_to_cpu(out->cmdq_ver) != XSC_CMDQ_VERSION) { + pci_err(xdev->pdev, "cmdq version check failed, expecting %d, actual %d\n", + XSC_CMDQ_VERSION, be16_to_cpu(out->cmdq_ver)); + err = -EINVAL; + goto out_out; + } + xdev->cmdq_ver = XSC_CMDQ_VERSION; + +out_out: + kfree(out); +no_mem_out: + return err; +} + +static const char *cmd_status_str(u8 status) +{ + switch (status) { + case XSC_CMD_STAT_OK: + return "OK"; + case XSC_CMD_STAT_INT_ERR: + return 
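+	/* String table for the XSC_CMD_STAT_* codes; paired with
+	 * xsc_cmd_status_to_err() below, which logs the decoded string
+	 * before translating the code into an errno.
+	 */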
"internal error"; + case XSC_CMD_STAT_BAD_OP_ERR: + return "bad operation"; + case XSC_CMD_STAT_BAD_PARAM_ERR: + return "bad parameter"; + case XSC_CMD_STAT_BAD_SYS_STATE_ERR: + return "bad system state"; + case XSC_CMD_STAT_BAD_RES_ERR: + return "bad resource"; + case XSC_CMD_STAT_RES_BUSY: + return "resource busy"; + case XSC_CMD_STAT_LIM_ERR: + return "limits exceeded"; + case XSC_CMD_STAT_BAD_RES_STATE_ERR: + return "bad resource state"; + case XSC_CMD_STAT_IX_ERR: + return "bad index"; + case XSC_CMD_STAT_NO_RES_ERR: + return "no resources"; + case XSC_CMD_STAT_BAD_INP_LEN_ERR: + return "bad input length"; + case XSC_CMD_STAT_BAD_OUTP_LEN_ERR: + return "bad output length"; + case XSC_CMD_STAT_BAD_QP_STATE_ERR: + return "bad QP state"; + case XSC_CMD_STAT_BAD_PKT_ERR: + return "bad packet (discarded)"; + case XSC_CMD_STAT_BAD_SIZE_OUTS_CQES_ERR: + return "bad size too many outstanding CQEs"; + default: + return "unknown status"; + } +} + +int xsc_cmd_status_to_err(struct xsc_outbox_hdr *hdr) +{ + if (!hdr->status) + return 0; + + pr_warn("command failed, status %s(0x%x)\n", + cmd_status_str(hdr->status), hdr->status); + + switch (hdr->status) { + case XSC_CMD_STAT_OK: return 0; + case XSC_CMD_STAT_INT_ERR: return -EIO; + case XSC_CMD_STAT_BAD_OP_ERR: return -EOPNOTSUPP; + case XSC_CMD_STAT_BAD_PARAM_ERR: return -EINVAL; + case XSC_CMD_STAT_BAD_SYS_STATE_ERR: return -EIO; + case XSC_CMD_STAT_BAD_RES_ERR: return -EINVAL; + case XSC_CMD_STAT_RES_BUSY: return -EBUSY; + case XSC_CMD_STAT_LIM_ERR: return -EINVAL; + case XSC_CMD_STAT_BAD_RES_STATE_ERR: return -EINVAL; + case XSC_CMD_STAT_IX_ERR: return -EINVAL; + case XSC_CMD_STAT_NO_RES_ERR: return -EAGAIN; + case XSC_CMD_STAT_BAD_INP_LEN_ERR: return -EIO; + case XSC_CMD_STAT_BAD_OUTP_LEN_ERR: return -EIO; + case XSC_CMD_STAT_BAD_QP_STATE_ERR: return -EINVAL; + case XSC_CMD_STAT_BAD_PKT_ERR: return -EINVAL; + case XSC_CMD_STAT_BAD_SIZE_OUTS_CQES_ERR: return -EINVAL; + default: return -EIO; + } +} + diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c index 4859be58f..f232e61d5 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c @@ -4,6 +4,7 @@ */ #include "common/xsc_core.h" +#include "common/xsc_driver.h" static const struct pci_device_id xsc_pci_id_table[] = { { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MC_PF_DEV_ID) }, @@ -171,6 +172,77 @@ static void xsc_core_dev_cleanup(struct xsc_core_device *xdev) xsc_dev_res_cleanup(xdev); } +static int xsc_hw_setup(struct xsc_core_device *xdev) +{ + int err; + + err = xsc_cmd_init(xdev); + if (err) { + pci_err(xdev->pdev, "Failed initializing command interface, aborting\n"); + goto out; + } + + err = xsc_cmd_version_check(xdev); + if (err) { + pci_err(xdev->pdev, "Failed to check cmdq version\n"); + goto err_cmd_cleanup; + } + + return 0; +err_cmd_cleanup: + xsc_cmd_cleanup(xdev); +out: + return err; +} + +static int xsc_hw_cleanup(struct xsc_core_device *xdev) +{ + xsc_cmd_cleanup(xdev); + + return 0; +} + +static int xsc_load(struct xsc_core_device *xdev) +{ + int err = 0; + + mutex_lock(&xdev->intf_state_mutex); + if (test_bit(XSC_INTERFACE_STATE_UP, &xdev->intf_state)) + goto out; + + err = xsc_hw_setup(xdev); + if (err) { + pci_err(xdev->pdev, "xsc_hw_setup failed %d\n", err); + goto out; + } + + set_bit(XSC_INTERFACE_STATE_UP, &xdev->intf_state); + mutex_unlock(&xdev->intf_state_mutex); + + return 0; +out: + mutex_unlock(&xdev->intf_state_mutex); + return err; +} + +static int 
xsc_unload(struct xsc_core_device *xdev) +{ + mutex_lock(&xdev->intf_state_mutex); + if (!test_bit(XSC_INTERFACE_STATE_UP, &xdev->intf_state)) { + xsc_hw_cleanup(xdev); + goto out; + } + + clear_bit(XSC_INTERFACE_STATE_UP, &xdev->intf_state); + + xsc_hw_cleanup(xdev); + +out: + mutex_unlock(&xdev->intf_state_mutex); + + return 0; +} + static int xsc_pci_probe(struct pci_dev *pci_dev, const struct pci_device_id *id) { @@ -197,7 +269,15 @@ static int xsc_pci_probe(struct pci_dev *pci_dev, goto err_pci_fini; } + err = xsc_load(xdev); + if (err) { + pci_err(xdev->pdev, "xsc_load failed %d\n", err); + goto err_core_dev_cleanup; + } + return 0; +err_core_dev_cleanup: + xsc_core_dev_cleanup(xdev); err_pci_fini: xsc_pci_fini(xdev); err_unset_pci_drvdata: @@ -211,6 +291,7 @@ static void xsc_pci_remove(struct pci_dev *pci_dev) { struct xsc_core_device *xdev = pci_get_drvdata(pci_dev); + xsc_unload(xdev); xsc_core_dev_cleanup(xdev); xsc_pci_fini(xdev); pci_set_drvdata(pci_dev, NULL); From patchwork Wed Jan 15 10:22:48 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xin Tian X-Patchwork-Id: 13940215 X-Patchwork-Delegate: kuba@kernel.org Received: from va-2-44.ptr.blmpb.com (va-2-44.ptr.blmpb.com [209.127.231.44]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 98A9C248166 for ; Wed, 15 Jan 2025 10:25:12 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.127.231.44 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736936715; cv=none; b=Kv28VAHaG0MjAbAzMIzhVzhvZEZOLXQF6jscbZeb5yHqRpPA1kqmHONWZByXiEp4dDC/xtt+KqRlUfG+7q/n+GgT8BMg3belRpErAIgGznlN0sZp+lJlAbGJiwb+fpe1QNkpE35EnuXA6AkNKrb2hT2gCARuLtTfxtM/HM866qA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736936715; c=relaxed/simple; bh=dbtzAneJv4eNKhOEbndznLATNBORWlicu0IVdMiBeRk=; h=To:Cc:Content-Type:From:Subject:Date:References:In-Reply-To: Message-Id:Mime-Version; b=aMLaBK3WU5TXcKNk1eotdWo/zEG5WjrZX7YvGtjgkCivWBfaIQChBeM0ElrEyK7JWziAnh5cGbb4v84Dm704Oui5eqNLCwrUUv7sMGV1lH/4eDBjXffltQ0AO4o2ip8LrS+Mt38xqORJdCJzV1OZ8aTCaXRPw0I05WC/UtvBtCQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=yunsilicon.com; spf=pass smtp.mailfrom=yunsilicon.com; dkim=pass (2048-bit key) header.d=yunsilicon.com header.i=@yunsilicon.com header.b=gDUZtmFV; arc=none smtp.client-ip=209.127.231.44 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=yunsilicon.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=yunsilicon.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=yunsilicon.com header.i=@yunsilicon.com header.b="gDUZtmFV" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; s=feishu2403070942; d=yunsilicon.com; t=1736936571; h=from:subject: mime-version:from:date:message-id:subject:to:cc:reply-to:content-type: mime-version:in-reply-to:message-id; bh=TPFi4u/Zsve2bnXJReAdyU82L6ZTHxmmORnRCDn8rPc=; b=gDUZtmFVKSG3LfuUg1nZG1Nxj8Z8pT2uhkGoJJVSrsOytCEvcebdTC4cNSdDlYarp7ZM25 EuaDl4O4I50oIkAEVaVBgHaav+GPgMQEq5oqrl7wLqphPY6qk82a6J1wqwShc2M5XR6qN1 h4Tg7eSw8LidNo7pxnjl6s51yReAJDejmsfGjc8yFxJcYynduFtN72yRWsCEO1nuOUGLNU oQYqjzj1jE2DfMDclK9S0H4XPKTHAHjjqtOKeJgvsUuYMR181PwwA3LskC9dL1sSk1D2Ox GWVp+stFWBluWw0olKKXjri0yrmWu/CwEtZ0HKMVKkmUXHVn3sn7YcJ3ScUEOg== 
To: Cc: , , , , , , , , , X-Mailer: git-send-email 2.25.1 From: "Xin Tian" Subject: [PATCH v3 03/14] net-next/yunsilicon: Add hardware setup APIs Date: Wed, 15 Jan 2025 18:22:48 +0800 References: <20250115102242.3541496-1-tianx@yunsilicon.com> In-Reply-To: <20250115102242.3541496-1-tianx@yunsilicon.com> X-Lms-Return-Path: X-Original-From: Xin Tian Message-Id: <20250115102247.3541496-4-tianx@yunsilicon.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 Received: from ubuntu-liun.yunsilicon.com ([58.34.192.114]) by smtp.feishu.cn with ESMTPS; Wed, 15 Jan 2025 18:22:48 +0800 X-Patchwork-Delegate: kuba@kernel.org Add hardware setup APIs Co-developed-by: Honggang Wei Signed-off-by: Honggang Wei Co-developed-by: Lei Yan Signed-off-by: Lei Yan Signed-off-by: Xin Tian --- .../ethernet/yunsilicon/xsc/common/xsc_core.h | 158 +++++++++++ .../net/ethernet/yunsilicon/xsc/pci/Makefile | 2 +- drivers/net/ethernet/yunsilicon/xsc/pci/hw.c | 266 ++++++++++++++++++ drivers/net/ethernet/yunsilicon/xsc/pci/hw.h | 18 ++ .../net/ethernet/yunsilicon/xsc/pci/main.c | 26 ++ 5 files changed, 469 insertions(+), 1 deletion(-) create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/hw.c create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/hw.h diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h index 3b4b77948..afb08f987 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h @@ -32,6 +32,145 @@ #define REG_WIDTH_TO_STRIDE(width) ((width) / 8) +enum { + XSC_MAX_PORTS = 2, +}; + +enum { + XSC_MAX_FW_PORTS = 1, +}; + +enum { + XSC_BF_REGS_PER_PAGE = 4, + XSC_MAX_UAR_PAGES = 1 << 8, + XSC_MAX_UUARS = XSC_MAX_UAR_PAGES * XSC_BF_REGS_PER_PAGE, +}; + +// hw +struct xsc_reg_addr { + u64 tx_db; + u64 rx_db; + u64 complete_db; + u64 complete_reg; + u64 event_db; + u64 cpm_get_lock; + u64 cpm_put_lock; + u64 cpm_lock_avail; + u64 cpm_data_mem; + u64 cpm_cmd; + u64 cpm_addr; + u64 cpm_busy; +}; + +struct xsc_board_info { + u32 board_id; + char board_sn[XSC_BOARD_SN_LEN]; + __be64 guid; + u8 guid_valid; + u8 hw_config_activated; +}; + +struct xsc_port_caps { + int gid_table_len; + int pkey_table_len; +}; + +struct xsc_caps { + u8 log_max_eq; + u8 log_max_cq; + u8 log_max_qp; + u8 log_max_mkey; + u8 log_max_pd; + u8 log_max_srq; + u8 log_max_msix; + u32 max_cqes; + u32 max_wqes; + u32 max_sq_desc_sz; + u32 max_rq_desc_sz; + u64 flags; + u16 stat_rate_support; + u32 log_max_msg; + u32 num_ports; + u32 max_ra_res_qp; + u32 max_ra_req_qp; + u32 max_srq_wqes; + u32 bf_reg_size; + u32 bf_regs_per_page; + struct xsc_port_caps port[XSC_MAX_PORTS]; + u8 ext_port_cap[XSC_MAX_PORTS]; + u32 reserved_lkey; + u8 local_ca_ack_delay; + u8 log_max_mcg; + u16 max_qp_mcg; + u32 min_page_sz; + u32 send_ds_num; + u32 send_wqe_shift; + u32 recv_ds_num; + u32 recv_wqe_shift; + u32 rx_pkt_len_max; + + u32 msix_enable:1; + u32 port_type:1; + u32 embedded_cpu:1; + u32 eswitch_manager:1; + u32 ecpf_vport_exists:1; + u32 vport_group_manager:1; + u32 sf:1; + u32 wqe_inline_mode:3; + u32 raweth_qp_id_base:15; + u32 rsvd0:7; + + u16 max_vfs; + u8 log_max_qp_depth; + u8 log_max_current_uc_list; + u8 log_max_current_mc_list; + u16 log_max_vlan_list; + u8 fdb_multi_path_to_table; + u8 log_esw_max_sched_depth; + + u8 max_num_sf_partitions; + u8 log_max_esw_sf; + u16 sf_base_id; + + u32 max_tc:8; + u32 ets:1; + u32 dcbx:1; + u32 dscp:1; + u32 
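+	/* The single-bit capability flags continuing below are filled from
+	 * the QUERY_HCA_CAP mailbox reply in xsc_cmd_query_hca_cap()
+	 * (hw.c, later in this series); a few (dcbx/qos/ets/dscp) are
+	 * simply forced on there.
+	 */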
sbcam_reg:1; + u32 qos:1; + u32 port_buf:1; + u32 rsvd1:2; + u32 raw_tpe_qp_num:16; + u32 max_num_eqs:8; + u32 mac_port:8; + u32 raweth_rss_qp_id_base:16; + u16 msix_base; + u16 msix_num; + u8 log_max_mtt; + u8 log_max_tso; + u32 hca_core_clock; + u32 max_rwq_indirection_tables;/*rss_caps*/ + u32 max_rwq_indirection_table_size;/*rss_caps*/ + u16 raweth_qp_id_end; + u32 qp_rate_limit_min; + u32 qp_rate_limit_max; + u32 hw_feature_flag; + u16 pf0_vf_funcid_base; + u16 pf0_vf_funcid_top; + u16 pf1_vf_funcid_base; + u16 pf1_vf_funcid_top; + u16 pcie0_pf_funcid_base; + u16 pcie0_pf_funcid_top; + u16 pcie1_pf_funcid_base; + u16 pcie1_pf_funcid_top; + u8 nif_port_num; + u8 pcie_host; + u8 mac_bit; + u16 funcid_to_logic_port; + u8 lag_logic_port_ofst; +}; + +// xsc_core struct xsc_dev_resource { struct mutex alloc_mutex; /* protect buffer alocation according to numa node */ }; @@ -54,6 +193,9 @@ struct xsc_core_device { void __iomem *bar; int bar_num; + u8 mac_port; + u16 glb_func_id; + struct xsc_cmd cmd; u16 cmdq_ver; @@ -61,6 +203,22 @@ struct xsc_core_device { enum xsc_pci_state pci_state; struct mutex intf_state_mutex; /* protect intf_state */ unsigned long intf_state; + + struct xsc_caps caps; + struct xsc_board_info *board_info; + + struct xsc_reg_addr regs; + u32 chip_ver_h; + u32 chip_ver_m; + u32 chip_ver_l; + u32 hotfix_num; + u32 feature_flag; + + u8 fw_version_major; + u8 fw_version_minor; + u16 fw_version_patch; + u32 fw_version_tweak; + u8 fw_version_extra_flag; }; #endif diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile index 5e0f0a205..fea625d54 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile @@ -6,4 +6,4 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o -xsc_pci-y := main.o cmdq.o +xsc_pci-y := main.o cmdq.o hw.o diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/hw.c b/drivers/net/ethernet/yunsilicon/xsc/pci/hw.c new file mode 100644 index 000000000..e271a5f4c --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/hw.c @@ -0,0 +1,266 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. 
+ */ + +#include +#include +#include "common/xsc_driver.h" +#include "hw.h" + +#define MAX_BOARD_NUM 32 + +static struct xsc_board_info *board_info[MAX_BOARD_NUM]; + +static struct xsc_board_info *xsc_get_board_info(char *board_sn) +{ + int i; + + for (i = 0; i < MAX_BOARD_NUM; i++) { + if (!board_info[i]) + continue; + if (!strncmp(board_info[i]->board_sn, board_sn, XSC_BOARD_SN_LEN)) + return board_info[i]; + } + return NULL; +} + +static struct xsc_board_info *xsc_alloc_board_info(void) +{ + int i; + + for (i = 0; i < MAX_BOARD_NUM; i++) { + if (!board_info[i]) + break; + } + if (i == MAX_BOARD_NUM) + return NULL; + board_info[i] = vmalloc(sizeof(*board_info[i])); + if (!board_info[i]) + return NULL; + memset(board_info[i], 0, sizeof(*board_info[i])); + board_info[i]->board_id = i; + return board_info[i]; +} + +void xsc_free_board_info(void) +{ + int i; + + for (i = 0; i < MAX_BOARD_NUM; i++) + vfree(board_info[i]); +} + +int xsc_cmd_query_hca_cap(struct xsc_core_device *xdev, + struct xsc_caps *caps) +{ + struct xsc_cmd_query_hca_cap_mbox_out *out; + struct xsc_cmd_query_hca_cap_mbox_in in; + int err; + u16 t16; + struct xsc_board_info *board_info = NULL; + + out = kzalloc(sizeof(*out), GFP_KERNEL); + if (!out) + return -ENOMEM; + + memset(&in, 0, sizeof(in)); + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_QUERY_HCA_CAP); + in.cpu_num = cpu_to_be16(num_online_cpus()); + + err = xsc_cmd_exec(xdev, &in, sizeof(in), out, sizeof(*out)); + if (err) + goto out_out; + + if (out->hdr.status) { + err = xsc_cmd_status_to_err(&out->hdr); + goto out_out; + } + + xdev->glb_func_id = be32_to_cpu(out->hca_cap.glb_func_id); + caps->pcie0_pf_funcid_base = be16_to_cpu(out->hca_cap.pcie0_pf_funcid_base); + caps->pcie0_pf_funcid_top = be16_to_cpu(out->hca_cap.pcie0_pf_funcid_top); + caps->pcie1_pf_funcid_base = be16_to_cpu(out->hca_cap.pcie1_pf_funcid_base); + caps->pcie1_pf_funcid_top = be16_to_cpu(out->hca_cap.pcie1_pf_funcid_top); + caps->funcid_to_logic_port = be16_to_cpu(out->hca_cap.funcid_to_logic_port); + + caps->pcie_host = out->hca_cap.pcie_host; + caps->nif_port_num = out->hca_cap.nif_port_num; + caps->hw_feature_flag = be32_to_cpu(out->hca_cap.hw_feature_flag); + + caps->raweth_qp_id_base = be16_to_cpu(out->hca_cap.raweth_qp_id_base); + caps->raweth_qp_id_end = be16_to_cpu(out->hca_cap.raweth_qp_id_end); + caps->raweth_rss_qp_id_base = be16_to_cpu(out->hca_cap.raweth_rss_qp_id_base); + caps->raw_tpe_qp_num = be16_to_cpu(out->hca_cap.raw_tpe_qp_num); + caps->max_cqes = 1 << out->hca_cap.log_max_cq_sz; + caps->max_wqes = 1 << out->hca_cap.log_max_qp_sz; + caps->max_sq_desc_sz = be16_to_cpu(out->hca_cap.max_desc_sz_sq); + caps->max_rq_desc_sz = be16_to_cpu(out->hca_cap.max_desc_sz_rq); + caps->flags = be64_to_cpu(out->hca_cap.flags); + caps->stat_rate_support = be16_to_cpu(out->hca_cap.stat_rate_support); + caps->log_max_msg = out->hca_cap.log_max_msg & 0x1f; + caps->num_ports = out->hca_cap.num_ports & 0xf; + caps->log_max_cq = out->hca_cap.log_max_cq & 0x1f; + caps->log_max_eq = out->hca_cap.log_max_eq & 0xf; + caps->log_max_msix = out->hca_cap.log_max_msix & 0xf; + caps->mac_port = out->hca_cap.mac_port & 0xff; + xdev->mac_port = caps->mac_port; + if (caps->num_ports > XSC_MAX_FW_PORTS) { + pci_err(xdev->pdev, "device has %d ports while the driver supports max %d ports\n", + caps->num_ports, XSC_MAX_FW_PORTS); + err = -EINVAL; + goto out_out; + } + caps->send_ds_num = out->hca_cap.send_seg_num; + caps->send_wqe_shift = out->hca_cap.send_wqe_shift; + caps->recv_ds_num = out->hca_cap.recv_seg_num; + 
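+	/* Decoding sketch with an assumed value: log-encoded limits become
+	 * sizes by shifting, e.g. log_max_cq_sz = 16 above would yield
+	 * max_cqes = 1 << 16 = 65536; the masks (0x1f, 0x3f, ...) strip
+	 * reserved upper bits from each capability field.
+	 */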
caps->recv_wqe_shift = out->hca_cap.recv_wqe_shift; + + caps->embedded_cpu = 0; + caps->ecpf_vport_exists = 0; + caps->log_max_current_uc_list = 0; + caps->log_max_current_mc_list = 0; + caps->log_max_vlan_list = 8; + caps->log_max_qp = out->hca_cap.log_max_qp & 0x1f; + caps->log_max_mkey = out->hca_cap.log_max_mkey & 0x3f; + caps->log_max_pd = out->hca_cap.log_max_pd & 0x1f; + caps->log_max_srq = out->hca_cap.log_max_srqs & 0x1f; + caps->local_ca_ack_delay = out->hca_cap.local_ca_ack_delay & 0x1f; + caps->log_max_mcg = out->hca_cap.log_max_mcg; + caps->log_max_mtt = out->hca_cap.log_max_mtt; + caps->log_max_tso = out->hca_cap.log_max_tso; + caps->hca_core_clock = be32_to_cpu(out->hca_cap.hca_core_clock); + caps->max_rwq_indirection_tables = + be32_to_cpu(out->hca_cap.max_rwq_indirection_tables); + caps->max_rwq_indirection_table_size = + be32_to_cpu(out->hca_cap.max_rwq_indirection_table_size); + caps->max_qp_mcg = be16_to_cpu(out->hca_cap.max_qp_mcg); + caps->max_ra_res_qp = 1 << (out->hca_cap.log_max_ra_res_qp & 0x3f); + caps->max_ra_req_qp = 1 << (out->hca_cap.log_max_ra_req_qp & 0x3f); + caps->max_srq_wqes = 1 << out->hca_cap.log_max_srq_sz; + caps->rx_pkt_len_max = be32_to_cpu(out->hca_cap.rx_pkt_len_max); + caps->max_vfs = be16_to_cpu(out->hca_cap.max_vfs); + caps->qp_rate_limit_min = be32_to_cpu(out->hca_cap.qp_rate_limit_min); + caps->qp_rate_limit_max = be32_to_cpu(out->hca_cap.qp_rate_limit_max); + + caps->msix_enable = 1; + caps->msix_base = be16_to_cpu(out->hca_cap.msix_base); + caps->msix_num = be16_to_cpu(out->hca_cap.msix_num); + + t16 = be16_to_cpu(out->hca_cap.bf_log_bf_reg_size); + if (t16 & 0x8000) { + caps->bf_reg_size = 1 << (t16 & 0x1f); + caps->bf_regs_per_page = XSC_BF_REGS_PER_PAGE; + } else { + caps->bf_reg_size = 0; + caps->bf_regs_per_page = 0; + } + caps->min_page_sz = ~(u32)((1 << PAGE_SHIFT) - 1); + + caps->dcbx = 1; + caps->qos = 1; + caps->ets = 1; + caps->dscp = 1; + caps->max_tc = out->hca_cap.max_tc; + caps->log_max_qp_depth = out->hca_cap.log_max_qp_depth & 0xff; + caps->mac_bit = out->hca_cap.mac_bit; + caps->lag_logic_port_ofst = out->hca_cap.lag_logic_port_ofst; + + xdev->chip_ver_h = be32_to_cpu(out->hca_cap.chip_ver_h); + xdev->chip_ver_m = be32_to_cpu(out->hca_cap.chip_ver_m); + xdev->chip_ver_l = be32_to_cpu(out->hca_cap.chip_ver_l); + xdev->hotfix_num = be32_to_cpu(out->hca_cap.hotfix_num); + xdev->feature_flag = be32_to_cpu(out->hca_cap.feature_flag); + + board_info = xsc_get_board_info(out->hca_cap.board_sn); + if (!board_info) { + board_info = xsc_alloc_board_info(); + if (!board_info) + return -ENOMEM; + + memcpy(board_info->board_sn, out->hca_cap.board_sn, sizeof(out->hca_cap.board_sn)); + } + xdev->board_info = board_info; + + xdev->regs.tx_db = be64_to_cpu(out->hca_cap.tx_db); + xdev->regs.rx_db = be64_to_cpu(out->hca_cap.rx_db); + xdev->regs.complete_db = be64_to_cpu(out->hca_cap.complete_db); + xdev->regs.complete_reg = be64_to_cpu(out->hca_cap.complete_reg); + xdev->regs.event_db = be64_to_cpu(out->hca_cap.event_db); + + xdev->fw_version_major = out->hca_cap.fw_ver.fw_version_major; + xdev->fw_version_minor = out->hca_cap.fw_ver.fw_version_minor; + xdev->fw_version_patch = be16_to_cpu(out->hca_cap.fw_ver.fw_version_patch); + xdev->fw_version_tweak = be32_to_cpu(out->hca_cap.fw_ver.fw_version_tweak); + xdev->fw_version_extra_flag = out->hca_cap.fw_ver.fw_version_extra_flag; +out_out: + kfree(out); + + return err; +} + +static int xsc_cmd_query_guid(struct xsc_core_device *xdev) +{ + struct xsc_cmd_query_guid_mbox_in in; + struct 
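+/* Canonical mailbox pattern used by this and the helpers below
+ * (sketch; the opcode XSC_CMD_OP_FOO and its mbox structs are
+ * hypothetical):
+ *
+ *	memset(&in, 0, sizeof(in));
+ *	in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_FOO);
+ *	err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out));
+ *	if (err)
+ *		return err;
+ *	if (out.hdr.status)
+ *		return xsc_cmd_status_to_err(&out.hdr);
+ */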
xsc_cmd_query_guid_mbox_out out; + int err; + + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_QUERY_GUID); + err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out)); + if (err) + return err; + + if (out.hdr.status) + return xsc_cmd_status_to_err(&out.hdr); + xdev->board_info->guid = out.guid; + xdev->board_info->guid_valid = 1; + return 0; +} + +int xsc_query_guid(struct xsc_core_device *xdev) +{ + if (xdev->board_info->guid_valid) + return 0; + + return xsc_cmd_query_guid(xdev); +} + +static int xsc_cmd_activate_hw_config(struct xsc_core_device *xdev) +{ + struct xsc_cmd_activate_hw_config_mbox_in in; + struct xsc_cmd_activate_hw_config_mbox_out out; + int err = 0; + + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_ACTIVATE_HW_CONFIG); + err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out)); + if (err) + return err; + if (out.hdr.status) + return xsc_cmd_status_to_err(&out.hdr); + xdev->board_info->hw_config_activated = 1; + return 0; +} + +int xsc_activate_hw_config(struct xsc_core_device *xdev) +{ + if (xdev->board_info->hw_config_activated) + return 0; + + return xsc_cmd_activate_hw_config(xdev); +} + +int xsc_reset_function_resource(struct xsc_core_device *xdev) +{ + struct xsc_function_reset_mbox_in in; + struct xsc_function_reset_mbox_out out; + int err; + + memset(&in, 0, sizeof(in)); + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_FUNCTION_RESET); + in.glb_func_id = cpu_to_be16(xdev->glb_func_id); + err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out)); + if (err || out.hdr.status) + return -EINVAL; + + return 0; +} diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/hw.h b/drivers/net/ethernet/yunsilicon/xsc/pci/hw.h new file mode 100644 index 000000000..d1030bfde --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/hw.h @@ -0,0 +1,18 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. 
+ */ + +#ifndef __HW_H +#define __HW_H + +#include "common/xsc_core.h" + +void xsc_free_board_info(void); +int xsc_cmd_query_hca_cap(struct xsc_core_device *xdev, + struct xsc_caps *caps); +int xsc_query_guid(struct xsc_core_device *xdev); +int xsc_activate_hw_config(struct xsc_core_device *xdev); +int xsc_reset_function_resource(struct xsc_core_device *xdev); + +#endif diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c index f232e61d5..550ea3c7a 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c @@ -5,6 +5,7 @@ #include "common/xsc_core.h" #include "common/xsc_driver.h" +#include "hw.h" static const struct pci_device_id xsc_pci_id_table[] = { { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MC_PF_DEV_ID) }, @@ -188,6 +189,30 @@ static int xsc_hw_setup(struct xsc_core_device *xdev) goto err_cmd_cleanup; } + err = xsc_cmd_query_hca_cap(xdev, &xdev->caps); + if (err) { + pci_err(xdev->pdev, "Failed to query hca, err=%d\n", err); + goto err_cmd_cleanup; + } + + err = xsc_query_guid(xdev); + if (err) { + pci_err(xdev->pdev, "failed to query guid, err=%d\n", err); + goto err_cmd_cleanup; + } + + err = xsc_activate_hw_config(xdev); + if (err) { + pci_err(xdev->pdev, "failed to activate hw config, err=%d\n", err); + goto err_cmd_cleanup; + } + + err = xsc_reset_function_resource(xdev); + if (err) { + pci_err(xdev->pdev, "Failed to reset function resource\n"); + goto err_cmd_cleanup; + } + return 0; err_cmd_cleanup: xsc_cmd_cleanup(xdev); @@ -323,6 +348,7 @@ static int __init xsc_init(void) static void __exit xsc_fini(void) { pci_unregister_driver(&xsc_pci_driver); + xsc_free_board_info(); } module_init(xsc_init); From patchwork Wed Jan 15 10:22:51 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xin Tian X-Patchwork-Id: 13940209 X-Patchwork-Delegate: kuba@kernel.org Received: from lf-2-53.ptr.blmpb.com (lf-2-53.ptr.blmpb.com [101.36.218.53]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1D96423F28F for ; Wed, 15 Jan 2025 10:24:06 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=101.36.218.53 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736936650; cv=none; b=YGcCc4VPYu7OUS5qw72mHN2ZjAMaXz3BfSb1AsHLbGpqXDEsniQhtig0KofmF/QRpR41ZqgO4UG/SAGEt7tt8kglAepbJMmH27z8kaDBificZQbaOC7ehqd3d0tI+byX/MaYxi2hYou5SWT9gBf/SHsaNCr9osEdIvUXzWC6yjE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736936650; c=relaxed/simple; bh=inbe8cCFWdaOyB0IYxfJhVrScyr6rn6xuru7mTavvPY=; h=From:Subject:Mime-Version:References:Cc:Date:Message-Id: In-Reply-To:To:Content-Type; b=a4qE4jltXnf6tQvXp27q6Btt33cpEQkQMG7GLvcypbCKgWFdDWptDKskB+N16vnYmq8ehrLhUSX5Sqq/nwycT2pn+bfM6MEXaGl1lRLn3gjwZHaQlhNzm/VkPhClr0sprKJyFxOrN8+Wb4P9UDn1Wz4CXvwGhyg8bTWLKzHqT8Y= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=yunsilicon.com; spf=pass smtp.mailfrom=yunsilicon.com; dkim=pass (2048-bit key) header.d=yunsilicon.com header.i=@yunsilicon.com header.b=QmegL1+x; arc=none smtp.client-ip=101.36.218.53 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=yunsilicon.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=yunsilicon.com Authentication-Results: 
smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=yunsilicon.com header.i=@yunsilicon.com header.b="QmegL1+x" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; s=feishu2403070942; d=yunsilicon.com; t=1736936573; h=from:subject: mime-version:from:date:message-id:subject:to:cc:reply-to:content-type: mime-version:in-reply-to:message-id; bh=CyrHRsOqmrV9rEvntCI/AE5SZ04VVcYTxd4svEjLHus=; b=QmegL1+xMzmNhIU0Lu5e6SIt/yqfUf8eGjNRx17VYjS4LACN3Z5zGeyebZTt2SwW7nJuEE 16kT14zMWb64XfBTv6YJ7TYnYbWqLacUvAokVmNmN+gi+0KU0A0Kouvs9Qd7sfPgdBlik4 AT3915V5qJOUbCjHLI3POzRTAIp4zdu0bACkm68IBOKrgqwfsyrraXE9FtomZ3bfeoBWRT CIdgUsnk63JWMyoWDzCZxR+/26ps/BqaJd29QTFx5K3m8BtIEJWs+8G+e86pCC79cxo193 7AwG6vyANAV5OOBZHDbZbq2CJL9IGuxLHmWgLZVzqUroth5OXKSIaHAt1V5K6Q== From: "Xin Tian" Subject: [PATCH v3 04/14] net-next/yunsilicon: Add qp and cq management Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 Received: from ubuntu-liun.yunsilicon.com ([58.34.192.114]) by smtp.feishu.cn with ESMTPS; Wed, 15 Jan 2025 18:22:50 +0800 References: <20250115102242.3541496-1-tianx@yunsilicon.com> X-Original-From: Xin Tian X-Mailer: git-send-email 2.25.1 Cc: , , , , , , , , , Date: Wed, 15 Jan 2025 18:22:51 +0800 Message-Id: <20250115102249.3541496-5-tianx@yunsilicon.com> In-Reply-To: <20250115102242.3541496-1-tianx@yunsilicon.com> X-Lms-Return-Path: To: X-Patchwork-Delegate: kuba@kernel.org Add qp(queue pair) and cq(completion queue) resource management APIs Co-developed-by: Honggang Wei Signed-off-by: Honggang Wei Co-developed-by: Lei Yan Signed-off-by: Lei Yan Signed-off-by: Xin Tian --- .../ethernet/yunsilicon/xsc/common/xsc_core.h | 172 +++++++++++++++++- .../net/ethernet/yunsilicon/xsc/pci/Makefile | 3 +- drivers/net/ethernet/yunsilicon/xsc/pci/cq.c | 39 ++++ drivers/net/ethernet/yunsilicon/xsc/pci/cq.h | 14 ++ .../net/ethernet/yunsilicon/xsc/pci/main.c | 5 + drivers/net/ethernet/yunsilicon/xsc/pci/qp.c | 79 ++++++++ drivers/net/ethernet/yunsilicon/xsc/pci/qp.h | 15 ++ 7 files changed, 325 insertions(+), 2 deletions(-) create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/cq.c create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/cq.h create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/qp.c create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/qp.h diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h index afb08f987..ee1cea10d 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h @@ -32,6 +32,10 @@ #define REG_WIDTH_TO_STRIDE(width) ((width) / 8) +enum { + XSC_MAX_EQ_NAME = 20 +}; + enum { XSC_MAX_PORTS = 2, }; @@ -46,6 +50,155 @@ enum { XSC_MAX_UUARS = XSC_MAX_UAR_PAGES * XSC_BF_REGS_PER_PAGE, }; +// alloc +struct xsc_buf_list { + void *buf; + dma_addr_t map; +}; + +struct xsc_buf { + struct xsc_buf_list direct; + struct xsc_buf_list *page_list; + int nbufs; + int npages; + int page_shift; + int size; +}; + +struct xsc_frag_buf { + struct xsc_buf_list *frags; + int npages; + int size; + u8 page_shift; +}; + +struct xsc_frag_buf_ctrl { + struct xsc_buf_list *frags; + u32 sz_m1; + u16 frag_sz_m1; + u16 strides_offset; + u8 log_sz; + u8 log_stride; + u8 log_frag_strides; +}; + +// qp +struct xsc_send_wqe_ctrl_seg { + __le32 msg_opcode:8; + __le32 with_immdt:1; + __le32 csum_en:2; + __le32 ds_data_num:5; + __le32 wqe_id:16; + __le32 msg_len; + union { + __le32 opcode_data; + struct { + u8 
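+	/* The union here overlays alternative views of the same dword:
+	 * raw opcode_data, a send-offload form (the has_pph/so_* bits
+	 * continuing below), and a descriptor form (desc_id/dst_qp_id);
+	 * which view applies follows from msg_opcode, as the field names
+	 * suggest.
+	 */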
has_pph:1; + u8 so_type:1; + __le16 so_data_size:14; + u8:8; + u8 so_hdr_len:8; + }; + struct { + __le16 desc_id; + __le16 is_last_wqe:1; + __le16 dst_qp_id:15; + }; + }; + __le32 se:1; + __le32 ce:1; + __le32:30; +}; + +struct xsc_wqe_data_seg { + union { + __le32 in_line:1; + struct { + __le32:1; + __le32 seg_len:31; + __le32 mkey; + __le64 va; + }; + struct { + __le32:1; + __le32 len:7; + u8 in_line_data[15]; + }; + }; +}; + +struct xsc_core_qp { + void (*event)(struct xsc_core_qp *qp, int type); + int qpn; + atomic_t refcount; + struct completion free; + int pid; + u16 qp_type; + u16 eth_queue_type; + u16 qp_type_internal; + u16 grp_id; + u8 mac_id; +}; + +struct xsc_qp_table { + spinlock_t lock; /* protect radix tree */ + struct radix_tree_root tree; +}; + +// cq +enum xsc_event { + XSC_EVENT_TYPE_COMP = 0x0, + XSC_EVENT_TYPE_COMM_EST = 0x02,//mad + XSC_EVENT_TYPE_CQ_ERROR = 0x04, + XSC_EVENT_TYPE_WQ_CATAS_ERROR = 0x05, + XSC_EVENT_TYPE_INTERNAL_ERROR = 0x08, + XSC_EVENT_TYPE_WQ_INVAL_REQ_ERROR = 0x10,//IBV_EVENT_QP_REQ_ERR + XSC_EVENT_TYPE_WQ_ACCESS_ERROR = 0x11,//IBV_EVENT_QP_ACCESS_ERR +}; + +struct xsc_core_cq { + u32 cqn; + int cqe_sz; + u64 arm_db; + u64 ci_db; + struct xsc_core_device *xdev; + atomic_t refcount; + struct completion free; + unsigned int vector; + int irqn; + u16 dim_us; + u16 dim_pkts; + void (*comp)(struct xsc_core_cq *cq); + void (*event)(struct xsc_core_cq *cq, enum xsc_event); + u32 cons_index; + unsigned int arm_sn; + int pid; + u32 reg_next_cid; + u32 reg_done_pid; + struct xsc_eq *eq; +}; + +struct xsc_cq_table { + spinlock_t lock; /* protect radix tree */ + struct radix_tree_root tree; +}; + +struct xsc_eq { + struct xsc_core_device *dev; + struct xsc_cq_table cq_table; + u32 doorbell;//offset from bar0/2 space start + u32 cons_index; + struct xsc_buf buf; + int size; + unsigned int irqn; + u16 eqn; + int nent; + cpumask_var_t mask; + char name[XSC_MAX_EQ_NAME]; + struct list_head list; + int index; +}; + // hw struct xsc_reg_addr { u64 tx_db; @@ -172,7 +325,10 @@ struct xsc_caps { // xsc_core struct xsc_dev_resource { - struct mutex alloc_mutex; /* protect buffer alocation according to numa node */ + struct xsc_qp_table qp_table; + struct xsc_cq_table cq_table; + + struct mutex alloc_mutex; /* protect buffer alocation according to numa node */ }; enum xsc_pci_state { @@ -221,4 +377,18 @@ struct xsc_core_device { u8 fw_version_extra_flag; }; +int xsc_core_create_resource_common(struct xsc_core_device *xdev, + struct xsc_core_qp *qp); +void xsc_core_destroy_resource_common(struct xsc_core_device *xdev, + struct xsc_core_qp *qp); + +static inline void *xsc_buf_offset(struct xsc_buf *buf, int offset) +{ + if (likely(BITS_PER_LONG == 64 || buf->nbufs == 1)) + return buf->direct.buf + offset; + else + return buf->page_list[offset >> PAGE_SHIFT].buf + + (offset & (PAGE_SIZE - 1)); +} + #endif diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile index fea625d54..9a4a6e02d 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile @@ -6,4 +6,5 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o -xsc_pci-y := main.o cmdq.o hw.o +xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o + diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/cq.c b/drivers/net/ethernet/yunsilicon/xsc/pci/cq.c new file mode 100644 index 000000000..5cff9025c --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/cq.c @@ 
-0,0 +1,39 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + */ + +#include "common/xsc_core.h" +#include "cq.h" + +void xsc_cq_event(struct xsc_core_device *xdev, u32 cqn, int event_type) +{ + struct xsc_cq_table *table = &xdev->dev_res->cq_table; + struct xsc_core_cq *cq; + + spin_lock(&table->lock); + + cq = radix_tree_lookup(&table->tree, cqn); + if (cq) + atomic_inc(&cq->refcount); + + spin_unlock(&table->lock); + + if (!cq) { + pci_err(xdev->pdev, "Async event for bogus CQ 0x%x\n", cqn); + return; + } + + cq->event(cq, event_type); + + if (atomic_dec_and_test(&cq->refcount)) + complete(&cq->free); +} + +void xsc_init_cq_table(struct xsc_core_device *xdev) +{ + struct xsc_cq_table *table = &xdev->dev_res->cq_table; + + spin_lock_init(&table->lock); + INIT_RADIX_TREE(&table->tree, GFP_ATOMIC); +} diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/cq.h b/drivers/net/ethernet/yunsilicon/xsc/pci/cq.h new file mode 100644 index 000000000..902a7f1f2 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/cq.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + */ + +#ifndef __CQ_H +#define __CQ_H + +#include "common/xsc_core.h" + +void xsc_cq_event(struct xsc_core_device *xdev, u32 cqn, int event_type); +void xsc_init_cq_table(struct xsc_core_device *xdev); + +#endif /* __CQ_H */ diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c index 550ea3c7a..bf9c8dd3d 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c @@ -6,6 +6,8 @@ #include "common/xsc_core.h" #include "common/xsc_driver.h" #include "hw.h" +#include "qp.h" +#include "cq.h" static const struct pci_device_id xsc_pci_id_table[] = { { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MC_PF_DEV_ID) }, @@ -213,6 +215,9 @@ static int xsc_hw_setup(struct xsc_core_device *xdev) goto err_cmd_cleanup; } + xsc_init_cq_table(xdev); + xsc_init_qp_table(xdev); + return 0; err_cmd_cleanup: xsc_cmd_cleanup(xdev); diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/qp.c b/drivers/net/ethernet/yunsilicon/xsc/pci/qp.c new file mode 100644 index 000000000..f08c0e34f --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/qp.c @@ -0,0 +1,79 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. 
+ */ + +#include +#include +#include +#include +#include "common/xsc_core.h" +#include "qp.h" + +int xsc_core_create_resource_common(struct xsc_core_device *xdev, + struct xsc_core_qp *qp) +{ + struct xsc_qp_table *table = &xdev->dev_res->qp_table; + int err; + + spin_lock_irq(&table->lock); + err = radix_tree_insert(&table->tree, qp->qpn, qp); + spin_unlock_irq(&table->lock); + if (err) + return err; + + atomic_set(&qp->refcount, 1); + init_completion(&qp->free); + qp->pid = current->pid; + + return 0; +} +EXPORT_SYMBOL(xsc_core_create_resource_common); + +void xsc_core_destroy_resource_common(struct xsc_core_device *xdev, + struct xsc_core_qp *qp) +{ + struct xsc_qp_table *table = &xdev->dev_res->qp_table; + unsigned long flags; + + spin_lock_irqsave(&table->lock, flags); + radix_tree_delete(&table->tree, qp->qpn); + spin_unlock_irqrestore(&table->lock, flags); + + if (atomic_dec_and_test(&qp->refcount)) + complete(&qp->free); + wait_for_completion(&qp->free); +} +EXPORT_SYMBOL(xsc_core_destroy_resource_common); + +void xsc_qp_event(struct xsc_core_device *xdev, u32 qpn, int event_type) +{ + struct xsc_qp_table *table = &xdev->dev_res->qp_table; + struct xsc_core_qp *qp; + + spin_lock(&table->lock); + + qp = radix_tree_lookup(&table->tree, qpn); + if (qp) + atomic_inc(&qp->refcount); + + spin_unlock(&table->lock); + + if (!qp) { + pci_err(xdev->pdev, "Async event for bogus QP 0x%x\n", qpn); + return; + } + + qp->event(qp, event_type); + + if (atomic_dec_and_test(&qp->refcount)) + complete(&qp->free); +} + +void xsc_init_qp_table(struct xsc_core_device *xdev) +{ + struct xsc_qp_table *table = &xdev->dev_res->qp_table; + + spin_lock_init(&table->lock); + INIT_RADIX_TREE(&table->tree, GFP_ATOMIC); +} diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/qp.h b/drivers/net/ethernet/yunsilicon/xsc/pci/qp.h new file mode 100644 index 000000000..52af8db7c --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/qp.h @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. 
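The qp.c and cq.c tables above share one lifetime idiom: an event handler looks the object up under the table spinlock and takes a reference before dispatching, while teardown first unpublishes the object from the radix tree and then waits until every in-flight handler has dropped its temporary reference. A minimal sketch of that idiom, with hypothetical names that are not part of the patch:

#include <linux/atomic.h>
#include <linux/completion.h>

/* Hypothetical wrapper, only to illustrate the refcount+completion
 * protocol followed by xsc_core_qp and xsc_core_cq.
 */
struct tracked_res {
	atomic_t refcount;
	struct completion free;
};

static void res_publish(struct tracked_res *r)
{
	atomic_set(&r->refcount, 1);	/* the creator's reference */
	init_completion(&r->free);
}

static void res_put(struct tracked_res *r)
{
	if (atomic_dec_and_test(&r->refcount))
		complete(&r->free);
}

/* Event path, mirroring xsc_qp_event()/xsc_cq_event(): the reference is
 * taken under the table lock, so it cannot race with unpublication.
 */
static void res_handle_event(struct tracked_res *r)
{
	atomic_inc(&r->refcount);
	/* ... dispatch to the object's event callback ... */
	res_put(r);
}

/* Teardown: once the radix-tree entry is deleted no new handler can find
 * the object, so this wait ends as soon as running handlers drain.
 */
static void res_retire(struct tracked_res *r)
{
	res_put(r);
	wait_for_completion(&r->free);
}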
+ */
+
+#ifndef __QP_H
+#define __QP_H
+
+#include "common/xsc_core.h"
+
+void xsc_init_qp_table(struct xsc_core_device *xdev);
+void xsc_cleanup_qp_table(struct xsc_core_device *xdev);
+void xsc_qp_event(struct xsc_core_device *xdev, u32 qpn, int event_type);
+
+#endif /* __QP_H */

From patchwork Wed Jan 15 10:22:53 2025
X-Patchwork-Submitter: Xin Tian
X-Patchwork-Id: 13940210
X-Patchwork-Delegate: kuba@kernel.org
Message-Id: <20250115102252.3541496-6-tianx@yunsilicon.com>
In-Reply-To: <20250115102242.3541496-1-tianx@yunsilicon.com>
References: <20250115102242.3541496-1-tianx@yunsilicon.com>
From: "Xin Tian"
Date: Wed, 15 Jan 2025 18:22:53 +0800
Subject: [PATCH v3 05/14] net-next/yunsilicon: Add eq and alloc

Add eq management and
buffer alloc apis Signed-off-by: Xin Tian Signed-off-by: Honggang Wei --- .../ethernet/yunsilicon/xsc/common/xsc_core.h | 39 ++ .../net/ethernet/yunsilicon/xsc/pci/Makefile | 3 +- .../net/ethernet/yunsilicon/xsc/pci/alloc.c | 125 +++++++ .../net/ethernet/yunsilicon/xsc/pci/alloc.h | 16 + drivers/net/ethernet/yunsilicon/xsc/pci/eq.c | 334 ++++++++++++++++++ drivers/net/ethernet/yunsilicon/xsc/pci/eq.h | 46 +++ .../net/ethernet/yunsilicon/xsc/pci/main.c | 2 + 7 files changed, 563 insertions(+), 2 deletions(-) create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/alloc.c create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/alloc.h create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/eq.c create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/eq.h diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h index ee1cea10d..afd4d4a43 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h @@ -27,6 +27,10 @@ #define XSC_MV_HOST_VF_DEV_ID 0x1152 #define XSC_MV_SOC_PF_DEV_ID 0x1153 +#define PAGE_SHIFT_4K 12 +#define PAGE_SIZE_4K (_AC(1, UL) << PAGE_SHIFT_4K) +#define PAGE_MASK_4K (~(PAGE_SIZE_4K - 1)) + #define REG_ADDR(dev, offset) \ (((dev)->bar) + ((offset) - 0xA0000000)) @@ -36,6 +40,10 @@ enum { XSC_MAX_EQ_NAME = 20 }; +enum { + XSC_MAX_IRQ_NAME = 32 +}; + enum { XSC_MAX_PORTS = 2, }; @@ -183,6 +191,7 @@ struct xsc_cq_table { struct radix_tree_root tree; }; +// eq struct xsc_eq { struct xsc_core_device *dev; struct xsc_cq_table cq_table; @@ -199,6 +208,26 @@ struct xsc_eq { int index; }; +struct xsc_eq_table { + void __iomem *update_ci; + void __iomem *update_arm_ci; + struct list_head comp_eqs_list; + struct xsc_eq pages_eq; + struct xsc_eq async_eq; + struct xsc_eq cmd_eq; + int num_comp_vectors; + int eq_vec_comp_base; + /* protect EQs list + */ + spinlock_t lock; +}; + +// irq +struct xsc_irq_info { + cpumask_var_t mask; + char name[XSC_MAX_IRQ_NAME]; +}; + // hw struct xsc_reg_addr { u64 tx_db; @@ -327,6 +356,8 @@ struct xsc_caps { struct xsc_dev_resource { struct xsc_qp_table qp_table; struct xsc_cq_table cq_table; + struct xsc_eq_table eq_table; + struct xsc_irq_info *irq_info; struct mutex alloc_mutex; /* protect buffer alocation according to numa node */ }; @@ -352,6 +383,8 @@ struct xsc_core_device { u8 mac_port; u16 glb_func_id; + u16 msix_vec_base; + struct xsc_cmd cmd; u16 cmdq_ver; @@ -381,6 +414,7 @@ int xsc_core_create_resource_common(struct xsc_core_device *xdev, struct xsc_core_qp *qp); void xsc_core_destroy_resource_common(struct xsc_core_device *xdev, struct xsc_core_qp *qp); +struct xsc_eq *xsc_core_eq_get(struct xsc_core_device *xdev, int i); static inline void *xsc_buf_offset(struct xsc_buf *buf, int offset) { @@ -391,4 +425,9 @@ static inline void *xsc_buf_offset(struct xsc_buf *buf, int offset) (offset & (PAGE_SIZE - 1)); } +static inline bool xsc_fw_is_available(struct xsc_core_device *xdev) +{ + return xdev->cmd.cmd_status == XSC_CMD_STATUS_NORMAL; +} + #endif diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile index 9a4a6e02d..667319958 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile @@ -6,5 +6,4 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o -xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o - +xsc_pci-y := main.o cmdq.o hw.o 
qp.o cq.o alloc.o eq.o diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.c b/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.c new file mode 100644 index 000000000..3d2509459 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.c @@ -0,0 +1,125 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + */ + +#include +#include +#include +#include +#include +#include +#include +#include "alloc.h" + +/* Handling for queue buffers -- we allocate a bunch of memory and + * register it in a memory region at HCA virtual address 0. If the + * requested size is > max_direct, we split the allocation into + * multiple pages, so we don't require too much contiguous memory. + */ +int xsc_buf_alloc(struct xsc_core_device *xdev, int size, int max_direct, + struct xsc_buf *buf) +{ + dma_addr_t t; + + buf->size = size; + if (size <= max_direct) { + buf->nbufs = 1; + buf->npages = 1; + buf->page_shift = get_order(size) + PAGE_SHIFT; + buf->direct.buf = dma_alloc_coherent(&xdev->pdev->dev, + size, &t, GFP_KERNEL | __GFP_ZERO); + if (!buf->direct.buf) + return -ENOMEM; + + buf->direct.map = t; + + while (t & ((1 << buf->page_shift) - 1)) { + --buf->page_shift; + buf->npages *= 2; + } + } else { + int i; + + buf->direct.buf = NULL; + buf->nbufs = (size + PAGE_SIZE - 1) / PAGE_SIZE; + buf->npages = buf->nbufs; + buf->page_shift = PAGE_SHIFT; + buf->page_list = kcalloc(buf->nbufs, sizeof(*buf->page_list), + GFP_KERNEL); + if (!buf->page_list) + return -ENOMEM; + + for (i = 0; i < buf->nbufs; i++) { + buf->page_list[i].buf = + dma_alloc_coherent(&xdev->pdev->dev, PAGE_SIZE, + &t, GFP_KERNEL | __GFP_ZERO); + if (!buf->page_list[i].buf) + goto err_free; + + buf->page_list[i].map = t; + } + + if (BITS_PER_LONG == 64) { + struct page **pages; + + pages = kmalloc_array(buf->nbufs, sizeof(*pages), GFP_KERNEL); + if (!pages) + goto err_free; + for (i = 0; i < buf->nbufs; i++) { + if (is_vmalloc_addr(buf->page_list[i].buf)) + pages[i] = vmalloc_to_page(buf->page_list[i].buf); + else + pages[i] = virt_to_page(buf->page_list[i].buf); + } + buf->direct.buf = vmap(pages, buf->nbufs, VM_MAP, PAGE_KERNEL); + kfree(pages); + if (!buf->direct.buf) + goto err_free; + } + } + + return 0; + +err_free: + xsc_buf_free(xdev, buf); + + return -ENOMEM; +} + +void xsc_buf_free(struct xsc_core_device *xdev, struct xsc_buf *buf) +{ + int i; + + if (buf->nbufs == 1) { + dma_free_coherent(&xdev->pdev->dev, buf->size, buf->direct.buf, + buf->direct.map); + } else { + if (BITS_PER_LONG == 64 && buf->direct.buf) + vunmap(buf->direct.buf); + + for (i = 0; i < buf->nbufs; i++) + if (buf->page_list[i].buf) + dma_free_coherent(&xdev->pdev->dev, PAGE_SIZE, + buf->page_list[i].buf, + buf->page_list[i].map); + kfree(buf->page_list); + } +} + +void xsc_fill_page_array(struct xsc_buf *buf, __be64 *pas, int npages) +{ + u64 addr; + int i; + int shift = PAGE_SHIFT - PAGE_SHIFT_4K; + int mask = (1 << shift) - 1; + + for (i = 0; i < npages; i++) { + if (buf->nbufs == 1) + addr = buf->direct.map + (i << PAGE_SHIFT_4K); + else + addr = buf->page_list[i >> shift].map + ((i & mask) << PAGE_SHIFT_4K); + + pas[i] = cpu_to_be64(addr); + } +} diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.h b/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.h new file mode 100644 index 000000000..8ec465fa9 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon 
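Note what xsc_fill_page_array() does with page sizes: the device always consumes 4 KiB pages, so when the host PAGE_SIZE is larger, each host chunk is advertised to firmware as several consecutive 4 KiB addresses (and eq.c later sizes pa_num as DIV_ROUND_UP(bytes, PAGE_SIZE_4K)). A standalone sketch of the index arithmetic; the DMA addresses are made up for illustration:

/* Demonstrates the hw-page -> host-chunk mapping from xsc_fill_page_array(). */
#include <stdio.h>
#include <stdint.h>

#define HOST_PAGE_SHIFT 16	/* e.g. an arm64 kernel with 64 KiB pages */
#define PAGE_SHIFT_4K   12

int main(void)
{
	int shift = HOST_PAGE_SHIFT - PAGE_SHIFT_4K;	/* 4 */
	int mask = (1 << shift) - 1;			/* 0xf */
	uint64_t chunk_dma[2] = { 0x100000000ULL, 0x200000000ULL };

	/* Hardware pages 16..19 all live in host chunk 1. */
	for (int i = 16; i < 20; i++) {
		uint64_t pa = chunk_dma[i >> shift] +
			      ((uint64_t)(i & mask) << PAGE_SHIFT_4K);
		printf("hw page %2d -> chunk %d, pa 0x%llx\n",
		       i, i >> shift, (unsigned long long)pa);
	}
	return 0;
}

On a 4 KiB-page host the shift is zero and the mapping degenerates to one hardware page per host page; for the physically contiguous direct buffer the code simply offsets direct.map by i * 4 KiB, independent of host page size.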
Technology Co., Ltd. + * All rights reserved. + */ + +#ifndef __ALLOC_H +#define __ALLOC_H + +#include "common/xsc_core.h" + +int xsc_buf_alloc(struct xsc_core_device *xdev, int size, int max_direct, + struct xsc_buf *buf); +void xsc_buf_free(struct xsc_core_device *xdev, struct xsc_buf *buf); +void xsc_fill_page_array(struct xsc_buf *buf, __be64 *pas, int npages); + +#endif diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/eq.c b/drivers/net/ethernet/yunsilicon/xsc/pci/eq.c new file mode 100644 index 000000000..7952ca51d --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/eq.c @@ -0,0 +1,334 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + */ +#include +#include +#include +#include "common/xsc_driver.h" +#include "common/xsc_core.h" +#include "qp.h" +#include "alloc.h" +#include "eq.h" + +enum { + XSC_EQE_SIZE = sizeof(struct xsc_eqe), + XSC_EQE_OWNER_INIT_VAL = 0x1, +}; + +enum { + XSC_NUM_SPARE_EQE = 0x80, + XSC_NUM_ASYNC_EQE = 0x100, +}; + +static int xsc_cmd_destroy_eq(struct xsc_core_device *xdev, u32 eqn) +{ + struct xsc_destroy_eq_mbox_in in; + struct xsc_destroy_eq_mbox_out out; + int err; + + memset(&in, 0, sizeof(in)); + memset(&out, 0, sizeof(out)); + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_DESTROY_EQ); + in.eqn = cpu_to_be32(eqn); + err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out)); + if (!err) + goto ex; + + if (out.hdr.status) + err = xsc_cmd_status_to_err(&out.hdr); + +ex: + return err; +} + +static struct xsc_eqe *get_eqe(struct xsc_eq *eq, u32 entry) +{ + return xsc_buf_offset(&eq->buf, entry * XSC_EQE_SIZE); +} + +static struct xsc_eqe *next_eqe_sw(struct xsc_eq *eq) +{ + struct xsc_eqe *eqe = get_eqe(eq, eq->cons_index & (eq->nent - 1)); + + return ((eqe->owner & 1) ^ !!(eq->cons_index & eq->nent)) ? 
NULL : eqe; +} + +static void eq_update_ci(struct xsc_eq *eq, int arm) +{ + union xsc_eq_doorbell db; + + db.val = 0; + db.arm = !!arm; + db.eq_next_cid = eq->cons_index; + db.eq_id = eq->eqn; + writel(db.val, REG_ADDR(eq->dev, eq->doorbell)); + /* We still want ordering, just not swabbing, so add a barrier */ + mb(); +} + +static void xsc_cq_completion(struct xsc_core_device *xdev, u32 cqn) +{ + struct xsc_core_cq *cq; + struct xsc_cq_table *table = &xdev->dev_res->cq_table; + + rcu_read_lock(); + cq = radix_tree_lookup(&table->tree, cqn); + if (likely(cq)) + atomic_inc(&cq->refcount); + rcu_read_unlock(); + + if (!cq) { + pci_err(xdev->pdev, "Completion event for bogus CQ, cqn=%d\n", cqn); + return; + } + + ++cq->arm_sn; + + if (!cq->comp) + pci_err(xdev->pdev, "cq->comp is NULL\n"); + else + cq->comp(cq); + + if (atomic_dec_and_test(&cq->refcount)) + complete(&cq->free); +} + +static void xsc_eq_cq_event(struct xsc_core_device *xdev, u32 cqn, int event_type) +{ + struct xsc_core_cq *cq; + struct xsc_cq_table *table = &xdev->dev_res->cq_table; + + spin_lock(&table->lock); + cq = radix_tree_lookup(&table->tree, cqn); + if (likely(cq)) + atomic_inc(&cq->refcount); + spin_unlock(&table->lock); + + if (unlikely(!cq)) { + pci_err(xdev->pdev, "Async event for bogus CQ, cqn=%d\n", cqn); + return; + } + + cq->event(cq, event_type); + + if (atomic_dec_and_test(&cq->refcount)) + complete(&cq->free); +} + +static int xsc_eq_int(struct xsc_core_device *xdev, struct xsc_eq *eq) +{ + struct xsc_eqe *eqe; + int eqes_found = 0; + int set_ci = 0; + u32 cqn, qpn, queue_id; + + while ((eqe = next_eqe_sw(eq))) { + /* Make sure we read EQ entry contents after we've + * checked the ownership bit. + */ + rmb(); + switch (eqe->type) { + case XSC_EVENT_TYPE_COMP: + case XSC_EVENT_TYPE_INTERNAL_ERROR: + /* eqe is changing */ + queue_id = eqe->queue_id; + cqn = queue_id; + xsc_cq_completion(xdev, cqn); + break; + + case XSC_EVENT_TYPE_CQ_ERROR: + queue_id = eqe->queue_id; + cqn = queue_id; + xsc_eq_cq_event(xdev, cqn, eqe->type); + break; + case XSC_EVENT_TYPE_WQ_CATAS_ERROR: + case XSC_EVENT_TYPE_WQ_INVAL_REQ_ERROR: + case XSC_EVENT_TYPE_WQ_ACCESS_ERROR: + queue_id = eqe->queue_id; + qpn = queue_id; + xsc_qp_event(xdev, qpn, eqe->type); + break; + default: + break; + } + + ++eq->cons_index; + eqes_found = 1; + ++set_ci; + + /* The HCA will think the queue has overflowed if we + * don't tell it we've been processing events. We + * create our EQs with XSC_NUM_SPARE_EQE extra + * entries, so we must update our consumer index at + * least that often. 
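next_eqe_sw() deserves a note: there is no producer index register to read, so validity is inferred entirely from the EQE owner bit. Entries are initialized to XSC_EQE_OWNER_INIT_VAL (1), and an entry is consumable only when its owner bit matches the pass parity encoded in (cons_index & nent), which implies the hardware writes owner = 0 on the first lap around the ring, 1 on the second, and so on; that toggling behaviour is inferred from the init value plus this test, not stated in the patch. A standalone sketch of the check:

#include <stdio.h>
#include <stdint.h>

#define NENT 8	/* ring entries, power of two */

struct eqe { unsigned int owner; };

/* Mirrors the expression in next_eqe_sw(). */
static int eqe_is_valid(const struct eqe *e, uint32_t cons_index)
{
	return ((e->owner & 1) ^ !!(cons_index & NENT)) == 0;
}

int main(void)
{
	struct eqe ring[NENT];

	for (int i = 0; i < NENT; i++)
		ring[i].owner = 1;	/* freshly initialized ring */

	printf("lap 0, untouched slot: %d\n", eqe_is_valid(&ring[0], 0));   /* 0 */
	ring[0].owner = 0;		/* hardware posts an event on lap 0 */
	printf("lap 0, posted slot:    %d\n", eqe_is_valid(&ring[0], 0));   /* 1 */
	/* The same slot seen on lap 1 is stale until hw sets owner back to 1. */
	printf("lap 1, stale slot:     %d\n", eqe_is_valid(&ring[0], NENT));/* 0 */
	return 0;
}

This is also why eq->nent is forced to a power of two by roundup_pow_of_two() in xsc_create_map_eq(): the parity trick relies on (cons_index & nent) being a single-bit test.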
+ */ + if (unlikely(set_ci >= XSC_NUM_SPARE_EQE)) { + eq_update_ci(eq, 0); + set_ci = 0; + } + } + + eq_update_ci(eq, 1); + + return eqes_found; +} + +static irqreturn_t xsc_msix_handler(int irq, void *eq_ptr) +{ + struct xsc_eq *eq = eq_ptr; + struct xsc_core_device *xdev = eq->dev; + + xsc_eq_int(xdev, eq); + + /* MSI-X vectors always belong to us */ + return IRQ_HANDLED; +} + +static void init_eq_buf(struct xsc_eq *eq) +{ + struct xsc_eqe *eqe; + int i; + + for (i = 0; i < eq->nent; i++) { + eqe = get_eqe(eq, i); + eqe->owner = XSC_EQE_OWNER_INIT_VAL; + } +} + +int xsc_create_map_eq(struct xsc_core_device *xdev, struct xsc_eq *eq, u8 vecidx, + int nent, const char *name) +{ + struct xsc_dev_resource *dev_res = xdev->dev_res; + u16 msix_vec_offset = xdev->msix_vec_base + vecidx; + struct xsc_create_eq_mbox_in *in; + struct xsc_create_eq_mbox_out out; + int err; + int inlen; + int hw_npages; + + eq->nent = roundup_pow_of_two(roundup(nent, XSC_NUM_SPARE_EQE)); + err = xsc_buf_alloc(xdev, eq->nent * XSC_EQE_SIZE, PAGE_SIZE, &eq->buf); + if (err) + return err; + + init_eq_buf(eq); + + hw_npages = DIV_ROUND_UP(eq->nent * XSC_EQE_SIZE, PAGE_SIZE_4K); + inlen = sizeof(*in) + sizeof(in->pas[0]) * hw_npages; + in = kvzalloc(inlen, GFP_KERNEL); + if (!in) { + err = -ENOMEM; + goto err_buf; + } + memset(&out, 0, sizeof(out)); + + xsc_fill_page_array(&eq->buf, in->pas, hw_npages); + + in->hdr.opcode = cpu_to_be16(XSC_CMD_OP_CREATE_EQ); + in->ctx.log_eq_sz = ilog2(eq->nent); + in->ctx.vecidx = cpu_to_be16(msix_vec_offset); + in->ctx.pa_num = cpu_to_be16(hw_npages); + in->ctx.glb_func_id = cpu_to_be16(xdev->glb_func_id); + in->ctx.is_async_eq = (vecidx == XSC_EQ_VEC_ASYNC ? 1 : 0); + + err = xsc_cmd_exec(xdev, in, inlen, &out, sizeof(out)); + if (err) + goto err_in; + + if (out.hdr.status) { + err = -ENOSPC; + goto err_in; + } + + snprintf(dev_res->irq_info[vecidx].name, XSC_MAX_IRQ_NAME, "%s@pci:%s", + name, pci_name(xdev->pdev)); + + eq->eqn = be32_to_cpu(out.eqn); + eq->irqn = pci_irq_vector(xdev->pdev, vecidx); + eq->dev = xdev; + eq->doorbell = xdev->regs.event_db; + eq->index = vecidx; + + err = request_irq(eq->irqn, xsc_msix_handler, 0, + dev_res->irq_info[vecidx].name, eq); + if (err) + goto err_eq; + + /* EQs are created in ARMED state + */ + eq_update_ci(eq, 1); + kvfree(in); + return 0; + +err_eq: + xsc_cmd_destroy_eq(xdev, eq->eqn); + +err_in: + kvfree(in); + +err_buf: + xsc_buf_free(xdev, &eq->buf); + return err; +} + +int xsc_destroy_unmap_eq(struct xsc_core_device *xdev, struct xsc_eq *eq) +{ + int err; + + if (!xsc_fw_is_available(xdev)) + return 0; + + free_irq(eq->irqn, eq); + err = xsc_cmd_destroy_eq(xdev, eq->eqn); + if (err) + pci_err(xdev->pdev, "failed to destroy a previously created eq: eqn %d\n", + eq->eqn); + xsc_buf_free(xdev, &eq->buf); + + return err; +} + +void xsc_eq_init(struct xsc_core_device *xdev) +{ + spin_lock_init(&xdev->dev_res->eq_table.lock); +} + +int xsc_start_eqs(struct xsc_core_device *xdev) +{ + struct xsc_eq_table *table = &xdev->dev_res->eq_table; + int err; + + err = xsc_create_map_eq(xdev, &table->async_eq, XSC_EQ_VEC_ASYNC, + XSC_NUM_ASYNC_EQE, "xsc_async_eq"); + if (err) + pci_err(xdev->pdev, "failed to create async EQ %d\n", err); + + return err; +} + +void xsc_stop_eqs(struct xsc_core_device *xdev) +{ + struct xsc_eq_table *table = &xdev->dev_res->eq_table; + + xsc_destroy_unmap_eq(xdev, &table->async_eq); +} + +struct xsc_eq *xsc_core_eq_get(struct xsc_core_device *xdev, int i) +{ + struct xsc_eq_table *table = &xdev->dev_res->eq_table; + struct 
xsc_eq *eq, *n;
+	struct xsc_eq *eq_ret = NULL;
+
+	spin_lock(&table->lock);
+	list_for_each_entry_safe(eq, n, &table->comp_eqs_list, list) {
+		if (eq->index == i) {
+			eq_ret = eq;
+			break;
+		}
+	}
+	spin_unlock(&table->lock);
+
+	return eq_ret;
+}
+EXPORT_SYMBOL(xsc_core_eq_get);
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/eq.h b/drivers/net/ethernet/yunsilicon/xsc/pci/eq.h
new file mode 100644
index 000000000..d66687b40
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/eq.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#ifndef __EQ_H
+#define __EQ_H
+
+#include "common/xsc_core.h"
+
+enum {
+	XSC_EQ_VEC_ASYNC = 0,
+	XSC_VEC_CMD = 1,
+	XSC_VEC_CMD_EVENT = 2,
+	XSC_DMA_READ_DONE_VEC = 3,
+	XSC_EQ_VEC_COMP_BASE,
+};
+
+struct xsc_eqe {
+	u8 type;
+	u8 sub_type;
+	__le16 queue_id:15;
+	u8 rsv1:1;
+	u8 err_code;
+	u8 rsvd[2];
+	u8 rsv2:7;
+	u8 owner:1;
+};
+
+union xsc_eq_doorbell {
+	struct {
+		u32 eq_next_cid : 11;
+		u32 eq_id : 11;
+		u32 arm : 1;
+	};
+	u32 val;
+};
+
+int xsc_create_map_eq(struct xsc_core_device *xdev, struct xsc_eq *eq, u8 vecidx,
+		      int nent, const char *name);
+int xsc_destroy_unmap_eq(struct xsc_core_device *xdev, struct xsc_eq *eq);
+void xsc_eq_init(struct xsc_core_device *xdev);
+int xsc_start_eqs(struct xsc_core_device *xdev);
+void xsc_stop_eqs(struct xsc_core_device *xdev);
+
+#endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
index bf9c8dd3d..bde6d85d0 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
@@ -8,6 +8,7 @@
 #include "hw.h"
 #include "qp.h"
 #include "cq.h"
+#include "eq.h"
 
 static const struct pci_device_id xsc_pci_id_table[] = {
 	{ PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MC_PF_DEV_ID) },
@@ -217,6 +218,7 @@ static int xsc_hw_setup(struct xsc_core_device *xdev)
 
 	xsc_init_cq_table(xdev);
 	xsc_init_qp_table(xdev);
+	xsc_eq_init(xdev);
 
 	return 0;
 err_cmd_cleanup:
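One detail from eq.h worth spelling out: union xsc_eq_doorbell overlays bitfields on the 32-bit word that eq_update_ci() writes via writel(). Bitfield ordering is compiler- and ABI-dependent; on the little-endian targets this driver is restricted to (x86_64/arm64 per Kconfig), the first-declared field sits in the least-significant bits, so the word is equivalent to the explicit shifts below (values arbitrary):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t eq_next_cid = 100, eq_id = 5, arm = 1;

	/* bits [10:0] consumer index, [21:11] EQ id, bit [22] arm */
	uint32_t val = (eq_next_cid & 0x7ff) |
		       ((eq_id & 0x7ff) << 11) |
		       ((arm & 1) << 22);

	printf("doorbell word: 0x%08x\n", val);	/* 0x00402864 */
	return 0;
}

Only the low 11 bits of eq->cons_index fit in the field, so the device presumably tracks the consumer index modulo the ring size rather than as a full 32-bit counter.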
From patchwork Wed Jan 15 10:22:55 2025
X-Patchwork-Submitter: Xin Tian
X-Patchwork-Id: 13940205
X-Patchwork-Delegate: kuba@kernel.org
Message-Id: <20250115102254.3541496-7-tianx@yunsilicon.com>
In-Reply-To: <20250115102242.3541496-1-tianx@yunsilicon.com>
References: <20250115102242.3541496-1-tianx@yunsilicon.com>
From: "Xin Tian"
Date: Wed, 15 Jan 2025 18:22:55 +0800
Subject: [PATCH v3 06/14] net-next/yunsilicon: Add pci irq

Implement interrupt management and event handling

Co-developed-by: Honggang Wei
Signed-off-by: Honggang Wei
Co-developed-by: Lei Yan
Signed-off-by: Lei Yan
Signed-off-by: Xin Tian
---
 .../ethernet/yunsilicon/xsc/common/xsc_core.h |   6 +
 .../net/ethernet/yunsilicon/xsc/pci/Makefile  |   2 +-
 .../net/ethernet/yunsilicon/xsc/pci/main.c    |  11 +-
 .../net/ethernet/yunsilicon/xsc/pci/pci_irq.c | 419 ++++++++++++++++++
 .../net/ethernet/yunsilicon/xsc/pci/pci_irq.h |  14 +
 5 files changed, 450 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.c
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.h

diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
index afd4d4a43..2e6ff6204 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
@@ -374,9 +374,12 @@ enum xsc_interface_state {
 struct xsc_core_device {
 	struct pci_dev *pdev;
 	struct device *device;
+	void *eth_priv;
 	struct xsc_dev_resource *dev_res;
 	int numa_node;
 
+	void (*event_handler)(void *adapter);
+
 	void __iomem *bar;
 	int bar_num;
 
@@ -408,6 +411,7 @@ struct xsc_core_device {
 	u16 fw_version_patch;
 	u32 fw_version_tweak;
 	u8 fw_version_extra_flag;
+	cpumask_var_t xps_cpumask;
 };
 
 int xsc_core_create_resource_common(struct xsc_core_device *xdev,
@@ -415,6 +419,8 @@ int xsc_core_create_resource_common(struct xsc_core_device *xdev,
 void xsc_core_destroy_resource_common(struct xsc_core_device *xdev,
 				      struct xsc_core_qp *qp);
 struct xsc_eq *xsc_core_eq_get(struct xsc_core_device *xdev, int i);
+int xsc_core_vector2eqn(struct xsc_core_device *xdev, int vector, int *eqn,
+			unsigned int *irqn);
 
 static inline void *xsc_buf_offset(struct
xsc_buf *buf, int offset) { diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile index 667319958..3525d1c74 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile @@ -6,4 +6,4 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o -xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o alloc.o eq.o +xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o alloc.o eq.o pci_irq.o diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c index bde6d85d0..0acc3f080 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c @@ -9,6 +9,7 @@ #include "qp.h" #include "cq.h" #include "eq.h" +#include "pci_irq.h" static const struct pci_device_id xsc_pci_id_table[] = { { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MC_PF_DEV_ID) }, @@ -248,10 +249,18 @@ static int xsc_load(struct xsc_core_device *xdev) goto out; } + err = xsc_irq_eq_create(xdev); + if (err) { + pci_err(xdev->pdev, "xsc_irq_eq_create failed %d\n", err); + goto err_hw_cleanup; + } + set_bit(XSC_INTERFACE_STATE_UP, &xdev->intf_state); mutex_unlock(&xdev->intf_state_mutex); return 0; +err_hw_cleanup: + xsc_hw_cleanup(xdev); out: mutex_unlock(&xdev->intf_state_mutex); return err; @@ -266,7 +275,7 @@ static int xsc_unload(struct xsc_core_device *xdev) } clear_bit(XSC_INTERFACE_STATE_UP, &xdev->intf_state); - + xsc_irq_eq_destroy(xdev); xsc_hw_cleanup(xdev); out: diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.c b/drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.c new file mode 100644 index 000000000..56965c576 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.c @@ -0,0 +1,419 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. 
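The xsc_load()/xsc_unload() hunks keep the series' unwind discipline: each new init step gets a goto label that tears down everything initialized after the failure point, and xsc_unload() releases in exactly the reverse order (xsc_irq_eq_destroy() before xsc_hw_cleanup()). A schematic of the shape with hypothetical step names, not tied to the real functions:

static int step_a_setup(struct xsc_core_device *xdev) { return 0; }
static void step_a_teardown(struct xsc_core_device *xdev) { }
static int step_b_setup(struct xsc_core_device *xdev) { return 0; }
static void step_b_teardown(struct xsc_core_device *xdev) { }
static int step_c_setup(struct xsc_core_device *xdev) { return 0; }

static int example_load(struct xsc_core_device *xdev)
{
	int err;

	err = step_a_setup(xdev);
	if (err)
		return err;

	err = step_b_setup(xdev);
	if (err)
		goto err_a;

	err = step_c_setup(xdev);
	if (err)
		goto err_b;

	return 0;

err_b:
	step_b_teardown(xdev);
err_a:
	step_a_teardown(xdev);
	return err;
}

/* The matching unload path calls the teardowns bottom-up: c, then b, then a. */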
+ */ + +#include +#include +#include +#include +#include +#ifdef CONFIG_RFS_ACCEL +#include +#endif +#include "common/xsc_driver.h" +#include "common/xsc_core.h" +#include "eq.h" +#include "pci_irq.h" + +enum { + XSC_COMP_EQ_SIZE = 1024, +}; + +enum xsc_eq_type { + XSC_EQ_TYPE_COMP, + XSC_EQ_TYPE_ASYNC, +}; + +struct xsc_irq { + struct atomic_notifier_head nh; + cpumask_var_t mask; + char name[XSC_MAX_IRQ_NAME]; +}; + +struct xsc_irq_table { + struct xsc_irq *irq; + int nvec; +#ifdef CONFIG_RFS_ACCEL + struct cpu_rmap *rmap; +#endif +}; + +struct xsc_msix_resource *g_msix_xres; + +static void xsc_free_irq(struct xsc_core_device *xdev, unsigned int vector) +{ + unsigned int irqn = 0; + + irqn = pci_irq_vector(xdev->pdev, vector); + disable_irq(irqn); + + if (xsc_fw_is_available(xdev)) + free_irq(irqn, xdev); +} + +static int set_comp_irq_affinity_hint(struct xsc_core_device *xdev, int i) +{ + struct xsc_eq_table *table = &xdev->dev_res->eq_table; + int vecidx = table->eq_vec_comp_base + i; + struct xsc_eq *eq = xsc_core_eq_get(xdev, i); + unsigned int irqn; + int ret; + + irqn = pci_irq_vector(xdev->pdev, vecidx); + if (!zalloc_cpumask_var(&eq->mask, GFP_KERNEL)) { + pci_err(xdev->pdev, "zalloc_cpumask_var rx cpumask failed"); + return -ENOMEM; + } + + if (!zalloc_cpumask_var(&xdev->xps_cpumask, GFP_KERNEL)) { + pci_err(xdev->pdev, "zalloc_cpumask_var tx cpumask failed"); + return -ENOMEM; + } + + cpumask_set_cpu(cpumask_local_spread(i, xdev->numa_node), + xdev->xps_cpumask); + ret = irq_set_affinity_hint(irqn, eq->mask); + + return ret; +} + +static void clear_comp_irq_affinity_hint(struct xsc_core_device *xdev, int i) +{ + struct xsc_eq_table *table = &xdev->dev_res->eq_table; + int vecidx = table->eq_vec_comp_base + i; + struct xsc_eq *eq = xsc_core_eq_get(xdev, i); + int irqn; + + irqn = pci_irq_vector(xdev->pdev, vecidx); + irq_set_affinity_hint(irqn, NULL); + free_cpumask_var(eq->mask); +} + +static int set_comp_irq_affinity_hints(struct xsc_core_device *xdev) +{ + struct xsc_eq_table *table = &xdev->dev_res->eq_table; + int nvec = table->num_comp_vectors; + int err; + int i; + + for (i = 0; i < nvec; i++) { + err = set_comp_irq_affinity_hint(xdev, i); + if (err) + goto err_out; + } + + return 0; + +err_out: + for (i--; i >= 0; i--) + clear_comp_irq_affinity_hint(xdev, i); + free_cpumask_var(xdev->xps_cpumask); + + return err; +} + +static void clear_comp_irq_affinity_hints(struct xsc_core_device *xdev) +{ + struct xsc_eq_table *table = &xdev->dev_res->eq_table; + int nvec = table->num_comp_vectors; + int i; + + for (i = 0; i < nvec; i++) + clear_comp_irq_affinity_hint(xdev, i); + free_cpumask_var(xdev->xps_cpumask); +} + +static int xsc_alloc_irq_vectors(struct xsc_core_device *xdev) +{ + struct xsc_dev_resource *dev_res = xdev->dev_res; + struct xsc_eq_table *table = &dev_res->eq_table; + int nvec = xdev->caps.msix_num; + int nvec_base; + int err; + + nvec_base = XSC_EQ_VEC_COMP_BASE; + if (nvec <= nvec_base) { + pci_err(xdev->pdev, "failed to alloc irq vector(%d)\n", nvec); + return -ENOMEM; + } + + dev_res->irq_info = kcalloc(nvec, sizeof(*dev_res->irq_info), GFP_KERNEL); + if (!dev_res->irq_info) + return -ENOMEM; + + nvec = pci_alloc_irq_vectors(xdev->pdev, nvec_base + 1, nvec, PCI_IRQ_MSIX); + if (nvec < 0) { + err = nvec; + goto err_free_irq_info; + } + + table->eq_vec_comp_base = nvec_base; + table->num_comp_vectors = nvec - nvec_base; + xdev->msix_vec_base = xdev->caps.msix_base; + + return 0; + +err_free_irq_info: + pci_free_irq_vectors(xdev->pdev); + 
kfree(dev_res->irq_info); + return err; +} + +static void xsc_free_irq_vectors(struct xsc_core_device *xdev) +{ + struct xsc_dev_resource *dev_res = xdev->dev_res; + + if (!xsc_fw_is_available(xdev)) + return; + + pci_free_irq_vectors(xdev->pdev); + kfree(dev_res->irq_info); +} + +int xsc_core_vector2eqn(struct xsc_core_device *xdev, int vector, int *eqn, + unsigned int *irqn) +{ + struct xsc_eq_table *table = &xdev->dev_res->eq_table; + struct xsc_eq *eq, *n; + int err = -ENOENT; + + if (!xdev->caps.msix_enable) + return 0; + + spin_lock(&table->lock); + list_for_each_entry_safe(eq, n, &table->comp_eqs_list, list) { + if (eq->index == vector) { + *eqn = eq->eqn; + *irqn = eq->irqn; + err = 0; + break; + } + } + spin_unlock(&table->lock); + + return err; +} +EXPORT_SYMBOL(xsc_core_vector2eqn); + +static void free_comp_eqs(struct xsc_core_device *xdev) +{ + struct xsc_eq_table *table = &xdev->dev_res->eq_table; + struct xsc_eq *eq, *n; + + spin_lock(&table->lock); + list_for_each_entry_safe(eq, n, &table->comp_eqs_list, list) { + list_del(&eq->list); + spin_unlock(&table->lock); + if (xsc_destroy_unmap_eq(xdev, eq)) + pci_err(xdev->pdev, "failed to destroy EQ 0x%x\n", eq->eqn); + kfree(eq); + spin_lock(&table->lock); + } + spin_unlock(&table->lock); +} + +static int alloc_comp_eqs(struct xsc_core_device *xdev) +{ + struct xsc_eq_table *table = &xdev->dev_res->eq_table; + char name[XSC_MAX_IRQ_NAME]; + struct xsc_eq *eq; + int ncomp_vec; + int nent; + int err; + int i; + + INIT_LIST_HEAD(&table->comp_eqs_list); + ncomp_vec = table->num_comp_vectors; + nent = XSC_COMP_EQ_SIZE; + + for (i = 0; i < ncomp_vec; i++) { + eq = kzalloc(sizeof(*eq), GFP_KERNEL); + if (!eq) { + err = -ENOMEM; + goto clean; + } + + snprintf(name, XSC_MAX_IRQ_NAME, "xsc_comp%d", i); + err = xsc_create_map_eq(xdev, eq, + i + table->eq_vec_comp_base, nent, name); + if (err) { + kfree(eq); + goto clean; + } + + eq->index = i; + spin_lock(&table->lock); + list_add_tail(&eq->list, &table->comp_eqs_list); + spin_unlock(&table->lock); + } + + return 0; + +clean: + free_comp_eqs(xdev); + return err; +} + +static irqreturn_t xsc_cmd_handler(int irq, void *arg) +{ + struct xsc_core_device *xdev = (struct xsc_core_device *)arg; + int err; + + disable_irq_nosync(xdev->cmd.irqn); + err = xsc_cmd_err_handler(xdev); + if (!err) + xsc_cmd_resp_handler(xdev); + enable_irq(xdev->cmd.irqn); + + return IRQ_HANDLED; +} + +static int xsc_request_irq_for_cmdq(struct xsc_core_device *xdev, u8 vecidx) +{ + struct xsc_dev_resource *dev_res = xdev->dev_res; + + writel(xdev->msix_vec_base + vecidx, REG_ADDR(xdev, xdev->cmd.reg.msix_vec_addr)); + + snprintf(dev_res->irq_info[vecidx].name, XSC_MAX_IRQ_NAME, "%s@pci:%s", + "xsc_cmd", pci_name(xdev->pdev)); + xdev->cmd.irqn = pci_irq_vector(xdev->pdev, vecidx); + return request_irq(xdev->cmd.irqn, xsc_cmd_handler, 0, + dev_res->irq_info[vecidx].name, xdev); +} + +static void xsc_free_irq_for_cmdq(struct xsc_core_device *xdev) +{ + xsc_free_irq(xdev, XSC_VEC_CMD); +} + +static irqreturn_t xsc_event_handler(int irq, void *arg) +{ + struct xsc_core_device *xdev = (struct xsc_core_device *)arg; + + if (!xdev->eth_priv) + return IRQ_NONE; + + if (!xdev->event_handler) + return IRQ_NONE; + + xdev->event_handler(xdev->eth_priv); + + return IRQ_HANDLED; +} + +static int xsc_request_irq_for_event(struct xsc_core_device *xdev) +{ + struct xsc_dev_resource *dev_res = xdev->dev_res; + + snprintf(dev_res->irq_info[XSC_VEC_CMD_EVENT].name, XSC_MAX_IRQ_NAME, "%s@pci:%s", + "xsc_eth_event", pci_name(xdev->pdev)); + 
return request_irq(pci_irq_vector(xdev->pdev, XSC_VEC_CMD_EVENT), xsc_event_handler, 0, + dev_res->irq_info[XSC_VEC_CMD_EVENT].name, xdev); +} + +static void xsc_free_irq_for_event(struct xsc_core_device *xdev) +{ + xsc_free_irq(xdev, XSC_VEC_CMD_EVENT); +} + +static int xsc_cmd_enable_msix(struct xsc_core_device *xdev) +{ + struct xsc_msix_table_info_mbox_in in; + struct xsc_msix_table_info_mbox_out out; + int err; + + memset(&in, 0, sizeof(in)); + memset(&out, 0, sizeof(out)); + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_ENABLE_MSIX); + + err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out)); + if (err) { + pci_err(xdev->pdev, "xsc_cmd_exec enable msix failed %d\n", err); + return err; + } + + return 0; +} + +int xsc_irq_eq_create(struct xsc_core_device *xdev) +{ + int err; + + if (xdev->caps.msix_enable == 0) + return 0; + + err = xsc_alloc_irq_vectors(xdev); + if (err) { + pci_err(xdev->pdev, "enable msix failed, err=%d\n", err); + goto out; + } + + err = xsc_start_eqs(xdev); + if (err) { + pci_err(xdev->pdev, "failed to start EQs, err=%d\n", err); + goto err_free_irq_vectors; + } + + err = alloc_comp_eqs(xdev); + if (err) { + pci_err(xdev->pdev, "failed to alloc comp EQs, err=%d\n", err); + goto err_stop_eqs; + } + + err = xsc_request_irq_for_cmdq(xdev, XSC_VEC_CMD); + if (err) { + pci_err(xdev->pdev, "failed to request irq for cmdq, err=%d\n", err); + goto err_free_comp_eqs; + } + + err = xsc_request_irq_for_event(xdev); + if (err) { + pci_err(xdev->pdev, "failed to request irq for event, err=%d\n", err); + goto err_free_irq_cmdq; + } + + err = set_comp_irq_affinity_hints(xdev); + if (err) { + pci_err(xdev->pdev, "failed to alloc affinity hint cpumask, err=%d\n", err); + goto err_free_irq_evnt; + } + + xsc_cmd_use_events(xdev); + err = xsc_cmd_enable_msix(xdev); + if (err) { + pci_err(xdev->pdev, "xsc_cmd_enable_msix failed %d.\n", err); + xsc_cmd_use_polling(xdev); + goto err_free_irq_evnt; + } + return 0; + +err_free_irq_evnt: + xsc_free_irq_for_event(xdev); +err_free_irq_cmdq: + xsc_free_irq_for_cmdq(xdev); +err_free_comp_eqs: + free_comp_eqs(xdev); +err_stop_eqs: + xsc_stop_eqs(xdev); +err_free_irq_vectors: + xsc_free_irq_vectors(xdev); +out: + return err; +} + +int xsc_irq_eq_destroy(struct xsc_core_device *xdev) +{ + if (xdev->caps.msix_enable == 0) + return 0; + + xsc_stop_eqs(xdev); + clear_comp_irq_affinity_hints(xdev); + free_comp_eqs(xdev); + + xsc_free_irq_for_event(xdev); + xsc_free_irq_for_cmdq(xdev); + xsc_free_irq_vectors(xdev); + + return 0; +} diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.h b/drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.h new file mode 100644 index 000000000..7b0aae349 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. 
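xsc_event_handler() runs in hard-IRQ context and only forwards to whatever was registered through the eth_priv/event_handler fields added to struct xsc_core_device in this patch; the setter itself arrives with the ethernet driver later in the series. A hypothetical consumer-side hookup (all names below are invented), which defers real work because the callback must not sleep:

#include <linux/workqueue.h>

struct my_eth_state {
	struct work_struct event_work;	/* INIT_WORK()ed during probe */
};

static void my_eth_event(void *priv)
{
	struct my_eth_state *state = priv;

	/* hard-IRQ context: keep it short, push the real work out */
	schedule_work(&state->event_work);
}

static void my_eth_bind(struct xsc_core_device *xdev,
			struct my_eth_state *state)
{
	xdev->eth_priv = state;
	xdev->event_handler = my_eth_event;
}

Since xsc_event_handler() NULL-checks both fields before calling, a half-completed registration seen by a racing interrupt simply results in IRQ_NONE.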
+ */
+
+#ifndef __PCI_IRQ_H
+#define __PCI_IRQ_H
+
+#include "common/xsc_core.h"
+
+int xsc_irq_eq_create(struct xsc_core_device *xdev);
+int xsc_irq_eq_destroy(struct xsc_core_device *xdev);
+
+#endif

From patchwork Wed Jan 15 10:22:58 2025
X-Patchwork-Submitter: Xin Tian
X-Patchwork-Id: 13940206
X-Patchwork-Delegate: kuba@kernel.org
Message-Id: <20250115102257.3541496-8-tianx@yunsilicon.com>
In-Reply-To: <20250115102242.3541496-1-tianx@yunsilicon.com>
References: <20250115102242.3541496-1-tianx@yunsilicon.com>
From: "Xin Tian"
Date: Wed, 15 Jan 2025 18:22:58 +0800
Subject: [PATCH v3 07/14] net-next/yunsilicon: Init auxiliary device

Initialize eth auxiliary device when pci probing

Co-developed-by: Honggang Wei
Signed-off-by:
Honggang Wei Co-developed-by: Lei Yan Signed-off-by: Lei Yan Signed-off-by: Xin Tian --- .../ethernet/yunsilicon/xsc/common/xsc_core.h | 12 ++ .../net/ethernet/yunsilicon/xsc/pci/Makefile | 3 +- .../net/ethernet/yunsilicon/xsc/pci/adev.c | 109 ++++++++++++++++++ .../net/ethernet/yunsilicon/xsc/pci/adev.h | 14 +++ .../net/ethernet/yunsilicon/xsc/pci/main.c | 10 ++ 5 files changed, 147 insertions(+), 1 deletion(-) create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/adev.c create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/adev.h diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h index 2e6ff6204..ac08ac380 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h @@ -8,6 +8,7 @@ #include #include +#include #include "common/xsc_cmdq.h" #define XSC_PCI_VENDOR_ID 0x1f67 @@ -228,6 +229,15 @@ struct xsc_irq_info { char name[XSC_MAX_IRQ_NAME]; }; +// adev +#define XSC_PCI_DRV_NAME "xsc_pci" +#define XSC_ETH_ADEV_NAME "eth" + +struct xsc_adev { + struct auxiliary_device adev; + struct xsc_core_device *xdev; +}; + // hw struct xsc_reg_addr { u64 tx_db; @@ -374,6 +384,8 @@ enum xsc_interface_state { struct xsc_core_device { struct pci_dev *pdev; struct device *device; + int adev_id; + struct xsc_adev **xsc_adev_list; void *eth_priv; struct xsc_dev_resource *dev_res; int numa_node; diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile index 3525d1c74..ad0ecc122 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile @@ -6,4 +6,5 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o -xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o alloc.o eq.o pci_irq.o +xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o alloc.o eq.o pci_irq.o adev.o + diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/adev.c b/drivers/net/ethernet/yunsilicon/xsc/pci/adev.c new file mode 100644 index 000000000..4d295ece6 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/adev.c @@ -0,0 +1,109 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* + * Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. 
+ */ + +#include +#include +#include "adev.h" + +static DEFINE_IDA(xsc_adev_ida); + +enum xsc_adev_idx { + XSC_ADEV_IDX_ETH, + XSC_ADEV_IDX_MAX +}; + +static const char * const xsc_adev_name[] = { + [XSC_ADEV_IDX_ETH] = XSC_ETH_ADEV_NAME, +}; + +static void xsc_release_adev(struct device *dev) +{ + /* Doing nothing, but auxiliary bus requires a release function */ +} + +static int xsc_reg_adev(struct xsc_core_device *xdev, int idx) +{ + struct auxiliary_device *adev; + struct xsc_adev *xsc_adev; + int ret; + + xsc_adev = kzalloc(sizeof(*xsc_adev), GFP_KERNEL); + if (!xsc_adev) + return -ENOMEM; + + adev = &xsc_adev->adev; + adev->name = xsc_adev_name[idx]; + adev->id = xdev->adev_id; + adev->dev.parent = &xdev->pdev->dev; + adev->dev.release = xsc_release_adev; + xsc_adev->xdev = xdev; + + ret = auxiliary_device_init(adev); + if (ret) + goto err_free_adev; + + ret = auxiliary_device_add(adev); + if (ret) + goto err_uninit_adev; + + xdev->xsc_adev_list[idx] = xsc_adev; + + return 0; +err_uninit_adev: + auxiliary_device_uninit(adev); +err_free_adev: + kfree(xsc_adev); + + return ret; +} + +static void xsc_unreg_adev(struct xsc_core_device *xdev, int idx) +{ + struct xsc_adev *xsc_adev = xdev->xsc_adev_list[idx]; + struct auxiliary_device *adev = &xsc_adev->adev; + + auxiliary_device_delete(adev); + auxiliary_device_uninit(adev); + + kfree(xsc_adev); + xdev->xsc_adev_list[idx] = NULL; +} + +int xsc_adev_init(struct xsc_core_device *xdev) +{ + struct xsc_adev **xsc_adev_list; + int adev_id; + int ret; + + xsc_adev_list = kzalloc(sizeof(void *) * XSC_ADEV_IDX_MAX, GFP_KERNEL); + if (!xsc_adev_list) + return -ENOMEM; + xdev->xsc_adev_list = xsc_adev_list; + + adev_id = ida_alloc(&xsc_adev_ida, GFP_KERNEL); + if (adev_id < 0) + goto err_free_adev_list; + xdev->adev_id = adev_id; + + ret = xsc_reg_adev(xdev, XSC_ADEV_IDX_ETH); + if (ret) + goto err_dalloc_adev_id; + + return 0; +err_dalloc_adev_id: + ida_free(&xsc_adev_ida, xdev->adev_id); +err_free_adev_list: + kfree(xsc_adev_list); + + return ret; +} + +void xsc_adev_uninit(struct xsc_core_device *xdev) +{ + xsc_unreg_adev(xdev, XSC_ADEV_IDX_ETH); + ida_free(&xsc_adev_ida, xdev->adev_id); + kfree(xdev->xsc_adev_list); +} diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/adev.h b/drivers/net/ethernet/yunsilicon/xsc/pci/adev.h new file mode 100644 index 000000000..3de4dd26f --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/adev.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. 
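For reference on how the match works: auxiliary_device_add() is a macro that captures KBUILD_MODNAME, and the auxiliary bus exposes the device as "<modname>.<adev->name>". With the module built as xsc_pci and the device named XSC_ETH_ADEV_NAME, that composes to the string the ethernet driver's id_table must carry in the next patch. A trivial standalone check of the composition:

#include <stdio.h>

#define XSC_PCI_DRV_NAME  "xsc_pci"
#define XSC_ETH_ADEV_NAME "eth"

int main(void)
{
	char match[32];

	/* "<registering module>.<auxiliary device name>" */
	snprintf(match, sizeof(match), "%s.%s",
		 XSC_PCI_DRV_NAME, XSC_ETH_ADEV_NAME);
	printf("id_table match string: %s\n", match);	/* xsc_pci.eth */
	return 0;
}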
+ */
+
+#ifndef __ADEV_H
+#define __ADEV_H
+
+#include "common/xsc_core.h"
+
+int xsc_adev_init(struct xsc_core_device *xdev);
+void xsc_adev_uninit(struct xsc_core_device *xdev);
+
+#endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
index 0acc3f080..3b8294889 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
@@ -10,6 +10,7 @@
 #include "cq.h"
 #include "eq.h"
 #include "pci_irq.h"
+#include "adev.h"
 
 static const struct pci_device_id xsc_pci_id_table[] = {
 	{ PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MC_PF_DEV_ID) },
@@ -255,10 +256,18 @@ static int xsc_load(struct xsc_core_device *xdev)
 		goto err_hw_cleanup;
 	}
 
+	err = xsc_adev_init(xdev);
+	if (err) {
+		pci_err(xdev->pdev, "xsc_adev_init failed %d\n", err);
+		goto err_irq_eq_destroy;
+	}
+
 	set_bit(XSC_INTERFACE_STATE_UP, &xdev->intf_state);
 	mutex_unlock(&xdev->intf_state_mutex);
 	return 0;
 
+err_irq_eq_destroy:
+	xsc_irq_eq_destroy(xdev);
 err_hw_cleanup:
 	xsc_hw_cleanup(xdev);
 out:
@@ -268,6 +277,7 @@ static int xsc_load(struct xsc_core_device *xdev)
 
 static int xsc_unload(struct xsc_core_device *xdev)
 {
+	xsc_adev_uninit(xdev);
 	mutex_lock(&xdev->intf_state_mutex);
 	if (!test_bit(XSC_INTERFACE_STATE_UP, &xdev->intf_state)) {
 		xsc_hw_cleanup(xdev);
From patchwork Wed Jan 15 10:23:00 2025
X-Patchwork-Submitter: Xin Tian
X-Patchwork-Id: 13940207
X-Patchwork-Delegate: kuba@kernel.org
Message-Id: <20250115102259.3541496-9-tianx@yunsilicon.com>
In-Reply-To: <20250115102242.3541496-1-tianx@yunsilicon.com>
References: <20250115102242.3541496-1-tianx@yunsilicon.com>
From: "Xin Tian"
Date: Wed, 15 Jan 2025 18:23:00 +0800
Subject: [PATCH v3 08/14] net-next/yunsilicon: Add ethernet interface

Implement an auxiliary driver for ethernet and initialize the
netdevice simply.

Co-developed-by: Honggang Wei
Signed-off-by: Honggang Wei
Co-developed-by: Lei Yan
Signed-off-by: Lei Yan
Signed-off-by: Xin Tian
---
 drivers/net/ethernet/yunsilicon/Makefile      |  2 +-
 .../net/ethernet/yunsilicon/xsc/net/main.c    | 99 +++++++++++++++++++
 .../net/ethernet/yunsilicon/xsc/net/xsc_eth.h | 16 +++
 .../yunsilicon/xsc/net/xsc_eth_common.h       | 15 +++
 4 files changed, 131 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/main.c
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h

diff --git a/drivers/net/ethernet/yunsilicon/Makefile b/drivers/net/ethernet/yunsilicon/Makefile
index 6fc8259a7..65b9a6265 100644
--- a/drivers/net/ethernet/yunsilicon/Makefile
+++ b/drivers/net/ethernet/yunsilicon/Makefile
@@ -4,5 +4,5 @@
 # Makefile for the Yunsilicon device drivers.
 #
 
-# obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc/net/
+obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc/net/
 obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc/pci/
\ No newline at end of file
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/main.c b/drivers/net/ethernet/yunsilicon/xsc/net/main.c
new file mode 100644
index 000000000..42636bec1
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/main.c
@@ -0,0 +1,99 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */ + +#include +#include +#include +#include "common/xsc_core.h" +#include "xsc_eth_common.h" +#include "xsc_eth.h" + +static int xsc_get_max_num_channels(struct xsc_core_device *xdev) +{ + return min_t(int, xdev->dev_res->eq_table.num_comp_vectors, + XSC_ETH_MAX_NUM_CHANNELS); +} + +static int xsc_eth_probe(struct auxiliary_device *adev, + const struct auxiliary_device_id *adev_id) +{ + struct xsc_adev *xsc_adev = container_of(adev, struct xsc_adev, adev); + struct xsc_core_device *xdev = xsc_adev->xdev; + struct xsc_adapter *adapter; + struct net_device *netdev; + int num_chl, num_tc; + int err; + + num_chl = xsc_get_max_num_channels(xdev); + num_tc = xdev->caps.max_tc; + + netdev = alloc_etherdev_mqs(sizeof(struct xsc_adapter), + num_chl * num_tc, num_chl); + if (!netdev) { + pr_err("alloc_etherdev_mqs failed, txq=%d, rxq=%d\n", + (num_chl * num_tc), num_chl); + return -ENOMEM; + } + + netdev->dev.parent = &xdev->pdev->dev; + adapter = netdev_priv(netdev); + adapter->netdev = netdev; + adapter->pdev = xdev->pdev; + adapter->dev = &adapter->pdev->dev; + adapter->xdev = xdev; + xdev->eth_priv = adapter; + + err = register_netdev(netdev); + if (err) { + netdev_err(netdev, "register_netdev failed, err=%d\n", err); + goto err_free_netdev; + } + + return 0; + +err_free_netdev: + free_netdev(netdev); + + return err; +} + +static void xsc_eth_remove(struct auxiliary_device *adev) +{ + struct xsc_adev *xsc_adev = container_of(adev, struct xsc_adev, adev); + struct xsc_core_device *xdev = xsc_adev->xdev; + struct xsc_adapter *adapter; + + if (!xdev) + return; + + adapter = xdev->eth_priv; + if (!adapter) { + netdev_err(adapter->netdev, "failed! adapter is null\n"); + return; + } + + unregister_netdev(adapter->netdev); + + free_netdev(adapter->netdev); + + xdev->eth_priv = NULL; +} + +static const struct auxiliary_device_id xsc_eth_id_table[] = { + { .name = XSC_PCI_DRV_NAME "." XSC_ETH_ADEV_NAME }, + {}, +}; +MODULE_DEVICE_TABLE(auxiliary, xsc_eth_id_table); + +static struct auxiliary_driver xsc_eth_driver = { + .name = "eth", + .probe = xsc_eth_probe, + .remove = xsc_eth_remove, + .id_table = xsc_eth_id_table, +}; +module_auxiliary_driver(xsc_eth_driver); + +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("Yunsilicon XSC ethernet driver"); diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h new file mode 100644 index 000000000..0c70c0d59 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + */ + +#ifndef __XSC_ETH_H +#define __XSC_ETH_H + +struct xsc_adapter { + struct net_device *netdev; + struct pci_dev *pdev; + struct device *dev; + struct xsc_core_device *xdev; +}; + +#endif /* __XSC_ETH_H */ diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h new file mode 100644 index 000000000..b5640f05d --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. 
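The channel math in xsc_eth_probe() is worth a worked example: with, say, 8 completion vectors and xdev->caps.max_tc = 8, xsc_get_max_num_channels() returns 8 (well under XSC_ETH_MAX_NUM_CHANNELS = 256 from xsc_eth_common.h below), so alloc_etherdev_mqs() is asked for 64 TX queues and 8 RX queues. One caveat in xsc_eth_remove() as posted: the !adapter branch logs through netdev_err(adapter->netdev, ...), dereferencing the very pointer it just found to be NULL. A safer shape, sketched with the same names (xsc_eth_remove_fixed itself is hypothetical):

static void xsc_eth_remove_fixed(struct auxiliary_device *adev)
{
	struct xsc_adev *xsc_adev = container_of(adev, struct xsc_adev, adev);
	struct xsc_core_device *xdev = xsc_adev->xdev;
	struct xsc_adapter *adapter = xdev ? xdev->eth_priv : NULL;

	if (!adapter) {
		/* adapter->netdev does not exist here; log on the PCI dev */
		if (xdev)
			pci_err(xdev->pdev, "remove called with no adapter\n");
		return;
	}

	unregister_netdev(adapter->netdev);
	free_netdev(adapter->netdev);
	xdev->eth_priv = NULL;
}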
+ */ + +#ifndef __XSC_ETH_COMMON_H +#define __XSC_ETH_COMMON_H + +#define XSC_LOG_INDIR_RQT_SIZE 0x8 + +#define XSC_INDIR_RQT_SIZE BIT(XSC_LOG_INDIR_RQT_SIZE) +#define XSC_ETH_MIN_NUM_CHANNELS 2 +#define XSC_ETH_MAX_NUM_CHANNELS XSC_INDIR_RQT_SIZE + +#endif From patchwork Wed Jan 15 10:23:03 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xin Tian X-Patchwork-Id: 13940217 X-Patchwork-Delegate: kuba@kernel.org Received: from va-2-56.ptr.blmpb.com (va-2-56.ptr.blmpb.com [209.127.231.56]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C5A121DB12E for ; Wed, 15 Jan 2025 10:25:27 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.127.231.56 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736936730; cv=none; b=GDcuO4bD32EQVpT2DG1+Crc617Io2u03qdZOAGYrorjIPXyzruVnIZJhC5iPBTo1iFdJhenA3ID47t+s6J4Wbg+Tnfd2StB7RbhtjbMFkdxgre+7sOHjn8XYpclSrbs+PO3zxLoR4ct0NVz6o3j+B2JZDPhSQ3LrNiAW0wSqYrI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736936730; c=relaxed/simple; bh=uLPGUYd6WeZd6euU9kzwu0k9Bd+x2bMuyz2i1N3N1Gs=; h=Subject:Content-Type:To:From:References:In-Reply-To:Cc:Date: Message-Id:Mime-Version; b=RG5yO1QH6fXopTMS3YsOoo5Nxyu8yCA+Rplwj9TBlliG1oire+ED9SuqoFA+ORBGXc+dY4wnQMNVwqsIqu2RIdSh/XBFPf4C4HFdgZL1dk7FT95ZEryZ1GOXUiF7RbnS/qHj+eL4qrVG8QIxq5FhE+U+U+KJS+lSGRWcZuixsUo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=yunsilicon.com; spf=pass smtp.mailfrom=yunsilicon.com; dkim=pass (2048-bit key) header.d=yunsilicon.com header.i=@yunsilicon.com header.b=Nbt/UHx0; arc=none smtp.client-ip=209.127.231.56 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=yunsilicon.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=yunsilicon.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=yunsilicon.com header.i=@yunsilicon.com header.b="Nbt/UHx0" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; s=feishu2403070942; d=yunsilicon.com; t=1736936585; h=from:subject: mime-version:from:date:message-id:subject:to:cc:reply-to:content-type: mime-version:in-reply-to:message-id; bh=lbvvS7PG5UalrBoTmY6rn42FmP5pQn1BkXa/KEWvjYM=; b=Nbt/UHx08KvpxKOMfNLPoGCL7eQ2yKYzoCcNv3TD7kpotnlZiiSCuxAxabUsCJICj6d2Hs fmwRquA2I39ZEGGe1M+QqzeP0NHnC7xkP8/wBJw4a2cIAsWMN0AT4UncBi1LW7o9KFHoIp AuA7N7lzBd8onT23GQcOzIFsrMQWK6tx4yXY89psEVYoXg7vhgIobTbooszMsUFkCs6ME5 91sOSi2+OCHJBTm4Xcb7jTys1wItYphvoQPgAlSjVY5z/Iq3Gkx+aCydvnDYqRn3BcJuCZ 0JbCYj/N1eiMlRoGL9OVRYPFyMLWhpZUct5n1AsqhMjY/wJN7l7/+C3bzemeAA== Subject: [PATCH v3 09/14] net-next/yunsilicon: Init net device X-Lms-Return-Path: To: From: "Xin Tian" Received: from ubuntu-liun.yunsilicon.com ([58.34.192.114]) by smtp.feishu.cn with ESMTPS; Wed, 15 Jan 2025 18:23:03 +0800 X-Original-From: Xin Tian X-Mailer: git-send-email 2.25.1 References: <20250115102242.3541496-1-tianx@yunsilicon.com> In-Reply-To: <20250115102242.3541496-1-tianx@yunsilicon.com> Cc: , , , , , , , , , Date: Wed, 15 Jan 2025 18:23:03 +0800 Message-Id: <20250115102302.3541496-10-tianx@yunsilicon.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Initialize network device: 1. 
initialize hardware 2. configure network parameters Co-developed-by: Honggang Wei Signed-off-by: Honggang Wei Co-developed-by: Lei Yan Signed-off-by: Lei Yan Signed-off-by: Xin Tian --- .../ethernet/yunsilicon/xsc/common/xsc_core.h | 1 + .../yunsilicon/xsc/common/xsc_device.h | 42 +++ .../ethernet/yunsilicon/xsc/common/xsc_pp.h | 38 ++ .../net/ethernet/yunsilicon/xsc/net/main.c | 325 +++++++++++++++++- .../net/ethernet/yunsilicon/xsc/net/xsc_eth.h | 28 ++ .../yunsilicon/xsc/net/xsc_eth_common.h | 45 +++ .../net/ethernet/yunsilicon/xsc/net/xsc_pph.h | 176 ++++++++++ .../ethernet/yunsilicon/xsc/net/xsc_queue.h | 49 +++ 8 files changed, 703 insertions(+), 1 deletion(-) create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_device.h create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_pp.h create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_pph.h create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h index ac08ac380..0c9f944d8 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h @@ -9,6 +9,7 @@ #include #include #include +#include #include "common/xsc_cmdq.h" #define XSC_PCI_VENDOR_ID 0x1f67 diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_device.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_device.h new file mode 100644 index 000000000..45ea8d2a0 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_device.h @@ -0,0 +1,42 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + */ + +#ifndef __XSC_DEVICE_H +#define __XSC_DEVICE_H + +enum xsc_traffic_types { + XSC_TT_IPV4, + XSC_TT_IPV4_TCP, + XSC_TT_IPV4_UDP, + XSC_TT_IPV6, + XSC_TT_IPV6_TCP, + XSC_TT_IPV6_UDP, + XSC_TT_IPV4_IPSEC_AH, + XSC_TT_IPV6_IPSEC_AH, + XSC_TT_IPV4_IPSEC_ESP, + XSC_TT_IPV6_IPSEC_ESP, + XSC_TT_ANY, + XSC_NUM_TT, +}; + +#define XSC_NUM_INDIR_TIRS XSC_NUM_TT + +enum { + XSC_L3_PROT_TYPE_IPV4 = BIT(0), + XSC_L3_PROT_TYPE_IPV6 = BIT(1), +}; + +enum { + XSC_L4_PROT_TYPE_TCP = BIT(0), + XSC_L4_PROT_TYPE_UDP = BIT(1), +}; + +struct xsc_tirc_config { + u8 l3_prot_type; + u8 l4_prot_type; + u32 rx_hash_fields; +}; + +#endif diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_pp.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_pp.h new file mode 100644 index 000000000..582f99d8c --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_pp.h @@ -0,0 +1,38 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. 
+ */ + +#ifndef __XSC_PP_H +#define __XSC_PP_H + +enum { + XSC_HASH_FIELD_SEL_SRC_IP = BIT(0), + XSC_HASH_FIELD_SEL_PROTO = BIT(1), + XSC_HASH_FIELD_SEL_DST_IP = BIT(2), + XSC_HASH_FIELD_SEL_SPORT = BIT(3), + XSC_HASH_FIELD_SEL_DPORT = BIT(4), + XSC_HASH_FIELD_SEL_SRC_IPV6 = BIT(5), + XSC_HASH_FIELD_SEL_DST_IPV6 = BIT(6), + XSC_HASH_FIELD_SEL_SPORT_V6 = BIT(7), + XSC_HASH_FIELD_SEL_DPORT_V6 = BIT(8), +}; + +#define XSC_HASH_IP (XSC_HASH_FIELD_SEL_SRC_IP |\ + XSC_HASH_FIELD_SEL_DST_IP |\ + XSC_HASH_FIELD_SEL_PROTO) +#define XSC_HASH_IP_PORTS (XSC_HASH_FIELD_SEL_SRC_IP |\ + XSC_HASH_FIELD_SEL_DST_IP |\ + XSC_HASH_FIELD_SEL_SPORT |\ + XSC_HASH_FIELD_SEL_DPORT |\ + XSC_HASH_FIELD_SEL_PROTO) +#define XSC_HASH_IP6 (XSC_HASH_FIELD_SEL_SRC_IPV6 |\ + XSC_HASH_FIELD_SEL_DST_IPV6 |\ + XSC_HASH_FIELD_SEL_PROTO) +#define XSC_HASH_IP6_PORTS (XSC_HASH_FIELD_SEL_SRC_IPV6 |\ + XSC_HASH_FIELD_SEL_DST_IPV6 |\ + XSC_HASH_FIELD_SEL_SPORT_V6 |\ + XSC_HASH_FIELD_SEL_DPORT_V6 |\ + XSC_HASH_FIELD_SEL_PROTO) + +#endif /* __XSC_PP_H */ diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/main.c b/drivers/net/ethernet/yunsilicon/xsc/net/main.c index 42636bec1..fcb30676a 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/main.c +++ b/drivers/net/ethernet/yunsilicon/xsc/net/main.c @@ -6,16 +6,322 @@ #include #include #include +#include #include "common/xsc_core.h" +#include "common/xsc_driver.h" +#include "common/xsc_device.h" +#include "common/xsc_pp.h" #include "xsc_eth_common.h" #include "xsc_eth.h" +static const struct xsc_tirc_config tirc_default_config[XSC_NUM_INDIR_TIRS] = { + [XSC_TT_IPV4] = { + .l3_prot_type = XSC_L3_PROT_TYPE_IPV4, + .l4_prot_type = 0, + .rx_hash_fields = XSC_HASH_IP, + }, + [XSC_TT_IPV4_TCP] = { + .l3_prot_type = XSC_L3_PROT_TYPE_IPV4, + .l4_prot_type = XSC_L4_PROT_TYPE_TCP, + .rx_hash_fields = XSC_HASH_IP_PORTS, + }, + [XSC_TT_IPV4_UDP] = { + .l3_prot_type = XSC_L3_PROT_TYPE_IPV4, + .l4_prot_type = XSC_L4_PROT_TYPE_UDP, + .rx_hash_fields = XSC_HASH_IP_PORTS, + }, + [XSC_TT_IPV6] = { + .l3_prot_type = XSC_L3_PROT_TYPE_IPV6, + .l4_prot_type = 0, + .rx_hash_fields = XSC_HASH_IP6, + }, + [XSC_TT_IPV6_TCP] = { + .l3_prot_type = XSC_L3_PROT_TYPE_IPV6, + .l4_prot_type = XSC_L4_PROT_TYPE_TCP, + .rx_hash_fields = XSC_HASH_IP6_PORTS, + }, + [XSC_TT_IPV6_UDP] = { + .l3_prot_type = XSC_L3_PROT_TYPE_IPV6, + .l4_prot_type = XSC_L4_PROT_TYPE_UDP, + .rx_hash_fields = XSC_HASH_IP6_PORTS, + }, +}; + static int xsc_get_max_num_channels(struct xsc_core_device *xdev) { return min_t(int, xdev->dev_res->eq_table.num_comp_vectors, XSC_ETH_MAX_NUM_CHANNELS); } +static void xsc_build_default_indir_rqt(u32 *indirection_rqt, int len, + int num_channels) +{ + int i; + + for (i = 0; i < len; i++) + indirection_rqt[i] = i % num_channels; +} + +static void xsc_build_rss_param(struct xsc_rss_params *rss_param, u16 num_channels) +{ + enum xsc_traffic_types tt; + + rss_param->hfunc = ETH_RSS_HASH_TOP; + netdev_rss_key_fill(rss_param->toeplitz_hash_key, + sizeof(rss_param->toeplitz_hash_key)); + + xsc_build_default_indir_rqt(rss_param->indirection_rqt, + XSC_INDIR_RQT_SIZE, num_channels); + + for (tt = 0; tt < XSC_NUM_INDIR_TIRS; tt++) { + rss_param->rx_hash_fields[tt] = + tirc_default_config[tt].rx_hash_fields; + } + rss_param->rss_hash_tmpl = XSC_HASH_IP_PORTS | XSC_HASH_IP6_PORTS; +} + +static void xsc_eth_build_nic_params(struct xsc_adapter *adapter, u32 ch_num, u32 tc_num) +{ + struct xsc_eth_params *params = &adapter->nic_param; + struct xsc_core_device *xdev = adapter->xdev; + + params->mtu = SW_DEFAULT_MTU; + 
params->num_tc = tc_num; + + params->comp_vectors = xdev->dev_res->eq_table.num_comp_vectors; + params->max_num_ch = ch_num; + params->num_channels = ch_num; + + params->rq_max_size = BIT(xdev->caps.log_max_qp_depth); + params->sq_max_size = BIT(xdev->caps.log_max_qp_depth); + xsc_build_rss_param(&adapter->rss_param, adapter->nic_param.num_channels); +} + +static int xsc_eth_netdev_init(struct xsc_adapter *adapter) +{ + unsigned int node, tc, nch; + + tc = adapter->nic_param.num_tc; + nch = adapter->nic_param.max_num_ch; + node = dev_to_node(adapter->dev); + adapter->txq2sq = kcalloc_node(nch * tc, + sizeof(*adapter->txq2sq), GFP_KERNEL, node); + if (!adapter->txq2sq) + goto err_out; + + adapter->workq = create_singlethread_workqueue("xsc_eth"); + if (!adapter->workq) + goto err_free_priv; + + netif_carrier_off(adapter->netdev); + + return 0; + +err_free_priv: + kfree(adapter->txq2sq); +err_out: + return -ENOMEM; +} + +static int xsc_eth_close(struct net_device *netdev) +{ + return 0; +} + +static int xsc_eth_set_hw_mtu(struct xsc_core_device *xdev, u16 mtu, u16 rx_buf_sz) +{ + struct xsc_set_mtu_mbox_in in; + struct xsc_set_mtu_mbox_out out; + int ret; + + memset(&in, 0, sizeof(struct xsc_set_mtu_mbox_in)); + memset(&out, 0, sizeof(struct xsc_set_mtu_mbox_out)); + + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_SET_MTU); + in.mtu = cpu_to_be16(mtu); + in.rx_buf_sz_min = cpu_to_be16(rx_buf_sz); + in.mac_port = xdev->mac_port; + + ret = xsc_cmd_exec(xdev, &in, sizeof(struct xsc_set_mtu_mbox_in), &out, + sizeof(struct xsc_set_mtu_mbox_out)); + if (ret || out.hdr.status) { + netdev_err(((struct xsc_adapter *)xdev->eth_priv)->netdev, + "failed to set hw_mtu=%u rx_buf_sz=%u, err=%d, status=%d\n", + mtu, rx_buf_sz, ret, out.hdr.status); + ret = -ENOEXEC; + } + + return ret; +} + +static const struct net_device_ops xsc_netdev_ops = { + // TBD +}; + +static void xsc_eth_build_nic_netdev(struct xsc_adapter *adapter) +{ + struct net_device *netdev = adapter->netdev; + + /* Set up network device as normal. 
 */
+	netdev->priv_flags |= IFF_UNICAST_FLT | IFF_LIVE_ADDR_CHANGE;
+	netdev->netdev_ops = &xsc_netdev_ops;
+
+	netdev->min_mtu = SW_MIN_MTU;
+	netdev->max_mtu = SW_MAX_MTU;
+	/* mtu - mac header len - ip header len should be 8-byte aligned */
+	netdev->mtu = SW_DEFAULT_MTU;
+
+	netdev->vlan_features |= NETIF_F_SG;
+	netdev->vlan_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM;
+	netdev->vlan_features |= NETIF_F_GRO;
+	netdev->vlan_features |= NETIF_F_TSO;
+	netdev->vlan_features |= NETIF_F_TSO6;
+
+	netdev->vlan_features |= NETIF_F_RXCSUM;
+	netdev->vlan_features |= NETIF_F_RXHASH;
+	netdev->vlan_features |= NETIF_F_GSO_PARTIAL;
+
+	netdev->hw_features = netdev->vlan_features;
+
+	netdev->features |= netdev->hw_features;
+	netdev->features |= NETIF_F_HIGHDMA;
+}
+
+static int xsc_eth_nic_init(struct xsc_adapter *adapter,
+			    void *rep_priv, u32 ch_num, u32 tc_num)
+{
+	int err;
+
+	xsc_eth_build_nic_params(adapter, ch_num, tc_num);
+
+	err = xsc_eth_netdev_init(adapter);
+	if (err)
+		return err;
+
+	xsc_eth_build_nic_netdev(adapter);
+
+	return 0;
+}
+
+static void xsc_eth_nic_cleanup(struct xsc_adapter *adapter)
+{
+	destroy_workqueue(adapter->workq);
+	kfree(adapter->txq2sq);
+}
+
+static int xsc_eth_get_mac(struct xsc_core_device *xdev, u8 *mac)
+{
+	struct xsc_query_eth_mac_mbox_out *out;
+	struct xsc_query_eth_mac_mbox_in in;
+	int err;
+
+	out = kzalloc(sizeof(*out), GFP_KERNEL);
+	if (!out)
+		return -ENOMEM;
+
+	memset(&in, 0, sizeof(in));
+	in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_QUERY_ETH_MAC);
+
+	err = xsc_cmd_exec(xdev, &in, sizeof(in), out, sizeof(*out));
+	if (err || out->hdr.status) {
+		netdev_err(((struct xsc_adapter *)xdev->eth_priv)->netdev,
+			   "get mac failed! err=%d, out.status=%u\n",
+			   err, out->hdr.status);
+		err = -ENOEXEC;
+		goto exit;
+	}
+
+	memcpy(mac, out->mac, ETH_ALEN);
+
+exit:
+	kfree(out);
+
+	return err;
+}
+
+static void xsc_eth_l2_addr_init(struct xsc_adapter *adapter)
+{
+	struct net_device *netdev = adapter->netdev;
+	u8 mac[ETH_ALEN] = {0};
+	int ret;
+
+	ret = xsc_eth_get_mac(adapter->xdev, mac);
+	if (ret) {
+		netdev_err(netdev, "failed to get mac, err=%d, generating a random one\n", ret);
+		eth_random_addr(mac);
+	}
+	dev_addr_mod(netdev, 0, mac, ETH_ALEN);
+
+	if (!is_valid_ether_addr(netdev->perm_addr))
+		memcpy(netdev->perm_addr, netdev->dev_addr, netdev->addr_len);
+}
+
+static int xsc_eth_nic_enable(struct xsc_adapter *adapter)
+{
+	struct xsc_core_device *xdev = adapter->xdev;
+
+	xsc_eth_l2_addr_init(adapter);
+
+	xsc_eth_set_hw_mtu(xdev, XSC_SW2HW_MTU(adapter->nic_param.mtu),
+			   XSC_SW2HW_RX_PKT_LEN(adapter->nic_param.mtu));
+
+	rtnl_lock();
+	netif_device_attach(adapter->netdev);
+	rtnl_unlock();
+
+	return 0;
+}
+
+static void xsc_eth_nic_disable(struct xsc_adapter *adapter)
+{
+	rtnl_lock();
+	if (netif_running(adapter->netdev))
+		xsc_eth_close(adapter->netdev);
+	netif_device_detach(adapter->netdev);
+	rtnl_unlock();
+}
+
+static int xsc_attach_netdev(struct xsc_adapter *adapter)
+{
+	int err;
+
+	err = xsc_eth_nic_enable(adapter);
+	if (err)
+		return err;
+
+	return 0;
+}
+
+static void xsc_detach_netdev(struct xsc_adapter *adapter)
+{
+	xsc_eth_nic_disable(adapter);
+
+	flush_workqueue(adapter->workq);
+	adapter->status = XSCALE_ETH_DRIVER_DETACH;
+}
+
+static int xsc_eth_attach(struct xsc_core_device *xdev, struct xsc_adapter *adapter)
+{
+	int err;
+
+	if (netif_device_present(adapter->netdev))
+		return 0;
+
+	err = xsc_attach_netdev(adapter);
+	if (err)
+		return err;
+
+	return 0;
+}
+
+static void xsc_eth_detach(struct
xsc_core_device *xdev, struct xsc_adapter *adapter) +{ + if (!netif_device_present(adapter->netdev)) + return; + + xsc_detach_netdev(adapter); +} + static int xsc_eth_probe(struct auxiliary_device *adev, const struct auxiliary_device_id *adev_id) { @@ -23,6 +329,7 @@ static int xsc_eth_probe(struct auxiliary_device *adev, struct xsc_core_device *xdev = xsc_adev->xdev; struct xsc_adapter *adapter; struct net_device *netdev; + void *rep_priv = NULL; int num_chl, num_tc; int err; @@ -45,14 +352,30 @@ static int xsc_eth_probe(struct auxiliary_device *adev, adapter->xdev = xdev; xdev->eth_priv = adapter; + err = xsc_eth_nic_init(adapter, rep_priv, num_chl, num_tc); + if (err) { + netdev_err(netdev, "xsc_eth_nic_init failed, err=%d\n", err); + goto err_free_netdev; + } + + err = xsc_eth_attach(xdev, adapter); + if (err) { + netdev_err(netdev, "xsc_eth_attach failed, err=%d\n", err); + goto err_nic_cleanup; + } + err = register_netdev(netdev); if (err) { netdev_err(netdev, "register_netdev failed, err=%d\n", err); - goto err_free_netdev; + goto err_detach; } return 0; +err_detach: + xsc_eth_detach(xdev, adapter); +err_nic_cleanup: + xsc_eth_nic_cleanup(adapter); err_free_netdev: free_netdev(netdev); diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h index 0c70c0d59..1f9bae10b 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h @@ -6,11 +6,39 @@ #ifndef __XSC_ETH_H #define __XSC_ETH_H +#include "common/xsc_device.h" +#include "xsc_eth_common.h" + +enum { + XSCALE_ETH_DRIVER_INIT, + XSCALE_ETH_DRIVER_OK, + XSCALE_ETH_DRIVER_CLOSE, + XSCALE_ETH_DRIVER_DETACH, +}; + +struct xsc_rss_params { + u32 indirection_rqt[XSC_INDIR_RQT_SIZE]; + u32 rx_hash_fields[XSC_NUM_INDIR_TIRS]; + u8 toeplitz_hash_key[52]; + u8 hfunc; + u32 rss_hash_tmpl; +}; + struct xsc_adapter { struct net_device *netdev; struct pci_dev *pdev; struct device *dev; struct xsc_core_device *xdev; + + struct xsc_eth_params nic_param; + struct xsc_rss_params rss_param; + + struct workqueue_struct *workq; + + struct xsc_sq **txq2sq; + + u32 status; + struct mutex status_lock; // protect status }; #endif /* __XSC_ETH_H */ diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h index b5640f05d..997d3033c 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h @@ -6,10 +6,55 @@ #ifndef __XSC_ETH_COMMON_H #define __XSC_ETH_COMMON_H +#include "xsc_pph.h" + +#define SW_MIN_MTU ETH_MIN_MTU +#define SW_DEFAULT_MTU ETH_DATA_LEN +#define SW_MAX_MTU 9600 + +#define XSC_ETH_HW_MTU_SEND 9800 +#define XSC_ETH_HW_MTU_RECV 9800 +#define XSC_ETH_HARD_MTU (ETH_HLEN + VLAN_HLEN * 2 + ETH_FCS_LEN) +#define XSC_SW2HW_MTU(mtu) ((mtu) + XSC_ETH_HARD_MTU) +#define XSC_SW2HW_FRAG_SIZE(mtu) ((mtu) + XSC_ETH_HARD_MTU) +#define XSC_ETH_RX_MAX_HEAD_ROOM 256 +#define XSC_SW2HW_RX_PKT_LEN(mtu) ((mtu) + ETH_HLEN + XSC_ETH_RX_MAX_HEAD_ROOM) + #define XSC_LOG_INDIR_RQT_SIZE 0x8 #define XSC_INDIR_RQT_SIZE BIT(XSC_LOG_INDIR_RQT_SIZE) #define XSC_ETH_MIN_NUM_CHANNELS 2 #define XSC_ETH_MAX_NUM_CHANNELS XSC_INDIR_RQT_SIZE +struct xsc_eth_params { + u16 num_channels; + u16 max_num_ch; + u8 num_tc; + u32 mtu; + u32 hard_mtu; + u32 comp_vectors; + u32 sq_size; + u32 sq_max_size; + u8 rq_wq_type; + u32 rq_size; + u32 rq_max_size; + u32 rq_frags_size; + + u16 num_rl_txqs; + u8 rx_cqe_compress_def; + u8 
tunneled_offload_en; + u8 lro_en; + u8 tx_min_inline_mode; + u8 vlan_strip_disable; + u8 scatter_fcs_en; + u8 rx_dim_enabled; + u8 tx_dim_enabled; + u32 rx_dim_usecs_low; + u32 rx_dim_frames_low; + u32 tx_dim_usecs_low; + u32 tx_dim_frames_low; + u32 lro_timeout; + u32 pflags; +}; + #endif diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_pph.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_pph.h new file mode 100644 index 000000000..fa64f6731 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_pph.h @@ -0,0 +1,176 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + */ + +#ifndef __XSC_PPH_H +#define __XSC_PPH_H + +#define XSC_PPH_HEAD_LEN 64 + +enum { + L4_PROTO_NONE = 0, + L4_PROTO_TCP = 1, + L4_PROTO_UDP = 2, + L4_PROTO_ICMP = 3, + L4_PROTO_GRE = 4, +}; + +enum { + L3_PROTO_NONE = 0, + L3_PROTO_IP = 2, + L3_PROTO_IP6 = 3, +}; + +struct epp_pph { + u16 outer_eth_type; //2 bytes + u16 inner_eth_type; //4 bytes + + u16 rsv1:1; + u16 outer_vlan_flag:2; + u16 outer_ip_type:2; + u16 outer_ip_ofst:5; + u16 outer_ip_len:6; //6 bytes + + u16 rsv2:1; + u16 outer_tp_type:3; + u16 outer_tp_csum_flag:1; + u16 outer_tp_ofst:7; + u16 ext_tunnel_type:4; //8 bytes + + u8 tunnel_ofst; //9 bytes + u8 inner_mac_ofst; //10 bytes + + u32 rsv3:2; + u32 inner_mac_flag:1; + u32 inner_vlan_flag:2; + u32 inner_ip_type:2; + u32 inner_ip_ofst:8; + u32 inner_ip_len:6; + u32 inner_tp_type:2; + u32 inner_tp_csum_flag:1; + u32 inner_tp_ofst:8; //14 bytees + + u16 rsv4:1; + u16 payload_type:4; + u16 payload_ofst:8; + u16 pkt_type:3; //16 bytes + + u16 rsv5:2; + u16 pri:3; + u16 logical_in_port:11; + u16 vlan_info; + u8 error_bitmap:8; //21 bytes + + u8 rsv6:7; + u8 recirc_id_vld:1; + u16 recirc_id; //24 bytes + + u8 rsv7:7; + u8 recirc_data_vld:1; + u32 recirc_data; //29 bytes + + u8 rsv8:6; + u8 mark_tag_vld:2; + u16 mark_tag; //32 bytes + + u8 rsv9:4; + u8 upa_to_soc:1; + u8 upa_from_soc:1; + u8 upa_re_up_call:1; + u8 upa_pkt_drop:1; //33 bytes + + u8 ucdv; + u16 rsv10:2; + u16 pkt_len:14; //36 bytes + + u16 rsv11:2; + u16 pkt_hdr_ptr:14; //38 bytes + + u64 rsv12:5; + u64 csum_ofst:8; + u64 csum_val:29; + u64 csum_plen:14; + u64 rsv11_0:8; //46 bytes + + u64 rsv11_1; + u64 rsv11_2; + u16 rsv11_3; +}; + +#define OUTER_L3_BIT BIT(3) +#define OUTER_L4_BIT BIT(2) +#define INNER_L3_BIT BIT(1) +#define INNER_L4_BIT BIT(0) +#define OUTER_BIT (OUTER_L3_BIT | OUTER_L4_BIT) +#define INNER_BIT (INNER_L3_BIT | INNER_L4_BIT) +#define OUTER_AND_INNER (OUTER_BIT | INNER_BIT) + +#define PACKET_UNKNOWN BIT(4) + +#define EPP2SOC_PPH_EXT_TUNNEL_TYPE_OFFSET (6UL) +#define EPP2SOC_PPH_EXT_TUNNEL_TYPE_BIT_MASK (0XF00) +#define EPP2SOC_PPH_EXT_TUNNEL_TYPE_BIT_OFFSET (8) + +#define EPP2SOC_PPH_EXT_ERROR_BITMAP_OFFSET (20UL) +#define EPP2SOC_PPH_EXT_ERROR_BITMAP_BIT_MASK (0XFF) +#define EPP2SOC_PPH_EXT_ERROR_BITMAP_BIT_OFFSET (0) + +#define XSC_GET_EPP2SOC_PPH_EXT_TUNNEL_TYPE(PPH_BASE_ADDR) \ + ((*(u16 *)((u8 *)(PPH_BASE_ADDR) + EPP2SOC_PPH_EXT_TUNNEL_TYPE_OFFSET) & \ + EPP2SOC_PPH_EXT_TUNNEL_TYPE_BIT_MASK) >> EPP2SOC_PPH_EXT_TUNNEL_TYPE_BIT_OFFSET) + +#define XSC_GET_EPP2SOC_PPH_ERROR_BITMAP(PPH_BASE_ADDR) \ + ((*(u8 *)((u8 *)(PPH_BASE_ADDR) + EPP2SOC_PPH_EXT_ERROR_BITMAP_OFFSET) & \ + EPP2SOC_PPH_EXT_ERROR_BITMAP_BIT_MASK) >> EPP2SOC_PPH_EXT_ERROR_BITMAP_BIT_OFFSET) + +#define PPH_OUTER_IP_TYPE_OFF (4UL) +#define PPH_OUTER_IP_TYPE_MASK (0x3) +#define PPH_OUTER_IP_TYPE_SHIFT (11) +#define PPH_OUTER_IP_TYPE(base) \ + ((ntohs(*(u16 *)((u8 
*)(base) + PPH_OUTER_IP_TYPE_OFF)) >> \
+	 PPH_OUTER_IP_TYPE_SHIFT) & PPH_OUTER_IP_TYPE_MASK)
+
+#define PPH_OUTER_IP_OFST_OFF (4UL)
+#define PPH_OUTER_IP_OFST_MASK (0x1f)
+#define PPH_OUTER_IP_OFST_SHIFT (6)
+#define PPH_OUTER_IP_OFST(base) \
+	((ntohs(*(u16 *)((u8 *)(base) + PPH_OUTER_IP_OFST_OFF)) >> \
+	 PPH_OUTER_IP_OFST_SHIFT) & PPH_OUTER_IP_OFST_MASK)
+
+#define PPH_OUTER_IP_LEN_OFF (4UL)
+#define PPH_OUTER_IP_LEN_MASK (0x3f)
+#define PPH_OUTER_IP_LEN_SHIFT (0)
+#define PPH_OUTER_IP_LEN(base) \
+	((ntohs(*(u16 *)((u8 *)(base) + PPH_OUTER_IP_LEN_OFF)) >> \
+	 PPH_OUTER_IP_LEN_SHIFT) & PPH_OUTER_IP_LEN_MASK)
+
+#define PPH_OUTER_TP_TYPE_OFF (6UL)
+#define PPH_OUTER_TP_TYPE_MASK (0x7)
+#define PPH_OUTER_TP_TYPE_SHIFT (12)
+#define PPH_OUTER_TP_TYPE(base) \
+	((ntohs(*(u16 *)((u8 *)(base) + PPH_OUTER_TP_TYPE_OFF)) >> \
+	 PPH_OUTER_TP_TYPE_SHIFT) & PPH_OUTER_TP_TYPE_MASK)
+
+#define PPH_PAYLOAD_OFST_OFF (14UL)
+#define PPH_PAYLOAD_OFST_MASK (0xff)
+#define PPH_PAYLOAD_OFST_SHIFT (3)
+#define PPH_PAYLOAD_OFST(base) \
+	((ntohs(*(u16 *)((u8 *)(base) + PPH_PAYLOAD_OFST_OFF)) >> \
+	 PPH_PAYLOAD_OFST_SHIFT) & PPH_PAYLOAD_OFST_MASK)
+
+#define PPH_CSUM_OFST_OFF (38UL)
+#define PPH_CSUM_OFST_MASK (0xff)
+#define PPH_CSUM_OFST_SHIFT (51)
+#define PPH_CSUM_OFST(base) \
+	((be64_to_cpu(*(u64 *)((u8 *)(base) + PPH_CSUM_OFST_OFF)) >> \
+	 PPH_CSUM_OFST_SHIFT) & PPH_CSUM_OFST_MASK)
+
+#define PPH_CSUM_VAL_OFF (38UL)
+#define PPH_CSUM_VAL_MASK (0xeffffff)
+#define PPH_CSUM_VAL_SHIFT (22)
+#define PPH_CSUM_VAL(base) \
+	((be64_to_cpu(*(u64 *)((u8 *)(base) + PPH_CSUM_VAL_OFF)) >> \
+	 PPH_CSUM_VAL_SHIFT) & PPH_CSUM_VAL_MASK)
+#endif /* __XSC_PPH_H */
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h
new file mode 100644
index 000000000..8f33c78d8
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/*
+ * Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ * Copyright (c) 2015-2016, Mellanox Technologies. All rights reserved.
+ */ + +#ifndef __XSC_QUEUE_H +#define __XSC_QUEUE_H + +#include "common/xsc_core.h" + +struct xsc_sq { + struct xsc_core_qp cqp; + /* dirtied @completion */ + u16 cc; + u32 dma_fifo_cc; + + /* dirtied @xmit */ + u16 pc ____cacheline_aligned_in_smp; + u32 dma_fifo_pc; + + struct xsc_cq cq; + + /* read only */ + struct xsc_wq_cyc wq; + u32 dma_fifo_mask; + struct { + struct xsc_sq_dma *dma_fifo; + struct xsc_tx_wqe_info *wqe_info; + } db; + void __iomem *uar_map; + struct netdev_queue *txq; + u32 sqn; + u16 stop_room; + + __be32 mkey_be; + unsigned long state; + unsigned int hw_mtu; + + /* control path */ + struct xsc_wq_ctrl wq_ctrl; + struct xsc_channel *channel; + int ch_ix; + int txq_ix; + struct work_struct recover_work; +} ____cacheline_aligned_in_smp; + +#endif /* __XSC_QUEUE_H */ From patchwork Wed Jan 15 10:23:05 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xin Tian X-Patchwork-Id: 13940218 X-Patchwork-Delegate: kuba@kernel.org Received: from va-1-32.ptr.blmpb.com (va-1-32.ptr.blmpb.com [209.127.230.32]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B0BC51DB130 for ; Wed, 15 Jan 2025 10:25:29 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.127.230.32 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736936732; cv=none; b=LFeEyHocE/y/XZR9XNFz9eGORaJf4iEFWg3cMrU6Qr6nroD8Zfdyh4pfDHyfxsoF4Idu2e0FGOkmri8xfDg7x0KWVgy0Wurl72oh8nlEXyLv65F+jwaO9gtAjkEl89TpZJQSi9fjW70o1jd/00qclwzN9uIaDhTsnQYJ3n0L2GM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736936732; c=relaxed/simple; bh=Jdj+kopACie/afq4Fsg2ChSdSEpLOfQ36DfweC9Ca5k=; h=Mime-Version:Date:Content-Type:Message-Id:In-Reply-To:References: Cc:From:Subject:To; b=ZOC2o5Sddp36rVXx0kzJ/hsjTwShN70/28/u2dwRbSAzNTEVb0uuqhMwqYgjtknWmCBEloEjXpeGbm0t/vA7Zcs6E32zWS/5OAXexs/vD7DGUPxjFMX37nPVRKpM7uJYTLlex4Swj++0aWzKrKbtk+WDgFv2eQCphhjPwp5cJWQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=yunsilicon.com; spf=pass smtp.mailfrom=yunsilicon.com; dkim=pass (2048-bit key) header.d=yunsilicon.com header.i=@yunsilicon.com header.b=jWHWe/CP; arc=none smtp.client-ip=209.127.230.32 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=yunsilicon.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=yunsilicon.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=yunsilicon.com header.i=@yunsilicon.com header.b="jWHWe/CP" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; s=feishu2403070942; d=yunsilicon.com; t=1736936588; h=from:subject: mime-version:from:date:message-id:subject:to:cc:reply-to:content-type: mime-version:in-reply-to:message-id; bh=KZxFjXiBIzdoGTszIeloKFitMrBNEwVeF+5tkzFWR9M=; b=jWHWe/CPmbCeajUpqRUxbJCZztmqhdCPUSBGkXSRkFv0KIPRwOjCE88MutOBesYqCTb28s FmIFmY3v319leu/6LqvHg7vlcex7+WeuhAFm0l99mPwJOOD736ire3xShfu/KfAFUrVYmS j5NcL4I4/yDDfVxrEbeFKafgcR3fu9FQGD67B1/cXs6hoEm8/EXLr2m0I5okc/G/pW6PPE 48Uu8vxScOGVi8rUURMjIHnNw8q+ujK5XYju+ADDNu95F/FdkR2Uq8fX7JMpX3xwWRJvvG s8LWZ3gDdb7MPv39Qx1A2QJQ16BXNBITSGEmpnsx1090z5xNNgEdolABP3PVsA== Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 X-Lms-Return-Path: Date: Wed, 15 Jan 2025 18:23:05 +0800 Message-Id: 
<20250115102304.3541496-11-tianx@yunsilicon.com> X-Original-From: Xin Tian In-Reply-To: <20250115102242.3541496-1-tianx@yunsilicon.com> References: <20250115102242.3541496-1-tianx@yunsilicon.com> Cc: , , , , , , , , , From: "Xin Tian" Subject: [PATCH v3 10/14] net-next/yunsilicon: Add eth needed qp and cq apis Received: from ubuntu-liun.yunsilicon.com ([58.34.192.114]) by smtp.feishu.cn with ESMTPS; Wed, 15 Jan 2025 18:23:05 +0800 X-Mailer: git-send-email 2.25.1 To: X-Patchwork-Delegate: kuba@kernel.org Add eth needed qp and cq apis Co-developed-by: Honggang Wei Signed-off-by: Honggang Wei Co-developed-by: Lei Yan Signed-off-by: Lei Yan Signed-off-by: Xin Tian --- .../ethernet/yunsilicon/xsc/common/xsc_core.h | 18 ++ .../net/ethernet/yunsilicon/xsc/net/Makefile | 2 +- .../ethernet/yunsilicon/xsc/net/xsc_eth_wq.c | 80 ++++++++ .../ethernet/yunsilicon/xsc/net/xsc_eth_wq.h | 179 ++++++++++++++++++ .../net/ethernet/yunsilicon/xsc/pci/alloc.c | 96 ++++++++++ .../net/ethernet/yunsilicon/xsc/pci/alloc.h | 1 - drivers/net/ethernet/yunsilicon/xsc/pci/cq.c | 112 +++++++++++ drivers/net/ethernet/yunsilicon/xsc/pci/qp.c | 110 +++++++++++ 8 files changed, 596 insertions(+), 2 deletions(-) create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.c create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.h diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h index 0c9f944d8..a81f75e58 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h @@ -431,9 +431,27 @@ int xsc_core_create_resource_common(struct xsc_core_device *xdev, struct xsc_core_qp *qp); void xsc_core_destroy_resource_common(struct xsc_core_device *xdev, struct xsc_core_qp *qp); +int xsc_core_eth_create_qp(struct xsc_core_device *xdev, + struct xsc_create_qp_mbox_in *in, + int insize, u32 *p_qpn); +int xsc_core_eth_modify_qp_status(struct xsc_core_device *xdev, u32 qpn, u16 status); +int xsc_core_eth_destroy_qp(struct xsc_core_device *xdev, u32 qpn); +int xsc_core_eth_create_rss_qp_rqs(struct xsc_core_device *xdev, + struct xsc_create_multiqp_mbox_in *in, + int insize, int *p_qpn_base); +int xsc_core_eth_modify_raw_qp(struct xsc_core_device *xdev, + struct xsc_modify_raw_qp_mbox_in *in); +int xsc_core_eth_create_cq(struct xsc_core_device *xdev, struct xsc_core_cq *xcq, + struct xsc_create_cq_mbox_in *in, int insize); +int xsc_core_eth_destroy_cq(struct xsc_core_device *xdev, struct xsc_core_cq *xcq); + struct xsc_eq *xsc_core_eq_get(struct xsc_core_device *xdev, int i); int xsc_core_vector2eqn(struct xsc_core_device *xdev, int vector, int *eqn, unsigned int *irqn); +void xsc_core_fill_page_frag_array(struct xsc_frag_buf *buf, __be64 *pas, int npages); +int xsc_core_frag_buf_alloc_node(struct xsc_core_device *xdev, int size, + struct xsc_frag_buf *buf, int node); +void xsc_core_frag_buf_free(struct xsc_core_device *xdev, struct xsc_frag_buf *buf); static inline void *xsc_buf_offset(struct xsc_buf *buf, int offset) { diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile index 2811433af..697046979 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile +++ b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile @@ -6,4 +6,4 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc_eth.o -xsc_eth-y := main.o \ No newline at end of file +xsc_eth-y := main.o xsc_eth_wq.o \ No 
newline at end of file diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.c new file mode 100644 index 000000000..6bbb940db --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.c @@ -0,0 +1,80 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* + * Copyright (c) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. All + * rights reserved. + * Copyright (c) 2013-2015, Mellanox Technologies, Ltd. All rights reserved. + */ + +#include "xsc_eth_wq.h" +#include "xsc_eth.h" + +u32 xsc_wq_cyc_get_size(struct xsc_wq_cyc *wq) +{ + return (u32)wq->fbc.sz_m1 + 1; +} + +static u32 wq_get_byte_sz(u8 log_sz, u8 log_stride) +{ + return ((u32)1 << log_sz) << log_stride; +} + +int xsc_eth_cqwq_create(struct xsc_core_device *xdev, struct xsc_wq_param *param, + u8 q_log_size, u8 ele_log_size, struct xsc_cqwq *wq, + struct xsc_wq_ctrl *wq_ctrl) +{ + u8 log_wq_stride = ele_log_size; + u8 log_wq_sz = q_log_size; + int err; + + err = xsc_core_frag_buf_alloc_node(xdev, wq_get_byte_sz(log_wq_sz, log_wq_stride), + &wq_ctrl->buf, + param->buf_numa_node); + if (err) { + netdev_err(((struct xsc_adapter *)xdev->eth_priv)->netdev, + "xsc_core_frag_buf_alloc_node failed, %d\n", err); + goto err; + } + + xsc_init_fbc(wq_ctrl->buf.frags, log_wq_stride, log_wq_sz, &wq->fbc); + + wq_ctrl->xdev = xdev; + + return 0; + +err: + return err; +} + +int xsc_eth_wq_cyc_create(struct xsc_core_device *xdev, struct xsc_wq_param *param, + u8 q_log_size, u8 ele_log_size, struct xsc_wq_cyc *wq, + struct xsc_wq_ctrl *wq_ctrl) +{ + u8 log_wq_stride = ele_log_size; + u8 log_wq_sz = q_log_size; + struct xsc_frag_buf_ctrl *fbc = &wq->fbc; + int err; + + err = xsc_core_frag_buf_alloc_node(xdev, wq_get_byte_sz(log_wq_sz, log_wq_stride), + &wq_ctrl->buf, param->buf_numa_node); + if (err) { + netdev_err(((struct xsc_adapter *)xdev->eth_priv)->netdev, + "xsc_core_frag_buf_alloc_node failed, %d\n", err); + goto err; + } + + xsc_init_fbc(wq_ctrl->buf.frags, log_wq_stride, log_wq_sz, fbc); + wq->sz = xsc_wq_cyc_get_size(wq); + + wq_ctrl->xdev = xdev; + + return 0; + +err: + return err; +} + +void xsc_eth_wq_destroy(struct xsc_wq_ctrl *wq_ctrl) +{ + xsc_core_frag_buf_free(wq_ctrl->xdev, &wq_ctrl->buf); +} + diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.h new file mode 100644 index 000000000..95858e9e2 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.h @@ -0,0 +1,179 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (c) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. All + * rights reserved. + * Copyright (c) 2013-2015, Mellanox Technologies, Ltd. All rights reserved. 
+ */ + +#ifndef __XSC_WQ_H +#define __XSC_WQ_H + +#include "common/xsc_core.h" + +struct xsc_wq_param { + int buf_numa_node; + int db_numa_node; +}; + +struct xsc_wq_ctrl { + struct xsc_core_device *xdev; + struct xsc_frag_buf buf; +}; + +struct xsc_wq_cyc { + struct xsc_frag_buf_ctrl fbc; + u16 sz; + u16 wqe_ctr; + u16 cur_sz; +}; + +struct xsc_cqwq { + struct xsc_frag_buf_ctrl fbc; + __be32 *db; + u32 cc; /* consumer counter */ +}; + +enum xsc_res_type { + XSC_RES_UND = 0, + XSC_RES_RQ, + XSC_RES_SQ, + XSC_RES_MAX, +}; + +u32 xsc_wq_cyc_get_size(struct xsc_wq_cyc *wq); + +/*api for eth driver*/ +int xsc_eth_cqwq_create(struct xsc_core_device *xdev, struct xsc_wq_param *param, + u8 q_log_size, u8 ele_log_size, struct xsc_cqwq *wq, + struct xsc_wq_ctrl *wq_ctrl); + +int xsc_eth_wq_cyc_create(struct xsc_core_device *xdev, struct xsc_wq_param *param, + u8 q_log_size, u8 ele_log_size, struct xsc_wq_cyc *wq, + struct xsc_wq_ctrl *wq_ctrl); +void xsc_eth_wq_destroy(struct xsc_wq_ctrl *wq_ctrl); + +static inline void xsc_init_fbc_offset(struct xsc_buf_list *frags, + u8 log_stride, u8 log_sz, + u16 strides_offset, + struct xsc_frag_buf_ctrl *fbc) +{ + fbc->frags = frags; + fbc->log_stride = log_stride; + fbc->log_sz = log_sz; + fbc->sz_m1 = (1 << fbc->log_sz) - 1; + fbc->log_frag_strides = PAGE_SHIFT - fbc->log_stride; + fbc->frag_sz_m1 = (1 << fbc->log_frag_strides) - 1; + fbc->strides_offset = strides_offset; +} + +static inline void xsc_init_fbc(struct xsc_buf_list *frags, + u8 log_stride, u8 log_sz, + struct xsc_frag_buf_ctrl *fbc) +{ + xsc_init_fbc_offset(frags, log_stride, log_sz, 0, fbc); +} + +static inline void *xsc_frag_buf_get_wqe(struct xsc_frag_buf_ctrl *fbc, + u32 ix) +{ + unsigned int frag; + + ix += fbc->strides_offset; + frag = ix >> fbc->log_frag_strides; + + return fbc->frags[frag].buf + ((fbc->frag_sz_m1 & ix) << fbc->log_stride); +} + +static inline u32 +xsc_frag_buf_get_idx_last_contig_stride(struct xsc_frag_buf_ctrl *fbc, u32 ix) +{ + u32 last_frag_stride_idx = (ix + fbc->strides_offset) | fbc->frag_sz_m1; + + return min_t(u32, last_frag_stride_idx - fbc->strides_offset, fbc->sz_m1); +} + +static inline int xsc_wq_cyc_missing(struct xsc_wq_cyc *wq) +{ + return wq->sz - wq->cur_sz; +} + +static inline int xsc_wq_cyc_is_empty(struct xsc_wq_cyc *wq) +{ + return !wq->cur_sz; +} + +static inline void xsc_wq_cyc_push(struct xsc_wq_cyc *wq) +{ + wq->wqe_ctr++; + wq->cur_sz++; +} + +static inline void xsc_wq_cyc_push_n(struct xsc_wq_cyc *wq, u8 n) +{ + wq->wqe_ctr += n; + wq->cur_sz += n; +} + +static inline void xsc_wq_cyc_pop(struct xsc_wq_cyc *wq) +{ + wq->cur_sz--; +} + +static inline u16 xsc_wq_cyc_ctr2ix(struct xsc_wq_cyc *wq, u16 ctr) +{ + return ctr & wq->fbc.sz_m1; +} + +static inline u16 xsc_wq_cyc_get_head(struct xsc_wq_cyc *wq) +{ + return xsc_wq_cyc_ctr2ix(wq, wq->wqe_ctr); +} + +static inline u16 xsc_wq_cyc_get_tail(struct xsc_wq_cyc *wq) +{ + return xsc_wq_cyc_ctr2ix(wq, wq->wqe_ctr - wq->cur_sz); +} + +static inline void *xsc_wq_cyc_get_wqe(struct xsc_wq_cyc *wq, u16 ix) +{ + return xsc_frag_buf_get_wqe(&wq->fbc, ix); +} + +static inline u32 xsc_cqwq_ctr2ix(struct xsc_cqwq *wq, u32 ctr) +{ + return ctr & wq->fbc.sz_m1; +} + +static inline u32 xsc_cqwq_get_ci(struct xsc_cqwq *wq) +{ + return xsc_cqwq_ctr2ix(wq, wq->cc); +} + +static inline u32 xsc_cqwq_get_ctr_wrap_cnt(struct xsc_cqwq *wq, u32 ctr) +{ + return ctr >> wq->fbc.log_sz; +} + +static inline u32 xsc_cqwq_get_wrap_cnt(struct xsc_cqwq *wq) +{ + return xsc_cqwq_get_ctr_wrap_cnt(wq, wq->cc); +} + +static 
inline void xsc_cqwq_pop(struct xsc_cqwq *wq) +{ + wq->cc++; +} + +static inline u32 xsc_cqwq_get_size(struct xsc_cqwq *wq) +{ + return wq->fbc.sz_m1 + 1; +} + +static inline struct xsc_cqe *xsc_cqwq_get_wqe(struct xsc_cqwq *wq, u32 ix) +{ + struct xsc_cqe *cqe = xsc_frag_buf_get_wqe(&wq->fbc, ix); + + return cqe; +} + +#endif /* __XSC_WQ_H */ diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.c b/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.c index 3d2509459..cbad27581 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.c +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.c @@ -123,3 +123,99 @@ void xsc_fill_page_array(struct xsc_buf *buf, __be64 *pas, int npages) pas[i] = cpu_to_be64(addr); } } + +void xsc_core_fill_page_frag_array(struct xsc_frag_buf *buf, __be64 *pas, int npages) +{ + int i; + dma_addr_t addr; + int shift = PAGE_SHIFT - PAGE_SHIFT_4K; + int mask = (1 << shift) - 1; + + for (i = 0; i < npages; i++) { + addr = buf->frags[i >> shift].map + ((i & mask) << PAGE_SHIFT_4K); + pas[i] = cpu_to_be64(addr); + } +} +EXPORT_SYMBOL(xsc_core_fill_page_frag_array); + +static void *xsc_dma_zalloc_coherent_node(struct xsc_core_device *xdev, + size_t size, dma_addr_t *dma_handle, + int node) +{ + struct xsc_dev_resource *dev_res = xdev->dev_res; + struct device *device = &xdev->pdev->dev; + int original_node; + void *cpu_handle; + + /* WA for kernels that don't use numa_mem_id in alloc_pages_node */ + if (node == NUMA_NO_NODE) + node = numa_mem_id(); + + mutex_lock(&dev_res->alloc_mutex); + original_node = dev_to_node(device); + set_dev_node(device, node); + cpu_handle = dma_alloc_coherent(device, size, dma_handle, + GFP_KERNEL); + set_dev_node(device, original_node); + mutex_unlock(&dev_res->alloc_mutex); + return cpu_handle; +} + +int xsc_core_frag_buf_alloc_node(struct xsc_core_device *xdev, int size, + struct xsc_frag_buf *buf, int node) +{ + int i; + + buf->size = size; + buf->npages = DIV_ROUND_UP(size, PAGE_SIZE); + buf->page_shift = PAGE_SHIFT; + buf->frags = kcalloc(buf->npages, sizeof(struct xsc_buf_list), + GFP_KERNEL); + if (!buf->frags) + goto err_out; + + for (i = 0; i < buf->npages; i++) { + struct xsc_buf_list *frag = &buf->frags[i]; + int frag_sz = min_t(int, size, PAGE_SIZE); + + frag->buf = xsc_dma_zalloc_coherent_node(xdev, frag_sz, + &frag->map, node); + if (!frag->buf) + goto err_free_buf; + if (frag->map & ((1 << buf->page_shift) - 1)) { + dma_free_coherent(&xdev->pdev->dev, frag_sz, + buf->frags[i].buf, buf->frags[i].map); + pci_err(xdev->pdev, "unexpected map alignment: %pad, page_shift=%d\n", + &frag->map, buf->page_shift); + goto err_free_buf; + } + size -= frag_sz; + } + + return 0; + +err_free_buf: + while (i--) + dma_free_coherent(&xdev->pdev->dev, PAGE_SIZE, buf->frags[i].buf, + buf->frags[i].map); + kfree(buf->frags); +err_out: + return -ENOMEM; +} +EXPORT_SYMBOL(xsc_core_frag_buf_alloc_node); + +void xsc_core_frag_buf_free(struct xsc_core_device *xdev, struct xsc_frag_buf *buf) +{ + int size = buf->size; + int i; + + for (i = 0; i < buf->npages; i++) { + int frag_sz = min_t(int, size, PAGE_SIZE); + + dma_free_coherent(&xdev->pdev->dev, frag_sz, buf->frags[i].buf, + buf->frags[i].map); + size -= frag_sz; + } + kfree(buf->frags); +} +EXPORT_SYMBOL(xsc_core_frag_buf_free); diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.h b/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.h index 8ec465fa9..f3d9a6e0a 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.h +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.h @@ -12,5 +12,4 @@ 
int xsc_buf_alloc(struct xsc_core_device *xdev, int size, int max_direct, struct xsc_buf *buf); void xsc_buf_free(struct xsc_core_device *xdev, struct xsc_buf *buf); void xsc_fill_page_array(struct xsc_buf *buf, __be64 *pas, int npages); - #endif diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/cq.c b/drivers/net/ethernet/yunsilicon/xsc/pci/cq.c index 5cff9025c..547d5872e 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/cq.c +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/cq.c @@ -4,6 +4,7 @@ */ #include "common/xsc_core.h" +#include "common/xsc_driver.h" #include "cq.h" void xsc_cq_event(struct xsc_core_device *xdev, u32 cqn, int event_type) @@ -37,3 +38,114 @@ void xsc_init_cq_table(struct xsc_core_device *xdev) spin_lock_init(&table->lock); INIT_RADIX_TREE(&table->tree, GFP_ATOMIC); } + +static int xsc_create_cq(struct xsc_core_device *xdev, u32 *p_cqn, + struct xsc_create_cq_mbox_in *in, int insize) +{ + struct xsc_create_cq_mbox_out out; + int ret; + + memset(&out, 0, sizeof(out)); + in->hdr.opcode = cpu_to_be16(XSC_CMD_OP_CREATE_CQ); + ret = xsc_cmd_exec(xdev, in, insize, &out, sizeof(out)); + if (ret || out.hdr.status) { + pci_err(xdev->pdev, "failed to create cq, err=%d out.status=%u\n", + ret, out.hdr.status); + return -ENOEXEC; + } + + *p_cqn = be32_to_cpu(out.cqn) & 0xffffff; + return 0; +} + +static int xsc_destroy_cq(struct xsc_core_device *xdev, u32 cqn) +{ + struct xsc_destroy_cq_mbox_in in; + struct xsc_destroy_cq_mbox_out out; + int ret; + + memset(&in, 0, sizeof(in)); + memset(&out, 0, sizeof(out)); + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_DESTROY_CQ); + in.cqn = cpu_to_be32(cqn); + ret = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out)); + if (ret || out.hdr.status) { + pci_err(xdev->pdev, "failed to destroy cq, err=%d out.status=%u\n", + ret, out.hdr.status); + return -ENOEXEC; + } + + return 0; +} + +int xsc_core_eth_create_cq(struct xsc_core_device *xdev, struct xsc_core_cq *xcq, + struct xsc_create_cq_mbox_in *in, int insize) +{ + struct xsc_cq_table *table = &xdev->dev_res->cq_table; + u32 cqn; + int ret; + int err; + + ret = xsc_create_cq(xdev, &cqn, in, insize); + if (ret) { + pci_err(xdev->pdev, "xsc_create_cq failed\n"); + return -ENOEXEC; + } + xcq->cqn = cqn; + xcq->cons_index = 0; + xcq->arm_sn = 0; + atomic_set(&xcq->refcount, 1); + init_completion(&xcq->free); + + spin_lock_irq(&table->lock); + ret = radix_tree_insert(&table->tree, xcq->cqn, xcq); + spin_unlock_irq(&table->lock); + if (ret) + goto err_insert_cq; + return 0; +err_insert_cq: + err = xsc_destroy_cq(xdev, cqn); + if (err) + pci_err(xdev->pdev, "failed to destroy cqn=%d, err=%d\n", xcq->cqn, err); + return ret; +} +EXPORT_SYMBOL(xsc_core_eth_create_cq); + +int xsc_core_eth_destroy_cq(struct xsc_core_device *xdev, struct xsc_core_cq *xcq) +{ + struct xsc_cq_table *table = &xdev->dev_res->cq_table; + struct xsc_core_cq *tmp; + int err; + + spin_lock_irq(&table->lock); + tmp = radix_tree_delete(&table->tree, xcq->cqn); + spin_unlock_irq(&table->lock); + if (!tmp) { + err = -ENOENT; + goto err_delete_cq; + } + + if (tmp != xcq) { + err = -EINVAL; + goto err_delete_cq; + } + + err = xsc_destroy_cq(xdev, xcq->cqn); + if (err) + goto err_destroy_cq; + + if (atomic_dec_and_test(&xcq->refcount)) + complete(&xcq->free); + wait_for_completion(&xcq->free); + return 0; + +err_destroy_cq: + pci_err(xdev->pdev, "failed to destroy cqn=%d, err=%d\n", + xcq->cqn, err); + return err; +err_delete_cq: + pci_err(xdev->pdev, "cqn=%d not found in tree, err=%d\n", + xcq->cqn, err); + return err; +} 
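+/* Usage sketch for the CQ API pair above (hypothetical caller; "in" and
+ * "inlen" stand for a caller-built XSC_CMD_OP_CREATE_CQ mailbox and its
+ * length, as xsc_eth_set_cq() builds on the Ethernet side):
+ *
+ *	struct xsc_core_cq xcq = {};
+ *	int err;
+ *
+ *	err = xsc_core_eth_create_cq(xdev, &xcq, in, inlen);
+ *	if (!err)
+ *		err = xsc_core_eth_destroy_cq(xdev, &xcq);
+ */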
+EXPORT_SYMBOL(xsc_core_eth_destroy_cq);
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/qp.c b/drivers/net/ethernet/yunsilicon/xsc/pci/qp.c
index f08c0e34f..06ab4db24 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/pci/qp.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/qp.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include "common/xsc_core.h"
+#include "common/xsc_driver.h"
 #include "qp.h"
 
 int xsc_core_create_resource_common(struct xsc_core_device *xdev,
@@ -77,3 +78,112 @@ void xsc_init_qp_table(struct xsc_core_device *xdev)
 	spin_lock_init(&table->lock);
 	INIT_RADIX_TREE(&table->tree, GFP_ATOMIC);
 }
+
+int xsc_core_eth_create_qp(struct xsc_core_device *xdev,
+			   struct xsc_create_qp_mbox_in *in,
+			   int insize, u32 *p_qpn)
+{
+	struct xsc_create_qp_mbox_out out;
+	int ret;
+
+	in->hdr.opcode = cpu_to_be16(XSC_CMD_OP_CREATE_QP);
+	ret = xsc_cmd_exec(xdev, in, insize, &out, sizeof(out));
+	if (ret || out.hdr.status) {
+		pci_err(xdev->pdev, "failed to create qp, err=%d out.status=%u\n",
+			ret, out.hdr.status);
+		return -ENOEXEC;
+	}
+
+	*p_qpn = be32_to_cpu(out.qpn) & 0xffffff;
+
+	return 0;
+}
+EXPORT_SYMBOL(xsc_core_eth_create_qp);
+
+int xsc_core_eth_modify_qp_status(struct xsc_core_device *xdev, u32 qpn, u16 status)
+{
+	struct xsc_modify_qp_mbox_in in;
+	struct xsc_modify_qp_mbox_out out;
+	int ret;
+
+	memset(&in, 0, sizeof(in));
+	memset(&out, 0, sizeof(out));
+
+	/* "status" doubles as the modify opcode, e.g. XSC_CMD_OP_2RST_QP */
+	in.hdr.opcode = cpu_to_be16(status);
+	in.qpn = cpu_to_be32(qpn);
+	in.no_need_wait = 1;
+
+	ret = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out));
+	if (ret || out.hdr.status != 0) {
+		pci_err(xdev->pdev, "failed to modify qp %u status=%u, err=%d out.status %u\n",
+			qpn, status, ret, out.hdr.status);
+		ret = -ENOEXEC;
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(xsc_core_eth_modify_qp_status);
+
+int xsc_core_eth_destroy_qp(struct xsc_core_device *xdev, u32 qpn)
+{
+	struct xsc_destroy_qp_mbox_in in;
+	struct xsc_destroy_qp_mbox_out out;
+	int err;
+
+	err = xsc_core_eth_modify_qp_status(xdev, qpn, XSC_CMD_OP_2RST_QP);
+	if (err) {
+		pci_err(xdev->pdev, "failed to set qp%d status=rst, err=%d\n", qpn, err);
+		return err;
+	}
+
+	memset(&in, 0, sizeof(in));
+	memset(&out, 0, sizeof(out));
+	in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_DESTROY_QP);
+	in.qpn = cpu_to_be32(qpn);
+	err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out));
+	if (err || out.hdr.status) {
+		pci_err(xdev->pdev, "failed to destroy qp%d, err=%d out.status=%u\n",
+			qpn, err, out.hdr.status);
+		return -ENOEXEC;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(xsc_core_eth_destroy_qp);
+
+int xsc_core_eth_modify_raw_qp(struct xsc_core_device *xdev, struct xsc_modify_raw_qp_mbox_in *in)
+{
+	struct xsc_modify_raw_qp_mbox_out out;
+	int ret;
+
+	in->hdr.opcode = cpu_to_be16(XSC_CMD_OP_MODIFY_RAW_QP);
+
+	ret = xsc_cmd_exec(xdev, in, sizeof(struct xsc_modify_raw_qp_mbox_in),
+			   &out, sizeof(struct xsc_modify_raw_qp_mbox_out));
+	if (ret || out.hdr.status) {
+		pci_err(xdev->pdev, "failed to modify raw qp, err=%d out.status=%u\n",
+			ret, out.hdr.status);
+		return -ENOEXEC;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(xsc_core_eth_modify_raw_qp);
+
+int xsc_core_eth_create_rss_qp_rqs(struct xsc_core_device *xdev,
+				   struct xsc_create_multiqp_mbox_in *in,
+				   int insize, int *p_qpn_base)
+{
+	int ret;
+	struct xsc_create_multiqp_mbox_out out;
+
+	in->hdr.opcode = cpu_to_be16(XSC_CMD_OP_CREATE_MULTI_QP);
+	ret = xsc_cmd_exec(xdev, in, insize, &out, sizeof(out));
+	if (ret || out.hdr.status) {
+		pci_err(xdev->pdev,
+			"failed to create rss rq, qp_num=%d, type=%d, err=%d out.status=%u\n",
+			in->qp_num, in->qp_type, ret, out.hdr.status);
+		return -ENOEXEC;
+
} + + *p_qpn_base = be32_to_cpu(out.qpn_base) & 0xffffff; + return 0; +} +EXPORT_SYMBOL(xsc_core_eth_create_rss_qp_rqs); From patchwork Wed Jan 15 10:23:08 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xin Tian X-Patchwork-Id: 13940219 X-Patchwork-Delegate: kuba@kernel.org Received: from va-1-32.ptr.blmpb.com (va-1-32.ptr.blmpb.com [209.127.230.32]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 070BD248166 for ; Wed, 15 Jan 2025 10:25:35 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.127.230.32 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736936740; cv=none; b=fgPdVGFvQPYE8+5LDvQ/0u/nViMJDMG9YRch+DkvmZU1j2FwbRAd0Pg3bhSyfJRYhGVVGxudrsh6HYqpBMPh7s9L5GmiTiiFfpCRa0IYsP5gcQXElfRyhSi3kk1+iFjUicaZTtj59uJkjF3Dkm1P8yOLgWApoUZRjxiExqcRW54= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736936740; c=relaxed/simple; bh=35fU2vRXSXtKM23/59MExj7W8JQeUhuD6PRSszyGZzA=; h=To:Content-Type:From:Message-Id:Mime-Version:Subject:Date: In-Reply-To:Cc:References; b=K/T+2xsnSIjPPaeIKf2DdUsGtP0aNcZwK8iH7gDWpG0xrvUMKa330HTHkeI6Dhxd27D5gNJUJbopqxwZc6FDR43xHakSFCMVFMsH6hYSE2ORbm7xC05zsbwlY/XSwIJMG6WlQtK5TIQpH8kGfhqlxhuuYmb+PY3hCuIKCpaOb5k= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=yunsilicon.com; spf=pass smtp.mailfrom=yunsilicon.com; dkim=pass (2048-bit key) header.d=yunsilicon.com header.i=@yunsilicon.com header.b=dqYzdDT/; arc=none smtp.client-ip=209.127.230.32 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=yunsilicon.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=yunsilicon.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=yunsilicon.com header.i=@yunsilicon.com header.b="dqYzdDT/" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; s=feishu2403070942; d=yunsilicon.com; t=1736936590; h=from:subject: mime-version:from:date:message-id:subject:to:cc:reply-to:content-type: mime-version:in-reply-to:message-id; bh=VmNPn7gnb9rzEk2V/KwFxQt8UzJvizu3Fx+sVf8IbsA=; b=dqYzdDT/7ouJIwNw0VsuwdO39AybYFJDoHYulVGJvoyUc60UcjTC5xQ45BzL2AA+rINXh1 nxTXQL8sTkfqwbHvgsrfyN1Pg9+FrhBhIU7DzjMeftKrKIV8Zm9LhaQdGgC6/ttyaN0yM1 iIDmkTd0eTyexgwPwALC8lHpcWOo9oxeBwc5EdLcDCXxnRvmalCiZ8rQyQWhMHZnPUI41y EDTrsYLgdc+1eeEMu/4E4UTmSCSLxgCwuF5Ut9AnHRN8b4EQm5BLSFyZfSj03NFN2W9ciI 9/5y4oDRUHnLB1dY0jU0zeIXhOnABBxVD+dBoUspcqkng5QcgK2+1Nwgv1y8xQ== To: X-Mailer: git-send-email 2.25.1 X-Lms-Return-Path: From: "Xin Tian" Message-Id: <20250115102307.3541496-12-tianx@yunsilicon.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 X-Original-From: Xin Tian Subject: [PATCH v3 11/14] net-next/yunsilicon: ndo_open and ndo_stop Date: Wed, 15 Jan 2025 18:23:08 +0800 In-Reply-To: <20250115102242.3541496-1-tianx@yunsilicon.com> Cc: , , , , , , , , , Received: from ubuntu-liun.yunsilicon.com ([58.34.192.114]) by smtp.feishu.cn with ESMTPS; Wed, 15 Jan 2025 18:23:08 +0800 References: <20250115102242.3541496-1-tianx@yunsilicon.com> X-Patchwork-Delegate: kuba@kernel.org Add ndo_open and ndo_stop Co-developed-by: Honggang Wei Signed-off-by: Honggang Wei Co-developed-by: Lei Yan Signed-off-by: Lei Yan Signed-off-by: Xin Tian --- 
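A minimal sketch of the wiring this patch adds (illustrative only;
xsc_eth_close is the stub introduced earlier in this series, and the open
handler name xsc_eth_open is assumed here):

	static const struct net_device_ops xsc_netdev_ops = {
		.ndo_open	= xsc_eth_open,
		.ndo_stop	= xsc_eth_close,
	};

The two callbacks are exercised from user space by "ip link set dev
<ifname> up" and "ip link set dev <ifname> down".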
.../ethernet/yunsilicon/xsc/common/xsc_core.h | 50 + .../yunsilicon/xsc/common/xsc_device.h | 35 + .../net/ethernet/yunsilicon/xsc/net/Makefile | 2 +- .../net/ethernet/yunsilicon/xsc/net/main.c | 1490 ++++++++++++++++- .../net/ethernet/yunsilicon/xsc/net/xsc_eth.h | 8 + .../yunsilicon/xsc/net/xsc_eth_common.h | 143 ++ .../ethernet/yunsilicon/xsc/net/xsc_eth_rx.c | 48 + .../yunsilicon/xsc/net/xsc_eth_txrx.c | 99 ++ .../yunsilicon/xsc/net/xsc_eth_txrx.h | 26 + .../ethernet/yunsilicon/xsc/net/xsc_queue.h | 145 ++ .../net/ethernet/yunsilicon/xsc/pci/Makefile | 2 +- .../net/ethernet/yunsilicon/xsc/pci/vport.c | 30 + 12 files changed, 2074 insertions(+), 4 deletions(-) create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.c create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/vport.c diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h index a81f75e58..6dced72c4 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h @@ -166,6 +166,40 @@ enum xsc_event { XSC_EVENT_TYPE_WQ_ACCESS_ERROR = 0x11,//IBV_EVENT_QP_ACCESS_ERR }; +struct xsc_cqe { + union { + u8 msg_opcode; + struct { + u8 error_code:7; + u8 is_error:1; + }; + }; + __le32 qp_id:15; + u8 rsv1:1; + u8 se:1; + u8 has_pph:1; + u8 type:1; + u8 with_immdt:1; + u8 csum_err:4; + __le32 imm_data; + __le32 msg_len; + __le32 vni; + __le64 ts:48; + __le16 wqe_id; + __le16 rsv[3]; + __le16 rsv2:15; + u8 owner:1; +}; + +union xsc_cq_doorbell { + struct{ + u32 cq_next_cid:16; + u32 cq_id:15; + u32 arm:1; + }; + u32 val; +}; + struct xsc_core_cq { u32 cqn; int cqe_sz; @@ -397,6 +431,8 @@ struct xsc_core_device { int bar_num; u8 mac_port; + u8 pcie_no; + u8 pf_id; u16 glb_func_id; u16 msix_vec_base; @@ -425,6 +461,8 @@ struct xsc_core_device { u32 fw_version_tweak; u8 fw_version_extra_flag; cpumask_var_t xps_cpumask; + + u8 user_mode; }; int xsc_core_create_resource_common(struct xsc_core_device *xdev, @@ -453,6 +491,8 @@ int xsc_core_frag_buf_alloc_node(struct xsc_core_device *xdev, int size, struct xsc_frag_buf *buf, int node); void xsc_core_frag_buf_free(struct xsc_core_device *xdev, struct xsc_frag_buf *buf); +u8 xsc_core_query_vport_state(struct xsc_core_device *xdev, u16 vport); + static inline void *xsc_buf_offset(struct xsc_buf *buf, int offset) { if (likely(BITS_PER_LONG == 64 || buf->nbufs == 1)) @@ -467,4 +507,14 @@ static inline bool xsc_fw_is_available(struct xsc_core_device *xdev) return xdev->cmd.cmd_status == XSC_CMD_STATUS_NORMAL; } +static inline void xsc_set_user_mode(struct xsc_core_device *xdev, u8 mode) +{ + xdev->user_mode = mode; +} + +static inline u8 xsc_get_user_mode(struct xsc_core_device *xdev) +{ + return xdev->user_mode; +} + #endif diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_device.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_device.h index 45ea8d2a0..154a4e027 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_device.h +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_device.h @@ -6,6 +6,22 @@ #ifndef __XSC_DEVICE_H #define __XSC_DEVICE_H +#include +#include + +/* QP type */ +enum { + XSC_QUEUE_TYPE_RDMA_RC = 0, + XSC_QUEUE_TYPE_RDMA_MAD = 1, + XSC_QUEUE_TYPE_RAW = 2, + XSC_QUEUE_TYPE_VIRTIO_NET = 3, + XSC_QUEUE_TYPE_VIRTIO_BLK = 4, + XSC_QUEUE_TYPE_RAW_TPE = 5, + 
XSC_QUEUE_TYPE_RAW_TSO = 6, + XSC_QUEUE_TYPE_RAW_TX = 7, + XSC_QUEUE_TYPE_INVALID = 0xFF, +}; + enum xsc_traffic_types { XSC_TT_IPV4, XSC_TT_IPV4_TCP, @@ -39,4 +55,23 @@ struct xsc_tirc_config { u32 rx_hash_fields; }; +enum { + XSC_HASH_FUNC_XOR = 0, + XSC_HASH_FUNC_TOP = 1, + XSC_HASH_FUNC_TOP_SYM = 2, + XSC_HASH_FUNC_RSV = 3, +}; + +static inline u8 xsc_hash_func_type(u8 hash_func) +{ + switch (hash_func) { + case ETH_RSS_HASH_TOP: + return XSC_HASH_FUNC_TOP; + case ETH_RSS_HASH_XOR: + return XSC_HASH_FUNC_XOR; + default: + return XSC_HASH_FUNC_TOP; + } +} + #endif diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile index 697046979..104ef5330 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile +++ b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile @@ -6,4 +6,4 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc_eth.o -xsc_eth-y := main.o xsc_eth_wq.o \ No newline at end of file +xsc_eth-y := main.o xsc_eth_wq.o xsc_eth_txrx.o xsc_eth_rx.o diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/main.c b/drivers/net/ethernet/yunsilicon/xsc/net/main.c index fcb30676a..163fc2f55 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/main.c +++ b/drivers/net/ethernet/yunsilicon/xsc/net/main.c @@ -7,12 +7,14 @@ #include #include #include +#include #include "common/xsc_core.h" #include "common/xsc_driver.h" #include "common/xsc_device.h" #include "common/xsc_pp.h" #include "xsc_eth_common.h" #include "xsc_eth.h" +#include "xsc_eth_txrx.h" static const struct xsc_tirc_config tirc_default_config[XSC_NUM_INDIR_TIRS] = { [XSC_TT_IPV4] = { @@ -109,6 +111,8 @@ static int xsc_eth_netdev_init(struct xsc_adapter *adapter) if (!adapter->txq2sq) goto err_out; + mutex_init(&adapter->status_lock); + adapter->workq = create_singlethread_workqueue("xsc_eth"); if (!adapter->workq) goto err_free_priv; @@ -123,9 +127,1490 @@ static int xsc_eth_netdev_init(struct xsc_adapter *adapter) return -ENOMEM; } -static int xsc_eth_close(struct net_device *netdev) +static void xsc_eth_build_queue_param(struct xsc_adapter *adapter, + struct xsc_queue_attr *attr, u8 type) +{ + struct xsc_core_device *xdev = adapter->xdev; + + if (adapter->nic_param.sq_size == 0) + adapter->nic_param.sq_size = BIT(xdev->caps.log_max_qp_depth); + if (adapter->nic_param.rq_size == 0) + adapter->nic_param.rq_size = BIT(xdev->caps.log_max_qp_depth); + + if (type == XSC_QUEUE_TYPE_EQ) { + attr->q_type = XSC_QUEUE_TYPE_EQ; + attr->ele_num = XSC_EQ_ELE_NUM; + attr->ele_size = XSC_EQ_ELE_SZ; + attr->ele_log_size = order_base_2(XSC_EQ_ELE_SZ); + attr->q_log_size = order_base_2(XSC_EQ_ELE_NUM); + } else if (type == XSC_QUEUE_TYPE_RQCQ) { + attr->q_type = XSC_QUEUE_TYPE_RQCQ; + attr->ele_num = min_t(int, XSC_RQCQ_ELE_NUM, xdev->caps.max_cqes); + attr->ele_size = XSC_RQCQ_ELE_SZ; + attr->ele_log_size = order_base_2(XSC_RQCQ_ELE_SZ); + attr->q_log_size = order_base_2(attr->ele_num); + } else if (type == XSC_QUEUE_TYPE_SQCQ) { + attr->q_type = XSC_QUEUE_TYPE_SQCQ; + attr->ele_num = min_t(int, XSC_SQCQ_ELE_NUM, xdev->caps.max_cqes); + attr->ele_size = XSC_SQCQ_ELE_SZ; + attr->ele_log_size = order_base_2(XSC_SQCQ_ELE_SZ); + attr->q_log_size = order_base_2(attr->ele_num); + } else if (type == XSC_QUEUE_TYPE_RQ) { + attr->q_type = XSC_QUEUE_TYPE_RQ; + attr->ele_num = adapter->nic_param.rq_size; + attr->ele_size = xdev->caps.recv_ds_num * XSC_RECV_WQE_DS; + attr->ele_log_size = order_base_2(attr->ele_size); + attr->q_log_size = 
order_base_2(attr->ele_num); + } else if (type == XSC_QUEUE_TYPE_SQ) { + attr->q_type = XSC_QUEUE_TYPE_SQ; + attr->ele_num = adapter->nic_param.sq_size; + attr->ele_size = xdev->caps.send_ds_num * XSC_SEND_WQE_DS; + attr->ele_log_size = order_base_2(attr->ele_size); + attr->q_log_size = order_base_2(attr->ele_num); + } +} + +static u32 xsc_rx_get_linear_frag_sz(u32 mtu) +{ + u32 byte_count = XSC_SW2HW_FRAG_SIZE(mtu); + + return XSC_SKB_FRAG_SZ(byte_count); +} + +static bool xsc_rx_is_linear_skb(u32 mtu) +{ + u32 linear_frag_sz = xsc_rx_get_linear_frag_sz(mtu); + + return linear_frag_sz <= PAGE_SIZE; +} + +static u32 xsc_get_rq_frag_info(struct xsc_rq_frags_info *frags_info, u32 mtu) +{ + u32 byte_count = XSC_SW2HW_FRAG_SIZE(mtu); + int frag_stride; + int i = 0; + + if (xsc_rx_is_linear_skb(mtu)) { + frag_stride = xsc_rx_get_linear_frag_sz(mtu); + frag_stride = roundup_pow_of_two(frag_stride); + + frags_info->arr[0].frag_size = byte_count; + frags_info->arr[0].frag_stride = frag_stride; + frags_info->num_frags = 1; + frags_info->wqe_bulk = PAGE_SIZE / frag_stride; + frags_info->wqe_bulk_min = frags_info->wqe_bulk; + goto out; + } + + if (byte_count <= DEFAULT_FRAG_SIZE) { + frags_info->arr[0].frag_size = DEFAULT_FRAG_SIZE; + frags_info->arr[0].frag_stride = DEFAULT_FRAG_SIZE; + frags_info->num_frags = 1; + } else if (byte_count <= PAGE_SIZE_4K) { + frags_info->arr[0].frag_size = PAGE_SIZE_4K; + frags_info->arr[0].frag_stride = PAGE_SIZE_4K; + frags_info->num_frags = 1; + } else if (byte_count <= (PAGE_SIZE_4K + DEFAULT_FRAG_SIZE)) { + if (PAGE_SIZE < 2 * PAGE_SIZE_4K) { + frags_info->arr[0].frag_size = PAGE_SIZE_4K; + frags_info->arr[0].frag_stride = PAGE_SIZE_4K; + frags_info->arr[1].frag_size = PAGE_SIZE_4K; + frags_info->arr[1].frag_stride = PAGE_SIZE_4K; + frags_info->num_frags = 2; + } else { + frags_info->arr[0].frag_size = 2 * PAGE_SIZE_4K; + frags_info->arr[0].frag_stride = 2 * PAGE_SIZE_4K; + frags_info->num_frags = 1; + } + } else if (byte_count <= 2 * PAGE_SIZE_4K) { + if (PAGE_SIZE < 2 * PAGE_SIZE_4K) { + frags_info->arr[0].frag_size = PAGE_SIZE_4K; + frags_info->arr[0].frag_stride = PAGE_SIZE_4K; + frags_info->arr[1].frag_size = PAGE_SIZE_4K; + frags_info->arr[1].frag_stride = PAGE_SIZE_4K; + frags_info->num_frags = 2; + } else { + frags_info->arr[0].frag_size = 2 * PAGE_SIZE_4K; + frags_info->arr[0].frag_stride = 2 * PAGE_SIZE_4K; + frags_info->num_frags = 1; + } + } else { + if (PAGE_SIZE < 4 * PAGE_SIZE_4K) { + frags_info->num_frags = roundup(byte_count, PAGE_SIZE_4K) / PAGE_SIZE_4K; + for (i = 0; i < frags_info->num_frags; i++) { + frags_info->arr[i].frag_size = PAGE_SIZE_4K; + frags_info->arr[i].frag_stride = PAGE_SIZE_4K; + } + } else { + frags_info->arr[0].frag_size = 4 * PAGE_SIZE_4K; + frags_info->arr[0].frag_stride = 4 * PAGE_SIZE_4K; + frags_info->num_frags = 1; + } + } + + if (PAGE_SIZE <= PAGE_SIZE_4K) { + frags_info->wqe_bulk_min = 4; + frags_info->wqe_bulk = max_t(u8, frags_info->wqe_bulk_min, 8); + } else if (PAGE_SIZE <= 2 * PAGE_SIZE_4K) { + frags_info->wqe_bulk = 2; + frags_info->wqe_bulk_min = frags_info->wqe_bulk; + } else { + frags_info->wqe_bulk = + PAGE_SIZE / (frags_info->num_frags * frags_info->arr[0].frag_size); + frags_info->wqe_bulk_min = frags_info->wqe_bulk; + } + +out: + frags_info->log_num_frags = order_base_2(frags_info->num_frags); + + return frags_info->num_frags * frags_info->arr[0].frag_size; +} + +static void xsc_build_rq_frags_info(struct xsc_queue_attr *attr, + struct xsc_rq_frags_info *frags_info, + struct xsc_eth_params *params) +{ + 
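/* a common 1500-byte MTU normally takes the linear path in xsc_get_rq_frag_info() (a single power-of-two-stride frag per page); only larger MTUs fall back to the multi-frag 4K layouts */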
params->rq_frags_size = xsc_get_rq_frag_info(frags_info, params->mtu); + frags_info->frags_max_num = attr->ele_size / XSC_RECV_WQE_DS; +} + +static void xsc_eth_build_channel_param(struct xsc_adapter *adapter, + struct xsc_channel_param *chl_param) +{ + xsc_eth_build_queue_param(adapter, &chl_param->rqcq_param.cq_attr, + XSC_QUEUE_TYPE_RQCQ); + chl_param->rqcq_param.wq.buf_numa_node = dev_to_node(adapter->dev); + + xsc_eth_build_queue_param(adapter, &chl_param->sqcq_param.cq_attr, + XSC_QUEUE_TYPE_SQCQ); + chl_param->sqcq_param.wq.buf_numa_node = dev_to_node(adapter->dev); + + xsc_eth_build_queue_param(adapter, &chl_param->sq_param.sq_attr, + XSC_QUEUE_TYPE_SQ); + chl_param->sq_param.wq.buf_numa_node = dev_to_node(adapter->dev); + + xsc_eth_build_queue_param(adapter, &chl_param->rq_param.rq_attr, + XSC_QUEUE_TYPE_RQ); + chl_param->rq_param.wq.buf_numa_node = dev_to_node(adapter->dev); + + xsc_build_rq_frags_info(&chl_param->rq_param.rq_attr, + &chl_param->rq_param.frags_info, + &adapter->nic_param); +} + +static void xsc_eth_cq_error_event(struct xsc_core_cq *xcq, enum xsc_event event) +{ + struct xsc_cq *xsc_cq = container_of(xcq, struct xsc_cq, xcq); + struct xsc_core_device *xdev = xsc_cq->xdev; + + if (event != XSC_EVENT_TYPE_CQ_ERROR) { + netdev_err(((struct xsc_adapter *)xdev->eth_priv)->netdev, + "Unexpected event type %d on CQ %06x\n", + event, xcq->cqn); + return; + } + + netdev_err(((struct xsc_adapter *)xdev->eth_priv)->netdev, + "Eth catch CQ ERROR: %x, cqn: %d\n", event, xcq->cqn); +} + +static void xsc_eth_completion_event(struct xsc_core_cq *xcq) +{ + struct xsc_cq *cq = container_of(xcq, struct xsc_cq, xcq); + struct xsc_core_device *xdev = cq->xdev; + struct xsc_rq *rq = NULL; + + if (unlikely(!cq->channel)) { + netdev_err(((struct xsc_adapter *)xdev->eth_priv)->netdev, + "cq%d->channel is null\n", xcq->cqn); + return; + } + + rq = &cq->channel->qp.rq[0]; + + set_bit(XSC_CHANNEL_NAPI_SCHED, &cq->channel->flags); + + if (!test_bit(XSC_ETH_RQ_STATE_ENABLED, &rq->state)) + netdev_err(((struct xsc_adapter *)xdev->eth_priv)->netdev, + "ch%d_cq%d, napi_flag=0x%lx\n", + cq->channel->chl_idx, xcq->cqn, + cq->napi->state); + + napi_schedule(cq->napi); + cq->event_ctr++; +} + +static int xsc_eth_alloc_cq(struct xsc_channel *c, struct xsc_cq *pcq, + struct xsc_cq_param *pcq_param) +{ + int ret; + struct xsc_core_device *xdev = c->adapter->xdev; + struct xsc_core_cq *core_cq = &pcq->xcq; + u32 i; + u8 q_log_size = pcq_param->cq_attr.q_log_size; + u8 ele_log_size = pcq_param->cq_attr.ele_log_size; + + pcq_param->wq.db_numa_node = cpu_to_node(c->cpu); + pcq_param->wq.buf_numa_node = cpu_to_node(c->cpu); + + ret = xsc_eth_cqwq_create(xdev, &pcq_param->wq, + q_log_size, ele_log_size, &pcq->wq, + &pcq->wq_ctrl); + if (ret) + return ret; + + core_cq->cqe_sz = pcq_param->cq_attr.ele_num; + core_cq->comp = xsc_eth_completion_event; + core_cq->event = xsc_eth_cq_error_event; + core_cq->vector = c->chl_idx; + + for (i = 0; i < xsc_cqwq_get_size(&pcq->wq); i++) { + struct xsc_cqe *cqe = xsc_cqwq_get_wqe(&pcq->wq, i); + + cqe->owner = 1; + } + pcq->xdev = xdev; + + return ret; +} + +static int xsc_eth_set_cq(struct xsc_channel *c, + struct xsc_cq *pcq, + struct xsc_cq_param *pcq_param) +{ + int ret = XSCALE_RET_SUCCESS; + struct xsc_core_device *xdev = c->adapter->xdev; + struct xsc_create_cq_mbox_in *in; + int inlen; + int eqn, irqn; + int hw_npages; + + hw_npages = DIV_ROUND_UP(pcq->wq_ctrl.buf.size, PAGE_SIZE_4K); + /*mbox size + pas size*/ + inlen = sizeof(struct xsc_create_cq_mbox_in) + + 
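/* one 64-bit PAS (physical address) entry per 4K HW page of the CQ buffer */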
sizeof(__be64) * hw_npages; + + in = kvzalloc(inlen, GFP_KERNEL); + if (!in) + return -ENOMEM; + + /*construct param of in struct*/ + ret = xsc_core_vector2eqn(xdev, c->chl_idx, &eqn, &irqn); + if (ret) + goto err; + + in->ctx.eqn = eqn; + in->ctx.eqn = cpu_to_be16(in->ctx.eqn); + in->ctx.log_cq_sz = pcq_param->cq_attr.q_log_size; + in->ctx.pa_num = cpu_to_be16(hw_npages); + in->ctx.glb_func_id = cpu_to_be16(xdev->glb_func_id); + + xsc_core_fill_page_frag_array(&pcq->wq_ctrl.buf, &in->pas[0], hw_npages); + + ret = xsc_core_eth_create_cq(c->adapter->xdev, &pcq->xcq, in, inlen); + if (ret == 0) { + pcq->xcq.irqn = irqn; + pcq->xcq.eq = xsc_core_eq_get(xdev, pcq->xcq.vector); + } + +err: + kvfree(in); + return ret; +} + +static void xsc_eth_free_cq(struct xsc_cq *cq) +{ + xsc_eth_wq_destroy(&cq->wq_ctrl); +} + +static int xsc_eth_open_cq(struct xsc_channel *c, + struct xsc_cq *pcq, + struct xsc_cq_param *pcq_param) +{ + int ret; + + ret = xsc_eth_alloc_cq(c, pcq, pcq_param); + if (ret) + return ret; + + ret = xsc_eth_set_cq(c, pcq, pcq_param); + if (ret) + goto err_set_cq; + + xsc_cq_notify_hw_rearm(pcq); + + pcq->napi = &c->napi; + pcq->channel = c; + pcq->rx = (pcq_param->cq_attr.q_type == XSC_QUEUE_TYPE_RQCQ) ? 1 : 0; + + return 0; + +err_set_cq: + xsc_eth_free_cq(pcq); + return ret; +} + +static int xsc_eth_close_cq(struct xsc_channel *c, struct xsc_cq *pcq) +{ + int ret; + struct xsc_core_device *xdev = c->adapter->xdev; + + ret = xsc_core_eth_destroy_cq(xdev, &pcq->xcq); + if (ret) { + netdev_err(((struct xsc_adapter *)xdev->eth_priv)->netdev, "failed to close ch%d cq%d, ret=%d\n", + c->chl_idx, pcq->xcq.cqn, ret); + return ret; + } + + xsc_eth_free_cq(pcq); + + return 0; +} + +static void xsc_free_qp_sq_db(struct xsc_sq *sq) +{ + kvfree(sq->db.wqe_info); + kvfree(sq->db.dma_fifo); +} + +static void xsc_free_qp_sq(struct xsc_sq *sq) +{ + xsc_free_qp_sq_db(sq); + xsc_eth_wq_destroy(&sq->wq_ctrl); +} + +static int xsc_eth_alloc_qp_sq_db(struct xsc_sq *sq, int numa) +{ + int wq_sz = xsc_wq_cyc_get_size(&sq->wq); + struct xsc_core_device *xdev = sq->cq.xdev; + int df_sz = wq_sz * xdev->caps.send_ds_num; + + sq->db.dma_fifo = kvzalloc_node(array_size(df_sz, sizeof(*sq->db.dma_fifo)), + GFP_KERNEL, numa); + sq->db.wqe_info = kvzalloc_node(array_size(wq_sz, sizeof(*sq->db.wqe_info)), + GFP_KERNEL, numa); + + if (!sq->db.dma_fifo || !sq->db.wqe_info) { + xsc_free_qp_sq_db(sq); + return -ENOMEM; + } + + sq->dma_fifo_mask = df_sz - 1; + + return 0; +} + +static void xsc_eth_qp_event(struct xsc_core_qp *qp, int type) +{ + struct xsc_rq *rq; + struct xsc_sq *sq; + struct xsc_core_device *xdev; + + if (qp->eth_queue_type == XSC_RES_RQ) { + rq = container_of(qp, struct xsc_rq, cqp); + xdev = rq->cq.xdev; + } else if (qp->eth_queue_type == XSC_RES_SQ) { + sq = container_of(qp, struct xsc_sq, cqp); + xdev = sq->cq.xdev; + } else { + pr_err("%s:Unknown eth qp type %d\n", __func__, type); + return; + } + + switch (type) { + case XSC_EVENT_TYPE_WQ_CATAS_ERROR: + case XSC_EVENT_TYPE_WQ_INVAL_REQ_ERROR: + case XSC_EVENT_TYPE_WQ_ACCESS_ERROR: + netdev_err(((struct xsc_adapter *)xdev->eth_priv)->netdev, + "%s:Async event %x on QP %d\n", __func__, type, qp->qpn); + break; + default: + netdev_err(((struct xsc_adapter *)xdev->eth_priv)->netdev, + "%s: Unexpected event type %d on QP %d\n", + __func__, type, qp->qpn); + return; + } +} + +static int xsc_eth_open_qp_sq(struct xsc_channel *c, + struct xsc_sq *psq, + struct xsc_sq_param *psq_param, + u32 sq_idx) +{ + struct xsc_adapter *adapter = c->adapter; + struct 
xsc_core_device *xdev = adapter->xdev; + u8 q_log_size = psq_param->sq_attr.q_log_size; + u8 ele_log_size = psq_param->sq_attr.ele_log_size; + struct xsc_create_qp_mbox_in *in; + struct xsc_modify_raw_qp_mbox_in *modify_in; + int hw_npages; + int inlen; + int ret; + + psq_param->wq.db_numa_node = cpu_to_node(c->cpu); + + ret = xsc_eth_wq_cyc_create(xdev, &psq_param->wq, + q_log_size, ele_log_size, &psq->wq, + &psq->wq_ctrl); + if (ret) + return ret; + + hw_npages = DIV_ROUND_UP(psq->wq_ctrl.buf.size, PAGE_SIZE_4K); + inlen = sizeof(struct xsc_create_qp_mbox_in) + + sizeof(__be64) * hw_npages; + + in = kvzalloc(inlen, GFP_KERNEL); + if (!in) { + ret = -ENOMEM; + goto err_sq_wq_destroy; + } + in->req.input_qpn = cpu_to_be16(XSC_QPN_SQN_STUB); /*no use for eth*/ + in->req.qp_type = XSC_QUEUE_TYPE_RAW_TSO; /*default sq is tso qp*/ + in->req.log_sq_sz = ilog2(xdev->caps.send_ds_num) + q_log_size; + in->req.pa_num = cpu_to_be16(hw_npages); + in->req.cqn_send = cpu_to_be16(psq->cq.xcq.cqn); + in->req.cqn_recv = in->req.cqn_send; + in->req.glb_funcid = cpu_to_be16(xdev->glb_func_id); + + xsc_core_fill_page_frag_array(&psq->wq_ctrl.buf, + &in->req.pas[0], hw_npages); + + ret = xsc_core_eth_create_qp(xdev, in, inlen, &psq->sqn); + if (ret) + goto err_sq_in_destroy; + + psq->cqp.qpn = psq->sqn; + psq->cqp.event = xsc_eth_qp_event; + psq->cqp.eth_queue_type = XSC_RES_SQ; + + ret = xsc_core_create_resource_common(xdev, &psq->cqp); + if (ret) { + netdev_err(adapter->netdev, "%s:error qp:%d errno:%d\n", + __func__, psq->sqn, ret); + goto err_sq_destroy; + } + + psq->channel = c; + psq->ch_ix = c->chl_idx; + psq->txq_ix = psq->ch_ix + sq_idx * adapter->channels.num_chl; + + /*need to querify from hardware*/ + psq->hw_mtu = XSC_ETH_HW_MTU_SEND; + psq->stop_room = 1; + + ret = xsc_eth_alloc_qp_sq_db(psq, psq_param->wq.db_numa_node); + if (ret) + goto err_sq_common_destroy; + + inlen = sizeof(struct xsc_modify_raw_qp_mbox_in); + modify_in = kvzalloc(inlen, GFP_KERNEL); + if (!modify_in) { + ret = -ENOMEM; + goto err_sq_common_destroy; + } + + modify_in->req.qp_out_port = xdev->pf_id; + modify_in->pcie_no = xdev->pcie_no; + modify_in->req.qpn = cpu_to_be16((u16)(psq->sqn)); + modify_in->req.func_id = cpu_to_be16(xdev->glb_func_id); + modify_in->req.dma_direct = XSC_DMA_DIR_TO_MAC; + modify_in->req.prio = sq_idx; + ret = xsc_core_eth_modify_raw_qp(xdev, modify_in); + if (ret) + goto err_sq_modify_in_destroy; + + kvfree(modify_in); + kvfree(in); + + return 0; + +err_sq_modify_in_destroy: + kvfree(modify_in); + +err_sq_common_destroy: + xsc_core_destroy_resource_common(xdev, &psq->cqp); + +err_sq_destroy: + xsc_core_eth_destroy_qp(xdev, psq->cqp.qpn); + +err_sq_in_destroy: + kvfree(in); + +err_sq_wq_destroy: + xsc_eth_wq_destroy(&psq->wq_ctrl); + return ret; +} + +static int xsc_eth_close_qp_sq(struct xsc_channel *c, struct xsc_sq *psq) +{ + struct xsc_core_device *xdev = c->adapter->xdev; + int ret; + + xsc_core_destroy_resource_common(xdev, &psq->cqp); + + ret = xsc_core_eth_destroy_qp(xdev, psq->cqp.qpn); + if (ret) + return ret; + + xsc_free_qp_sq(psq); + + return 0; +} + +static int xsc_eth_open_channel(struct xsc_adapter *adapter, + int idx, + struct xsc_channel *c, + struct xsc_channel_param *chl_param) +{ + int ret = 0; + struct net_device *netdev = adapter->netdev; + struct xsc_core_device *xdev = adapter->xdev; + int i, j, eqn, irqn; + const struct cpumask *aff; + + c->adapter = adapter; + c->netdev = adapter->netdev; + c->chl_idx = idx; + c->num_tc = adapter->nic_param.num_tc; + + /*1rq per channel, 
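polled as qp.rq[0] by NAPI,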
and may have multi sqs per channel*/ + c->qp.rq_num = 1; + c->qp.sq_num = c->num_tc; + + if (xdev->caps.msix_enable) { + ret = xsc_core_vector2eqn(xdev, c->chl_idx, &eqn, &irqn); + if (ret) + goto err; + aff = irq_get_affinity_mask(irqn); + c->aff_mask = aff; + c->cpu = cpumask_first(aff); + } + + if (c->qp.sq_num > XSC_MAX_NUM_TC || c->qp.rq_num > XSC_MAX_NUM_TC) { + ret = -EINVAL; + goto err; + } + + for (i = 0; i < c->qp.rq_num; i++) { + ret = xsc_eth_open_cq(c, &c->qp.rq[i].cq, &chl_param->rqcq_param); + if (ret) { + j = i - 1; + goto err_open_rq_cq; + } + } + + for (i = 0; i < c->qp.sq_num; i++) { + ret = xsc_eth_open_cq(c, &c->qp.sq[i].cq, &chl_param->sqcq_param); + if (ret) { + j = i - 1; + goto err_open_sq_cq; + } + } + + for (i = 0; i < c->qp.sq_num; i++) { + ret = xsc_eth_open_qp_sq(c, &c->qp.sq[i], &chl_param->sq_param, i); + if (ret) { + j = i - 1; + goto err_open_sq; + } + } + netif_napi_add(netdev, &c->napi, xsc_eth_napi_poll); + + netdev_dbg(adapter->netdev, "open channel%d ok\n", idx); + return 0; + +err_open_sq: + for (; j >= 0; j--) + xsc_eth_close_qp_sq(c, &c->qp.sq[j]); + j = (c->qp.sq_num - 1); +err_open_sq_cq: + for (; j >= 0; j--) + xsc_eth_close_cq(c, &c->qp.sq[j].cq); + j = (c->qp.rq_num - 1); +err_open_rq_cq: + for (; j >= 0; j--) + xsc_eth_close_cq(c, &c->qp.rq[j].cq); +err: + netdev_err(adapter->netdev, + "failed to open channel: ch%d, sq_num=%d, rq_num=%d, err=%d\n", + idx, c->qp.sq_num, c->qp.rq_num, ret); + return ret; +} + +static int xsc_eth_modify_qps_channel(struct xsc_adapter *adapter, struct xsc_channel *c) +{ + int ret = 0; + int i; + + for (i = 0; i < c->qp.rq_num; i++) { + c->qp.rq[i].post_wqes(&c->qp.rq[i]); + ret = xsc_core_eth_modify_qp_status(adapter->xdev, c->qp.rq[i].rqn, + XSC_CMD_OP_RTR2RTS_QP); + if (ret) + return ret; + } + + for (i = 0; i < c->qp.sq_num; i++) { + ret = xsc_core_eth_modify_qp_status(adapter->xdev, c->qp.sq[i].sqn, + XSC_CMD_OP_RTR2RTS_QP); + if (ret) + return ret; + } + return 0; +} + +static int xsc_eth_modify_qps(struct xsc_adapter *adapter, + struct xsc_eth_channels *chls) +{ + int ret; + int i; + + for (i = 0; i < chls->num_chl; i++) { + struct xsc_channel *c = &chls->c[i]; + + ret = xsc_eth_modify_qps_channel(adapter, c); + if (ret) + return ret; + } + + return 0; +} + +static void xsc_eth_init_frags_partition(struct xsc_rq *rq) +{ + struct xsc_wqe_frag_info next_frag = {}; + struct xsc_wqe_frag_info *prev; + int i; + + next_frag.di = &rq->wqe.di[0]; + next_frag.offset = 0; + prev = NULL; + + for (i = 0; i < xsc_wq_cyc_get_size(&rq->wqe.wq); i++) { + struct xsc_rq_frag_info *frag_info = &rq->wqe.info.arr[0]; + struct xsc_wqe_frag_info *frag = + &rq->wqe.frags[i << rq->wqe.info.log_num_frags]; + int f; + + for (f = 0; f < rq->wqe.info.num_frags; f++, frag++) { + if (next_frag.offset + frag_info[f].frag_stride > + XSC_RX_FRAG_SZ) { + next_frag.di++; + next_frag.offset = 0; + if (prev) + prev->last_in_page = 1; + } + *frag = next_frag; + + /* prepare next */ + next_frag.offset += frag_info[f].frag_stride; + prev = frag; + } + } + + if (prev) + prev->last_in_page = 1; +} + +static int xsc_eth_init_di_list(struct xsc_rq *rq, int wq_sz, int cpu) +{ + int len = wq_sz << rq->wqe.info.log_num_frags; + + rq->wqe.di = kvzalloc_node(array_size(len, sizeof(*rq->wqe.di)), + GFP_KERNEL, cpu_to_node(cpu)); + if (!rq->wqe.di) + return -ENOMEM; + + xsc_eth_init_frags_partition(rq); + + return 0; +} + +static void xsc_eth_free_di_list(struct xsc_rq *rq) +{ + kvfree(rq->wqe.di); +} + +static int xsc_eth_alloc_rq(struct xsc_channel *c, + 
struct xsc_rq *prq, + struct xsc_rq_param *prq_param) +{ + struct xsc_adapter *adapter = c->adapter; + u8 q_log_size = prq_param->rq_attr.q_log_size; + struct page_pool_params pagepool_params = { 0 }; + u32 pool_size = 1 << q_log_size; + u8 ele_log_size = prq_param->rq_attr.ele_log_size; + int wq_sz; + int i, f; + int ret = 0; + + prq_param->wq.db_numa_node = cpu_to_node(c->cpu); + + ret = xsc_eth_wq_cyc_create(c->adapter->xdev, &prq_param->wq, + q_log_size, ele_log_size, &prq->wqe.wq, + &prq->wq_ctrl); + if (ret) + return ret; + + wq_sz = xsc_wq_cyc_get_size(&prq->wqe.wq); + + prq->wqe.info = prq_param->frags_info; + prq->wqe.frags = kvzalloc_node(array_size((wq_sz << prq->wqe.info.log_num_frags), + sizeof(*prq->wqe.frags)), + GFP_KERNEL, + cpu_to_node(c->cpu)); + if (!prq->wqe.frags) { + ret = -ENOMEM; + goto err_alloc_frags; + } + + ret = xsc_eth_init_di_list(prq, wq_sz, c->cpu); + if (ret) + goto err_init_di; + + prq->buff.map_dir = DMA_FROM_DEVICE; + + /* Create a page_pool and register it with rxq */ + pool_size = wq_sz << prq->wqe.info.log_num_frags; + pagepool_params.order = XSC_RX_FRAG_SZ_ORDER; + pagepool_params.flags = 0; /* No-internal DMA mapping in page_pool */ + pagepool_params.pool_size = pool_size; + pagepool_params.nid = cpu_to_node(c->cpu); + pagepool_params.dev = c->adapter->dev; + pagepool_params.dma_dir = prq->buff.map_dir; + + prq->page_pool = page_pool_create(&pagepool_params); + if (IS_ERR(prq->page_pool)) { + ret = PTR_ERR(prq->page_pool); + prq->page_pool = NULL; + goto err_create_pool; + } + + if (c->chl_idx == 0) + netdev_dbg(adapter->netdev, + "page pool: size=%d, cpu=%d, pool_numa=%d, mtu=%d, wqe_numa=%d\n", + pool_size, c->cpu, pagepool_params.nid, + adapter->nic_param.mtu, + prq_param->wq.buf_numa_node); + + for (i = 0; i < wq_sz; i++) { + struct xsc_eth_rx_wqe_cyc *wqe = + xsc_wq_cyc_get_wqe(&prq->wqe.wq, i); + + for (f = 0; f < prq->wqe.info.num_frags; f++) { + u32 frag_size = prq->wqe.info.arr[f].frag_size; + + wqe->data[f].seg_len = cpu_to_le32(frag_size); + wqe->data[f].mkey = cpu_to_le32(XSC_INVALID_LKEY); + } + + for (; f < prq->wqe.info.frags_max_num; f++) { + wqe->data[f].seg_len = 0; + wqe->data[f].mkey = cpu_to_le32(XSC_INVALID_LKEY); + wqe->data[f].va = 0; + } + } + + prq->post_wqes = xsc_eth_post_rx_wqes; + prq->handle_rx_cqe = xsc_eth_handle_rx_cqe; + prq->dealloc_wqe = xsc_eth_dealloc_rx_wqe; + prq->wqe.skb_from_cqe = xsc_rx_is_linear_skb(adapter->nic_param.mtu) ? 
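/* pick the SKB build strategy once at setup: linear when the whole frame fits in a single frag */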
+ xsc_skb_from_cqe_linear : + xsc_skb_from_cqe_nonlinear; + prq->ix = c->chl_idx; + prq->frags_sz = adapter->nic_param.rq_frags_size; + + return 0; + +err_create_pool: + xsc_eth_free_di_list(prq); +err_init_di: + kvfree(prq->wqe.frags); +err_alloc_frags: + xsc_eth_wq_destroy(&prq->wq_ctrl); + return ret; +} + +static void xsc_free_qp_rq(struct xsc_rq *rq) +{ + kvfree(rq->wqe.frags); + kvfree(rq->wqe.di); + + if (rq->page_pool) + page_pool_destroy(rq->page_pool); + + xsc_eth_wq_destroy(&rq->wq_ctrl); +} + +static int xsc_eth_open_rss_qp_rqs(struct xsc_adapter *adapter, + struct xsc_rq_param *prq_param, + struct xsc_eth_channels *chls, + unsigned int num_chl) +{ + int ret = 0, err = 0; + struct xsc_create_multiqp_mbox_in *in; + struct xsc_create_qp_request *req; + u8 q_log_size = prq_param->rq_attr.q_log_size; + int paslen = 0; + struct xsc_rq *prq; + struct xsc_channel *c; + int rqn_base; + int inlen; + int entry_len; + int i, j, n; + int hw_npages; + + for (i = 0; i < num_chl; i++) { + c = &chls->c[i]; + + for (j = 0; j < c->qp.rq_num; j++) { + prq = &c->qp.rq[j]; + ret = xsc_eth_alloc_rq(c, prq, prq_param); + if (ret) + goto err_alloc_rqs; + + hw_npages = DIV_ROUND_UP(prq->wq_ctrl.buf.size, PAGE_SIZE_4K); + /*support different npages number smoothly*/ + entry_len = sizeof(struct xsc_create_qp_request) + + sizeof(__be64) * hw_npages; + + paslen += entry_len; + } + } + + inlen = sizeof(struct xsc_create_multiqp_mbox_in) + paslen; + in = kvzalloc(inlen, GFP_KERNEL); + if (!in) { + ret = -ENOMEM; + goto err_create_rss_rqs; + } + + in->qp_num = cpu_to_be16(num_chl); + in->qp_type = XSC_QUEUE_TYPE_RAW; + in->req_len = cpu_to_be32(inlen); + + req = (struct xsc_create_qp_request *)&in->data[0]; + n = 0; + for (i = 0; i < num_chl; i++) { + c = &chls->c[i]; + for (j = 0; j < c->qp.rq_num; j++) { + prq = &c->qp.rq[j]; + + hw_npages = DIV_ROUND_UP(prq->wq_ctrl.buf.size, PAGE_SIZE_4K); + /* no use for eth */ + req->input_qpn = cpu_to_be16(0); + req->qp_type = XSC_QUEUE_TYPE_RAW; + req->log_rq_sz = ilog2(adapter->xdev->caps.recv_ds_num) + + q_log_size; + req->pa_num = cpu_to_be16(hw_npages); + req->cqn_recv = cpu_to_be16(prq->cq.xcq.cqn); + req->cqn_send = req->cqn_recv; + req->glb_funcid = cpu_to_be16(adapter->xdev->glb_func_id); + + xsc_core_fill_page_frag_array(&prq->wq_ctrl.buf, &req->pas[0], hw_npages); + n++; + req = (struct xsc_create_qp_request *)(&in->data[0] + entry_len * n); + } + } + + ret = xsc_core_eth_create_rss_qp_rqs(adapter->xdev, in, inlen, &rqn_base); + kvfree(in); + if (ret) + goto err_create_rss_rqs; + + n = 0; + for (i = 0; i < num_chl; i++) { + c = &chls->c[i]; + for (j = 0; j < c->qp.rq_num; j++) { + prq = &c->qp.rq[j]; + prq->rqn = rqn_base + n; + prq->cqp.qpn = prq->rqn; + prq->cqp.event = xsc_eth_qp_event; + prq->cqp.eth_queue_type = XSC_RES_RQ; + ret = xsc_core_create_resource_common(adapter->xdev, &prq->cqp); + if (ret) { + err = ret; + netdev_err(adapter->netdev, + "create resource common error qp:%d errno:%d\n", + prq->rqn, ret); + continue; + } + + n++; + } + } + if (err) + return err; + + adapter->channels.rqn_base = rqn_base; + return 0; + +err_create_rss_rqs: + i = num_chl; +err_alloc_rqs: + for (--i; i >= 0; i--) { + c = &chls->c[i]; + for (j = 0; j < c->qp.rq_num; j++) { + prq = &c->qp.rq[j]; + xsc_free_qp_rq(prq); + } + } + return ret; +} + +static void xsc_eth_free_rx_wqe(struct xsc_rq *rq) +{ + u16 wqe_ix; + struct xsc_wq_cyc *wq = &rq->wqe.wq; + + while (!xsc_wq_cyc_is_empty(wq)) { + wqe_ix = xsc_wq_cyc_get_tail(wq); + rq->dealloc_wqe(rq, wqe_ix); + 
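/* advance the cyclic tail only after the WQE's frags have been released */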
xsc_wq_cyc_pop(wq); + } +} + +static int xsc_eth_close_qp_rq(struct xsc_channel *c, struct xsc_rq *prq) +{ + int ret; + struct xsc_core_device *xdev = c->adapter->xdev; + + xsc_core_destroy_resource_common(xdev, &prq->cqp); + + ret = xsc_core_eth_destroy_qp(xdev, prq->cqp.qpn); + if (ret) + return ret; + + xsc_eth_free_rx_wqe(prq); + xsc_free_qp_rq(prq); + + return 0; +} + +static void xsc_eth_close_channel(struct xsc_channel *c, bool free_rq) +{ + int i; + + for (i = 0; i < c->qp.rq_num; i++) { + if (free_rq) + xsc_eth_close_qp_rq(c, &c->qp.rq[i]); + xsc_eth_close_cq(c, &c->qp.rq[i].cq); + memset(&c->qp.rq[i], 0, sizeof(struct xsc_rq)); + } + + for (i = 0; i < c->qp.sq_num; i++) { + xsc_eth_close_qp_sq(c, &c->qp.sq[i]); + xsc_eth_close_cq(c, &c->qp.sq[i].cq); + } + + netif_napi_del(&c->napi); +} + +static int xsc_eth_open_channels(struct xsc_adapter *adapter) { + int ret = 0; + int i; + struct xsc_channel_param *chl_param; + struct xsc_eth_channels *chls = &adapter->channels; + struct xsc_core_device *xdev = adapter->xdev; + bool free_rq = false; + + chls->num_chl = adapter->nic_param.num_channels; + chls->c = kcalloc_node(chls->num_chl, sizeof(struct xsc_channel), + GFP_KERNEL, xdev->numa_node); + if (!chls->c) { + ret = -ENOMEM; + goto err; + } + + chl_param = kvzalloc(sizeof(*chl_param), GFP_KERNEL); + if (!chl_param) { + ret = -ENOMEM; + goto err_free_ch; + } + + xsc_eth_build_channel_param(adapter, chl_param); + + for (i = 0; i < chls->num_chl; i++) { + ret = xsc_eth_open_channel(adapter, i, &chls->c[i], chl_param); + if (ret) + goto err_open_channel; + } + + ret = xsc_eth_open_rss_qp_rqs(adapter, &chl_param->rq_param, chls, chls->num_chl); + if (ret) + goto err_open_channel; + free_rq = true; + + for (i = 0; i < chls->num_chl; i++) + napi_enable(&chls->c[i].napi); + + /* flush cache to memory before interrupt and napi_poll running */ + smp_wmb(); + + ret = xsc_eth_modify_qps(adapter, chls); + if (ret) + goto err_modify_qps; + + kvfree(chl_param); return 0; + +err_modify_qps: + i = chls->num_chl; +err_open_channel: + for (--i; i >= 0; i--) + xsc_eth_close_channel(&chls->c[i], free_rq); + + kvfree(chl_param); +err_free_ch: + kfree(chls->c); +err: + chls->num_chl = 0; + netdev_err(adapter->netdev, "failed to open %d channels, err=%d\n", + chls->num_chl, ret); + return ret; +} + +static void xsc_eth_close_channels(struct xsc_adapter *adapter) +{ + int i; + struct xsc_channel *c = NULL; + + for (i = 0; i < adapter->channels.num_chl; i++) { + c = &adapter->channels.c[i]; + + xsc_eth_close_channel(c, true); + } + + kfree(adapter->channels.c); + adapter->channels.num_chl = 0; +} + +static void xsc_netdev_set_tcs(struct xsc_adapter *priv, u16 nch, u8 ntc) +{ + int tc; + + netdev_reset_tc(priv->netdev); + + if (ntc == 1) + return; + + netdev_set_num_tc(priv->netdev, ntc); + + /* Map netdev TCs to offset 0 + * We have our own UP to TXQ mapping for QoS + */ + for (tc = 0; tc < ntc; tc++) + netdev_set_tc_queue(priv->netdev, tc, nch, 0); +} + +static void xsc_eth_build_tx2sq_maps(struct xsc_adapter *adapter) +{ + struct xsc_channel *c; + struct xsc_sq *psq; + int i, tc; + + for (i = 0; i < adapter->channels.num_chl; i++) { + c = &adapter->channels.c[i]; + for (tc = 0; tc < c->num_tc; tc++) { + psq = &c->qp.sq[tc]; + adapter->txq2sq[psq->txq_ix] = psq; + } + } +} + +static void xsc_eth_activate_txqsq(struct xsc_channel *c) +{ + int tc = c->num_tc; + struct xsc_sq *psq; + + for (tc = 0; tc < c->num_tc; tc++) { + psq = &c->qp.sq[tc]; + psq->txq = netdev_get_tx_queue(psq->channel->netdev, 
psq->txq_ix); + set_bit(XSC_ETH_SQ_STATE_ENABLED, &psq->state); + netdev_tx_reset_queue(psq->txq); + netif_tx_start_queue(psq->txq); + } +} + +static void xsc_eth_deactivate_txqsq(struct xsc_channel *c) +{ + int tc = c->num_tc; + struct xsc_sq *psq; + + for (tc = 0; tc < c->num_tc; tc++) { + psq = &c->qp.sq[tc]; + clear_bit(XSC_ETH_SQ_STATE_ENABLED, &psq->state); + } +} + +static void xsc_activate_rq(struct xsc_channel *c) +{ + int i; + + for (i = 0; i < c->qp.rq_num; i++) + set_bit(XSC_ETH_RQ_STATE_ENABLED, &c->qp.rq[i].state); +} + +static void xsc_deactivate_rq(struct xsc_channel *c) +{ + int i; + + for (i = 0; i < c->qp.rq_num; i++) + clear_bit(XSC_ETH_RQ_STATE_ENABLED, &c->qp.rq[i].state); +} + +static void xsc_eth_activate_channel(struct xsc_channel *c) +{ + xsc_eth_activate_txqsq(c); + xsc_activate_rq(c); +} + +static void xsc_eth_deactivate_channel(struct xsc_channel *c) +{ + xsc_deactivate_rq(c); + xsc_eth_deactivate_txqsq(c); +} + +static void xsc_eth_activate_channels(struct xsc_eth_channels *chs) +{ + int i; + + for (i = 0; i < chs->num_chl; i++) + xsc_eth_activate_channel(&chs->c[i]); +} + +static void xsc_eth_deactivate_channels(struct xsc_eth_channels *chs) +{ + int i; + + for (i = 0; i < chs->num_chl; i++) + xsc_eth_deactivate_channel(&chs->c[i]); + + /* Sync with all NAPIs to wait until they stop using queues. */ + synchronize_net(); + + for (i = 0; i < chs->num_chl; i++) + /* last doorbell out */ + napi_disable(&chs->c[i].napi); +} + +static void xsc_eth_activate_priv_channels(struct xsc_adapter *adapter) +{ + int num_txqs; + struct net_device *netdev = adapter->netdev; + + num_txqs = adapter->channels.num_chl * adapter->nic_param.num_tc; + xsc_netdev_set_tcs(adapter, adapter->channels.num_chl, adapter->nic_param.num_tc); + netif_set_real_num_tx_queues(netdev, num_txqs); + netif_set_real_num_rx_queues(netdev, adapter->channels.num_chl); + + xsc_eth_build_tx2sq_maps(adapter); + xsc_eth_activate_channels(&adapter->channels); + netif_tx_start_all_queues(adapter->netdev); +} + +static void xsc_eth_deactivate_priv_channels(struct xsc_adapter *adapter) +{ + netif_tx_disable(adapter->netdev); + xsc_eth_deactivate_channels(&adapter->channels); +} + +static int xsc_eth_sw_init(struct xsc_adapter *adapter) +{ + int ret; + + ret = xsc_eth_open_channels(adapter); + if (ret) + return ret; + + xsc_eth_activate_priv_channels(adapter); + + return 0; +} + +static void xsc_eth_sw_deinit(struct xsc_adapter *adapter) +{ + xsc_eth_deactivate_priv_channels(adapter); + + xsc_eth_close_channels(adapter); +} + +static bool xsc_eth_get_link_status(struct xsc_adapter *adapter) +{ + struct xsc_core_device *xdev = adapter->xdev; + bool link_up; + u16 vport = 0; + + link_up = xsc_core_query_vport_state(xdev, vport); + + return link_up; +} + +static int xsc_eth_change_link_status(struct xsc_adapter *adapter) +{ + bool link_up; + + link_up = xsc_eth_get_link_status(adapter); + + if (link_up && !netif_carrier_ok(adapter->netdev)) { + netdev_info(adapter->netdev, "Link up\n"); + netif_carrier_on(adapter->netdev); + } else if (!link_up && netif_carrier_ok(adapter->netdev)) { + netdev_info(adapter->netdev, "Link down\n"); + netif_carrier_off(adapter->netdev); + } + + return 0; +} + +static void xsc_eth_event_work(struct work_struct *work) +{ + int err; + struct xsc_event_query_type_mbox_in in = {}; + struct xsc_event_query_type_mbox_out out = {}; + struct xsc_adapter *adapter = container_of(work, struct xsc_adapter, event_work); + + if (adapter->status != XSCALE_ETH_DRIVER_OK) + return; + + /* ask firmware which event type is pending */ + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_QUERY_EVENT_TYPE); + + err = xsc_cmd_exec(adapter->xdev, &in, sizeof(in), &out, sizeof(out)); + if (err || out.hdr.status) { + netdev_err(adapter->netdev, "failed to query event type, err=%d, status=%d\n", + err, out.hdr.status); + goto failed; + } + + switch (out.ctx.resp_cmd_type) { + case XSC_CMD_EVENT_RESP_CHANGE_LINK: + err = xsc_eth_change_link_status(adapter); + if (err) { + netdev_err(adapter->netdev, "failed to change link status, err=%d\n", err); + goto failed; + } + break; + case XSC_CMD_EVENT_RESP_TEMP_WARN: + netdev_err(adapter->netdev, "[Minor] NIC chip temperature high warning\n"); + break; + case XSC_CMD_EVENT_RESP_OVER_TEMP_PROTECTION: + netdev_err(adapter->netdev, "[Critical] NIC chip over-temperature protection triggered\n"); + break; + default: + break; + } + +failed: + return; +} + +static void xsc_eth_event_handler(void *arg) +{ + struct xsc_adapter *adapter = (struct xsc_adapter *)arg; + + queue_work(adapter->workq, &adapter->event_work); +} + +static bool xsc_get_pct_drop_config(struct xsc_core_device *xdev) +{ + return (xdev->pdev->device == XSC_MC_PF_DEV_ID) || + (xdev->pdev->device == XSC_MF_SOC_PF_DEV_ID) || + (xdev->pdev->device == XSC_MS_PF_DEV_ID) || + (xdev->pdev->device == XSC_MV_SOC_PF_DEV_ID); +} + +static int xsc_eth_enable_nic_hca(struct xsc_adapter *adapter) +{ + struct xsc_core_device *xdev = adapter->xdev; + struct net_device *netdev = adapter->netdev; + struct xsc_cmd_enable_nic_hca_mbox_in in = {}; + struct xsc_cmd_enable_nic_hca_mbox_out out = {}; + u16 caps = 0; + u16 caps_mask = 0; + int err; + + if (xsc_get_user_mode(xdev)) + return 0; + + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_ENABLE_NIC_HCA); + + in.rss.rss_en = 1; + in.rss.rqn_base = cpu_to_be16(adapter->channels.rqn_base - + xdev->caps.raweth_rss_qp_id_base); + in.rss.rqn_num = cpu_to_be16(adapter->channels.num_chl); + in.rss.hash_tmpl = cpu_to_be32(adapter->rss_param.rss_hash_tmpl); + in.rss.hfunc = xsc_hash_func_type(adapter->rss_param.hfunc); + caps_mask |= BIT(XSC_TBM_CAP_RSS); + + if (netdev->features & NETIF_F_RXCSUM) + caps |= BIT(XSC_TBM_CAP_HASH_PPH); + caps_mask |= BIT(XSC_TBM_CAP_HASH_PPH); + + if (xsc_get_pct_drop_config(xdev) && !(netdev->flags & IFF_SLAVE)) + caps |= BIT(XSC_TBM_CAP_PCT_DROP_CONFIG); + caps_mask |= BIT(XSC_TBM_CAP_PCT_DROP_CONFIG); + + memcpy(in.nic.mac_addr, netdev->dev_addr, ETH_ALEN); + + in.nic.caps = cpu_to_be16(caps); + in.nic.caps_mask = cpu_to_be16(caps_mask); + + err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out)); + if (err || out.hdr.status) { + netdev_err(netdev, "failed to enable nic hca, 
err=%d, status=%d\n", err, out.hdr.status); + return -ENOEXEC; + } + + return 0; +} + +static int xsc_eth_disable_nic_hca(struct xsc_adapter *adapter) +{ + struct xsc_core_device *xdev = adapter->xdev; + struct net_device *netdev = adapter->netdev; + struct xsc_cmd_disable_nic_hca_mbox_in in = {}; + struct xsc_cmd_disable_nic_hca_mbox_out out = {}; + int err; + u16 caps = 0; + + if (xsc_get_user_mode(xdev)) + return 0; + + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_DISABLE_NIC_HCA); + + if (xsc_get_pct_drop_config(xdev) && !(netdev->priv_flags & IFF_BONDING)) + caps |= BIT(XSC_TBM_CAP_PCT_DROP_CONFIG); + + in.nic.caps = cpu_to_be16(caps); + err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out)); + if (err || out.hdr.status) { + netdev_err(netdev, "failed to disable nic hca, err=%d, status=%d\n", err, out.hdr.status); + return -ENOEXEC; + } + + return 0; +} + +static void xsc_set_default_xps_cpumasks(struct xsc_adapter *priv, + struct xsc_eth_params *params) +{ + struct xsc_core_device *xdev = priv->xdev; + int num_comp_vectors, irq; + + num_comp_vectors = priv->nic_param.comp_vectors; + cpumask_clear(xdev->xps_cpumask); + + for (irq = 0; irq < num_comp_vectors; irq++) { + cpumask_set_cpu(cpumask_local_spread(irq, xdev->numa_node), + xdev->xps_cpumask); + netif_set_xps_queue(priv->netdev, xdev->xps_cpumask, irq); + } +} + +static int xsc_set_port_admin_status(struct xsc_adapter *adapter, + enum xsc_port_status status) +{ + struct xsc_event_set_port_admin_status_mbox_in in = {}; + struct xsc_event_set_port_admin_status_mbox_out out = {}; + int ret = 0; + + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_SET_PORT_ADMIN_STATUS); + in.admin_status = cpu_to_be16(status); + + ret = xsc_cmd_exec(adapter->xdev, &in, sizeof(in), &out, sizeof(out)); + if (ret || out.hdr.status) { + netdev_err(adapter->netdev, "failed to set port admin status, err=%d, status=%d\n", + ret, out.hdr.status); + return -ENOEXEC; + } + + return ret; +} + +static int xsc_eth_open(struct net_device *netdev) +{ + struct xsc_adapter *adapter = netdev_priv(netdev); + struct xsc_core_device *xdev = adapter->xdev; + int ret = XSCALE_RET_SUCCESS; + + mutex_lock(&adapter->status_lock); + if (adapter->status == XSCALE_ETH_DRIVER_OK) { + netdev_err(adapter->netdev, "unexpected ndo_open when status=%d\n", + adapter->status); + goto ret; + } + + ret = xsc_eth_sw_init(adapter); + if (ret) + goto ret; + + ret = xsc_eth_enable_nic_hca(adapter); + if (ret) + goto sw_deinit; + + INIT_WORK(&adapter->event_work, xsc_eth_event_work); + xdev->event_handler = xsc_eth_event_handler; + + if (xsc_eth_get_link_status(adapter)) { + netdev_info(netdev, "Link up\n"); + netif_carrier_on(adapter->netdev); + } else { + netdev_info(netdev, "Link down\n"); + } + + adapter->status = XSCALE_ETH_DRIVER_OK; + + xsc_set_default_xps_cpumasks(adapter, &adapter->nic_param); + + xsc_set_port_admin_status(adapter, XSC_PORT_UP); + + goto ret; + +sw_deinit: + xsc_eth_sw_deinit(adapter); + +ret: + mutex_unlock(&adapter->status_lock); + return ret ? XSCALE_RET_ERROR : XSCALE_RET_SUCCESS; +} + +static int xsc_eth_close(struct net_device *netdev) +{ + struct xsc_adapter *adapter = netdev_priv(netdev); + int ret = 0; + + mutex_lock(&adapter->status_lock); + + if (!netif_device_present(netdev)) { + ret = -ENODEV; + goto ret; + } + + if (adapter->status != XSCALE_ETH_DRIVER_OK) + goto ret; + + adapter->status = XSCALE_ETH_DRIVER_CLOSE; + + netif_carrier_off(adapter->netdev); + + xsc_eth_sw_deinit(adapter); + + ret = xsc_eth_disable_nic_hca(adapter); + if (ret) + 
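/* not fatal: log it and keep tearing the port down */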
netdev_err(adapter->netdev, "failed to disable nic hca, err=%d\n", ret); + + xsc_set_port_admin_status(adapter, XSC_PORT_DOWN); + +ret: + mutex_unlock(&adapter->status_lock); + + return ret; } static int xsc_eth_set_hw_mtu(struct xsc_core_device *xdev, u16 mtu, u16 rx_buf_sz) @@ -155,7 +1640,8 @@ static int xsc_eth_set_hw_mtu(struct xsc_core_device *xdev, u16 mtu, u16 rx_buf_ } static const struct net_device_ops xsc_netdev_ops = { - // TBD + .ndo_open = xsc_eth_open, + .ndo_stop = xsc_eth_close, }; static void xsc_eth_build_nic_netdev(struct xsc_adapter *adapter) diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h index 1f9bae10b..09af22d92 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h @@ -9,6 +9,12 @@ #include "common/xsc_device.h" #include "xsc_eth_common.h" +#define XSC_INVALID_LKEY 0x100 + +#define XSCALE_DRIVER_NAME "xsc_eth" +#define XSCALE_RET_SUCCESS 0 +#define XSCALE_RET_ERROR 1 + enum { XSCALE_ETH_DRIVER_INIT, XSCALE_ETH_DRIVER_OK, @@ -34,7 +40,9 @@ struct xsc_adapter { struct xsc_rss_params rss_param; struct workqueue_struct *workq; + struct work_struct event_work; + struct xsc_eth_channels channels; struct xsc_sq **txq2sq; u32 status; diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h index 997d3033c..a402f8ff7 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h @@ -6,6 +6,7 @@ #ifndef __XSC_ETH_COMMON_H #define __XSC_ETH_COMMON_H +#include "xsc_queue.h" #include "xsc_pph.h" #define SW_MIN_MTU ETH_MIN_MTU @@ -20,12 +21,130 @@ #define XSC_ETH_RX_MAX_HEAD_ROOM 256 #define XSC_SW2HW_RX_PKT_LEN(mtu) ((mtu) + ETH_HLEN + XSC_ETH_RX_MAX_HEAD_ROOM) +#define XSC_QPN_SQN_STUB 1025 +#define XSC_QPN_RQN_STUB 1024 + #define XSC_LOG_INDIR_RQT_SIZE 0x8 #define XSC_INDIR_RQT_SIZE BIT(XSC_LOG_INDIR_RQT_SIZE) #define XSC_ETH_MIN_NUM_CHANNELS 2 #define XSC_ETH_MAX_NUM_CHANNELS XSC_INDIR_RQT_SIZE +#define XSC_TX_NUM_TC 1 +#define XSC_MAX_NUM_TC 8 +#define XSC_ETH_MAX_TC_TOTAL (XSC_ETH_MAX_NUM_CHANNELS * XSC_MAX_NUM_TC) +#define XSC_ETH_MAX_QP_NUM_PER_CH (XSC_MAX_NUM_TC + 1) + +#define XSC_SKB_FRAG_SZ(len) (SKB_DATA_ALIGN(len) + \ + SKB_DATA_ALIGN(sizeof(struct skb_shared_info))) + +#define XSC_RQCQ_ELE_SZ 32 //size of a rqcq entry +#define XSC_SQCQ_ELE_SZ 32 //size of a sqcq entry +#define XSC_RQ_ELE_SZ XSC_RECV_WQE_BB +#define XSC_SQ_ELE_SZ XSC_SEND_WQE_BB +#define XSC_EQ_ELE_SZ 8 //size of a eq entry + +#define XSC_SKB_FRAG_SZ(len) (SKB_DATA_ALIGN(len) + \ + SKB_DATA_ALIGN(sizeof(struct skb_shared_info))) +#define XSC_MIN_SKB_FRAG_SZ (XSC_SKB_FRAG_SZ(XSC_RX_HEADROOM)) +#define XSC_LOG_MAX_RX_WQE_BULK \ + (ilog2(PAGE_SIZE / roundup_pow_of_two(XSC_MIN_SKB_FRAG_SZ))) + +#define XSC_MIN_LOG_RQ_SZ (1 + XSC_LOG_MAX_RX_WQE_BULK) +#define XSC_DEF_LOG_RQ_SZ 0xa +#define XSC_MAX_LOG_RQ_SZ 0xd + +#define XSC_MIN_LOG_SQ_SZ 0x6 +#define XSC_DEF_LOG_SQ_SZ 0xa +#define XSC_MAX_LOG_SQ_SZ 0xd + +#define XSC_SQ_ELE_NUM_DEF BIT(XSC_DEF_LOG_SQ_SZ) +#define XSC_RQ_ELE_NUM_DEF BIT(XSC_DEF_LOG_RQ_SZ) + +#define XSC_SQ_ELE_NUM_DEF BIT(XSC_DEF_LOG_SQ_SZ) +#define XSC_RQ_ELE_NUM_DEF BIT(XSC_DEF_LOG_RQ_SZ) + +#define XSC_LOG_RQCQ_SZ 0xb +#define XSC_LOG_SQCQ_SZ 0xa + +#define XSC_RQCQ_ELE_NUM BIT(XSC_LOG_RQCQ_SZ) +#define XSC_SQCQ_ELE_NUM BIT(XSC_LOG_SQCQ_SZ) +#define XSC_RQ_ELE_NUM XSC_RQ_ELE_NUM_DEF //ds number of a wqebb +#define 
XSC_SQ_ELE_NUM XSC_SQ_ELE_NUM_DEF //DS number +#define XSC_EQ_ELE_NUM XSC_SQ_ELE_NUM_DEF //number of eq entry??? + +enum xsc_port_status { + XSC_PORT_DOWN = 0, + XSC_PORT_UP = 1, +}; + +enum xsc_queue_type { + XSC_QUEUE_TYPE_EQ = 0, + XSC_QUEUE_TYPE_RQCQ, + XSC_QUEUE_TYPE_SQCQ, + XSC_QUEUE_TYPE_RQ, + XSC_QUEUE_TYPE_SQ, + XSC_QUEUE_TYPE_MAX, +}; + +struct xsc_queue_attr { + u8 q_type; + u32 ele_num; + u32 ele_size; + u8 ele_log_size; + u8 q_log_size; +}; + +struct xsc_eth_rx_wqe_cyc { + DECLARE_FLEX_ARRAY(struct xsc_wqe_data_seg, data); +}; + +struct xsc_eq_param { + struct xsc_queue_attr eq_attr; +}; + +struct xsc_cq_param { + struct xsc_wq_param wq; + struct cq_cmd { + u8 abc[16]; + } cqc; + struct xsc_queue_attr cq_attr; +}; + +struct xsc_rq_param { + struct xsc_wq_param wq; + struct xsc_queue_attr rq_attr; + struct xsc_rq_frags_info frags_info; +}; + +struct xsc_sq_param { + struct xsc_wq_param wq; + struct xsc_queue_attr sq_attr; +}; + +struct xsc_qp_param { + struct xsc_queue_attr qp_attr; +}; + +struct xsc_channel_param { + struct xsc_cq_param rqcq_param; + struct xsc_cq_param sqcq_param; + struct xsc_rq_param rq_param; + struct xsc_sq_param sq_param; + struct xsc_qp_param qp_param; +}; + +struct xsc_eth_qp { + u16 rq_num; + u16 sq_num; + struct xsc_rq rq[XSC_MAX_NUM_TC]; /*may be use one only*/ + struct xsc_sq sq[XSC_MAX_NUM_TC]; /*reserved to tc*/ +}; + +enum channel_flags { + XSC_CHANNEL_NAPI_SCHED = 1, +}; + struct xsc_eth_params { u16 num_channels; u16 max_num_ch; @@ -57,4 +176,28 @@ struct xsc_eth_params { u32 pflags; }; +struct xsc_channel { + /* data path */ + struct xsc_eth_qp qp; + struct napi_struct napi; + u8 num_tc; + int chl_idx; + + /*relationship*/ + struct xsc_adapter *adapter; + struct net_device *netdev; + int cpu; + unsigned long flags; + + /* data path - accessed per napi poll */ + const struct cpumask *aff_mask; + struct irq_desc *irq_desc; +} ____cacheline_aligned_in_smp; + +struct xsc_eth_channels { + struct xsc_channel *c; + unsigned int num_chl; + u32 rqn_base; +}; + #endif diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c new file mode 100644 index 000000000..72f33bb53 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c @@ -0,0 +1,48 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. All + * rights reserved. + * Copyright (c) 2015-2016, Mellanox Technologies. All rights reserved. 
+ */ + +#include "xsc_eth_txrx.h" + +struct sk_buff *xsc_skb_from_cqe_linear(struct xsc_rq *rq, + struct xsc_wqe_frag_info *wi, + u32 cqe_bcnt, u8 has_pph) +{ + // TBD + return NULL; +} + +struct sk_buff *xsc_skb_from_cqe_nonlinear(struct xsc_rq *rq, + struct xsc_wqe_frag_info *wi, + u32 cqe_bcnt, u8 has_pph) +{ + // TBD + return NULL; +} + +void xsc_eth_handle_rx_cqe(struct xsc_cqwq *cqwq, + struct xsc_rq *rq, struct xsc_cqe *cqe) +{ + // TBD +} + +int xsc_poll_rx_cq(struct xsc_cq *cq, int budget) +{ + // TBD + return 0; +} + +void xsc_eth_dealloc_rx_wqe(struct xsc_rq *rq, u16 ix) +{ + // TBD +} + +bool xsc_eth_post_rx_wqes(struct xsc_rq *rq) +{ + // TBD + return true; +} + diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.c new file mode 100644 index 000000000..caf61ec50 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.c @@ -0,0 +1,99 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + */ + +#include "xsc_eth_common.h" +#include "xsc_eth_txrx.h" + +void xsc_cq_notify_hw_rearm(struct xsc_cq *cq) +{ + union xsc_cq_doorbell db; + + db.val = 0; + db.cq_next_cid = cpu_to_le32(cq->wq.cc); + db.cq_id = cpu_to_le32(cq->xcq.cqn); + db.arm = 0; + + /* ensure doorbell record is visible to device before ringing the doorbell */ + wmb(); + writel(db.val, REG_ADDR(cq->xdev, cq->xdev->regs.complete_db)); +} + +void xsc_cq_notify_hw(struct xsc_cq *cq) +{ + struct xsc_core_device *xdev = cq->xdev; + union xsc_cq_doorbell db; + + dma_wmb(); + + db.val = 0; + db.cq_next_cid = cpu_to_le32(cq->wq.cc); + db.cq_id = cpu_to_le32(cq->xcq.cqn); + + writel(db.val, REG_ADDR(xdev, xdev->regs.complete_reg)); +} + +static bool xsc_channel_no_affinity_change(struct xsc_channel *c) +{ + int current_cpu = smp_processor_id(); + + return cpumask_test_cpu(current_cpu, c->aff_mask); +} + +static bool xsc_poll_tx_cq(struct xsc_cq *cq, int napi_budget) +{ + // TBD + return true; +} + +int xsc_eth_napi_poll(struct napi_struct *napi, int budget) +{ + struct xsc_channel *c = container_of(napi, struct xsc_channel, napi); + struct xsc_eth_params *params = &c->adapter->nic_param; + struct xsc_rq *rq = &c->qp.rq[0]; + struct xsc_sq *sq = NULL; + bool busy = false; + int work_done = 0; + int tx_budget = 0; + int i; + + rcu_read_lock(); + + clear_bit(XSC_CHANNEL_NAPI_SCHED, &c->flags); + + tx_budget = params->sq_size >> 2; + for (i = 0; i < c->num_tc; i++) + busy |= xsc_poll_tx_cq(&c->qp.sq[i].cq, tx_budget); + + /* budget=0 means: don't poll rx rings */ + if (likely(budget)) { + work_done = xsc_poll_rx_cq(&rq->cq, budget); + busy |= work_done == budget; + } + + busy |= rq->post_wqes(rq); + + if (busy) { + if (likely(xsc_channel_no_affinity_change(c))) { + rcu_read_unlock(); + return budget; + } + if (budget && work_done == budget) + work_done--; + } + + if (unlikely(!napi_complete_done(napi, work_done))) + goto out; + + for (i = 0; i < c->num_tc; i++) { + sq = &c->qp.sq[i]; + xsc_cq_notify_hw_rearm(&sq->cq); + } + + xsc_cq_notify_hw_rearm(&rq->cq); +out: + rcu_read_unlock(); + return work_done; +} + diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h new file mode 100644 index 000000000..116019a9a --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h @@ -0,0 +1,26 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology 
Co., Ltd. + * All rights reserved. + */ + +#ifndef __XSC_RXTX_H +#define __XSC_RXTX_H + +#include "xsc_eth.h" + +void xsc_cq_notify_hw_rearm(struct xsc_cq *cq); +void xsc_cq_notify_hw(struct xsc_cq *cq); +int xsc_eth_napi_poll(struct napi_struct *napi, int budget); +bool xsc_eth_post_rx_wqes(struct xsc_rq *rq); +void xsc_eth_handle_rx_cqe(struct xsc_cqwq *cqwq, + struct xsc_rq *rq, struct xsc_cqe *cqe); +void xsc_eth_dealloc_rx_wqe(struct xsc_rq *rq, u16 ix); +struct sk_buff *xsc_skb_from_cqe_linear(struct xsc_rq *rq, + struct xsc_wqe_frag_info *wi, + u32 cqe_bcnt, u8 has_pph); +struct sk_buff *xsc_skb_from_cqe_nonlinear(struct xsc_rq *rq, + struct xsc_wqe_frag_info *wi, + u32 cqe_bcnt, u8 has_pph); +int xsc_poll_rx_cq(struct xsc_cq *cq, int budget); + +#endif /* XSC_RXTX_H */ diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h index 8f33c78d8..8f63b9e0b 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h @@ -8,7 +8,152 @@ #ifndef __XSC_QUEUE_H #define __XSC_QUEUE_H +#include +#include #include "common/xsc_core.h" +#include "xsc_eth_wq.h" + +enum { + XSC_SEND_WQE_DS = 16, + XSC_SEND_WQE_BB = 64, +}; + +enum { + XSC_RECV_WQE_DS = 16, + XSC_RECV_WQE_BB = 16, +}; + +enum { + XSC_ETH_RQ_STATE_ENABLED, + XSC_ETH_RQ_STATE_AM, + XSC_ETH_RQ_STATE_CACHE_REDUCE_PENDING, +}; + +#define XSC_SEND_WQEBB_NUM_DS (XSC_SEND_WQE_BB / XSC_SEND_WQE_DS) +#define XSC_LOG_SEND_WQEBB_NUM_DS ilog2(XSC_SEND_WQEBB_NUM_DS) + +#define XSC_RECV_WQEBB_NUM_DS (XSC_RECV_WQE_BB / XSC_RECV_WQE_DS) +#define XSC_LOG_RECV_WQEBB_NUM_DS ilog2(XSC_RECV_WQEBB_NUM_DS) + +/* each ds holds one fragment in skb */ +#define XSC_MAX_RX_FRAGS 4 +#define XSC_RX_FRAG_SZ_ORDER 0 +#define XSC_RX_FRAG_SZ (PAGE_SIZE << XSC_RX_FRAG_SZ_ORDER) +#define DEFAULT_FRAG_SIZE (2048) + +enum { + XSC_ETH_SQ_STATE_ENABLED, + XSC_ETH_SQ_STATE_AM, +}; + +struct xsc_dma_info { + struct page *page; + dma_addr_t addr; +}; + +struct xsc_page_cache { + struct xsc_dma_info *page_cache; + u32 head; + u32 tail; + u32 sz; + u32 resv; +}; + +struct xsc_cq { + /* data path - accessed per cqe */ + struct xsc_cqwq wq; + + /* data path - accessed per napi poll */ + u16 event_ctr; + struct napi_struct *napi; + struct xsc_core_cq xcq; + struct xsc_channel *channel; + + /* control */ + struct xsc_core_device *xdev; + struct xsc_wq_ctrl wq_ctrl; + u8 rx; +} ____cacheline_aligned_in_smp; + +struct xsc_wqe_frag_info { + struct xsc_dma_info *di; + u32 offset; + u8 last_in_page; + u8 is_available; +}; + +struct xsc_rq_frag_info { + int frag_size; + int frag_stride; +}; + +struct xsc_rq_frags_info { + struct xsc_rq_frag_info arr[XSC_MAX_RX_FRAGS]; + u8 num_frags; + u8 log_num_frags; + u8 wqe_bulk; + u8 wqe_bulk_min; + u8 frags_max_num; +}; + +struct xsc_rq; +typedef void (*xsc_fp_handle_rx_cqe)(struct xsc_cqwq *cqwq, struct xsc_rq *rq, + struct xsc_cqe *cqe); +typedef bool (*xsc_fp_post_rx_wqes)(struct xsc_rq *rq); +typedef void (*xsc_fp_dealloc_wqe)(struct xsc_rq *rq, u16 ix); +typedef struct sk_buff * (*xsc_fp_skb_from_cqe)(struct xsc_rq *rq, + struct xsc_wqe_frag_info *wi, u32 cqe_bcnt, u8 has_pph); + +struct xsc_rq { + struct xsc_core_qp cqp; + struct { + struct xsc_wq_cyc wq; + struct xsc_wqe_frag_info *frags; + struct xsc_dma_info *di; + struct xsc_rq_frags_info info; + xsc_fp_skb_from_cqe skb_from_cqe; + } wqe; + + struct { + u16 headroom; + u8 map_dir; /* dma map direction */ + } buff; + + struct page_pool *page_pool; + struct xsc_wq_ctrl 
wq_ctrl; + struct xsc_cq cq; + u32 rqn; + int ix; + + unsigned long state; + struct work_struct recover_work; + + u32 hw_mtu; + u32 frags_sz; + + xsc_fp_handle_rx_cqe handle_rx_cqe; + xsc_fp_post_rx_wqes post_wqes; + xsc_fp_dealloc_wqe dealloc_wqe; + struct xsc_page_cache page_cache; +} ____cacheline_aligned_in_smp; + +enum xsc_dma_map_type { + XSC_DMA_MAP_SINGLE, + XSC_DMA_MAP_PAGE +}; + +struct xsc_sq_dma { + dma_addr_t addr; + u32 size; + enum xsc_dma_map_type type; +}; + +struct xsc_tx_wqe_info { + struct sk_buff *skb; + u32 num_bytes; + u8 num_wqebbs; + u8 num_dma; +}; struct xsc_sq { struct xsc_core_qp cqp; diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile index ad0ecc122..e8c13c6fd 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile @@ -6,5 +6,5 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o -xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o alloc.o eq.o pci_irq.o adev.o +xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o alloc.o eq.o pci_irq.o adev.o vport.o diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/vport.c b/drivers/net/ethernet/yunsilicon/xsc/pci/vport.c new file mode 100644 index 000000000..9a7a475da --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/vport.c @@ -0,0 +1,30 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2021 - 2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + */ + +#include "common/xsc_core.h" +#include "common/xsc_driver.h" + +u8 xsc_core_query_vport_state(struct xsc_core_device *xdev, u16 vport) +{ + struct xsc_query_vport_state_in in; + struct xsc_query_vport_state_out out; + int err; + + memset(&in, 0, sizeof(in)); + memset(&out, 0, sizeof(out)); + + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_QUERY_VPORT_STATE); + in.vport_number = cpu_to_be16(vport); + if (vport) + in.other_vport = 1; + + err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out)); + if (err || out.hdr.status) + pci_err(xdev->pdev, "failed to query vport state, err=%d, status=%d\n", + err, out.hdr.status); + + return out.state; +} +EXPORT_SYMBOL(xsc_core_query_vport_state); From patchwork Wed Jan 15 10:23:10 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xin Tian X-Patchwork-Id: 13940214 X-Patchwork-Delegate: kuba@kernel.org Received: from lf-1-33.ptr.blmpb.com (lf-1-33.ptr.blmpb.com [103.149.242.33]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6A7D91EEA42 for ; Wed, 15 Jan 2025 10:24:30 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=103.149.242.33 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736936674; cv=none; b=rPdB+uGgbPmijGrMRJXpP2e3oXYKGo6kHH8BF8w6/lhJ1Y5bIo0ebPl1eKEudZinJoY+HjM9Rle8PSoP//gbZ6oEO5jLjHysycZmwMUzOCmP0iC3Zdu1jDvmMqqaT1B0H59kqlA+B2NHrZr40iFKxXEttOsLwyVkaa9txlBsM34= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736936674; c=relaxed/simple; bh=nTbQmL9ztGrt8NCC+2P4ntTgLx0rRlwiGTjKg7bGQ/s=; h=In-Reply-To:References:From:Subject:Date:Message-Id:Cc: Mime-Version:Content-Type:To; b=oaBx27yX9f8YJ3dWAtz/WMhOOB+YRzsmU8wapSPqzT6YBoGsef9m2HdYS6iQwRorxTCrjLSnYBLSBANSRzICIgu5xWqcB+e0lrtlYtSeriRD6piPqARUuQslFZ9yCW/u4XGTWhpqznFiA6LoBiyT2zn6khZv0IsawlTpkcMXONQ= 
ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=yunsilicon.com; spf=pass smtp.mailfrom=yunsilicon.com; dkim=pass (2048-bit key) header.d=yunsilicon.com header.i=@yunsilicon.com header.b=DQuf8912; arc=none smtp.client-ip=103.149.242.33 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=yunsilicon.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=yunsilicon.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=yunsilicon.com header.i=@yunsilicon.com header.b="DQuf8912" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; s=feishu2403070942; d=yunsilicon.com; t=1736936593; h=from:subject: mime-version:from:date:message-id:subject:to:cc:reply-to:content-type: mime-version:in-reply-to:message-id; bh=HAJnhN+IOjYMpK0F6EK0sTwVgHe61PCUDEekEieksfU=; b=DQuf89122VbDa1FufamA26lKcPEKEXfl5TFtLdr4CTmZCN5i7aXLUt83bbI+/l3Uz4zzSl q5FkIV6V6PDYYWJoa6tm/8dTWxqpHUl10v6TqWVqiYQyttb0UHm2LI/lThSnR6A1TFuC6p dS9+/5UlCfJRuNzhaDb1VEYVj9s65lGXYunSYZKG3z+SyeL2GvAjL24Cu6YVTvTHSZ5Fzr e5vjG1pfI891Sr6clDoTTxccuCtuwQYq7nRYSx9zBr8nkYF22V+TaMIuIscFK2BzW7nrA9 X6mN7Ubxfc8MyXvEMJP7DCo0nR51ZyMfeVo7XE6l+q5LvzzUvEDL0DbYF5W5AA== X-Original-From: Xin Tian In-Reply-To: <20250115102242.3541496-1-tianx@yunsilicon.com> References: <20250115102242.3541496-1-tianx@yunsilicon.com> From: "Xin Tian" Subject: [PATCH v3 12/14] net-next/yunsilicon: Add ndo_start_xmit Date: Wed, 15 Jan 2025 18:23:10 +0800 Message-Id: <20250115102309.3541496-13-tianx@yunsilicon.com> Cc: , , , , , , , , , Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 X-Mailer: git-send-email 2.25.1 Received: from ubuntu-liun.yunsilicon.com ([58.34.192.114]) by smtp.feishu.cn with ESMTPS; Wed, 15 Jan 2025 18:23:10 +0800 X-Lms-Return-Path: To: X-Patchwork-Delegate: kuba@kernel.org Add ndo_start_xmit Co-developed-by: Honggang Wei Signed-off-by: Honggang Wei Co-developed-by: Lei Yan Signed-off-by: Lei Yan Signed-off-by: Xin Tian --- .../net/ethernet/yunsilicon/xsc/net/Makefile | 2 +- .../net/ethernet/yunsilicon/xsc/net/main.c | 1 + .../net/ethernet/yunsilicon/xsc/net/xsc_eth.h | 1 + .../yunsilicon/xsc/net/xsc_eth_common.h | 8 + .../ethernet/yunsilicon/xsc/net/xsc_eth_tx.c | 290 ++++++++++++++++++ .../yunsilicon/xsc/net/xsc_eth_txrx.h | 36 +++ .../ethernet/yunsilicon/xsc/net/xsc_queue.h | 7 + 7 files changed, 344 insertions(+), 1 deletion(-) create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_tx.c diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile index 104ef5330..7cfc2aaa2 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile +++ b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile @@ -6,4 +6,4 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc_eth.o -xsc_eth-y := main.o xsc_eth_wq.o xsc_eth_txrx.o xsc_eth_rx.o +xsc_eth-y := main.o xsc_eth_wq.o xsc_eth_txrx.o xsc_eth_tx.o xsc_eth_rx.o diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/main.c b/drivers/net/ethernet/yunsilicon/xsc/net/main.c index 163fc2f55..b52f0db29 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/main.c +++ b/drivers/net/ethernet/yunsilicon/xsc/net/main.c @@ -1642,6 +1642,7 @@ static int xsc_eth_set_hw_mtu(struct xsc_core_device *xdev, u16 mtu, u16 rx_buf_ static const struct net_device_ops xsc_netdev_ops = { .ndo_open = xsc_eth_open, .ndo_stop = 
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/main.c b/drivers/net/ethernet/yunsilicon/xsc/net/main.c
index 163fc2f55..b52f0db29 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/main.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/main.c
@@ -1642,6 +1642,7 @@ static int xsc_eth_set_hw_mtu(struct xsc_core_device *xdev, u16 mtu, u16 rx_buf_
 static const struct net_device_ops xsc_netdev_ops = {
     .ndo_open = xsc_eth_open,
     .ndo_stop = xsc_eth_close,
+    .ndo_start_xmit = xsc_eth_xmit_start,
 };
 
 static void xsc_eth_build_nic_netdev(struct xsc_adapter *adapter)
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
index 09af22d92..87e2a72d3 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
@@ -6,6 +6,7 @@
 #ifndef __XSC_ETH_H
 #define __XSC_ETH_H
 
+#include
 #include "common/xsc_device.h"
 #include "xsc_eth_common.h"
 
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
index a402f8ff7..5fc81a3f6 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
@@ -200,4 +200,12 @@ struct xsc_eth_channels {
     u32 rqn_base;
 };
 
+union xsc_send_doorbell {
+    struct {
+        s32 next_pid : 16;
+        u32 qp_num : 15;
+    };
+    u32 send_data;
+};
+
 #endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_tx.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_tx.c
new file mode 100644
index 000000000..bd9c4e1c0
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_tx.c
@@ -0,0 +1,290 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#include
+#include "xsc_eth.h"
+#include "xsc_eth_txrx.h"
+
+#define XSC_OPCODE_RAW 7
+
+static void xsc_dma_push(struct xsc_sq *sq, dma_addr_t addr, u32 size,
+                         enum xsc_dma_map_type map_type)
+{
+    struct xsc_sq_dma *dma = xsc_dma_get(sq, sq->dma_fifo_pc++);
+
+    dma->addr = addr;
+    dma->size = size;
+    dma->type = map_type;
+}
+
+static void xsc_dma_unmap_wqe_err(struct xsc_sq *sq, u8 num_dma)
+{
+    struct xsc_adapter *adapter = sq->channel->adapter;
+    struct device *dev = adapter->dev;
+    int i;
+
+    for (i = 0; i < num_dma; i++) {
+        struct xsc_sq_dma *last_pushed_dma = xsc_dma_get(sq, --sq->dma_fifo_pc);
+
+        xsc_tx_dma_unmap(dev, last_pushed_dma);
+    }
+}
+
+static void *xsc_sq_fetch_wqe(struct xsc_sq *sq, size_t size, u16 *pi)
+{
+    struct xsc_wq_cyc *wq = &sq->wq;
+    void *wqe;
+
+    /* note: sq->pc starts out at zero */
+    *pi = xsc_wq_cyc_ctr2ix(wq, sq->pc);
+    wqe = xsc_wq_cyc_get_wqe(wq, *pi);
+    memset(wqe, 0, size);
+
+    return wqe;
+}
+
+static u16 xsc_tx_get_gso_ihs(struct xsc_sq *sq, struct sk_buff *skb)
+{
+    u16 ihs;
+
+    if (skb->encapsulation) {
+        ihs = skb_inner_transport_offset(skb) + inner_tcp_hdrlen(skb);
+    } else {
+        if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4)
+            ihs = skb_transport_offset(skb) + sizeof(struct udphdr);
+        else
+            ihs = skb_transport_offset(skb) + tcp_hdrlen(skb);
+    }
+
+    return ihs;
+}
+
+static void xsc_txwqe_build_cseg_csum(struct xsc_sq *sq,
+                                      struct sk_buff *skb,
+                                      struct xsc_send_wqe_ctrl_seg *cseg)
+{
+    if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
+        if (skb->encapsulation)
+            cseg->csum_en = XSC_ETH_WQE_INNER_AND_OUTER_CSUM;
+        else
+            cseg->csum_en = XSC_ETH_WQE_OUTER_CSUM;
+    } else {
+        cseg->csum_en = XSC_ETH_WQE_NONE_CSUM;
+    }
+}
+
+static void xsc_txwqe_build_csegs(struct xsc_sq *sq, struct sk_buff *skb,
+                                  u16 mss, u16 ihs, u16 headlen,
+                                  u8 opcode, u16 ds_cnt, u32 num_bytes,
+                                  struct xsc_send_wqe_ctrl_seg *cseg)
+{
+    struct xsc_core_device *xdev = sq->cq.xdev;
+    int send_wqe_ds_num_log = ilog2(xdev->caps.send_ds_num);
+
+    xsc_txwqe_build_cseg_csum(sq, skb, cseg);
+
+    if (mss != 0) {
+        cseg->has_pph = 0;
+        cseg->so_type = 1;
+        cseg->so_hdr_len = ihs;
+        cseg->so_data_size = cpu_to_le16(mss);
+    }
+
+    cseg->msg_opcode = opcode;
+    cseg->wqe_id = cpu_to_le16(sq->pc << send_wqe_ds_num_log);
+    cseg->ds_data_num = ds_cnt - XSC_SEND_WQEBB_CTRL_NUM_DS;
+    cseg->msg_len = cpu_to_le32(num_bytes);
+
+    cseg->ce = 1;
+}
+
+static int xsc_txwqe_build_dsegs(struct xsc_sq *sq, struct sk_buff *skb,
+                                 u16 ihs, u16 headlen,
+                                 struct xsc_wqe_data_seg *dseg)
+{
+    dma_addr_t dma_addr = 0;
+    u8 num_dma = 0;
+    int i;
+    struct xsc_adapter *adapter = sq->channel->adapter;
+    struct device *dev = adapter->dev;
+
+    if (headlen) {
+        dma_addr = dma_map_single(dev, skb->data, headlen, DMA_TO_DEVICE);
+        if (unlikely(dma_mapping_error(dev, dma_addr)))
+            goto dma_unmap_wqe_err;
+
+        dseg->va = cpu_to_le64(dma_addr);
+        dseg->mkey = cpu_to_le32(sq->mkey_be);
+        dseg->seg_len = cpu_to_le32(headlen);
+
+        xsc_dma_push(sq, dma_addr, headlen, XSC_DMA_MAP_SINGLE);
+        num_dma++;
+        dseg++;
+    }
+
+    for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+        skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+        int fsz = skb_frag_size(frag);
+
+        dma_addr = skb_frag_dma_map(dev, frag, 0, fsz, DMA_TO_DEVICE);
+        if (unlikely(dma_mapping_error(dev, dma_addr)))
+            goto dma_unmap_wqe_err;
+
+        dseg->va = cpu_to_le64(dma_addr);
+        dseg->mkey = cpu_to_le32(sq->mkey_be);
+        dseg->seg_len = cpu_to_le32(fsz);
+
+        xsc_dma_push(sq, dma_addr, fsz, XSC_DMA_MAP_PAGE);
+        num_dma++;
+        dseg++;
+    }
+
+    return num_dma;
+
+dma_unmap_wqe_err:
+    xsc_dma_unmap_wqe_err(sq, num_dma);
+    return -ENOMEM;
+}
+
+static void xsc_sq_notify_hw(struct xsc_wq_cyc *wq, u16 pc,
+                             struct xsc_sq *sq)
+{
+    struct xsc_adapter *adapter = sq->channel->adapter;
+    struct xsc_core_device *xdev = adapter->xdev;
+    union xsc_send_doorbell doorbell_value;
+    int send_ds_num_log = ilog2(xdev->caps.send_ds_num);
+
+    /* convert the WQE counter into a DS index for the doorbell */
+    doorbell_value.next_pid = pc << send_ds_num_log;
+    doorbell_value.qp_num = sq->sqn;
+
+    /* Make sure that descriptors are written before
+     * updating doorbell record and ringing the doorbell
+     */
+    wmb();
+    writel(doorbell_value.send_data, REG_ADDR(xdev, xdev->regs.tx_db));
+}
+
+static void xsc_txwqe_complete(struct xsc_sq *sq, struct sk_buff *skb,
+                               u8 opcode, u16 ds_cnt, u8 num_wqebbs,
+                               u32 num_bytes, u8 num_dma,
+                               struct xsc_tx_wqe_info *wi)
+{
+    struct xsc_wq_cyc *wq = &sq->wq;
+
+    wi->num_bytes = num_bytes;
+    wi->num_dma = num_dma;
+    wi->num_wqebbs = num_wqebbs;
+    wi->skb = skb;
+
+    netdev_tx_sent_queue(sq->txq, num_bytes);
+
+    if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP))
+        skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+
+    sq->pc += wi->num_wqebbs;
+
+    if (unlikely(!xsc_wqc_has_room_for(wq, sq->cc, sq->pc, sq->stop_room)))
+        netif_tx_stop_queue(sq->txq);
+
+    if (!netdev_xmit_more() || netif_xmit_stopped(sq->txq))
+        xsc_sq_notify_hw(wq, sq->pc, sq);
+}
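Both cseg->wqe_id above and the doorbell's next_pid are the free-running WQE counter scaled into data-segment units by ilog2(send_ds_num), which assumes send_ds_num is a power of two. A standalone sketch of that arithmetic (the value 4 for send_ds_num is an assumed example; the real value comes from xdev->caps):

#include <stdio.h>

int main(void)
{
    unsigned int ds_num_log = 2;    /* ilog2(send_ds_num) for send_ds_num = 4 */
    unsigned short pc;

    /* each WQE occupies send_ds_num DS slots, so the counter handed to
     * hardware advances in steps of send_ds_num
     */
    for (pc = 0; pc < 4; pc++)
        printf("pc=%u -> wqe_id/next_pid=%u\n", pc, pc << ds_num_log);
    return 0;
}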
+
+static uint32_t xsc_eth_xmit_frame(struct sk_buff *skb,
+                                   struct xsc_sq *sq,
+                                   struct xsc_tx_wqe *wqe,
+                                   u16 pi)
+{
+    struct xsc_send_wqe_ctrl_seg *cseg;
+    struct xsc_wqe_data_seg *dseg;
+    struct xsc_tx_wqe_info *wi;
+    struct xsc_core_device *xdev = sq->cq.xdev;
+    u16 ds_cnt;
+    u16 mss, ihs, headlen;
+    u8 opcode;
+    u32 num_bytes;
+    int num_dma;
+    u8 num_wqebbs;
+
+retry_send:
+    /* Calc ihs and ds cnt, no writes to wqe yet */
+    /* the ctrl DS counted here is subtracted again in ds_data_num */
+    ds_cnt = XSC_SEND_WQEBB_CTRL_NUM_DS;
+
+    /* on this hardware, inline headers are only used together with GSO */
+    if (skb_is_gso(skb)) {
+        opcode = XSC_OPCODE_RAW;
+        mss = skb_shinfo(skb)->gso_size;
+        ihs = xsc_tx_get_gso_ihs(sq, skb);
+        num_bytes = skb->len;
+    } else {
+        opcode = XSC_OPCODE_RAW;
+        mss = 0;
+        ihs = 0;
+        num_bytes = skb->len;
+    }
+
+    /* linear data in the skb */
+    headlen = skb->len - skb->data_len;
+    ds_cnt += !!headlen;
+    ds_cnt += skb_shinfo(skb)->nr_frags;
+
+    /* Check packet size. */
+    if (unlikely(mss == 0 && num_bytes > sq->hw_mtu))
+        goto err_drop;
+
+    num_wqebbs = DIV_ROUND_UP(ds_cnt, xdev->caps.send_ds_num);
+    /* if ds_cnt does not fit into a single WQE, linearize and retry */
+    if (num_wqebbs != 1) {
+        if (skb_linearize(skb))
+            goto err_drop;
+        goto retry_send;
+    }
+
+    /* fill wqe */
+    wi = (struct xsc_tx_wqe_info *)&sq->db.wqe_info[pi];
+    cseg = &wqe->ctrl;
+    dseg = &wqe->data[0];
+
+    if (unlikely(num_bytes == 0))
+        goto err_drop;
+
+    xsc_txwqe_build_csegs(sq, skb, mss, ihs, headlen,
+                          opcode, ds_cnt, num_bytes, cseg);
+
+    /* the inline header is transferred by DMA as well */
+    num_dma = xsc_txwqe_build_dsegs(sq, skb, ihs, headlen, dseg);
+    if (unlikely(num_dma < 0))
+        goto err_drop;
+
+    xsc_txwqe_complete(sq, skb, opcode, ds_cnt, num_wqebbs, num_bytes,
+                       num_dma, wi);
+
+    return NETDEV_TX_OK;
+
+err_drop:
+    dev_kfree_skb_any(skb);
+
+    return NETDEV_TX_OK;
+}
+
+netdev_tx_t xsc_eth_xmit_start(struct sk_buff *skb, struct net_device *netdev)
+{
+    struct xsc_adapter *adapter = netdev_priv(netdev);
+    struct xsc_tx_wqe *wqe;
+    struct xsc_sq *sq;
+    u16 pi;
+
+    if (!adapter || !adapter->xdev || adapter->status != XSCALE_ETH_DRIVER_OK)
+        return NETDEV_TX_BUSY;
+
+    sq = adapter->txq2sq[skb_get_queue_mapping(skb)];
+    if (unlikely(!sq))
+        return NETDEV_TX_BUSY;
+
+    wqe = xsc_sq_fetch_wqe(sq, adapter->xdev->caps.send_ds_num * XSC_SEND_WQE_DS, &pi);
+
+    return xsc_eth_xmit_frame(skb, sq, wqe, pi);
+}
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h
index 116019a9a..f14ff7abf 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h
@@ -6,8 +6,17 @@
 #ifndef __XSC_RXTX_H
 #define __XSC_RXTX_H
 
+#include
+#include
 #include "xsc_eth.h"
 
+enum {
+    XSC_ETH_WQE_NONE_CSUM,
+    XSC_ETH_WQE_INNER_CSUM,
+    XSC_ETH_WQE_OUTER_CSUM,
+    XSC_ETH_WQE_INNER_AND_OUTER_CSUM,
+};
+
 void xsc_cq_notify_hw_rearm(struct xsc_cq *cq);
 void xsc_cq_notify_hw(struct xsc_cq *cq);
 int xsc_eth_napi_poll(struct napi_struct *napi, int budget);
@@ -23,4 +32,31 @@ struct sk_buff *xsc_skb_from_cqe_nonlinear(struct xsc_rq *rq,
                                            u32 cqe_bcnt, u8 has_pph);
 int xsc_poll_rx_cq(struct xsc_cq *cq, int budget);
 
+netdev_tx_t xsc_eth_xmit_start(struct sk_buff *skb, struct net_device *netdev);
+
+static inline void xsc_tx_dma_unmap(struct device *dev, struct xsc_sq_dma *dma)
+{
+    switch (dma->type) {
+    case XSC_DMA_MAP_SINGLE:
+        dma_unmap_single(dev, dma->addr, dma->size, DMA_TO_DEVICE);
+        break;
+    case XSC_DMA_MAP_PAGE:
+        dma_unmap_page(dev, dma->addr, dma->size, DMA_TO_DEVICE);
+        break;
+    default:
+        break;
+    }
+}
+
+static inline struct xsc_sq_dma *xsc_dma_get(struct xsc_sq *sq, u32 i)
+{
+    return &sq->db.dma_fifo[i & sq->dma_fifo_mask];
+}
+
+static inline bool xsc_wqc_has_room_for(struct xsc_wq_cyc *wq,
+                                        u16 cc, u16 pc, u16 n)
+{
+    return (xsc_wq_cyc_ctr2ix(wq, cc - pc) >= n) || (cc == pc);
+}
+
 #endif /* XSC_RXTX_H */
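xsc_wqc_has_room_for() above compares free-running 16-bit producer/consumer counters rather than wrapped ring indexes. A standalone sketch of why the subtraction yields the free-slot count (RING_SIZE, ctr2ix() and has_room_for() are illustrative stand-ins; an 8-entry ring is assumed):

#include <stdio.h>

#define RING_SIZE 8    /* assumed power-of-two ring size */

static unsigned int ctr2ix(unsigned short ctr)
{
    return ctr & (RING_SIZE - 1);    /* mirrors xsc_wq_cyc_ctr2ix() */
}

static int has_room_for(unsigned short cc, unsigned short pc, unsigned short n)
{
    /* cc - pc underflows and, masked to the ring size, gives free slots */
    return ctr2ix(cc - pc) >= n || cc == pc;
}

int main(void)
{
    /* producer is 6 entries ahead of the consumer: 2 slots are free */
    printf("%d\n", has_room_for(10, 16, 2));    /* 1: exactly enough room */
    printf("%d\n", has_room_for(10, 16, 3));    /* 0: would overrun */
    return 0;
}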
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h
index 8f63b9e0b..967d46e7e 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h
@@ -35,6 +35,8 @@ enum {
 #define XSC_RECV_WQEBB_NUM_DS (XSC_RECV_WQE_BB / XSC_RECV_WQE_DS)
 #define XSC_LOG_RECV_WQEBB_NUM_DS ilog2(XSC_RECV_WQEBB_NUM_DS)
 
+#define XSC_SEND_WQEBB_CTRL_NUM_DS 1
+
 /* each ds holds one fragment in skb */
 #define XSC_MAX_RX_FRAGS 4
 #define XSC_RX_FRAG_SZ_ORDER 0
@@ -155,6 +157,11 @@ struct xsc_tx_wqe_info {
     u8 num_dma;
 };
 
+struct xsc_tx_wqe {
+    struct xsc_send_wqe_ctrl_seg ctrl;
+    struct xsc_wqe_data_seg data[];
+};
+
 struct xsc_sq {
     struct xsc_core_qp cqp;
     /* dirtied @completion */
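struct xsc_tx_wqe added above ends in a flexible array member, which is why xsc_eth_xmit_start() fetches send_ds_num * XSC_SEND_WQE_DS bytes instead of using sizeof(). A standalone sketch of that sizing (the 16-byte segments and send_ds_num = 4 are assumed example values):

#include <stdio.h>

struct ctrl_seg { unsigned char raw[16]; };    /* stand-in for the ctrl DS */
struct data_seg { unsigned char raw[16]; };    /* stand-in for a data DS */

struct tx_wqe {
    struct ctrl_seg ctrl;
    struct data_seg data[];    /* sized at runtime, like xsc_tx_wqe */
};

int main(void)
{
    unsigned int send_ds_num = 4;    /* assumed caps.send_ds_num */

    /* one ctrl DS plus (send_ds_num - 1) data DS per WQEBB */
    printf("bytes fetched and zeroed per WQE: %zu\n",
           send_ds_num * sizeof(struct data_seg));
    printf("sizeof(struct tx_wqe) = %zu (ctrl segment only)\n",
           sizeof(struct tx_wqe));
    return 0;
}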
From patchwork Wed Jan 15 10:23:12 2025
X-Patchwork-Submitter: Xin Tian
X-Patchwork-Id: 13940213
X-Patchwork-Delegate: kuba@kernel.org
From: "Xin Tian"
Subject: [PATCH v3 13/14] net-next/yunsilicon: Add eth rx
Date: Wed, 15 Jan 2025 18:23:12 +0800
Message-Id: <20250115102311.3541496-14-tianx@yunsilicon.com>
In-Reply-To: <20250115102242.3541496-1-tianx@yunsilicon.com>
References: <20250115102242.3541496-1-tianx@yunsilicon.com>
X-Mailing-List: netdev@vger.kernel.org

Add the ethernet RX path: post receive WQEs, poll the RX completion
queue, and build linear or fragmented skbs from completed WQEs.

Co-developed-by: Honggang Wei
Signed-off-by: Honggang Wei
Co-developed-by: Lei Yan
Signed-off-by: Lei Yan
Signed-off-by: Xin Tian
---
 .../ethernet/yunsilicon/xsc/common/xsc_core.h |   9 +
 .../yunsilicon/xsc/net/xsc_eth_common.h       |  28 +
 .../ethernet/yunsilicon/xsc/net/xsc_eth_rx.c  | 525 +++++++++++++++++-
 .../yunsilicon/xsc/net/xsc_eth_txrx.c         |  90 ++-
 .../yunsilicon/xsc/net/xsc_eth_txrx.h         |  28 +
 5 files changed, 668 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
index 6dced72c4..6f5c18f3f 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
@@ -156,6 +156,10 @@ struct xsc_qp_table {
 };
 
 // cq
+enum {
+    XSC_CQE_OWNER_MASK = 1,
+};
+
 enum xsc_event {
     XSC_EVENT_TYPE_COMP = 0x0,
     XSC_EVENT_TYPE_COMM_EST = 0x02,//mad
@@ -517,4 +521,9 @@ static inline u8 xsc_get_user_mode(struct xsc_core_device *xdev)
     return xdev->user_mode;
 }
 
+static inline u8 get_cqe_opcode(struct xsc_cqe *cqe)
+{
+    return cqe->msg_opcode;
+}
+
 #endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
index 5fc81a3f6..92257a950 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
@@ -21,6 +21,8 @@
 #define XSC_ETH_RX_MAX_HEAD_ROOM 256
 #define XSC_SW2HW_RX_PKT_LEN(mtu) ((mtu) + ETH_HLEN + XSC_ETH_RX_MAX_HEAD_ROOM)
 
+#define XSC_RX_MAX_HEAD (256)
+
 #define XSC_QPN_SQN_STUB 1025
 #define XSC_QPN_RQN_STUB 1024
 
@@ -145,6 +147,24 @@ enum channel_flags {
     XSC_CHANNEL_NAPI_SCHED = 1,
 };
 
+enum xsc_eth_priv_flag {
+    XSC_PFLAG_RX_NO_CSUM_COMPLETE,
+    XSC_PFLAG_SNIFFER,
+    XSC_PFLAG_DROPLESS_RQ,
+    XSC_PFLAG_RX_COPY_BREAK,
+    XSC_NUM_PFLAGS, /* Keep last */
+};
+
+#define XSC_SET_PFLAG(params, pflag, enable) \
+    do { \
+        if (enable) \
+            (params)->pflags |= BIT(pflag); \
+        else \
+            (params)->pflags &= ~(BIT(pflag)); \
+    } while (0)
+
+#define XSC_GET_PFLAG(params, pflag) (!!((params)->pflags & (BIT(pflag))))
+
 struct xsc_eth_params {
     u16 num_channels;
     u16 max_num_ch;
@@ -208,4 +228,12 @@ union xsc_send_doorbell {
     u32 send_data;
 };
 
+union xsc_recv_doorbell {
+    struct {
+        s32 next_pid : 13;
+        u32 qp_num : 15;
+    };
+    u32 recv_data;
+};
+
 #endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
index 72f33bb53..a4428e629 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
@@ -5,44 +5,549 @@
  * Copyright (c) 2015-2016, Mellanox Technologies. All rights reserved.
*/ +#include +#include "xsc_eth.h" #include "xsc_eth_txrx.h" +#include "xsc_eth_common.h" +#include +#include "common/xsc_pp.h" +#include "xsc_pph.h" + +#define PAGE_REF_ELEV (U16_MAX) +/* Upper bound on number of packets that share a single page */ +#define PAGE_REF_THRSD (PAGE_SIZE / 64) + +static void xsc_rq_notify_hw(struct xsc_rq *rq) +{ + struct xsc_core_device *xdev = rq->cq.xdev; + struct xsc_wq_cyc *wq = &rq->wqe.wq; + union xsc_recv_doorbell doorbell_value; + u64 rqwqe_id = wq->wqe_ctr << (ilog2(xdev->caps.recv_ds_num)); + + /*reverse wqe index to ds index*/ + doorbell_value.next_pid = rqwqe_id; + doorbell_value.qp_num = rq->rqn; + + /* Make sure that descriptors are written before + * updating doorbell record and ringing the doorbell + */ + wmb(); + writel(doorbell_value.recv_data, REG_ADDR(xdev, xdev->regs.rx_db)); +} + +static void xsc_skb_set_hash(struct xsc_adapter *adapter, + struct xsc_cqe *cqe, + struct sk_buff *skb) +{ + struct xsc_rss_params *rss = &adapter->rss_param; + u32 hash_field; + bool l3_hash = false; + bool l4_hash = false; + int ht = 0; + + if (adapter->netdev->features & NETIF_F_RXHASH) { + if (skb->protocol == htons(ETH_P_IP)) { + hash_field = rss->rx_hash_fields[XSC_TT_IPV4_TCP]; + if (hash_field & XSC_HASH_FIELD_SEL_SRC_IP || + hash_field & XSC_HASH_FIELD_SEL_DST_IP) + l3_hash = true; + + if (hash_field & XSC_HASH_FIELD_SEL_SPORT || + hash_field & XSC_HASH_FIELD_SEL_DPORT) + l4_hash = true; + } else if (skb->protocol == htons(ETH_P_IPV6)) { + hash_field = rss->rx_hash_fields[XSC_TT_IPV6_TCP]; + if (hash_field & XSC_HASH_FIELD_SEL_SRC_IPV6 || + hash_field & XSC_HASH_FIELD_SEL_DST_IPV6) + l3_hash = true; + + if (hash_field & XSC_HASH_FIELD_SEL_SPORT_V6 || + hash_field & XSC_HASH_FIELD_SEL_DPORT_V6) + l4_hash = true; + } + + if (l3_hash && l4_hash) + ht = PKT_HASH_TYPE_L4; + else if (l3_hash) + ht = PKT_HASH_TYPE_L3; + if (ht) + skb_set_hash(skb, be32_to_cpu(cqe->vni), ht); + } +} + +static void xsc_handle_csum(struct xsc_cqe *cqe, struct xsc_rq *rq, + struct sk_buff *skb, struct xsc_wqe_frag_info *wi) +{ + struct xsc_channel *c = rq->cq.channel; + struct net_device *netdev = c->adapter->netdev; + struct xsc_dma_info *dma_info = wi->di; + int offset_from = wi->offset; + struct epp_pph *hw_pph = page_address(dma_info->page) + offset_from; + + if (unlikely((netdev->features & NETIF_F_RXCSUM) == 0)) + goto csum_none; + + if (unlikely(XSC_GET_EPP2SOC_PPH_ERROR_BITMAP(hw_pph) & PACKET_UNKNOWN)) + goto csum_none; + + if (XSC_GET_EPP2SOC_PPH_EXT_TUNNEL_TYPE(hw_pph) && + (!(cqe->csum_err & OUTER_AND_INNER))) { + skb->ip_summed = CHECKSUM_UNNECESSARY; + skb->csum_level = 1; + skb->encapsulation = 1; + } else if (XSC_GET_EPP2SOC_PPH_EXT_TUNNEL_TYPE(hw_pph) && + (!(cqe->csum_err & OUTER_BIT) && (cqe->csum_err & INNER_BIT))) { + skb->ip_summed = CHECKSUM_UNNECESSARY; + skb->csum_level = 0; + skb->encapsulation = 1; + } else if (!XSC_GET_EPP2SOC_PPH_EXT_TUNNEL_TYPE(hw_pph) && + (!(cqe->csum_err & OUTER_BIT))) { + skb->ip_summed = CHECKSUM_UNNECESSARY; + } + + goto out; + +csum_none: + skb->csum = 0; + skb->ip_summed = CHECKSUM_NONE; +out: + return; +} + +static void xsc_build_rx_skb(struct xsc_cqe *cqe, + u32 cqe_bcnt, + struct xsc_rq *rq, + struct sk_buff *skb, + struct xsc_wqe_frag_info *wi) +{ + struct xsc_channel *c = rq->cq.channel; + struct net_device *netdev = c->netdev; + struct xsc_adapter *adapter = c->adapter; + + skb->mac_len = ETH_HLEN; + + skb_record_rx_queue(skb, rq->ix); + xsc_handle_csum(cqe, rq, skb, wi); + + skb->protocol = eth_type_trans(skb, netdev); 
+ xsc_skb_set_hash(adapter, cqe, skb); +} + +static void xsc_complete_rx_cqe(struct xsc_rq *rq, + struct xsc_cqe *cqe, + u32 cqe_bcnt, + struct sk_buff *skb, + struct xsc_wqe_frag_info *wi) +{ + xsc_build_rx_skb(cqe, cqe_bcnt, rq, skb, wi); +} + +static void xsc_add_skb_frag(struct xsc_rq *rq, + struct sk_buff *skb, + struct xsc_dma_info *di, + u32 frag_offset, u32 len, + unsigned int truesize) +{ + struct xsc_channel *c = rq->cq.channel; + struct device *dev = c->adapter->dev; + + dma_sync_single_for_cpu(dev, di->addr + frag_offset, len, DMA_FROM_DEVICE); + page_ref_inc(di->page); + skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, + di->page, frag_offset, len, truesize); +} + +static void xsc_copy_skb_header(struct device *dev, + struct sk_buff *skb, + struct xsc_dma_info *dma_info, + int offset_from, u32 headlen) +{ + void *from = page_address(dma_info->page) + offset_from; + /* Aligning len to sizeof(long) optimizes memcpy performance */ + unsigned int len = ALIGN(headlen, sizeof(long)); + + dma_sync_single_for_cpu(dev, dma_info->addr + offset_from, len, + DMA_FROM_DEVICE); + skb_copy_to_linear_data(skb, from, len); +} + +static struct sk_buff *xsc_build_linear_skb(struct xsc_rq *rq, void *va, + u32 frag_size, u16 headroom, + u32 cqe_bcnt) +{ + struct sk_buff *skb = build_skb(va, frag_size); + + if (unlikely(!skb)) + return NULL; + + skb_reserve(skb, headroom); + skb_put(skb, cqe_bcnt); + + return skb; +} struct sk_buff *xsc_skb_from_cqe_linear(struct xsc_rq *rq, struct xsc_wqe_frag_info *wi, u32 cqe_bcnt, u8 has_pph) { - // TBD - return NULL; + int pph_len = has_pph ? XSC_PPH_HEAD_LEN : 0; + u16 rx_headroom = rq->buff.headroom; + struct xsc_dma_info *di = wi->di; + struct sk_buff *skb; + void *va, *data; + u32 frag_size; + + va = page_address(di->page) + wi->offset; + data = va + rx_headroom + pph_len; + frag_size = XSC_SKB_FRAG_SZ(rx_headroom + cqe_bcnt); + + dma_sync_single_range_for_cpu(rq->cq.xdev->device, di->addr, wi->offset, + frag_size, DMA_FROM_DEVICE); + prefetchw(va); /* xdp_frame data area */ + prefetch(data); + + skb = xsc_build_linear_skb(rq, va, frag_size, (rx_headroom + pph_len), + (cqe_bcnt - pph_len)); + if (unlikely(!skb)) + return NULL; + + /* queue up for recycling/reuse */ + page_ref_inc(di->page); + + return skb; } struct sk_buff *xsc_skb_from_cqe_nonlinear(struct xsc_rq *rq, struct xsc_wqe_frag_info *wi, u32 cqe_bcnt, u8 has_pph) { - // TBD - return NULL; + struct xsc_rq_frag_info *frag_info = &rq->wqe.info.arr[0]; + u16 headlen = min_t(u32, XSC_RX_MAX_HEAD, cqe_bcnt); + struct xsc_channel *c = rq->cq.channel; + struct net_device *netdev = c->adapter->netdev; + struct device *dev = c->adapter->dev; + struct xsc_wqe_frag_info *head_wi = wi; + struct xsc_wqe_frag_info *rx_wi = wi; + u16 head_offset = head_wi->offset; + u16 byte_cnt = cqe_bcnt - headlen; + u16 frag_consumed_bytes = 0; + u16 frag_headlen = headlen; + struct sk_buff *skb; + u8 fragcnt = 0; + int i = 0; + + skb = napi_alloc_skb(rq->cq.napi, ALIGN(XSC_RX_MAX_HEAD, sizeof(long))); + if (unlikely(!skb)) + return NULL; + + prefetchw(skb->data); + + if (likely(has_pph)) { + headlen = min_t(u32, XSC_RX_MAX_HEAD, (cqe_bcnt - XSC_PPH_HEAD_LEN)); + frag_headlen = headlen + XSC_PPH_HEAD_LEN; + byte_cnt = cqe_bcnt - headlen - XSC_PPH_HEAD_LEN; + head_offset += XSC_PPH_HEAD_LEN; + } + + if (byte_cnt == 0 && (XSC_GET_PFLAG(&c->adapter->nic_param, XSC_PFLAG_RX_COPY_BREAK))) { + for (i = 0; i < rq->wqe.info.num_frags; i++, wi++) + wi->is_available = 1; + goto ret; + } + + for (i = 0; i < rq->wqe.info.num_frags; i++, 
rx_wi++)
+        rx_wi->is_available = 0;
+
+    while (byte_cnt) {
+        /* the header was copied out separately, so the first fragment
+         * contributes at most frag_size - frag_headlen bytes
+         */
+        frag_consumed_bytes =
+            min_t(u16, frag_info->frag_size - frag_headlen, byte_cnt);
+
+        xsc_add_skb_frag(rq, skb, wi->di, wi->offset + frag_headlen,
+                         frag_consumed_bytes, frag_info->frag_stride);
+        byte_cnt -= frag_consumed_bytes;
+
+        /* drop bytes that exceed the configured fragments rather than
+         * read past the WQE
+         */
+        frag_headlen = 0;
+        fragcnt++;
+        if (fragcnt == rq->wqe.info.num_frags) {
+            if (byte_cnt) {
+                netdev_warn(netdev,
+                            "large packet reached the maximum number of rx WQE fragments.\n");
+                netdev_warn(netdev,
+                            "%u bytes dropped: frag_num=%d, headlen=%d, cqe_cnt=%d, frag0_bytes=%d, frag_size=%d\n",
+                            byte_cnt, fragcnt, headlen, cqe_bcnt,
+                            frag_consumed_bytes, frag_info->frag_size);
+            }
+            break;
+        }
+
+        frag_info++;
+        wi++;
+    }
+
+ret:
+    /* copy header */
+    xsc_copy_skb_header(dev, skb, head_wi->di, head_offset, headlen);
+
+    /* skb linear part was allocated with headlen and aligned to long */
+    skb->tail += headlen;
+    skb->len += headlen;
+
+    return skb;
+}
+
+static void xsc_page_dma_unmap(struct xsc_rq *rq, struct xsc_dma_info *dma_info)
+{
+    struct xsc_channel *c = rq->cq.channel;
+    struct device *dev = c->adapter->dev;
+
+    dma_unmap_page(dev, dma_info->addr, XSC_RX_FRAG_SZ, rq->buff.map_dir);
+}
+
+static void xsc_page_release_dynamic(struct xsc_rq *rq,
+                                     struct xsc_dma_info *dma_info, bool recycle)
+{
+    xsc_page_dma_unmap(rq, dma_info);
+    page_pool_recycle_direct(rq->page_pool, dma_info->page);
+}
+
+static void xsc_put_rx_frag(struct xsc_rq *rq,
+                            struct xsc_wqe_frag_info *frag, bool recycle)
+{
+    if (frag->last_in_page)
+        xsc_page_release_dynamic(rq, frag->di, recycle);
+}
+
+static struct xsc_wqe_frag_info *get_frag(struct xsc_rq *rq, u16 ix)
+{
+    return &rq->wqe.frags[ix << rq->wqe.info.log_num_frags];
+}
+
+static void xsc_free_rx_wqe(struct xsc_rq *rq,
+                            struct xsc_wqe_frag_info *wi, bool recycle)
+{
+    int i;
+
+    for (i = 0; i < rq->wqe.info.num_frags; i++, wi++) {
+        if (wi->is_available && recycle)
+            continue;
+        xsc_put_rx_frag(rq, wi, recycle);
+    }
+}
+
+static void xsc_dump_error_rqcqe(struct xsc_rq *rq,
+                                 struct xsc_cqe *cqe)
+{
+    struct xsc_channel *c = rq->cq.channel;
+    struct net_device *netdev = c->adapter->netdev;
+    u32 ci = xsc_cqwq_get_ci(&rq->cq.wq);
+
+    net_err_ratelimited("Error cqe on dev=%s, cqn=%d, ci=%d, rqn=%d, qpn=%d, error_code=0x%x\n",
+                        netdev->name, rq->cq.xcq.cqn, ci,
+                        rq->rqn, cqe->qp_id, get_cqe_opcode(cqe));
 }
 
 void xsc_eth_handle_rx_cqe(struct xsc_cqwq *cqwq,
                            struct xsc_rq *rq, struct xsc_cqe *cqe)
 {
-    // TBD
+    struct xsc_wq_cyc *wq = &rq->wqe.wq;
+    struct xsc_channel *c = rq->cq.channel;
+    u8 cqe_opcode = get_cqe_opcode(cqe);
+    struct xsc_wqe_frag_info *wi;
+    struct sk_buff *skb;
+    u32 cqe_bcnt;
+    u16 ci;
+
+    ci = xsc_wq_cyc_ctr2ix(wq, cqwq->cc);
+    wi = get_frag(rq, ci);
+    if (unlikely(cqe_opcode & BIT(7))) {
+        xsc_dump_error_rqcqe(rq, cqe);
+        goto free_wqe;
+    }
+
+    cqe_bcnt = le32_to_cpu(cqe->msg_len);
+    if (cqe->has_pph && cqe_bcnt <= XSC_PPH_HEAD_LEN)
+        goto free_wqe;
+
+    if (unlikely(cqe_bcnt > rq->frags_sz)) {
+        if (!XSC_GET_PFLAG(&c->adapter->nic_param, XSC_PFLAG_DROPLESS_RQ))
+            goto free_wqe;
+    }
+
+    cqe_bcnt = min_t(u32, cqe_bcnt, rq->frags_sz);
+    skb = rq->wqe.skb_from_cqe(rq, wi, cqe_bcnt, cqe->has_pph);
+    if (!skb)
+        goto free_wqe;
+
+    xsc_complete_rx_cqe(rq, cqe,
+                        cqe->has_pph == 1 ? 
cqe_bcnt - XSC_PPH_HEAD_LEN : cqe_bcnt, + skb, wi); + + napi_gro_receive(rq->cq.napi, skb); + +free_wqe: + xsc_free_rx_wqe(rq, wi, true); + xsc_wq_cyc_pop(wq); } int xsc_poll_rx_cq(struct xsc_cq *cq, int budget) { - // TBD + struct xsc_rq *rq = container_of(cq, struct xsc_rq, cq); + struct xsc_cqwq *cqwq = &cq->wq; + struct xsc_cqe *cqe; + int work_done = 0; + + if (!test_bit(XSC_ETH_RQ_STATE_ENABLED, &rq->state)) + return 0; + + while ((work_done < budget) && (cqe = xsc_cqwq_get_cqe(cqwq))) { + rq->handle_rx_cqe(cqwq, rq, cqe); + ++work_done; + + xsc_cqwq_pop(cqwq); + } + + if (!work_done) + goto out; + + xsc_cq_notify_hw(cq); + /* ensure cq space is freed before enabling more cqes */ + wmb(); + +out: + + return work_done; +} + +static int xsc_page_alloc_mapped(struct xsc_rq *rq, + struct xsc_dma_info *dma_info) +{ + struct xsc_channel *c = rq->cq.channel; + struct device *dev = c->adapter->dev; + + dma_info->page = page_pool_dev_alloc_pages(rq->page_pool); + if (unlikely(!dma_info->page)) + return -ENOMEM; + + dma_info->addr = dma_map_page(dev, dma_info->page, 0, + XSC_RX_FRAG_SZ, rq->buff.map_dir); + if (unlikely(dma_mapping_error(dev, dma_info->addr))) { + page_pool_recycle_direct(rq->page_pool, dma_info->page); + dma_info->page = NULL; + return -ENOMEM; + } + return 0; } +static int xsc_get_rx_frag(struct xsc_rq *rq, + struct xsc_wqe_frag_info *frag) +{ + int err = 0; + + if (!frag->offset && !frag->is_available) + /* On first frag (offset == 0), replenish page (dma_info actually). + * Other frags that point to the same dma_info (with a different + * offset) should just use the new one without replenishing again + * by themselves. + */ + err = xsc_page_alloc_mapped(rq, frag->di); + + return err; +} + +static int xsc_alloc_rx_wqe(struct xsc_rq *rq, struct xsc_eth_rx_wqe_cyc *wqe, u16 ix) +{ + struct xsc_wqe_frag_info *frag = get_frag(rq, ix); + u64 addr; + int i; + int err; + + for (i = 0; i < rq->wqe.info.num_frags; i++, frag++) { + err = xsc_get_rx_frag(rq, frag); + if (unlikely(err)) + goto free_frags; + + addr = cpu_to_le64(frag->di->addr + frag->offset + rq->buff.headroom); + wqe->data[i].va = addr; + } + + return 0; + +free_frags: + while (--i >= 0) + xsc_put_rx_frag(rq, --frag, true); + + return err; +} + void xsc_eth_dealloc_rx_wqe(struct xsc_rq *rq, u16 ix) { - // TBD + struct xsc_wqe_frag_info *wi = get_frag(rq, ix); + + xsc_free_rx_wqe(rq, wi, false); } -bool xsc_eth_post_rx_wqes(struct xsc_rq *rq) +static int xsc_alloc_rx_wqes(struct xsc_rq *rq, u16 ix, u8 wqe_bulk) { - // TBD - return true; + struct xsc_wq_cyc *wq = &rq->wqe.wq; + struct xsc_eth_rx_wqe_cyc *wqe; + int err; + int i; + int idx; + + for (i = 0; i < wqe_bulk; i++) { + idx = xsc_wq_cyc_ctr2ix(wq, (ix + i)); + wqe = xsc_wq_cyc_get_wqe(wq, idx); + + err = xsc_alloc_rx_wqe(rq, wqe, idx); + if (unlikely(err)) + goto free_wqes; + } + + return 0; + +free_wqes: + while (--i >= 0) + xsc_eth_dealloc_rx_wqe(rq, ix + i); + + return err; } +bool xsc_eth_post_rx_wqes(struct xsc_rq *rq) +{ + struct xsc_wq_cyc *wq = &rq->wqe.wq; + u8 wqe_bulk, wqe_bulk_min; + int alloc; + u16 head; + int err; + + wqe_bulk = rq->wqe.info.wqe_bulk; + wqe_bulk_min = rq->wqe.info.wqe_bulk_min; + if (xsc_wq_cyc_missing(wq) < wqe_bulk) + return false; + + do { + head = xsc_wq_cyc_get_head(wq); + + alloc = min_t(int, wqe_bulk, xsc_wq_cyc_missing(wq)); + if (alloc < wqe_bulk && alloc >= wqe_bulk_min) + alloc = alloc & 0xfffffffe; + + if (alloc > 0) { + err = xsc_alloc_rx_wqes(rq, head, alloc); + if (unlikely(err)) + break; + + xsc_wq_cyc_push_n(wq, 
alloc);
+        }
+    } while (xsc_wq_cyc_missing(wq) >= wqe_bulk_min);
+
+    dma_wmb();
+
+    /* ensure wqes are visible to device before updating doorbell record */
+    xsc_rq_notify_hw(rq);
+
+    return !!err;
+}
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.c
index caf61ec50..a1b7ef0d1 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.c
@@ -41,10 +41,96 @@ static bool xsc_channel_no_affinity_change(struct xsc_channel *c)
     return cpumask_test_cpu(current_cpu, c->aff_mask);
 }
 
+static void xsc_dump_error_sqcqe(struct xsc_sq *sq,
+                                 struct xsc_cqe *cqe)
+{
+    u32 ci = xsc_cqwq_get_ci(&sq->cq.wq);
+    struct net_device *netdev = sq->channel->netdev;
+
+    net_err_ratelimited("Err cqe on dev %s cqn=0x%x ci=0x%x sqn=0x%x err_code=0x%x qpid=0x%x\n",
+                        netdev->name, sq->cq.xcq.cqn, ci,
+                        sq->sqn, get_cqe_opcode(cqe), cqe->qp_id);
+}
+
 static bool xsc_poll_tx_cq(struct xsc_cq *cq, int napi_budget)
 {
-    // TBD
-    return true;
+    struct xsc_adapter *adapter;
+    struct device *dev;
+    struct xsc_sq *sq;
+    struct xsc_cqe *cqe;
+    u32 dma_fifo_cc;
+    u32 nbytes = 0;
+    u16 npkts = 0;
+    u16 sqcc;
+    int i = 0;
+
+    sq = container_of(cq, struct xsc_sq, cq);
+    if (!test_bit(XSC_ETH_SQ_STATE_ENABLED, &sq->state))
+        return false;
+
+    adapter = sq->channel->adapter;
+    dev = adapter->dev;
+
+    cqe = xsc_cqwq_get_cqe(&cq->wq);
+    if (!cqe)
+        goto out;
+
+    if (unlikely(get_cqe_opcode(cqe) & BIT(7))) {
+        xsc_dump_error_sqcqe(sq, cqe);
+        return false;
+    }
+
+    sqcc = sq->cc;
+
+    /* avoid dirtying sq cache line every cqe */
+    dma_fifo_cc = sq->dma_fifo_cc;
+    do {
+        struct xsc_tx_wqe_info *wi;
+        struct sk_buff *skb;
+        int j;
+        u16 ci;
+
+        xsc_cqwq_pop(&cq->wq);
+
+        ci = xsc_wq_cyc_ctr2ix(&sq->wq, sqcc);
+        wi = &sq->db.wqe_info[ci];
+        skb = wi->skb;
+
+        /* a completion may arrive for a WQE slot without an skb
+         * attached; skip it
+         */
+        if (unlikely(!skb))
+            continue;
+
+        for (j = 0; j < wi->num_dma; j++) {
+            struct xsc_sq_dma *dma = xsc_dma_get(sq, dma_fifo_cc++);
+
+            xsc_tx_dma_unmap(dev, dma);
+        }
+
+        npkts++;
+        nbytes += wi->num_bytes;
+        sqcc += wi->num_wqebbs;
+        napi_consume_skb(skb, 0);
+
+    } while ((++i <= napi_budget) && (cqe = xsc_cqwq_get_cqe(&cq->wq)));
+
+    xsc_cq_notify_hw(cq);
+
+    /* ensure cq space is freed before enabling more cqes */
+    wmb();
+
+    sq->dma_fifo_cc = dma_fifo_cc;
+    sq->cc = sqcc;
+
+    netdev_tx_completed_queue(sq->txq, npkts, nbytes);
+
+    if (netif_tx_queue_stopped(sq->txq) &&
+        xsc_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, sq->stop_room)) {
+        netif_tx_wake_queue(sq->txq);
+    }
+
+out:
+    return (i == napi_budget);
 }
 
 int xsc_eth_napi_poll(struct napi_struct *napi, int budget)
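The xsc_cqwq_get_cqe() helper added to xsc_eth_txrx.h below decides CQE validity from an ownership bit whose expected value flips on every wrap of the ring. A standalone sketch of that protocol (a 4-entry CQ is an assumed example size):

#include <stdio.h>

#define CQ_SIZE 4    /* assumed power-of-two CQ ring size */

int main(void)
{
    unsigned int cc;    /* free-running consumer counter */

    for (cc = 0; cc < 10; cc++) {
        unsigned int ci = cc & (CQ_SIZE - 1);    /* slot index */
        unsigned int wrap = cc / CQ_SIZE;        /* like xsc_cqwq_get_wrap_cnt() */

        /* a CQE in slot ci is valid once hardware has written
         * owner == (wrap & 1); software flips its expectation each
         * lap, so stale entries from the previous lap are never
         * mistaken for new ones
         */
        printf("cc=%u slot=%u expected owner=%u\n", cc, ci, wrap & 1);
    }
    return 0;
}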
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h
index f14ff7abf..873392665 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h
@@ -59,4 +59,32 @@ static inline bool xsc_wqc_has_room_for(struct xsc_wq_cyc *wq,
     return (xsc_wq_cyc_ctr2ix(wq, cc - pc) >= n) || (cc == pc);
 }
 
+static inline struct xsc_cqe *xsc_cqwq_get_cqe_buff(struct xsc_cqwq *wq, u32 ix)
+{
+    struct xsc_cqe *cqe = xsc_frag_buf_get_wqe(&wq->fbc, ix);
+
+    return cqe;
+}
+
+static inline struct xsc_cqe *xsc_cqwq_get_cqe(struct xsc_cqwq *wq)
+{
+    struct xsc_cqe *cqe;
+    u8 cqe_ownership_bit;
+    u8 sw_ownership_val;
+    u32 ci = xsc_cqwq_get_ci(wq);
+
+    cqe = xsc_cqwq_get_cqe_buff(wq, ci);
+
+    cqe_ownership_bit = cqe->owner & XSC_CQE_OWNER_MASK;
+    sw_ownership_val = xsc_cqwq_get_wrap_cnt(wq) & 1;
+
+    if (cqe_ownership_bit != sw_ownership_val)
+        return NULL;
+
+    /* ensure cqe content is read after cqe ownership bit */
+    dma_rmb();
+
+    return cqe;
+}
+
 #endif /* XSC_RXTX_H */

From patchwork Wed Jan 15 10:23:15 2025
X-Patchwork-Submitter: Xin Tian
X-Patchwork-Id: 13940208
X-Patchwork-Delegate: kuba@kernel.org
From: "Xin Tian"
Subject: [PATCH v3 14/14] net-next/yunsilicon: add ndo_get_stats64
Date: Wed, 15 Jan 2025 18:23:15 +0800
Message-Id: <20250115102314.3541496-15-tianx@yunsilicon.com>
In-Reply-To: <20250115102242.3541496-1-tianx@yunsilicon.com>
References: <20250115102242.3541496-1-tianx@yunsilicon.com>
X-Mailing-List: netdev@vger.kernel.org

Support NIC stats: count per-queue packets, bytes and drops in the
datapath and fold them into rtnl_link_stats64 via ndo_get_stats64.
Co-developed-by: Honggang Wei Signed-off-by: Honggang Wei Co-developed-by: Lei Yan Signed-off-by: Lei Yan Signed-off-by: Xin Tian --- .../net/ethernet/yunsilicon/xsc/net/Makefile | 2 +- .../net/ethernet/yunsilicon/xsc/net/main.c | 24 ++++++++++- .../net/ethernet/yunsilicon/xsc/net/xsc_eth.h | 3 ++ .../ethernet/yunsilicon/xsc/net/xsc_eth_rx.c | 4 ++ .../yunsilicon/xsc/net/xsc_eth_stats.c | 42 +++++++++++++++++++ .../yunsilicon/xsc/net/xsc_eth_stats.h | 33 +++++++++++++++ .../ethernet/yunsilicon/xsc/net/xsc_eth_tx.c | 5 +++ .../ethernet/yunsilicon/xsc/net/xsc_queue.h | 2 + 8 files changed, 112 insertions(+), 3 deletions(-) create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.c create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.h diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile index 7cfc2aaa2..e1cfa3cdf 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile +++ b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile @@ -6,4 +6,4 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc_eth.o -xsc_eth-y := main.o xsc_eth_wq.o xsc_eth_txrx.o xsc_eth_tx.o xsc_eth_rx.o +xsc_eth-y := main.o xsc_eth_wq.o xsc_eth_txrx.o xsc_eth_tx.o xsc_eth_rx.o xsc_eth_stats.o diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/main.c b/drivers/net/ethernet/yunsilicon/xsc/net/main.c index b52f0db29..d2269794f 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/main.c +++ b/drivers/net/ethernet/yunsilicon/xsc/net/main.c @@ -541,12 +541,15 @@ static int xsc_eth_open_qp_sq(struct xsc_channel *c, struct xsc_core_device *xdev = adapter->xdev; u8 q_log_size = psq_param->sq_attr.q_log_size; u8 ele_log_size = psq_param->sq_attr.ele_log_size; + struct xsc_stats *stats = adapter->stats; + struct xsc_channel_stats *channel_stats = &stats->channel_stats[c->chl_idx]; struct xsc_create_qp_mbox_in *in; struct xsc_modify_raw_qp_mbox_in *modify_in; int hw_npages; int inlen; int ret; + psq->stats = &channel_stats->sq[sq_idx]; psq_param->wq.db_numa_node = cpu_to_node(c->cpu); ret = xsc_eth_wq_cyc_create(xdev, &psq_param->wq, @@ -840,10 +843,13 @@ static int xsc_eth_alloc_rq(struct xsc_channel *c, struct page_pool_params pagepool_params = { 0 }; u32 pool_size = 1 << q_log_size; u8 ele_log_size = prq_param->rq_attr.ele_log_size; + struct xsc_stats *stats = c->adapter->stats; + struct xsc_channel_stats *channel_stats = &stats->channel_stats[c->chl_idx]; int wq_sz; int i, f; int ret = 0; + prq->stats = &channel_stats->rq; prq_param->wq.db_numa_node = cpu_to_node(c->cpu); ret = xsc_eth_wq_cyc_create(c->adapter->xdev, &prq_param->wq, @@ -1613,6 +1619,13 @@ static int xsc_eth_close(struct net_device *netdev) return ret; } +static void xsc_eth_get_stats(struct net_device *netdev, struct rtnl_link_stats64 *stats) +{ + struct xsc_adapter *adapter = netdev_priv(netdev); + + xsc_eth_fold_sw_stats64(adapter, stats); +} + static int xsc_eth_set_hw_mtu(struct xsc_core_device *xdev, u16 mtu, u16 rx_buf_sz) { struct xsc_set_mtu_mbox_in in; @@ -1643,6 +1656,7 @@ static const struct net_device_ops xsc_netdev_ops = { .ndo_open = xsc_eth_open, .ndo_stop = xsc_eth_close, .ndo_start_xmit = xsc_eth_xmit_start, + .ndo_get_stats64 = xsc_eth_get_stats, }; static void xsc_eth_build_nic_netdev(struct xsc_adapter *adapter) @@ -1851,14 +1865,20 @@ static int xsc_eth_probe(struct auxiliary_device *adev, goto err_nic_cleanup; } + adapter->stats = kvzalloc(sizeof(*adapter->stats), GFP_KERNEL); + if 
(!adapter->stats) {
+        err = -ENOMEM;
+        goto err_detach;
+    }
+
     err = register_netdev(netdev);
     if (err) {
         netdev_err(netdev, "register_netdev failed, err=%d\n", err);
-        goto err_detach;
+        goto err_free_stats;
     }
 
     return 0;
 
+err_free_stats:
+    kvfree(adapter->stats);
 err_detach:
     xsc_eth_detach(xdev, adapter);
 err_nic_cleanup:
@@ -1885,7 +1905,7 @@ static void xsc_eth_remove(struct auxiliary_device *adev)
     }
 
     unregister_netdev(adapter->netdev);
-
+    kvfree(adapter->stats);
     free_netdev(adapter->netdev);
 
     xdev->eth_priv = NULL;
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
index 87e2a72d3..650f92c48 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
@@ -9,6 +9,7 @@
 #include
 #include "common/xsc_device.h"
 #include "xsc_eth_common.h"
+#include "xsc_eth_stats.h"
 
 #define XSC_INVALID_LKEY 0x100
 
@@ -48,6 +49,8 @@ struct xsc_adapter {
 
     u32 status;
     struct mutex status_lock; // protect status
+
+    struct xsc_stats *stats;
 };
 
 #endif /* __XSC_ETH_H */
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
index a4428e629..83cf31239 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
@@ -139,6 +139,10 @@ static void xsc_complete_rx_cqe(struct xsc_rq *rq,
                                 struct sk_buff *skb,
                                 struct xsc_wqe_frag_info *wi)
 {
+    struct xsc_rq_stats *stats = rq->stats;
+
+    stats->packets++;
+    stats->bytes += cqe_bcnt;
     xsc_build_rx_skb(cqe, cqe_bcnt, rq, skb, wi);
 }
 
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.c
new file mode 100644
index 000000000..9fe0e831b
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.c
@@ -0,0 +1,42 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#include "xsc_eth_stats.h"
+#include "xsc_eth.h"
+
+static int xsc_get_netdev_max_channels(struct xsc_adapter *adapter)
+{
+    struct net_device *netdev = adapter->netdev;
+
+    return min_t(unsigned int, netdev->num_rx_queues,
+                 netdev->num_tx_queues);
+}
+
+static int xsc_get_netdev_max_tc(struct xsc_adapter *adapter)
+{
+    return adapter->nic_param.num_tc;
+}
+
+void xsc_eth_fold_sw_stats64(struct xsc_adapter *adapter, struct rtnl_link_stats64 *s)
+{
+    int i, j;
+
+    for (i = 0; i < xsc_get_netdev_max_channels(adapter); i++) {
+        struct xsc_channel_stats *channel_stats = &adapter->stats->channel_stats[i];
+        struct xsc_rq_stats *rq_stats = &channel_stats->rq;
+
+        s->rx_packets += rq_stats->packets;
+        s->rx_bytes += rq_stats->bytes;
+
+        for (j = 0; j < xsc_get_netdev_max_tc(adapter); j++) {
+            struct xsc_sq_stats *sq_stats = &channel_stats->sq[j];
+
+            s->tx_packets += sq_stats->packets;
+            s->tx_bytes += sq_stats->bytes;
+            s->tx_dropped += sq_stats->dropped;
+        }
+    }
+}
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.h
new file mode 100644
index 000000000..10b2aa69b
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */ + +#ifndef __XSC_EN_STATS_H +#define __XSC_EN_STATS_H + +#include "xsc_eth_common.h" + +struct xsc_rq_stats { + u64 packets; + u64 bytes; +}; + +struct xsc_sq_stats { + u64 packets; + u64 bytes; + u64 dropped; +}; + +struct xsc_channel_stats { + struct xsc_sq_stats sq[XSC_MAX_NUM_TC]; + struct xsc_rq_stats rq; +} ____cacheline_aligned_in_smp; + +struct xsc_stats { + struct xsc_channel_stats channel_stats[XSC_ETH_MAX_NUM_CHANNELS]; +}; + +void xsc_eth_fold_sw_stats64(struct xsc_adapter *adapter, struct rtnl_link_stats64 *s); + +#endif /* XSC_EN_STATS_H */ diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_tx.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_tx.c index bd9c4e1c0..a24e05c26 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_tx.c +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_tx.c @@ -201,6 +201,7 @@ static uint32_t xsc_eth_xmit_frame(struct sk_buff *skb, struct xsc_send_wqe_ctrl_seg *cseg; struct xsc_wqe_data_seg *dseg; struct xsc_tx_wqe_info *wi; + struct xsc_sq_stats *stats = sq->stats; struct xsc_core_device *xdev = sq->cq.xdev; u16 ds_cnt; u16 mss, ihs, headlen; @@ -219,11 +220,13 @@ static uint32_t xsc_eth_xmit_frame(struct sk_buff *skb, mss = skb_shinfo(skb)->gso_size; ihs = xsc_tx_get_gso_ihs(sq, skb); num_bytes = skb->len; + stats->packets += skb_shinfo(skb)->gso_segs; } else { opcode = XSC_OPCODE_RAW; mss = 0; ihs = 0; num_bytes = skb->len; + stats->packets++; } /*linear data in skb*/ @@ -261,10 +264,12 @@ static uint32_t xsc_eth_xmit_frame(struct sk_buff *skb, xsc_txwqe_complete(sq, skb, opcode, ds_cnt, num_wqebbs, num_bytes, num_dma, wi); + stats->bytes += num_bytes; return NETDEV_TX_OK; err_drop: + stats->dropped++; dev_kfree_skb_any(skb); return NETDEV_TX_OK; diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h index 967d46e7e..0d342e846 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h @@ -129,6 +129,7 @@ struct xsc_rq { unsigned long state; struct work_struct recover_work; + struct xsc_rq_stats *stats; u32 hw_mtu; u32 frags_sz; @@ -177,6 +178,7 @@ struct xsc_sq { /* read only */ struct xsc_wq_cyc wq; u32 dma_fifo_mask; + struct xsc_sq_stats *stats; struct { struct xsc_sq_dma *dma_fifo; struct xsc_tx_wqe_info *wqe_info;
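The per-queue counters above are plain u64s updated only from the NAPI and xmit paths, and ndo_get_stats64 simply sums them across channels. A standalone sketch of the folding done by xsc_eth_fold_sw_stats64() (two channels with one traffic class are assumed example sizes):

#include <stdio.h>

struct rq_stats { unsigned long long packets, bytes; };
struct sq_stats { unsigned long long packets, bytes, dropped; };
struct channel_stats { struct sq_stats sq[1]; struct rq_stats rq; };

int main(void)
{
    struct channel_stats ch[2] = {
        { .sq = { { 10, 15000, 1 } }, .rq = { 20, 30000 } },
        { .sq = { {  5,  7500, 0 } }, .rq = {  8, 12000 } },
    };
    unsigned long long rx_pkts = 0, tx_pkts = 0, tx_dropped = 0;
    int i;

    /* same shape as xsc_eth_fold_sw_stats64(): sum every channel */
    for (i = 0; i < 2; i++) {
        rx_pkts += ch[i].rq.packets;
        tx_pkts += ch[i].sq[0].packets;
        tx_dropped += ch[i].sq[0].dropped;
    }
    printf("rx=%llu tx=%llu dropped=%llu\n", rx_pkts, tx_pkts, tx_dropped);
    return 0;
}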