From patchwork Mon Feb 24 17:24:18 2025
X-Patchwork-Submitter: Xin Tian
X-Patchwork-Id: 13988567
Subject: [PATCH net-next v5 01/14] xsc: Add xsc driver basic framework
From: "Xin Tian"
Date: Tue, 25 Feb 2025 01:24:18 +0800
Message-Id: <20250224172416.2455751-2-tianx@yunsilicon.com>
In-Reply-To: <20250224172416.2455751-1-tianx@yunsilicon.com>
References: <20250224172416.2455751-1-tianx@yunsilicon.com>
X-Mailing-List: netdev@vger.kernel.org
X-Mailer: git-send-email 2.25.1

1. Add the Yunsilicon xsc driver's basic build framework, including the xsc_pci driver and the xsc_eth driver.
2. Implement PCI device initialization.
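For reviewers, the probe path added in main.c condenses to the sketch below; xsc_probe_flow is a hypothetical name used only for illustration, and error unwinding is elided (the real xsc_pci_probe() in the diff handles it):

/* Condensed sketch of the probe flow this patch adds (illustration only;
 * error unwinding elided, see xsc_pci_probe() in main.c for the real code).
 */
static int xsc_probe_flow(struct pci_dev *pci_dev,
			  const struct pci_device_id *id)
{
	struct xsc_core_device *xdev;

	/* allocate the per-device context and hang it off the pci_dev */
	xdev = kzalloc(sizeof(*xdev), GFP_KERNEL);
	xdev->pdev = pci_dev;
	xdev->device = &pci_dev->dev;
	pci_set_drvdata(pci_dev, xdev);

	/* enable the device, request BAR0, set DMA masks, ioremap the BAR */
	xsc_pci_init(xdev, id);

	/* init the pci_state/intf_state mutexes and per-device resources */
	xsc_core_dev_init(xdev);

	return 0;
}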
Co-developed-by: Honggang Wei Signed-off-by: Honggang Wei Co-developed-by: Lei Yan Signed-off-by: Lei Yan Signed-off-by: Xin Tian --- MAINTAINERS | 7 + drivers/net/ethernet/Kconfig | 1 + drivers/net/ethernet/Makefile | 1 + drivers/net/ethernet/yunsilicon/Kconfig | 26 ++ drivers/net/ethernet/yunsilicon/Makefile | 8 + .../ethernet/yunsilicon/xsc/common/xsc_core.h | 53 ++++ .../net/ethernet/yunsilicon/xsc/net/Kconfig | 17 ++ .../net/ethernet/yunsilicon/xsc/net/Makefile | 9 + .../net/ethernet/yunsilicon/xsc/pci/Kconfig | 16 ++ .../net/ethernet/yunsilicon/xsc/pci/Makefile | 9 + .../net/ethernet/yunsilicon/xsc/pci/main.c | 255 ++++++++++++++++++ 11 files changed, 402 insertions(+) create mode 100644 drivers/net/ethernet/yunsilicon/Kconfig create mode 100644 drivers/net/ethernet/yunsilicon/Makefile create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/Kconfig create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/Makefile create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/Makefile create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/main.c diff --git a/MAINTAINERS b/MAINTAINERS index cc40a9d9b..4892dd63e 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -25285,6 +25285,13 @@ S: Maintained F: Documentation/input/devices/yealink.rst F: drivers/input/misc/yealink.* +YUNSILICON XSC DRIVERS +M: Honggang Wei +M: Xin Tian +L: netdev@vger.kernel.org +S: Maintained +F: drivers/net/ethernet/yunsilicon/xsc + Z3FOLD COMPRESSED PAGE ALLOCATOR M: Vitaly Wool R: Miaohe Lin diff --git a/drivers/net/ethernet/Kconfig b/drivers/net/ethernet/Kconfig index 0baac25db..aa6016597 100644 --- a/drivers/net/ethernet/Kconfig +++ b/drivers/net/ethernet/Kconfig @@ -82,6 +82,7 @@ source "drivers/net/ethernet/i825xx/Kconfig" source "drivers/net/ethernet/ibm/Kconfig" source "drivers/net/ethernet/intel/Kconfig" source "drivers/net/ethernet/xscale/Kconfig" +source "drivers/net/ethernet/yunsilicon/Kconfig" config JME tristate "JMicron(R) PCI-Express Gigabit Ethernet support" diff --git a/drivers/net/ethernet/Makefile b/drivers/net/ethernet/Makefile index c03203439..c16c34d4b 100644 --- a/drivers/net/ethernet/Makefile +++ b/drivers/net/ethernet/Makefile @@ -51,6 +51,7 @@ obj-$(CONFIG_NET_VENDOR_INTEL) += intel/ obj-$(CONFIG_NET_VENDOR_I825XX) += i825xx/ obj-$(CONFIG_NET_VENDOR_MICROSOFT) += microsoft/ obj-$(CONFIG_NET_VENDOR_XSCALE) += xscale/ +obj-$(CONFIG_NET_VENDOR_YUNSILICON) += yunsilicon/ obj-$(CONFIG_JME) += jme.o obj-$(CONFIG_KORINA) += korina.o obj-$(CONFIG_LANTIQ_ETOP) += lantiq_etop.o diff --git a/drivers/net/ethernet/yunsilicon/Kconfig b/drivers/net/ethernet/yunsilicon/Kconfig new file mode 100644 index 000000000..ff57fedf8 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/Kconfig @@ -0,0 +1,26 @@ +# SPDX-License-Identifier: GPL-2.0 +# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. +# All rights reserved. +# Yunsilicon driver configuration +# + +config NET_VENDOR_YUNSILICON + bool "Yunsilicon devices" + default y + depends on PCI + depends on ARM64 || X86_64 + help + If you have a network (Ethernet) device belonging to this class, + say Y. + + Note that the answer to this question doesn't directly affect the + kernel: saying N will just cause the configurator to skip all + the questions about Yunsilicon cards. If you say Y, you will be asked + for your specific card in the following questions. 
+ +if NET_VENDOR_YUNSILICON + +source "drivers/net/ethernet/yunsilicon/xsc/net/Kconfig" +source "drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig" + +endif # NET_VENDOR_YUNSILICON diff --git a/drivers/net/ethernet/yunsilicon/Makefile b/drivers/net/ethernet/yunsilicon/Makefile new file mode 100644 index 000000000..6fc8259a7 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/Makefile @@ -0,0 +1,8 @@ +# SPDX-License-Identifier: GPL-2.0 +# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. +# All rights reserved. +# Makefile for the Yunsilicon device drivers. +# + +# obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc/net/ +obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc/pci/ \ No newline at end of file diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h new file mode 100644 index 000000000..6627a176a --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h @@ -0,0 +1,53 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + */ + +#ifndef __XSC_CORE_H +#define __XSC_CORE_H + +#include + +#define XSC_PCI_VENDOR_ID 0x1f67 + +#define XSC_MC_PF_DEV_ID 0x1011 +#define XSC_MC_VF_DEV_ID 0x1012 +#define XSC_MC_PF_DEV_ID_DIAMOND 0x1021 + +#define XSC_MF_HOST_PF_DEV_ID 0x1051 +#define XSC_MF_HOST_VF_DEV_ID 0x1052 +#define XSC_MF_SOC_PF_DEV_ID 0x1053 + +#define XSC_MS_PF_DEV_ID 0x1111 +#define XSC_MS_VF_DEV_ID 0x1112 + +#define XSC_MV_HOST_PF_DEV_ID 0x1151 +#define XSC_MV_HOST_VF_DEV_ID 0x1152 +#define XSC_MV_SOC_PF_DEV_ID 0x1153 + +struct xsc_dev_resource { + /* protect buffer allocation according to numa node */ + struct mutex alloc_mutex; +}; + +enum xsc_pci_state { + XSC_PCI_STATE_DISABLED, + XSC_PCI_STATE_ENABLED, +}; + +struct xsc_core_device { + struct pci_dev *pdev; + struct device *device; + struct xsc_dev_resource *dev_res; + int numa_node; + + void __iomem *bar; + int bar_num; + + struct mutex pci_state_mutex; /* protect pci_state */ + enum xsc_pci_state pci_state; + struct mutex intf_state_mutex; /* protect intf_state */ + unsigned long intf_state; +}; + +#endif diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/Kconfig b/drivers/net/ethernet/yunsilicon/xsc/net/Kconfig new file mode 100644 index 000000000..de743487e --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/net/Kconfig @@ -0,0 +1,17 @@ +# SPDX-License-Identifier: GPL-2.0 +# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. +# All rights reserved. +# Yunsilicon driver configuration +# + +config YUNSILICON_XSC_ETH + tristate "Yunsilicon XSC ethernet driver" + default n + depends on YUNSILICON_XSC_PCI + depends on NET + help + This driver provides ethernet support for + Yunsilicon XSC devices. + + To compile this driver as a module, choose M here. The module + will be called xsc_eth. diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile new file mode 100644 index 000000000..2811433af --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile @@ -0,0 +1,9 @@ +# SPDX-License-Identifier: GPL-2.0 +# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. +# All rights reserved. 
+ +ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc + +obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc_eth.o + +xsc_eth-y := main.o \ No newline at end of file diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig b/drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig new file mode 100644 index 000000000..2b6d79905 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Kconfig @@ -0,0 +1,16 @@ +# SPDX-License-Identifier: GPL-2.0 +# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. +# All rights reserved. +# Yunsilicon PCI configuration +# + +config YUNSILICON_XSC_PCI + tristate "Yunsilicon XSC PCI driver" + default n + select PAGE_POOL + help + This driver is common for Yunsilicon XSC + ethernet and RDMA drivers. + + To compile this driver as a module, choose M here. The module + will be called xsc_pci. diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile new file mode 100644 index 000000000..709270df8 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile @@ -0,0 +1,9 @@ +# SPDX-License-Identifier: GPL-2.0 +# Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. +# All rights reserved. + +ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc + +obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o + +xsc_pci-y := main.o diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c new file mode 100644 index 000000000..ec3181a8e --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c @@ -0,0 +1,255 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + */ + +#include "common/xsc_core.h" + +static const struct pci_device_id xsc_pci_id_table[] = { + { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MC_PF_DEV_ID) }, + { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MC_PF_DEV_ID_DIAMOND) }, + { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MF_HOST_PF_DEV_ID) }, + { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MF_SOC_PF_DEV_ID) }, + { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MS_PF_DEV_ID) }, + { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MV_HOST_PF_DEV_ID) }, + { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MV_SOC_PF_DEV_ID) }, + { 0 } +}; + +static int set_dma_caps(struct pci_dev *pdev) +{ + int err; + + err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)); + if (err) + err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); + else + err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64)); + + if (!err) + dma_set_max_seg_size(&pdev->dev, SZ_2G); + + return err; +} + +static int xsc_pci_enable_device(struct xsc_core_device *xdev) +{ + struct pci_dev *pdev = xdev->pdev; + int err = 0; + + mutex_lock(&xdev->pci_state_mutex); + if (xdev->pci_state == XSC_PCI_STATE_DISABLED) { + err = pci_enable_device(pdev); + if (!err) + xdev->pci_state = XSC_PCI_STATE_ENABLED; + } + mutex_unlock(&xdev->pci_state_mutex); + + return err; +} + +static void xsc_pci_disable_device(struct xsc_core_device *xdev) +{ + struct pci_dev *pdev = xdev->pdev; + + mutex_lock(&xdev->pci_state_mutex); + if (xdev->pci_state == XSC_PCI_STATE_ENABLED) { + pci_disable_device(pdev); + xdev->pci_state = XSC_PCI_STATE_DISABLED; + } + mutex_unlock(&xdev->pci_state_mutex); +} + +static int xsc_pci_init(struct xsc_core_device *xdev, + const struct pci_device_id *id) +{ + struct pci_dev *pdev = xdev->pdev; + void __iomem *bar_base; + int bar_num = 0; + int err; + + xdev->numa_node = dev_to_node(&pdev->dev); + + err = xsc_pci_enable_device(xdev); + if (err) { + 
pci_err(pdev, "failed to enable PCI device: err=%d\n", err); + goto err_ret; + } + + err = pci_request_region(pdev, bar_num, KBUILD_MODNAME); + if (err) { + pci_err(pdev, "failed to request %s pci_region=%d: err=%d\n", + KBUILD_MODNAME, bar_num, err); + goto err_disable; + } + + pci_set_master(pdev); + + err = set_dma_caps(pdev); + if (err) { + pci_err(pdev, "failed to set DMA capabilities mask: err=%d\n", + err); + goto err_clr_master; + } + + bar_base = pci_ioremap_bar(pdev, bar_num); + if (!bar_base) { + pci_err(pdev, "failed to ioremap %s bar%d\n", KBUILD_MODNAME, + bar_num); + err = -ENOMEM; + goto err_clr_master; + } + + err = pci_save_state(pdev); + if (err) { + pci_err(pdev, "pci_save_state failed: err=%d\n", err); + goto err_io_unmap; + } + + xdev->bar_num = bar_num; + xdev->bar = bar_base; + + return 0; + +err_io_unmap: + pci_iounmap(pdev, bar_base); +err_clr_master: + pci_clear_master(pdev); + pci_release_region(pdev, bar_num); +err_disable: + xsc_pci_disable_device(xdev); +err_ret: + return err; +} + +static void xsc_pci_fini(struct xsc_core_device *xdev) +{ + struct pci_dev *pdev = xdev->pdev; + + if (xdev->bar) + pci_iounmap(pdev, xdev->bar); + pci_clear_master(pdev); + pci_release_region(pdev, xdev->bar_num); + xsc_pci_disable_device(xdev); +} + +static int xsc_dev_res_init(struct xsc_core_device *xdev) +{ + struct xsc_dev_resource *dev_res; + + dev_res = kvzalloc(sizeof(*dev_res), GFP_KERNEL); + if (!dev_res) + return -ENOMEM; + + xdev->dev_res = dev_res; + mutex_init(&dev_res->alloc_mutex); + + return 0; +} + +static void xsc_dev_res_cleanup(struct xsc_core_device *xdev) +{ + kfree(xdev->dev_res); +} + +static int xsc_core_dev_init(struct xsc_core_device *xdev) +{ + int err; + + mutex_init(&xdev->pci_state_mutex); + mutex_init(&xdev->intf_state_mutex); + + err = xsc_dev_res_init(xdev); + if (err) { + pci_err(xdev->pdev, "xsc dev res init failed %d\n", err); + goto out; + } + + return 0; +out: + return err; +} + +static void xsc_core_dev_cleanup(struct xsc_core_device *xdev) +{ + xsc_dev_res_cleanup(xdev); +} + +static int xsc_pci_probe(struct pci_dev *pci_dev, + const struct pci_device_id *id) +{ + struct xsc_core_device *xdev; + int err; + + xdev = kzalloc(sizeof(*xdev), GFP_KERNEL); + if (!xdev) + return -ENOMEM; + + xdev->pdev = pci_dev; + xdev->device = &pci_dev->dev; + + pci_set_drvdata(pci_dev, xdev); + err = xsc_pci_init(xdev, id); + if (err) { + pci_err(pci_dev, "xsc_pci_init failed %d\n", err); + goto err_unset_pci_drvdata; + } + + err = xsc_core_dev_init(xdev); + if (err) { + pci_err(pci_dev, "xsc_core_dev_init failed %d\n", err); + goto err_pci_fini; + } + + return 0; +err_pci_fini: + xsc_pci_fini(xdev); +err_unset_pci_drvdata: + pci_set_drvdata(pci_dev, NULL); + kfree(xdev); + + return err; +} + +static void xsc_pci_remove(struct pci_dev *pci_dev) +{ + struct xsc_core_device *xdev = pci_get_drvdata(pci_dev); + + xsc_core_dev_cleanup(xdev); + xsc_pci_fini(xdev); + pci_set_drvdata(pci_dev, NULL); + kfree(xdev); +} + +static struct pci_driver xsc_pci_driver = { + .name = "xsc-pci", + .id_table = xsc_pci_id_table, + .probe = xsc_pci_probe, + .remove = xsc_pci_remove, +}; + +static int __init xsc_init(void) +{ + int err; + + err = pci_register_driver(&xsc_pci_driver); + if (err) { + pr_err("failed to register pci driver\n"); + goto out; + } + return 0; + +out: + return err; +} + +static void __exit xsc_fini(void) +{ + pci_unregister_driver(&xsc_pci_driver); +} + +module_init(xsc_init); +module_exit(xsc_fini); + +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("Yunsilicon 
XSC PCI driver"); From patchwork Mon Feb 24 17:24:20 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xin Tian X-Patchwork-Id: 13988571 X-Patchwork-Delegate: kuba@kernel.org Received: from lf-2-51.ptr.blmpb.com (lf-2-51.ptr.blmpb.com [101.36.218.51]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id ED5B92641D0 for ; Mon, 24 Feb 2025 17:27:51 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=101.36.218.51 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740418076; cv=none; b=t6i7xAg/wPmf5JzFN8BMKdu2EepXecEraGzwjnXg29L8KSZ3vaAYsv0WwGhM5bCLDJMCPBSdSnubrFJBXG6xEMoa8/a1GBXXZuojbjJFcFeR9E9fGNc9olfP55gcPY759F5MmUDDV2AOwfmLK6yMbm+pGQkpYa9iS8bHb/sjuFY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740418076; c=relaxed/simple; bh=uA0+tN2FQcZYcyvz47tF7wh4f2+jccIP+fRT81H5so8=; h=References:Date:Mime-Version:Cc:In-Reply-To:Subject:Message-Id: Content-Type:To:From; b=lj3ngdGB9vp5FdioqWOFefM2WYok07qGb+eoeSEBgqrbKscCxAD7XQrD2c1x/9HIVEl/vEtDd+wZg8boxrIKN1R5MZR4WDTpXYfNZ5xbuLoEBaJA5Vtv0x/3RcvlZWq9RONnMxuPlIKxiCymsF16uaEXubS0WMjk5nKwwzcOzN8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=yunsilicon.com; spf=pass smtp.mailfrom=yunsilicon.com; dkim=pass (2048-bit key) header.d=yunsilicon.com header.i=@yunsilicon.com header.b=I8w7qJBb; arc=none smtp.client-ip=101.36.218.51 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=yunsilicon.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=yunsilicon.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=yunsilicon.com header.i=@yunsilicon.com header.b="I8w7qJBb" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; s=feishu2403070942; d=yunsilicon.com; t=1740417862; h=from:subject: mime-version:from:date:message-id:subject:to:cc:reply-to:content-type: mime-version:in-reply-to:message-id; bh=gweB/W2aV6b7P6H+6SCcRJ2B4ogckWf9TRybInqgyqY=; b=I8w7qJBb5bXauDpBhOpU6Ugn6q3M3JqFiDZZBbYLmVm9UA2LErWmkidN0GmSm2KwdcpxTW uUPMcqoFZMOUCbx5gikJrIime2i0RtMQLJEPNcsfClzLQDx5de3f3O64rQEqrBmCLx3/2a zuxALKVAnX0CScMWA5oLp9Sx0hG68P9e73lQiNJpDndouvTECNWB/jjei0eDdIuX2q+x5k 63GF3zJvE1P+Ce8JkAKpov1oL1JXTqk8ko+TpP5yQQFD1srFodgFHECmoA54VvCDl2Iycg TsytS+LtURbIFl4WbW8VKavU4Fw3/2eNF+CMJ8k2qbYSnFBDlDYOXUX0U3v8OQ== References: <20250224172416.2455751-1-tianx@yunsilicon.com> Received: from ubuntu-liun.yunsilicon.com ([58.34.192.114]) by smtp.feishu.cn with ESMTPS; Tue, 25 Feb 2025 01:24:20 +0800 X-Original-From: Xin Tian Date: Tue, 25 Feb 2025 01:24:20 +0800 Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 Cc: , , , , , , , , , , , , X-Mailer: git-send-email 2.25.1 In-Reply-To: <20250224172416.2455751-1-tianx@yunsilicon.com> Subject: [PATCH net-next v5 02/14] xsc: Enable command queue X-Lms-Return-Path: Message-Id: <20250224172419.2455751-3-tianx@yunsilicon.com> To: From: "Xin Tian" X-Patchwork-Delegate: kuba@kernel.org The command queue is a hardware channel for sending commands between the driver and the firmware. xsc_cmd.h defines the command protocol structures. The logic for command allocation, sending, completion handling, and error handling is implemented in cmdq.c. 
Co-developed-by: Honggang Wei Signed-off-by: Honggang Wei Co-developed-by: Lei Yan Signed-off-by: Lei Yan Signed-off-by: Xin Tian --- .../yunsilicon/xsc/common/xsc_auto_hw.h | 94 + .../ethernet/yunsilicon/xsc/common/xsc_cmd.h | 630 +++++++ .../ethernet/yunsilicon/xsc/common/xsc_cmdq.h | 233 +++ .../ethernet/yunsilicon/xsc/common/xsc_core.h | 12 + .../yunsilicon/xsc/common/xsc_driver.h | 25 + .../net/ethernet/yunsilicon/xsc/pci/Makefile | 2 +- .../net/ethernet/yunsilicon/xsc/pci/cmdq.c | 1579 +++++++++++++++++ .../net/ethernet/yunsilicon/xsc/pci/main.c | 81 + 8 files changed, 2655 insertions(+), 1 deletion(-) create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_auto_hw.h create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmd.h create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmdq.h create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_driver.h create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/cmdq.c diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_auto_hw.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_auto_hw.h new file mode 100644 index 000000000..03c781de8 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_auto_hw.h @@ -0,0 +1,94 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + */ + +#ifndef __XSC_HW_H +#define __XSC_HW_H + +//hif_irq_csr_defines.h +#define HIF_IRQ_TBL2IRQ_TBL_RD_DONE_INT_MSIX_REG_ADDR 0xa1100070 + +//hif_cpm_csr_defines.h +#define HIF_CPM_LOCK_GET_REG_ADDR 0xa0000104 +#define HIF_CPM_LOCK_PUT_REG_ADDR 0xa0000108 +#define HIF_CPM_LOCK_AVAIL_REG_ADDR 0xa000010c +#define HIF_CPM_IDA_DATA_MEM_ADDR 0xa0000800 +#define HIF_CPM_IDA_CMD_REG_ADDR 0xa0000020 +#define HIF_CPM_IDA_ADDR_REG_ADDR 0xa0000080 +#define HIF_CPM_IDA_BUSY_REG_ADDR 0xa0000100 +#define HIF_CPM_IDA_CMD_REG_IDA_IDX_WIDTH 5 +#define HIF_CPM_IDA_CMD_REG_IDA_LEN_WIDTH 4 +#define HIF_CPM_IDA_CMD_REG_IDA_R0W1_WIDTH 1 +#define HIF_CPM_LOCK_GET_REG_LOCK_VLD_SHIFT 5 +#define HIF_CPM_LOCK_GET_REG_LOCK_IDX_MASK 0x1f +#define HIF_CPM_IDA_ADDR_REG_STRIDE 0x4 +#define HIF_CPM_CHIP_VERSION_H_REG_ADDR 0xa0000010 + +//mmc_csr_defines.h +#define MMC_MPT_TBL_MEM_DEPTH 32768 +#define MMC_MTT_TBL_MEM_DEPTH 262144 +#define MMC_MPT_TBL_MEM_WIDTH 256 +#define MMC_MTT_TBL_MEM_WIDTH 64 +#define MMC_MPT_TBL_MEM_ADDR 0xa4100000 +#define MMC_MTT_TBL_MEM_ADDR 0xa4200000 + +//clsf_dma_csr_defines.h +#define CLSF_DMA_DMA_UL_BUSY_REG_ADDR 0xa6010048 +#define CLSF_DMA_DMA_DL_DONE_REG_ADDR 0xa60100d0 +#define CLSF_DMA_DMA_DL_SUCCESS_REG_ADDR 0xa60100c0 +#define CLSF_DMA_ERR_CODE_CLR_REG_ADDR 0xa60100d4 +#define CLSF_DMA_DMA_RD_TABLE_ID_REG_DMA_RD_TBL_ID_MASK 0x7f +#define CLSF_DMA_DMA_RD_TABLE_ID_REG_ADDR 0xa6010020 +#define CLSF_DMA_DMA_RD_ADDR_REG_DMA_RD_BURST_NUM_SHIFT 16 +#define CLSF_DMA_DMA_RD_ADDR_REG_ADDR 0xa6010024 +#define CLSF_DMA_INDRW_RD_START_REG_ADDR 0xa6010028 + +//hif_tbl_csr_defines.h +#define HIF_TBL_TBL_DL_BUSY_REG_ADDR 0xa1060030 +#define HIF_TBL_TBL_DL_REQ_REG_TBL_DL_LEN_SHIFT 12 +#define HIF_TBL_TBL_DL_REQ_REG_TBL_DL_HOST_ID_SHIFT 11 +#define HIF_TBL_TBL_DL_REQ_REG_ADDR 0xa1060020 +#define HIF_TBL_TBL_DL_ADDR_L_REG_TBL_DL_ADDR_L_MASK 0xffffffff +#define HIF_TBL_TBL_DL_ADDR_L_REG_ADDR 0xa1060024 +#define HIF_TBL_TBL_DL_ADDR_H_REG_TBL_DL_ADDR_H_MASK 0xffffffff +#define HIF_TBL_TBL_DL_ADDR_H_REG_ADDR 0xa1060028 +#define HIF_TBL_TBL_DL_START_REG_ADDR 0xa106002c +#define HIF_TBL_TBL_UL_REQ_REG_TBL_UL_HOST_ID_SHIFT 11 +#define 
HIF_TBL_TBL_UL_REQ_REG_ADDR 0xa106007c +#define HIF_TBL_TBL_UL_ADDR_L_REG_TBL_UL_ADDR_L_MASK 0xffffffff +#define HIF_TBL_TBL_UL_ADDR_L_REG_ADDR 0xa1060080 +#define HIF_TBL_TBL_UL_ADDR_H_REG_TBL_UL_ADDR_H_MASK 0xffffffff +#define HIF_TBL_TBL_UL_ADDR_H_REG_ADDR 0xa1060084 +#define HIF_TBL_TBL_UL_START_REG_ADDR 0xa1060088 +#define HIF_TBL_MSG_RDY_REG_ADDR 0xa1060044 + +//hif_cmdqm_csr_defines.h +#define HIF_CMDQM_HOST_REQ_PID_MEM_ADDR 0xa1026000 +#define HIF_CMDQM_HOST_REQ_CID_MEM_ADDR 0xa1028000 +#define HIF_CMDQM_HOST_RSP_PID_MEM_ADDR 0xa102e000 +#define HIF_CMDQM_HOST_RSP_CID_MEM_ADDR 0xa1030000 +#define HIF_CMDQM_HOST_REQ_BUF_BASE_H_ADDR_MEM_ADDR 0xa1022000 +#define HIF_CMDQM_HOST_REQ_BUF_BASE_L_ADDR_MEM_ADDR 0xa1024000 +#define HIF_CMDQM_HOST_RSP_BUF_BASE_H_ADDR_MEM_ADDR 0xa102a000 +#define HIF_CMDQM_HOST_RSP_BUF_BASE_L_ADDR_MEM_ADDR 0xa102c000 +#define HIF_CMDQM_VECTOR_ID_MEM_ADDR 0xa1034000 +#define HIF_CMDQM_Q_ELEMENT_SZ_REG_ADDR 0xa1020020 +#define HIF_CMDQM_HOST_Q_DEPTH_REG_ADDR 0xa1020028 +#define HIF_CMDQM_HOST_VF_ERR_STS_MEM_ADDR 0xa1032000 + +//PSV use +//hif_irq_csr_defines.h +#define HIF_IRQ_CONTROL_TBL_MEM_ADDR 0xa1102000 +#define HIF_IRQ_INT_DB_REG_ADDR 0xa11000b4 +#define HIF_IRQ_CFG_VECTOR_TABLE_BUSY_REG_ADDR 0xa1100114 +#define HIF_IRQ_CFG_VECTOR_TABLE_ADDR_REG_ADDR 0xa11000f0 +#define HIF_IRQ_CFG_VECTOR_TABLE_CMD_REG_ADDR 0xa11000ec +#define HIF_IRQ_CFG_VECTOR_TABLE_MSG_LADDR_REG_ADDR 0xa11000f4 +#define HIF_IRQ_CFG_VECTOR_TABLE_MSG_UADDR_REG_ADDR 0xa11000f8 +#define HIF_IRQ_CFG_VECTOR_TABLE_MSG_DATA_REG_ADDR 0xa11000fc +#define HIF_IRQ_CFG_VECTOR_TABLE_CTRL_REG_ADDR 0xa1100100 +#define HIF_IRQ_CFG_VECTOR_TABLE_START_REG_ADDR 0xa11000e8 + +#endif /* __XSC_HW_H */ diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmd.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmd.h new file mode 100644 index 000000000..15d1fa134 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmd.h @@ -0,0 +1,630 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. 
+ */ + +#ifndef __XSC_CMD_H +#define __XSC_CMD_H + +#define XSC_CMDQ_VERSION 0x32 + +#define XSC_BOARD_SN_LEN 32 + +enum { + XSC_CMD_STAT_OK = 0x0, + XSC_CMD_STAT_INT_ERR = 0x1, + XSC_CMD_STAT_BAD_OP_ERR = 0x2, + XSC_CMD_STAT_BAD_PARAM_ERR = 0x3, + XSC_CMD_STAT_BAD_SYS_STATE_ERR = 0x4, + XSC_CMD_STAT_BAD_RES_ERR = 0x5, + XSC_CMD_STAT_RES_BUSY = 0x6, + XSC_CMD_STAT_LIM_ERR = 0x8, + XSC_CMD_STAT_BAD_RES_STATE_ERR = 0x9, + XSC_CMD_STAT_IX_ERR = 0xa, + XSC_CMD_STAT_NO_RES_ERR = 0xf, + XSC_CMD_STAT_BAD_QP_STATE_ERR = 0x10, + XSC_CMD_STAT_BAD_PKT_ERR = 0x30, + XSC_CMD_STAT_BAD_SIZE_OUTS_CQES_ERR = 0x40, + XSC_CMD_STAT_BAD_INP_LEN_ERR = 0x50, + XSC_CMD_STAT_BAD_OUTP_LEN_ERR = 0x51, +}; + +enum { + XSC_CMD_OP_QUERY_HCA_CAP = 0x100, + XSC_CMD_OP_QUERY_CMDQ_VERSION = 0x10a, + XSC_CMD_OP_FUNCTION_RESET = 0x10c, + XSC_CMD_OP_DUMMY = 0x10d, + XSC_CMD_OP_QUERY_GUID = 0x113, + XSC_CMD_OP_ACTIVATE_HW_CONFIG = 0x114, + + XSC_CMD_OP_CREATE_EQ = 0x301, + XSC_CMD_OP_DESTROY_EQ = 0x302, + + XSC_CMD_OP_CREATE_CQ = 0x400, + XSC_CMD_OP_DESTROY_CQ = 0x401, + + XSC_CMD_OP_CREATE_QP = 0x500, + XSC_CMD_OP_DESTROY_QP = 0x501, + XSC_CMD_OP_RST2INIT_QP = 0x502, + XSC_CMD_OP_INIT2RTR_QP = 0x503, + XSC_CMD_OP_RTR2RTS_QP = 0x504, + XSC_CMD_OP_RTS2RTS_QP = 0x505, + XSC_CMD_OP_SQERR2RTS_QP = 0x506, + XSC_CMD_OP_2ERR_QP = 0x507, + XSC_CMD_OP_RTS2SQD_QP = 0x508, + XSC_CMD_OP_SQD2RTS_QP = 0x509, + XSC_CMD_OP_2RST_QP = 0x50a, + XSC_CMD_OP_INIT2INIT_QP = 0x50e, + XSC_CMD_OP_CREATE_MULTI_QP = 0x515, + + XSC_CMD_OP_MODIFY_RAW_QP = 0x81f, + + XSC_CMD_OP_ENABLE_NIC_HCA = 0x810, + XSC_CMD_OP_DISABLE_NIC_HCA = 0x811, + + XSC_CMD_OP_QUERY_VPORT_STATE = 0x822, + XSC_CMD_OP_QUERY_EVENT_TYPE = 0x831, + + XSC_CMD_OP_ENABLE_MSIX = 0x850, + + XSC_CMD_OP_SET_MTU = 0x1100, + XSC_CMD_OP_QUERY_ETH_MAC = 0X1101, + + XSC_CMD_OP_SET_PORT_ADMIN_STATUS = 0x1801, + + XSC_CMD_OP_MAX +}; + +enum xsc_dma_direct { + XSC_DMA_DIR_TO_MAC, + XSC_DMA_DIR_READ, + XSC_DMA_DIR_WRITE, + XSC_DMA_DIR_LOOPBACK, + XSC_DMA_DIR_MAX +}; + +/* hw feature bitmap, 32bit */ +enum xsc_hw_feature_flag { + XSC_HW_RDMA_SUPPORT = BIT(0), + XSC_HW_PFC_PRIO_STATISTIC_SUPPORT = BIT(1), + XSC_HW_PFC_STALL_STATS_SUPPORT = BIT(3), + XSC_HW_RDMA_CM_SUPPORT = BIT(5), + + XSC_HW_LAST_FEATURE = BIT(31) +}; + +struct xsc_inbox_hdr { + __be16 opcode; + u8 rsvd[4]; + __be16 ver; // cmd version +}; + +struct xsc_outbox_hdr { + u8 status; + u8 rsvd[5]; + __be16 ver; +}; + +/*CQ mbox*/ +struct xsc_cq_context { + __be16 eqn; // event queue number + __be16 pa_num; // physical address count in the ctx + __be16 glb_func_id; + u8 log_cq_sz; + u8 cq_type; +}; + +struct xsc_create_cq_mbox_in { + struct xsc_inbox_hdr hdr; + struct xsc_cq_context ctx; + __be64 pas[]; // physical address list +}; + +struct xsc_create_cq_mbox_out { + struct xsc_outbox_hdr hdr; + __be32 cqn; // completion queue number + u8 rsvd[4]; +}; + +struct xsc_destroy_cq_mbox_in { + struct xsc_inbox_hdr hdr; + __be32 cqn; + u8 rsvd[4]; +}; + +struct xsc_destroy_cq_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd[8]; +}; + +/*QP mbox*/ +struct xsc_create_qp_request { + __be16 input_qpn; + __be16 pa_num; + u8 qp_type; + u8 log_sq_sz; + u8 log_rq_sz; + u8 dma_direct; + __be32 pdn; // protect domain number + __be16 cqn_send; + __be16 cqn_recv; + __be16 glb_funcid; + /*rsvd, the old logic_port */ + u8 rsvd[2]; + __be64 pas[]; +}; + +struct xsc_create_qp_mbox_in { + struct xsc_inbox_hdr hdr; + struct xsc_create_qp_request req; +}; + +struct xsc_create_qp_mbox_out { + struct xsc_outbox_hdr hdr; + __be32 qpn; // queue pair number + u8 
rsvd[4]; +}; + +struct xsc_destroy_qp_mbox_in { + struct xsc_inbox_hdr hdr; + __be32 qpn; + u8 rsvd[4]; +}; + +struct xsc_destroy_qp_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd[8]; +}; + +struct xsc_qp_context { + __be32 remote_qpn; + __be32 cqn_send; + __be32 cqn_recv; + __be32 next_send_psn; + __be32 next_recv_psn; + __be32 pdn; + __be16 src_udp_port; + __be16 path_id; + u8 mtu_mode; + u8 lag_sel; + u8 lag_sel_en; + u8 retry_cnt; + u8 rnr_retry; + u8 dscp; + u8 state; + u8 hop_limit; + u8 dmac[6]; + u8 smac[6]; + __be32 dip[4]; + __be32 sip[4]; + __be16 ip_type; + __be16 grp_id; + u8 vlan_valid; + u8 dci_cfi_prio_sl; + __be16 vlan_id; + u8 qp_out_port; + u8 pcie_no; + __be16 lag_id; + __be16 func_id; + __be16 rsvd; +}; + +struct xsc_modify_qp_mbox_in { + struct xsc_inbox_hdr hdr; + __be32 qpn; + struct xsc_qp_context ctx; + u8 no_need_wait; +}; + +struct xsc_modify_qp_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd[8]; +}; + +struct xsc_create_multiqp_mbox_in { + struct xsc_inbox_hdr hdr; + __be16 qp_num; + u8 qp_type; + u8 rsvd; + __be32 req_len; + u8 data[]; +}; + +struct xsc_create_multiqp_mbox_out { + struct xsc_outbox_hdr hdr; + __be32 qpn_base; +}; + +/* MSIX TABLE mbox */ +struct xsc_msix_table_info_mbox_in { + struct xsc_inbox_hdr hdr; + __be16 index; + u8 rsvd[6]; +}; + +struct xsc_msix_table_info_mbox_out { + struct xsc_outbox_hdr hdr; + __be32 addr_lo; + __be32 addr_hi; + __be32 data; +}; + +/*EQ mbox*/ +struct xsc_eq_context { + __be16 vecidx; + __be16 pa_num; + u8 log_eq_sz; + __be16 glb_func_id; + u8 is_async_eq; + u8 rsvd; +}; + +struct xsc_create_eq_mbox_in { + struct xsc_inbox_hdr hdr; + struct xsc_eq_context ctx; + __be64 pas[]; +}; + +struct xsc_create_eq_mbox_out { + struct xsc_outbox_hdr hdr; + __be32 eqn; + u8 rsvd[4]; +}; + +struct xsc_destroy_eq_mbox_in { + struct xsc_inbox_hdr hdr; + __be32 eqn; + u8 rsvd[4]; +}; + +struct xsc_destroy_eq_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd[8]; +}; + +struct xsc_query_eq_mbox_in { + struct xsc_inbox_hdr hdr; + u8 rsvd0[3]; + u8 eqn; + u8 rsvd1[4]; +}; + +struct xsc_query_eq_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd[8]; + struct xsc_eq_context ctx; +}; + +struct xsc_query_cq_mbox_in { + struct xsc_inbox_hdr hdr; + __be32 cqn; + u8 rsvd0[4]; +}; + +struct xsc_query_cq_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd0[8]; + struct xsc_cq_context ctx; + u8 rsvd6[16]; + __be64 pas[]; +}; + +struct xsc_cmd_query_cmdq_ver_mbox_in { + struct xsc_inbox_hdr hdr; + u8 rsvd[8]; +}; + +struct xsc_cmd_query_cmdq_ver_mbox_out { + struct xsc_outbox_hdr hdr; + __be16 cmdq_ver; + u8 rsvd[6]; +}; + +struct xsc_cmd_dummy_mbox_in { + struct xsc_inbox_hdr hdr; + u8 rsvd[8]; +}; + +struct xsc_cmd_dummy_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd[8]; +}; + +struct xsc_fw_version { + u8 fw_version_major; + u8 fw_version_minor; + __be16 fw_version_patch; + __be32 fw_version_tweak; + u8 fw_version_extra_flag; + u8 rsvd[7]; +}; + +struct xsc_hca_cap { + u8 rsvd1[12]; + u8 send_seg_num; + u8 send_wqe_shift; + u8 recv_seg_num; + u8 recv_wqe_shift; + u8 log_max_srq_sz; + u8 log_max_qp_sz; + u8 log_max_mtt; + u8 log_max_qp; + u8 log_max_strq_sz; + u8 log_max_srqs; + u8 rsvd4[2]; + u8 log_max_tso; + u8 log_max_cq_sz; + u8 rsvd6; + u8 log_max_cq; + u8 log_max_eq_sz; + u8 log_max_mkey; + u8 log_max_msix; + u8 log_max_eq; + u8 max_indirection; + u8 log_max_mrw_sz; + u8 log_max_bsf_list_sz; + u8 log_max_klm_list_sz; + u8 rsvd_8_0; + u8 log_max_ra_req_dc; + u8 rsvd_8_1; + u8 log_max_ra_res_dc; + u8 rsvd9; + u8 log_max_ra_req_qp; + 
u8 log_max_qp_depth; + u8 log_max_ra_res_qp; + __be16 max_vfs; + __be16 raweth_qp_id_end; + __be16 raw_tpe_qp_num; + __be16 max_qp_count; + __be16 raweth_qp_id_base; + u8 rsvd13; + u8 local_ca_ack_delay; + u8 max_num_eqs; + u8 num_ports; + u8 log_max_msg; + u8 mac_port; + __be16 raweth_rss_qp_id_base; + __be16 stat_rate_support; + u8 rsvd16[2]; + __be64 flags; + u8 rsvd17; + u8 uar_sz; + u8 rsvd18; + u8 log_pg_sz; + __be16 bf_log_bf_reg_size; + __be16 msix_base; + __be16 msix_num; + __be16 max_desc_sz_sq; + u8 rsvd20[2]; + __be16 max_desc_sz_rq; + u8 rsvd21[2]; + __be16 max_desc_sz_sq_dc; + u8 rsvd22[4]; + __be16 max_qp_mcg; + u8 rsvd23; + u8 log_max_mcg; + u8 rsvd24; + u8 log_max_pd; + u8 rsvd25; + u8 log_max_xrcd; + u8 rsvd26[40]; + __be32 uar_page_sz; + u8 rsvd27[8]; + __be32 hw_feature_flag;/*enum xsc_hw_feature_flag*/ + __be16 pf0_vf_funcid_base; + __be16 pf0_vf_funcid_top; + __be16 pf1_vf_funcid_base; + __be16 pf1_vf_funcid_top; + __be16 pcie0_pf_funcid_base; + __be16 pcie0_pf_funcid_top; + __be16 pcie1_pf_funcid_base; + __be16 pcie1_pf_funcid_top; + u8 log_msx_atomic_size_qp; + u8 pcie_host; + u8 rsvd28; + u8 log_msx_atomic_size_dc; + u8 board_sn[XSC_BOARD_SN_LEN]; + u8 max_tc; + u8 mac_bit; + __be16 funcid_to_logic_port; + u8 rsvd29[6]; + u8 nif_port_num; + u8 reg_mr_via_cmdq; + __be32 hca_core_clock; + __be32 max_rwq_indirection_tables;/*rss_caps*/ + __be32 max_rwq_indirection_table_size;/*rss_caps*/ + __be32 chip_ver_h; + __be32 chip_ver_m; + __be32 chip_ver_l; + __be32 hotfix_num; + __be32 feature_flag; + __be32 rx_pkt_len_max; + __be32 glb_func_id; + __be64 tx_db; + __be64 rx_db; + __be64 complete_db; + __be64 complete_reg; + __be64 event_db; + __be32 qp_rate_limit_min; + __be32 qp_rate_limit_max; + struct xsc_fw_version fw_ver; + u8 lag_logic_port_ofst; +}; + +struct xsc_cmd_query_hca_cap_mbox_in { + struct xsc_inbox_hdr hdr; + __be16 cpu_num; + u8 rsvd[6]; +}; + +struct xsc_cmd_query_hca_cap_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd0[8]; + struct xsc_hca_cap hca_cap; +}; + +struct xsc_query_vport_state_out { + struct xsc_outbox_hdr hdr; + u8 admin_state:4; + u8 state:4; +}; + +struct xsc_query_vport_state_in { + struct xsc_inbox_hdr hdr; + __be32 data0; +#define XSC_QUERY_VPORT_OTHER_VPORT BIT(0) +#define XSC_QUERY_VPORT_VPORT_NUM_MASK GENMASK(16, 1) +}; + +enum { + XSC_CMD_EVENT_RESP_CHANGE_LINK = BIT(0), + XSC_CMD_EVENT_RESP_TEMP_WARN = BIT(1), + XSC_CMD_EVENT_RESP_OVER_TEMP_PROTECTION = BIT(2), +}; + +struct xsc_event_resp { + u8 resp_cmd_type; +}; + +struct xsc_event_query_type_mbox_in { + struct xsc_inbox_hdr hdr; + u8 rsvd[2]; +}; + +struct xsc_event_query_type_mbox_out { + struct xsc_outbox_hdr hdr; + struct xsc_event_resp ctx; +}; + +struct xsc_modify_raw_qp_request { + __be16 qpn; + __be16 lag_id; + __be16 func_id; + u8 dma_direct; + u8 prio; + u8 qp_out_port; + u8 rsvd[7]; +}; + +struct xsc_modify_raw_qp_mbox_in { + struct xsc_inbox_hdr hdr; + u8 pcie_no; + u8 rsvd[7]; + struct xsc_modify_raw_qp_request req; +}; + +struct xsc_modify_raw_qp_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd[8]; +}; + +struct xsc_set_mtu_mbox_in { + struct xsc_inbox_hdr hdr; + __be16 mtu; + __be16 rx_buf_sz_min; + u8 mac_port; + u8 rsvd; +}; + +struct xsc_set_mtu_mbox_out { + struct xsc_outbox_hdr hdr; +}; + +struct xsc_query_eth_mac_mbox_in { + struct xsc_inbox_hdr hdr; + u8 index; +}; + +struct xsc_query_eth_mac_mbox_out { + struct xsc_outbox_hdr hdr; + u8 mac[ETH_ALEN]; +}; + +enum { + XSC_TBM_CAP_HASH_PPH = 0, + XSC_TBM_CAP_RSS, + XSC_TBM_CAP_PP_BYPASS, + 
XSC_TBM_CAP_PCT_DROP_CONFIG, +}; + +struct xsc_nic_attr { + __be16 caps; + __be16 caps_mask; + u8 mac_addr[ETH_ALEN]; +}; + +struct xsc_rss_attr { + u8 rss_en; + u8 hfunc; + __be16 rqn_base; + __be16 rqn_num; + __be32 hash_tmpl; +}; + +struct xsc_cmd_enable_nic_hca_mbox_in { + struct xsc_inbox_hdr hdr; + struct xsc_nic_attr nic; + struct xsc_rss_attr rss; +}; + +struct xsc_cmd_enable_nic_hca_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd0[2]; +}; + +struct xsc_nic_dis_attr { + __be16 caps; +}; + +struct xsc_cmd_disable_nic_hca_mbox_in { + struct xsc_inbox_hdr hdr; + struct xsc_nic_dis_attr nic; +}; + +struct xsc_cmd_disable_nic_hca_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd0[4]; +}; + +struct xsc_function_reset_mbox_in { + struct xsc_inbox_hdr hdr; + __be16 glb_func_id; + u8 rsvd[6]; +}; + +struct xsc_function_reset_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd[8]; +}; + +struct xsc_cmd_query_guid_mbox_in { + struct xsc_inbox_hdr hdr; + u8 rsvd[8]; +}; + +struct xsc_cmd_query_guid_mbox_out { + struct xsc_outbox_hdr hdr; + __be64 guid; +}; + +struct xsc_cmd_activate_hw_config_mbox_in { + struct xsc_inbox_hdr hdr; + u8 rsvd[8]; +}; + +struct xsc_cmd_activate_hw_config_mbox_out { + struct xsc_outbox_hdr hdr; + u8 rsvd[8]; +}; + +struct xsc_event_set_port_admin_status_mbox_in { + struct xsc_inbox_hdr hdr; + __be16 status; +}; + +struct xsc_event_set_port_admin_status_mbox_out { + struct xsc_outbox_hdr hdr; + __be32 status; +}; + +#endif /* __XSC_CMD_H */ diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmdq.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmdq.h new file mode 100644 index 000000000..3762d15b6 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_cmdq.h @@ -0,0 +1,233 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + */ + +#ifndef __XSC_CMDQ_H +#define __XSC_CMDQ_H + +#include + +#include "common/xsc_cmd.h" + +enum { + XSC_CMD_TIMEOUT_MSEC = 10 * 1000, + XSC_CMD_WQ_MAX_NAME = 32, +}; + +enum { + XSC_CMD_DATA, /* print command payload only */ + XSC_CMD_TIME, /* print command execution time */ +}; + +enum { + XSC_MAX_COMMANDS = 32, + XSC_CMD_DATA_BLOCK_SIZE = 512, + XSC_PCI_CMD_XPORT = 7, +}; + +struct xsc_cmd_prot_block { + u8 data[XSC_CMD_DATA_BLOCK_SIZE]; + u8 rsvd0[48]; + __be64 next; + __be32 block_num; + /* init to 0, dma user should change this val to 1 */ + u8 owner_status; + u8 token; + u8 ctrl_sig; + u8 sig; +}; + +struct cache_ent { + /* protect block chain allocations */ + spinlock_t lock; + struct list_head head; +}; + +struct cmd_msg_cache { + struct cache_ent large; + struct cache_ent med; + +}; + +#define CMD_FIRST_SIZE 8 +struct xsc_cmd_first { + __be32 data[CMD_FIRST_SIZE]; +}; + +struct xsc_cmd_mailbox { + void *buf; + dma_addr_t dma; + struct xsc_cmd_mailbox *next; +}; + +struct xsc_cmd_msg { + struct list_head list; + struct cache_ent *cache; + u32 len; + struct xsc_cmd_first first; + struct xsc_cmd_mailbox *next; +}; + +#define RSP_FIRST_SIZE 14 +struct xsc_rsp_first { + __be32 data[RSP_FIRST_SIZE]; //can be larger, xsc_rsp_layout +}; + +struct xsc_rsp_msg { + struct list_head list; + struct cache_ent *cache; + u32 len; + struct xsc_rsp_first first; + struct xsc_cmd_mailbox *next; +}; + +typedef void (*xsc_cmd_cbk_t)(int status, void *context); + +//hw will use this for some records(e.g. 
vf_id) +struct cmdq_rsv { + u16 vf_id; + u8 rsv[2]; +}; + +//related with hw, won't change +#define CMDQ_ENTRY_SIZE 64 + +struct xsc_cmd_layout { + struct cmdq_rsv rsv0; + __be32 inlen; + __be64 in_ptr; + __be32 in[CMD_FIRST_SIZE]; + __be64 out_ptr; + __be32 outlen; + u8 token; + u8 sig; + u8 idx; + u8 type: 7; + /* rsv for hw, arm will check this bit to make sure mem written */ + u8 owner_bit: 1; +}; + +struct xsc_rsp_layout { + struct cmdq_rsv rsv0; + __be32 out[RSP_FIRST_SIZE]; + u8 token; + u8 sig; + u8 idx; + u8 type: 7; + /* rsv for hw, driver will check this bit to make sure mem written */ + u8 owner_bit: 1; +}; + +struct xsc_cmd_work_ent { + struct xsc_cmd_msg *in; + struct xsc_rsp_msg *out; + int idx; + struct completion done; + struct xsc_cmd *cmd; + struct work_struct work; + struct xsc_cmd_layout *lay; + struct xsc_rsp_layout *rsp_lay; + int ret; + u8 status; + u8 token; + struct timespec64 ts1; + struct timespec64 ts2; +}; + +struct xsc_cmd_debug { + struct dentry *dbg_root; + struct dentry *dbg_in; + struct dentry *dbg_out; + struct dentry *dbg_outlen; + struct dentry *dbg_status; + struct dentry *dbg_run; + void *in_msg; + void *out_msg; + u8 status; + u16 inlen; + u16 outlen; +}; + +struct xsc_cmd_stats { + u64 sum; + u64 n; + struct dentry *root; + struct dentry *avg; + struct dentry *count; + /* protect command average calculations */ + spinlock_t lock; +}; + +struct xsc_cmd_reg { + u32 req_pid_addr; + u32 req_cid_addr; + u32 rsp_pid_addr; + u32 rsp_cid_addr; + u32 req_buf_h_addr; + u32 req_buf_l_addr; + u32 rsp_buf_h_addr; + u32 rsp_buf_l_addr; + u32 msix_vec_addr; + u32 element_sz_addr; + u32 q_depth_addr; + u32 interrupt_stat_addr; +}; + +enum xsc_cmd_status { + XSC_CMD_STATUS_NORMAL, + XSC_CMD_STATUS_TIMEDOUT, +}; + +enum xsc_cmd_mode { + XSC_CMD_MODE_POLLING, + XSC_CMD_MODE_EVENTS, +}; + +struct xsc_cmd { + struct xsc_cmd_reg reg; + void *cmd_buf; + void *cq_buf; + dma_addr_t dma; + dma_addr_t cq_dma; + u16 cmd_pid; + u16 cq_cid; + u8 owner_bit; + u8 cmdif_rev; + u8 log_sz; + u8 log_stride; + int max_reg_cmds; + int events; + u32 __iomem *vector; + + /* protect command queue allocations */ + spinlock_t alloc_lock; + /* protect token allocations */ + spinlock_t token_lock; + /* protect cmdq req pid doorbell */ + spinlock_t doorbell_lock; + u8 token; + unsigned long cmd_entry_mask; + char wq_name[XSC_CMD_WQ_MAX_NAME]; + struct workqueue_struct *wq; + struct task_struct *cq_task; + /* The semaphore limits the number of working commands + * to max_reg_cmds. Each exec_cmd call does a down + * to acquire the semaphore before execution and an up + * after completion, ensuring no more than max_reg_cmds + * commands run at the same time. 
+ */ + struct semaphore sem; + enum xsc_cmd_mode mode; + struct xsc_cmd_work_ent *ent_arr[XSC_MAX_COMMANDS]; + struct dma_pool *pool; + struct xsc_cmd_debug dbg; + struct cmd_msg_cache cache; + int checksum_disabled; + struct xsc_cmd_stats stats[XSC_CMD_OP_MAX]; + unsigned int irqn; + u8 ownerbit_learned; + u8 cmd_status; +}; + +#endif diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h index 6627a176a..1f8e4eb77 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h @@ -8,6 +8,8 @@ #include +#include "common/xsc_cmdq.h" + #define XSC_PCI_VENDOR_ID 0x1f67 #define XSC_MC_PF_DEV_ID 0x1011 @@ -25,6 +27,9 @@ #define XSC_MV_HOST_VF_DEV_ID 0x1152 #define XSC_MV_SOC_PF_DEV_ID 0x1153 +#define XSC_REG_ADDR(dev, offset) \ + (((dev)->bar) + ((offset) - 0xA0000000)) + struct xsc_dev_resource { /* protect buffer allocation according to numa node */ struct mutex alloc_mutex; @@ -35,6 +40,10 @@ enum xsc_pci_state { XSC_PCI_STATE_ENABLED, }; +enum xsc_interface_state { + XSC_INTERFACE_STATE_UP = BIT(0), +}; + struct xsc_core_device { struct pci_dev *pdev; struct device *device; @@ -44,6 +53,9 @@ struct xsc_core_device { void __iomem *bar; int bar_num; + struct xsc_cmd cmd; + u16 cmdq_ver; + struct mutex pci_state_mutex; /* protect pci_state */ enum xsc_pci_state pci_state; struct mutex intf_state_mutex; /* protect intf_state */ diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_driver.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_driver.h new file mode 100644 index 000000000..72b2df6c9 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_driver.h @@ -0,0 +1,25 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + */ + +#ifndef __XSC_DRIVER_H +#define __XSC_DRIVER_H + +#include "common/xsc_core.h" +#include "common/xsc_cmd.h" + +int xsc_cmd_init(struct xsc_core_device *xdev); +void xsc_cmd_cleanup(struct xsc_core_device *xdev); +void xsc_cmd_use_events(struct xsc_core_device *xdev); +void xsc_cmd_use_polling(struct xsc_core_device *xdev); +int xsc_cmd_err_handler(struct xsc_core_device *xdev); +void xsc_cmd_resp_handler(struct xsc_core_device *xdev); +int xsc_cmd_status_to_err(struct xsc_outbox_hdr *hdr); +int xsc_cmd_exec(struct xsc_core_device *xdev, void *in, int in_size, void *out, + int out_size); +int xsc_cmd_version_check(struct xsc_core_device *xdev); +const char *xsc_command_str(int command); + +#endif + diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile index 709270df8..5e0f0a205 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile @@ -6,4 +6,4 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o -xsc_pci-y := main.o +xsc_pci-y := main.o cmdq.o diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/cmdq.c b/drivers/net/ethernet/yunsilicon/xsc/pci/cmdq.c new file mode 100644 index 000000000..9a3e9a7d4 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/cmdq.c @@ -0,0 +1,1579 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* + * Copyright (c) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + * Copyright (c) 2013-2016, Mellanox Technologies. All rights reserved. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "common/xsc_driver.h" +#include "common/xsc_cmd.h" +#include "common/xsc_auto_hw.h" +#include "common/xsc_core.h" + +enum { + CMD_IF_REV = 3, +}; + +enum { + NUM_LONG_LISTS = 2, + NUM_MED_LISTS = 64, + LONG_LIST_SIZE = (2ULL * 1024 * 1024 * 1024 / PAGE_SIZE) * 8 + 16 + + XSC_CMD_DATA_BLOCK_SIZE, + MED_LIST_SIZE = 16 + XSC_CMD_DATA_BLOCK_SIZE, +}; + +enum { + XSC_CMD_DELIVERY_STAT_OK = 0x0, + XSC_CMD_DELIVERY_STAT_SIGNAT_ERR = 0x1, + XSC_CMD_DELIVERY_STAT_TOK_ERR = 0x2, + XSC_CMD_DELIVERY_STAT_BAD_BLK_NUM_ERR = 0x3, + XSC_CMD_DELIVERY_STAT_OUT_PTR_ALIGN_ERR = 0x4, + XSC_CMD_DELIVERY_STAT_IN_PTR_ALIGN_ERR = 0x5, + XSC_CMD_DELIVERY_STAT_FW_ERR = 0x6, + XSC_CMD_DELIVERY_STAT_IN_LENGTH_ERR = 0x7, + XSC_CMD_DELIVERY_STAT_OUT_LENGTH_ERR = 0x8, + XSC_CMD_DELIVERY_STAT_RES_FLD_NOT_CLR_ERR = 0x9, + XSC_CMD_DELIVERY_STAT_CMD_DESCR_ERR = 0x10, +}; + +static struct xsc_cmd_work_ent *xsc_alloc_cmd(struct xsc_cmd *cmd, + struct xsc_cmd_msg *in, + struct xsc_rsp_msg *out) +{ + struct xsc_cmd_work_ent *ent; + + ent = kzalloc(sizeof(*ent), GFP_KERNEL); + if (!ent) + return ERR_PTR(-ENOMEM); + + ent->in = in; + ent->out = out; + ent->cmd = cmd; + + return ent; +} + +static void xsc_free_cmd(struct xsc_cmd_work_ent *ent) +{ + kfree(ent); +} + +static u8 xsc_alloc_token(struct xsc_cmd *cmd) +{ + u8 token; + + spin_lock(&cmd->token_lock); + token = cmd->token++ % 255 + 1; + spin_unlock(&cmd->token_lock); + + return token; +} + +static int xsc_alloc_ent(struct xsc_cmd *cmd) +{ + unsigned long flags; + int ret; + + spin_lock_irqsave(&cmd->alloc_lock, flags); + ret = find_first_bit(&cmd->cmd_entry_mask, cmd->max_reg_cmds); + if (ret < cmd->max_reg_cmds) + clear_bit(ret, &cmd->cmd_entry_mask); + spin_unlock_irqrestore(&cmd->alloc_lock, flags); + + return ret < cmd->max_reg_cmds ? 
ret : -ENOSPC; +} + +static void xsc_free_ent(struct xsc_cmd *cmd, int idx) +{ + unsigned long flags; + + spin_lock_irqsave(&cmd->alloc_lock, flags); + set_bit(idx, &cmd->cmd_entry_mask); + spin_unlock_irqrestore(&cmd->alloc_lock, flags); +} + +static struct xsc_cmd_layout *xsc_get_inst(struct xsc_cmd *cmd, int idx) +{ + return cmd->cmd_buf + (idx << cmd->log_stride); +} + +static struct xsc_rsp_layout *xsc_get_cq_inst(struct xsc_cmd *cmd, int idx) +{ + return cmd->cq_buf + (idx << cmd->log_stride); +} + +static u8 xsc_xor8_buf(void *buf, int len) +{ + u8 *ptr = buf; + u8 sum = 0; + int i; + + for (i = 0; i < len; i++) + sum ^= ptr[i]; + + return sum; +} + +static int xsc_verify_block_sig(struct xsc_cmd_prot_block *block) +{ + /* rsvd0 was set to 0xFF by fw */ + if (xsc_xor8_buf(block->rsvd0, + sizeof(*block) - sizeof(block->data) - 1) != 0xff) + return -EINVAL; + + if (xsc_xor8_buf(block, sizeof(*block)) != 0xff) + return -EINVAL; + + return 0; +} + +static void xsc_calc_block_sig(struct xsc_cmd_prot_block *block, u8 token) +{ + block->token = token; + block->ctrl_sig = ~xsc_xor8_buf(block->rsvd0, + sizeof(*block) - sizeof(block->data) - 2); + block->sig = ~xsc_xor8_buf(block, sizeof(*block) - 1); +} + +static void xsc_calc_chain_sig(struct xsc_cmd_mailbox *head, u8 token) +{ + struct xsc_cmd_mailbox *next = head; + + while (next) { + xsc_calc_block_sig(next->buf, token); + next = next->next; + } +} + +static void xsc_set_signature(struct xsc_cmd_work_ent *ent) +{ + ent->lay->sig = ~xsc_xor8_buf(ent->lay, sizeof(*ent->lay)); + xsc_calc_chain_sig(ent->in->next, ent->token); + xsc_calc_chain_sig(ent->out->next, ent->token); +} + +static int xsc_verify_signature(struct xsc_cmd_work_ent *ent) +{ + struct xsc_cmd_mailbox *next = ent->out->next; + int err; + u8 sig; + + sig = xsc_xor8_buf(ent->rsp_lay, sizeof(*ent->rsp_lay)); + if (sig != 0xff) + return -EINVAL; + + while (next) { + err = xsc_verify_block_sig(next->buf); + if (err) + return err; + + next = next->next; + } + + return 0; +} + +const char *xsc_command_str(int command) +{ + switch (command) { + case XSC_CMD_OP_QUERY_HCA_CAP: + return "QUERY_HCA_CAP"; + + case XSC_CMD_OP_QUERY_CMDQ_VERSION: + return "QUERY_CMDQ_VERSION"; + + case XSC_CMD_OP_FUNCTION_RESET: + return "FUNCTION_RESET"; + + case XSC_CMD_OP_DUMMY: + return "DUMMY_CMD"; + + case XSC_CMD_OP_CREATE_EQ: + return "CREATE_EQ"; + + case XSC_CMD_OP_DESTROY_EQ: + return "DESTROY_EQ"; + + case XSC_CMD_OP_CREATE_CQ: + return "CREATE_CQ"; + + case XSC_CMD_OP_DESTROY_CQ: + return "DESTROY_CQ"; + + case XSC_CMD_OP_CREATE_QP: + return "CREATE_QP"; + + case XSC_CMD_OP_DESTROY_QP: + return "DESTROY_QP"; + + case XSC_CMD_OP_RST2INIT_QP: + return "RST2INIT_QP"; + + case XSC_CMD_OP_INIT2RTR_QP: + return "INIT2RTR_QP"; + + case XSC_CMD_OP_RTR2RTS_QP: + return "RTR2RTS_QP"; + + case XSC_CMD_OP_RTS2RTS_QP: + return "RTS2RTS_QP"; + + case XSC_CMD_OP_SQERR2RTS_QP: + return "SQERR2RTS_QP"; + + case XSC_CMD_OP_2ERR_QP: + return "2ERR_QP"; + + case XSC_CMD_OP_RTS2SQD_QP: + return "RTS2SQD_QP"; + + case XSC_CMD_OP_SQD2RTS_QP: + return "SQD2RTS_QP"; + + case XSC_CMD_OP_2RST_QP: + return "2RST_QP"; + + case XSC_CMD_OP_INIT2INIT_QP: + return "INIT2INIT_QP"; + + case XSC_CMD_OP_MODIFY_RAW_QP: + return "MODIFY_RAW_QP"; + + case XSC_CMD_OP_ENABLE_NIC_HCA: + return "ENABLE_NIC_HCA"; + + case XSC_CMD_OP_DISABLE_NIC_HCA: + return "DISABLE_NIC_HCA"; + + case XSC_CMD_OP_QUERY_VPORT_STATE: + return "QUERY_VPORT_STATE"; + + case XSC_CMD_OP_QUERY_EVENT_TYPE: + return "QUERY_EVENT_TYPE"; + + case 
XSC_CMD_OP_ENABLE_MSIX: + return "ENABLE_MSIX"; + + case XSC_CMD_OP_SET_MTU: + return "SET_MTU"; + + case XSC_CMD_OP_QUERY_ETH_MAC: + return "QUERY_ETH_MAC"; + + default: return "unknown command opcode"; + } +} + +static void cmd_work_handler(struct work_struct *work) +{ + struct xsc_cmd_work_ent *ent; + struct xsc_core_device *xdev; + struct xsc_cmd_layout *lay; + struct semaphore *sem; + struct xsc_cmd *cmd; + unsigned long flags; + + ent = container_of(work, struct xsc_cmd_work_ent, work); + cmd = ent->cmd; + xdev = container_of(cmd, struct xsc_core_device, cmd); + sem = &cmd->sem; + down(sem); + ent->idx = xsc_alloc_ent(cmd); + if (ent->idx < 0) { + pci_err(xdev->pdev, "failed to allocate command entry\n"); + up(sem); + return; + } + + ent->token = xsc_alloc_token(cmd); + cmd->ent_arr[ent->idx] = ent; + + spin_lock_irqsave(&cmd->doorbell_lock, flags); + lay = xsc_get_inst(cmd, cmd->cmd_pid); + ent->lay = lay; + memset(lay, 0, sizeof(*lay)); + memcpy(lay->in, ent->in->first.data, sizeof(lay->in)); + if (ent->in->next) + lay->in_ptr = cpu_to_be64(ent->in->next->dma); + lay->inlen = cpu_to_be32(ent->in->len); + if (ent->out->next) + lay->out_ptr = cpu_to_be64(ent->out->next->dma); + lay->outlen = cpu_to_be32(ent->out->len); + lay->type = XSC_PCI_CMD_XPORT; + lay->token = ent->token; + lay->idx = ent->idx; + if (!cmd->checksum_disabled) + xsc_set_signature(ent); + else + lay->sig = 0xff; + + ktime_get_ts64(&ent->ts1); + + /* ring doorbell after the descriptor is valid */ + wmb(); + + cmd->cmd_pid = (cmd->cmd_pid + 1) % (1 << cmd->log_sz); + writel(cmd->cmd_pid, XSC_REG_ADDR(xdev, cmd->reg.req_pid_addr)); + spin_unlock_irqrestore(&cmd->doorbell_lock, flags); +} + +static const char *xsc_deliv_status_to_str(u8 status) +{ + switch (status) { + case XSC_CMD_DELIVERY_STAT_OK: + return "no errors"; + case XSC_CMD_DELIVERY_STAT_SIGNAT_ERR: + return "signature error"; + case XSC_CMD_DELIVERY_STAT_TOK_ERR: + return "token error"; + case XSC_CMD_DELIVERY_STAT_BAD_BLK_NUM_ERR: + return "bad block number"; + case XSC_CMD_DELIVERY_STAT_OUT_PTR_ALIGN_ERR: + return "output pointer not aligned to block size"; + case XSC_CMD_DELIVERY_STAT_IN_PTR_ALIGN_ERR: + return "input pointer not aligned to block size"; + case XSC_CMD_DELIVERY_STAT_FW_ERR: + return "firmware internal error"; + case XSC_CMD_DELIVERY_STAT_IN_LENGTH_ERR: + return "command input length error"; + case XSC_CMD_DELIVERY_STAT_OUT_LENGTH_ERR: + return "command output length error"; + case XSC_CMD_DELIVERY_STAT_RES_FLD_NOT_CLR_ERR: + return "reserved fields not cleared"; + case XSC_CMD_DELIVERY_STAT_CMD_DESCR_ERR: + return "bad command descriptor type"; + default: + return "unknown status code"; + } +} + +static u16 xsc_msg_to_opcode(struct xsc_cmd_msg *in) +{ + struct xsc_inbox_hdr *hdr = (struct xsc_inbox_hdr *)(in->first.data); + + return be16_to_cpu(hdr->opcode); +} + +static int xsc_wait_func(struct xsc_core_device *xdev, + struct xsc_cmd_work_ent *ent) +{ + unsigned long timeout = msecs_to_jiffies(XSC_CMD_TIMEOUT_MSEC); + struct xsc_cmd *cmd = &xdev->cmd; + int err; + + if (!wait_for_completion_timeout(&ent->done, timeout)) + err = -ETIMEDOUT; + else + err = ent->ret; + + if (err == -ETIMEDOUT) { + cmd->cmd_status = XSC_CMD_STATUS_TIMEDOUT; + pci_err(xdev->pdev, "wait for %s(0x%x) response timeout!\n", + xsc_command_str(xsc_msg_to_opcode(ent->in)), + xsc_msg_to_opcode(ent->in)); + } else if (err) { + pci_err(xdev->pdev, "err %d, delivery status %s(%d)\n", err, + xsc_deliv_status_to_str(ent->status), ent->status); + } + + return err; +} + 
+/* Notes: + * 1. Callback functions may not sleep + * 2. Page queue commands do not support asynchronous completion + */ +static int xsc_cmd_invoke(struct xsc_core_device *xdev, struct xsc_cmd_msg *in, + struct xsc_rsp_msg *out, u8 *status) +{ + struct xsc_cmd *cmd = &xdev->cmd; + struct xsc_cmd_work_ent *ent; + struct xsc_cmd_stats *stats; + ktime_t t1, t2, delta; + struct semaphore *sem; + int err = 0; + s64 ds; + u16 op; + + ent = xsc_alloc_cmd(cmd, in, out); + if (IS_ERR(ent)) + return PTR_ERR(ent); + + init_completion(&ent->done); + INIT_WORK(&ent->work, cmd_work_handler); + if (!queue_work(cmd->wq, &ent->work)) { + pci_err(xdev->pdev, "failed to queue work\n"); + err = -ENOMEM; + goto out_free; + } + + err = xsc_wait_func(xdev, ent); + if (err == -ETIMEDOUT) + goto out; + t1 = timespec64_to_ktime(ent->ts1); + t2 = timespec64_to_ktime(ent->ts2); + delta = ktime_sub(t2, t1); + ds = ktime_to_ns(delta); + op = be16_to_cpu(((struct xsc_inbox_hdr *)in->first.data)->opcode); + if (op < ARRAY_SIZE(cmd->stats)) { + stats = &cmd->stats[op]; + spin_lock(&stats->lock); + stats->sum += ds; + ++stats->n; + spin_unlock(&stats->lock); + } + *status = ent->status; + xsc_free_cmd(ent); + + return err; + +out: + sem = &cmd->sem; + up(sem); +out_free: + xsc_free_cmd(ent); + return err; +} + +static int xsc_copy_to_cmd_msg(struct xsc_cmd_msg *to, void *from, int size) +{ + struct xsc_cmd_prot_block *block; + struct xsc_cmd_mailbox *next; + int copy; + + if (!to || !from) + return -ENOMEM; + + copy = min_t(int, size, sizeof(to->first.data)); + memcpy(to->first.data, from, copy); + size -= copy; + from += copy; + + next = to->next; + while (size) { + if (!next) { + /* this is a BUG */ + return -ENOMEM; + } + + copy = min_t(int, size, XSC_CMD_DATA_BLOCK_SIZE); + block = next->buf; + memcpy(block->data, from, copy); + block->owner_status = 0; + from += copy; + size -= copy; + next = next->next; + } + + return 0; +} + +static int xsc_copy_from_rsp_msg(void *to, struct xsc_rsp_msg *from, int size) +{ + struct xsc_cmd_prot_block *block; + struct xsc_cmd_mailbox *next; + int copy; + + if (!to || !from) + return -ENOMEM; + + copy = min_t(int, size, sizeof(from->first.data)); + memcpy(to, from->first.data, copy); + size -= copy; + to += copy; + + next = from->next; + while (size) { + if (!next) { + /* this is a BUG */ + return -ENOMEM; + } + + copy = min_t(int, size, XSC_CMD_DATA_BLOCK_SIZE); + block = next->buf; + if (!block->owner_status) + pr_err("block ownership check failed\n"); + + memcpy(to, block->data, copy); + to += copy; + size -= copy; + next = next->next; + } + + return 0; +} + +static struct xsc_cmd_mailbox *xsc_alloc_cmd_box(struct xsc_core_device *xdev, + gfp_t flags) +{ + struct xsc_cmd_mailbox *mailbox; + + mailbox = kmalloc(sizeof(*mailbox), flags); + if (!mailbox) + return ERR_PTR(-ENOMEM); + + mailbox->buf = dma_pool_alloc(xdev->cmd.pool, flags, + &mailbox->dma); + if (!mailbox->buf) { + kfree(mailbox); + return ERR_PTR(-ENOMEM); + } + memset(mailbox->buf, 0, sizeof(struct xsc_cmd_prot_block)); + mailbox->next = NULL; + + return mailbox; +} + +static void xsc_free_cmd_box(struct xsc_core_device *xdev, + struct xsc_cmd_mailbox *mailbox) +{ + dma_pool_free(xdev->cmd.pool, mailbox->buf, mailbox->dma); + + kfree(mailbox); +} + +static struct xsc_cmd_msg *xsc_alloc_cmd_msg(struct xsc_core_device *xdev, + gfp_t flags, int size) +{ + struct xsc_cmd_mailbox *tmp, *head = NULL; + struct xsc_cmd_prot_block *block; + struct xsc_cmd_msg *msg; + int blen; + int err; + int n; + int i; + + msg = 
kzalloc(sizeof(*msg), GFP_KERNEL); + if (!msg) + return ERR_PTR(-ENOMEM); + + blen = size - min_t(int, sizeof(msg->first.data), size); + n = (blen + XSC_CMD_DATA_BLOCK_SIZE - 1) / XSC_CMD_DATA_BLOCK_SIZE; + + for (i = 0; i < n; i++) { + tmp = xsc_alloc_cmd_box(xdev, flags); + if (IS_ERR(tmp)) { + pci_err(xdev->pdev, "failed allocating block\n"); + err = PTR_ERR(tmp); + goto err_alloc; + } + + block = tmp->buf; + tmp->next = head; + block->next = cpu_to_be64(tmp->next ? tmp->next->dma : 0); + block->block_num = cpu_to_be32(n - i - 1); + head = tmp; + } + msg->next = head; + msg->len = size; + return msg; + +err_alloc: + while (head) { + tmp = head->next; + xsc_free_cmd_box(xdev, head); + head = tmp; + } + kfree(msg); + + return ERR_PTR(err); +} + +static void xsc_free_cmd_msg(struct xsc_core_device *xdev, + struct xsc_cmd_msg *msg) +{ + struct xsc_cmd_mailbox *head = msg->next; + struct xsc_cmd_mailbox *next; + + while (head) { + next = head->next; + xsc_free_cmd_box(xdev, head); + head = next; + } + kfree(msg); +} + +static struct xsc_rsp_msg *xsc_alloc_rsp_msg(struct xsc_core_device *xdev, + gfp_t flags, int size) +{ + struct xsc_cmd_mailbox *tmp, *head = NULL; + struct xsc_cmd_prot_block *block; + struct xsc_rsp_msg *msg; + int blen; + int err; + int n; + int i; + + msg = kzalloc(sizeof(*msg), GFP_KERNEL); + if (!msg) + return ERR_PTR(-ENOMEM); + + blen = size - min_t(int, sizeof(msg->first.data), size); + n = (blen + XSC_CMD_DATA_BLOCK_SIZE - 1) / XSC_CMD_DATA_BLOCK_SIZE; + + for (i = 0; i < n; i++) { + tmp = xsc_alloc_cmd_box(xdev, flags); + if (IS_ERR(tmp)) { + pci_err(xdev->pdev, "failed allocating block\n"); + err = PTR_ERR(tmp); + goto err_alloc; + } + + block = tmp->buf; + tmp->next = head; + block->next = cpu_to_be64(tmp->next ? tmp->next->dma : 0); + block->block_num = cpu_to_be32(n - i - 1); + head = tmp; + } + msg->next = head; + msg->len = size; + return msg; + +err_alloc: + while (head) { + tmp = head->next; + xsc_free_cmd_box(xdev, head); + head = tmp; + } + kfree(msg); + + return ERR_PTR(err); +} + +static void xsc_free_rsp_msg(struct xsc_core_device *xdev, + struct xsc_rsp_msg *msg) +{ + struct xsc_cmd_mailbox *head = msg->next; + struct xsc_cmd_mailbox *next; + + while (head) { + next = head->next; + xsc_free_cmd_box(xdev, head); + head = next; + } + kfree(msg); +} + +static void xsc_set_wqname(struct xsc_core_device *xdev) +{ + struct xsc_cmd *cmd = &xdev->cmd; + + snprintf(cmd->wq_name, sizeof(cmd->wq_name), "xsc_cmd_%s", + dev_name(&xdev->pdev->dev)); +} + +void xsc_cmd_use_events(struct xsc_core_device *xdev) +{ + struct xsc_cmd *cmd = &xdev->cmd; + int i; + + for (i = 0; i < cmd->max_reg_cmds; i++) + down(&cmd->sem); + + flush_workqueue(cmd->wq); + + cmd->mode = XSC_CMD_MODE_EVENTS; + + while (cmd->cmd_pid != cmd->cq_cid) + msleep(20); + kthread_stop(cmd->cq_task); + cmd->cq_task = NULL; + + for (i = 0; i < cmd->max_reg_cmds; i++) + up(&cmd->sem); +} + +static int xsc_cmd_cq_polling(void *data); +void xsc_cmd_use_polling(struct xsc_core_device *xdev) +{ + struct xsc_cmd *cmd = &xdev->cmd; + int i; + + for (i = 0; i < cmd->max_reg_cmds; i++) + down(&cmd->sem); + + flush_workqueue(cmd->wq); + cmd->mode = XSC_CMD_MODE_POLLING; + cmd->cq_task = kthread_create(xsc_cmd_cq_polling, + (void *)xdev, + "xsc_cmd_cq_polling"); + if (cmd->cq_task) + wake_up_process(cmd->cq_task); + + for (i = 0; i < cmd->max_reg_cmds; i++) + up(&cmd->sem); +} + +static int xsc_status_to_err(u8 status) +{ + return status ? 
-1 : 0; /* TBD more meaningful codes */
+}
+
+static struct xsc_cmd_msg *xsc_alloc_msg(struct xsc_core_device *xdev,
+					 int in_size)
+{
+	struct xsc_cmd_msg *msg = ERR_PTR(-ENOMEM);
+	struct xsc_cmd *cmd = &xdev->cmd;
+	struct cache_ent *ent = NULL;
+
+	if (in_size > MED_LIST_SIZE && in_size <= LONG_LIST_SIZE)
+		ent = &cmd->cache.large;
+	else if (in_size > 16 && in_size <= MED_LIST_SIZE)
+		ent = &cmd->cache.med;
+
+	if (ent) {
+		spin_lock(&ent->lock);
+		if (!list_empty(&ent->head)) {
+			msg = list_entry(ent->head.next, typeof(*msg), list);
+			/* For cached lists, we must explicitly state what is
+			 * the real size
+			 */
+			msg->len = in_size;
+			list_del(&msg->list);
+		}
+		spin_unlock(&ent->lock);
+	}
+
+	if (IS_ERR(msg))
+		msg = xsc_alloc_cmd_msg(xdev, GFP_KERNEL, in_size);
+
+	return msg;
+}
+
+static void xsc_free_msg(struct xsc_core_device *xdev, struct xsc_cmd_msg *msg)
+{
+	if (msg->cache) {
+		spin_lock(&msg->cache->lock);
+		list_add_tail(&msg->list, &msg->cache->head);
+		spin_unlock(&msg->cache->lock);
+	} else {
+		xsc_free_cmd_msg(xdev, msg);
+	}
+}
+
+static int xsc_dummy_work(struct xsc_core_device *xdev,
+			  struct xsc_cmd_msg *in,
+			  struct xsc_rsp_msg *out,
+			  u16 dummy_cnt,
+			  u16 dummy_start_pid)
+{
+	struct xsc_cmd_work_ent **dummy_ent_arr;
+	struct xsc_cmd *cmd = &xdev->cmd;
+	u16 temp_pid = dummy_start_pid;
+	struct xsc_cmd_layout *lay;
+	struct semaphore *sem;
+	u16 free_cnt = 0;
+	int err = 0;
+	u16 i;
+
+	sem = &cmd->sem;
+
+	dummy_ent_arr = kcalloc(dummy_cnt, sizeof(struct xsc_cmd_work_ent *),
+				GFP_KERNEL);
+	if (!dummy_ent_arr) {
+		err = -ENOMEM;
+		goto alloc_ent_arr_err;
+	}
+
+	for (i = 0; i < dummy_cnt; i++) {
+		dummy_ent_arr[i] = xsc_alloc_cmd(cmd, in, out);
+		if (IS_ERR(dummy_ent_arr[i])) {
+			pci_err(xdev->pdev, "failed to alloc cmd buffer\n");
+			err = -ENOMEM;
+			free_cnt = i;
+			goto alloc_ent_err;
+		}
+
+		down(sem);
+
+		dummy_ent_arr[i]->idx = xsc_alloc_ent(cmd);
+		if (dummy_ent_arr[i]->idx < 0) {
+			pci_err(xdev->pdev, "failed to allocate command entry\n");
+			err = -EINVAL;
+			free_cnt = i;
+			goto get_cmd_ent_idx_err;
+		}
+		dummy_ent_arr[i]->token = xsc_alloc_token(cmd);
+		cmd->ent_arr[dummy_ent_arr[i]->idx] = dummy_ent_arr[i];
+		init_completion(&dummy_ent_arr[i]->done);
+
+		lay = xsc_get_inst(cmd, temp_pid);
+		dummy_ent_arr[i]->lay = lay;
+		memset(lay, 0, sizeof(*lay));
+		memcpy(lay->in,
+		       dummy_ent_arr[i]->in->first.data,
+		       sizeof(lay->in));
+		lay->inlen = cpu_to_be32(dummy_ent_arr[i]->in->len);
+		lay->outlen = cpu_to_be32(dummy_ent_arr[i]->out->len);
+		lay->type = XSC_PCI_CMD_XPORT;
+		lay->token = dummy_ent_arr[i]->token;
+		lay->idx = dummy_ent_arr[i]->idx;
+		if (!cmd->checksum_disabled)
+			xsc_set_signature(dummy_ent_arr[i]);
+		else
+			lay->sig = 0xff;
+		temp_pid = (temp_pid + 1) % (1 << cmd->log_sz);
+	}
+
+	/* ring doorbell after the descriptor is valid */
+	wmb();
+	writel(cmd->cmd_pid, XSC_REG_ADDR(xdev, cmd->reg.req_pid_addr));
+	if (readl(XSC_REG_ADDR(xdev, cmd->reg.interrupt_stat_addr)) != 0)
+		writel(0xF, XSC_REG_ADDR(xdev, cmd->reg.interrupt_stat_addr));
+
+	if (wait_for_completion_timeout(&dummy_ent_arr[dummy_cnt - 1]->done,
+					msecs_to_jiffies(3000)) == 0) {
+		pci_err(xdev->pdev, "dummy_cmd %d ent timeout, cmdq fail\n",
+			dummy_cnt - 1);
+		err = -ETIMEDOUT;
+	}
+
+	for (i = 0; i < dummy_cnt; i++)
+		xsc_free_cmd(dummy_ent_arr[i]);
+
+	kfree(dummy_ent_arr);
+	return err;
+
+get_cmd_ent_idx_err:
+	xsc_free_cmd(dummy_ent_arr[free_cnt]);
+	up(sem);
+alloc_ent_err:
+	for (i = 0; i < free_cnt; i++) {
+		xsc_free_ent(cmd, dummy_ent_arr[i]->idx);
+		up(sem);
+		xsc_free_cmd(dummy_ent_arr[i]);
+	}
+	kfree(dummy_ent_arr);
+alloc_ent_arr_err:
+	return err;
+}
+
+static int xsc_dummy_cmd_exec(struct xsc_core_device *xdev, void *in,
+			      int in_size, void *out, int out_size,
+			      u16 dummy_cnt, u16 dummy_start)
+{
+	struct xsc_rsp_msg *outb;
+	struct xsc_cmd_msg *inb;
+	int err;
+
+	inb = xsc_alloc_msg(xdev, in_size);
+	if (IS_ERR(inb)) {
+		err = PTR_ERR(inb);
+		return err;
+	}
+
+	err = xsc_copy_to_cmd_msg(inb, in, in_size);
+	if (err) {
+		pci_err(xdev->pdev, "err %d\n", err);
+		goto out_in;
+	}
+
+	outb = xsc_alloc_rsp_msg(xdev, GFP_KERNEL, out_size);
+	if (IS_ERR(outb)) {
+		err = PTR_ERR(outb);
+		goto out_in;
+	}
+
+	err = xsc_dummy_work(xdev, inb, outb, dummy_cnt, dummy_start);
+
+	if (err)
+		goto out_out;
+
+	err = xsc_copy_from_rsp_msg(out, outb, out_size);
+
+out_out:
+	xsc_free_rsp_msg(xdev, outb);
+
+out_in:
+	xsc_free_msg(xdev, inb);
+	return err;
+}
+
+static int xsc_send_dummy_cmd(struct xsc_core_device *xdev,
+			      u16 gap, u16 dummy_start)
+{
+	struct xsc_cmd_dummy_mbox_out *out;
+	struct xsc_cmd_dummy_mbox_in in;
+	int err;
+
+	out = kzalloc(sizeof(*out), GFP_KERNEL);
+	if (!out)
+		return -ENOMEM;
+
+	memset(&in, 0, sizeof(in));
+	in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_DUMMY);
+
+	err = xsc_dummy_cmd_exec(xdev, &in, sizeof(in), out, sizeof(*out),
+				 gap, dummy_start);
+	if (err)
+		goto out_out;
+
+	if (out->hdr.status)
+		err = xsc_cmd_status_to_err(&out->hdr);
+
+out_out:
+	kfree(out);
+	return err;
+}
+
+static int xsc_request_pid_cid_mismatch_restore(struct xsc_core_device *xdev)
+{
+	struct xsc_cmd *cmd = &xdev->cmd;
+	u16 req_pid, req_cid;
+	u16 gap;
+	int err;
+
+	req_pid = readl(XSC_REG_ADDR(xdev, cmd->reg.req_pid_addr));
+	req_cid = readl(XSC_REG_ADDR(xdev, cmd->reg.req_cid_addr));
+	if (req_pid >= (1 << cmd->log_sz) || req_cid >= (1 << cmd->log_sz)) {
+		pci_err(xdev->pdev,
+			"req_pid %d, req_cid %d out of range, max value is %d\n",
+			req_pid, req_cid, (1 << cmd->log_sz));
+		return -EINVAL;
+	}
+
+	if (req_pid == req_cid)
+		return 0;
+
+	gap = (req_pid > req_cid) ? (req_pid - req_cid)
+				  : ((1 << cmd->log_sz) + req_pid - req_cid);
+
+	err = xsc_send_dummy_cmd(xdev, gap, req_cid);
+	if (err)
+		pci_err(xdev->pdev, "Send dummy cmd failed\n");
+
+	return err;
+}
+
+static int _xsc_cmd_exec(struct xsc_core_device *xdev,
+			 void *in, int in_size,
+			 void *out, int out_size)
+{
+	struct xsc_cmd *cmd = &xdev->cmd;
+	struct xsc_rsp_msg *outb;
+	struct xsc_cmd_msg *inb;
+	u8 status = 0;
+	int err;
+
+	if (cmd->cmd_status == XSC_CMD_STATUS_TIMEDOUT)
+		return -ETIMEDOUT;
+
+	inb = xsc_alloc_msg(xdev, in_size);
+	if (IS_ERR(inb)) {
+		err = PTR_ERR(inb);
+		return err;
+	}
+
+	err = xsc_copy_to_cmd_msg(inb, in, in_size);
+	if (err) {
+		pci_err(xdev->pdev, "copy to cmd_msg err %d\n", err);
+		goto out_in;
+	}
+
+	outb = xsc_alloc_rsp_msg(xdev, GFP_KERNEL, out_size);
+	if (IS_ERR(outb)) {
+		err = PTR_ERR(outb);
+		goto out_in;
+	}
+
+	err = xsc_cmd_invoke(xdev, inb, outb, &status);
+	if (err)
+		goto out_out;
+
+	if (status) {
+		pci_err(xdev->pdev, "opcode:%#x, err %d, status %d\n",
+			xsc_msg_to_opcode(inb), err, status);
+		err = xsc_status_to_err(status);
+		goto out_out;
+	}
+
+	err = xsc_copy_from_rsp_msg(out, outb, out_size);
+
+out_out:
+	xsc_free_rsp_msg(xdev, outb);
+
+out_in:
+	xsc_free_msg(xdev, inb);
+	return err;
+}
+
+int xsc_cmd_exec(struct xsc_core_device *xdev,
+		 void *in, int in_size,
+		 void *out, int out_size)
+{
+	struct xsc_inbox_hdr *hdr = (struct xsc_inbox_hdr *)in;
+
+	/* only cmdq protocol version 0 is supported */
+	hdr->ver = 0;
+
+	return _xsc_cmd_exec(xdev, in, in_size, out, out_size);
+}
+EXPORT_SYMBOL(xsc_cmd_exec);
+
+static void xsc_destroy_msg_cache(struct xsc_core_device *xdev)
+{
+	struct xsc_cmd *cmd = &xdev->cmd;
+	struct xsc_cmd_msg *msg;
+	struct xsc_cmd_msg *n;
+
+	list_for_each_entry_safe(msg, n, &cmd->cache.large.head, list) {
+		list_del(&msg->list);
+		xsc_free_cmd_msg(xdev, msg);
+	}
+
+	list_for_each_entry_safe(msg, n, &cmd->cache.med.head, list) {
+		list_del(&msg->list);
+		xsc_free_cmd_msg(xdev, msg);
+	}
+}
+
+static int xsc_create_msg_cache(struct xsc_core_device *xdev)
+{
+	struct xsc_cmd *cmd = &xdev->cmd;
+	struct xsc_cmd_msg *msg;
+	int err;
+	int i;
+
+	spin_lock_init(&cmd->cache.large.lock);
+	INIT_LIST_HEAD(&cmd->cache.large.head);
+	spin_lock_init(&cmd->cache.med.lock);
+	INIT_LIST_HEAD(&cmd->cache.med.head);
+
+	for (i = 0; i < NUM_LONG_LISTS; i++) {
+		msg = xsc_alloc_cmd_msg(xdev, GFP_KERNEL, LONG_LIST_SIZE);
+		if (IS_ERR(msg)) {
+			err = PTR_ERR(msg);
+			goto ex_err;
+		}
+		msg->cache = &cmd->cache.large;
+		list_add_tail(&msg->list, &cmd->cache.large.head);
+	}
+
+	for (i = 0; i < NUM_MED_LISTS; i++) {
+		msg = xsc_alloc_cmd_msg(xdev, GFP_KERNEL, MED_LIST_SIZE);
+		if (IS_ERR(msg)) {
+			err = PTR_ERR(msg);
+			goto ex_err;
+		}
+		msg->cache = &cmd->cache.med;
+		list_add_tail(&msg->list, &cmd->cache.med.head);
+	}
+
+	return 0;
+
+ex_err:
+	xsc_destroy_msg_cache(xdev);
+	return err;
+}
+
+static void xsc_cmd_comp_handler(struct xsc_core_device *xdev, u8 idx,
+				 struct xsc_rsp_layout *rsp)
+{
+	struct xsc_cmd *cmd = &xdev->cmd;
+	struct xsc_cmd_work_ent *ent;
+
+	if (idx > cmd->max_reg_cmds || (cmd->cmd_entry_mask & (1 << idx))) {
+		pci_err(xdev->pdev, "idx[%d] exceeds max cmds or has no pending request\n",
+			idx);
+		return;
+	}
+	ent = cmd->ent_arr[idx];
+	ent->rsp_lay = rsp;
+	ktime_get_ts64(&ent->ts2);
+
+	memcpy(ent->out->first.data,
ent->rsp_lay->out, + sizeof(ent->rsp_lay->out)); + if (!cmd->checksum_disabled) + ent->ret = xsc_verify_signature(ent); + else + ent->ret = 0; + ent->status = 0; + + xsc_free_ent(cmd, ent->idx); + complete(&ent->done); + up(&cmd->sem); +} + +static int xsc_cmd_cq_polling(void *data) +{ + struct xsc_core_device *xdev = data; + struct xsc_cmd *cmd = &xdev->cmd; + struct xsc_rsp_layout *rsp; + u32 cq_pid; + + while (!kthread_should_stop()) { + if (need_resched()) + schedule(); + cq_pid = readl(XSC_REG_ADDR(xdev, cmd->reg.rsp_pid_addr)); + if (cmd->cq_cid == cq_pid) { + mdelay(3); + continue; + } + + //get cqe + rsp = xsc_get_cq_inst(cmd, cmd->cq_cid); + if (!cmd->ownerbit_learned) { + cmd->ownerbit_learned = 1; + cmd->owner_bit = rsp->owner_bit; + } + if (cmd->owner_bit != rsp->owner_bit) { + //hw update cq doorbell but buf may not ready + pci_err(xdev->pdev, "hw update cq doorbell but buf not ready %u %u\n", + cmd->cq_cid, cq_pid); + continue; + } + + xsc_cmd_comp_handler(xdev, rsp->idx, rsp); + + cmd->cq_cid = (cmd->cq_cid + 1) % (1 << cmd->log_sz); + + writel(cmd->cq_cid, XSC_REG_ADDR(xdev, cmd->reg.rsp_cid_addr)); + if (cmd->cq_cid == 0) + cmd->owner_bit = !cmd->owner_bit; + } + return 0; +} + +int xsc_cmd_err_handler(struct xsc_core_device *xdev) +{ + union interrupt_stat { + struct { + u32 hw_read_req_err:1; + u32 hw_write_req_err:1; + u32 req_pid_err:1; + u32 rsp_cid_err:1; + }; + u32 raw; + } stat; + int retry = 0; + int err = 0; + + stat.raw = readl(XSC_REG_ADDR(xdev, xdev->cmd.reg.interrupt_stat_addr)); + while (stat.raw != 0) { + err++; + if (stat.hw_read_req_err) { + retry = 1; + stat.hw_read_req_err = 0; + pci_err(xdev->pdev, "hw report read req from host failed!\n"); + } else if (stat.hw_write_req_err) { + retry = 1; + stat.hw_write_req_err = 0; + pci_err(xdev->pdev, "hw report write req to fw failed!\n"); + } else if (stat.req_pid_err) { + stat.req_pid_err = 0; + pci_err(xdev->pdev, "hw report unexpected req pid!\n"); + } else if (stat.rsp_cid_err) { + stat.rsp_cid_err = 0; + pci_err(xdev->pdev, "hw report unexpected rsp cid!\n"); + } else { + stat.raw = 0; + pci_err(xdev->pdev, "ignore unknown interrupt!\n"); + } + } + + if (retry) + writel(xdev->cmd.cmd_pid, + XSC_REG_ADDR(xdev, xdev->cmd.reg.req_pid_addr)); + + if (err) + writel(0xf, + XSC_REG_ADDR(xdev, xdev->cmd.reg.interrupt_stat_addr)); + + return err; +} + +void xsc_cmd_resp_handler(struct xsc_core_device *xdev) +{ + struct xsc_cmd *cmd = &xdev->cmd; + struct xsc_rsp_layout *rsp; + const int budget = 32; + int count = 0; + u32 cq_pid; + + while (count < budget) { + cq_pid = readl(XSC_REG_ADDR(xdev, cmd->reg.rsp_pid_addr)); + if (cq_pid == cmd->cq_cid) + return; + + rsp = xsc_get_cq_inst(cmd, cmd->cq_cid); + if (!cmd->ownerbit_learned) { + cmd->ownerbit_learned = 1; + cmd->owner_bit = rsp->owner_bit; + } + if (cmd->owner_bit != rsp->owner_bit) { + pci_err(xdev->pdev, "hw update cq doorbell but buf not ready %u %u\n", + cmd->cq_cid, cq_pid); + return; + } + + xsc_cmd_comp_handler(xdev, rsp->idx, rsp); + + cmd->cq_cid = (cmd->cq_cid + 1) % (1 << cmd->log_sz); + writel(cmd->cq_cid, XSC_REG_ADDR(xdev, cmd->reg.rsp_cid_addr)); + if (cmd->cq_cid == 0) + cmd->owner_bit = !cmd->owner_bit; + + count++; + } +} + +static void xsc_cmd_handle_rsp_before_reload +(struct xsc_cmd *cmd, struct xsc_core_device *xdev) +{ + u32 rsp_pid, rsp_cid; + + rsp_pid = readl(XSC_REG_ADDR(xdev, cmd->reg.rsp_pid_addr)); + rsp_cid = readl(XSC_REG_ADDR(xdev, cmd->reg.rsp_cid_addr)); + if (rsp_pid == rsp_cid) + return; + + cmd->cq_cid = rsp_pid; + + 
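+	/* A previous driver instance left responses unconsumed; jump the
+	 * consumer index to the producer index and acknowledge it below so
+	 * the response queue starts out empty after reload.
+	 */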
writel(cmd->cq_cid, XSC_REG_ADDR(xdev, cmd->reg.rsp_cid_addr)); +} + +int xsc_cmd_init(struct xsc_core_device *xdev) +{ + int size = sizeof(struct xsc_cmd_prot_block); + int align = roundup_pow_of_two(size); + struct xsc_cmd *cmd = &xdev->cmd; + u32 cmd_h, cmd_l; + u32 err_stat; + int err; + int i; + + //sriov need adapt for this process. + //now there is 544 cmdq resource, soc using from id 514 + cmd->reg.req_pid_addr = HIF_CMDQM_HOST_REQ_PID_MEM_ADDR; + cmd->reg.req_cid_addr = HIF_CMDQM_HOST_REQ_CID_MEM_ADDR; + cmd->reg.rsp_pid_addr = HIF_CMDQM_HOST_RSP_PID_MEM_ADDR; + cmd->reg.rsp_cid_addr = HIF_CMDQM_HOST_RSP_CID_MEM_ADDR; + cmd->reg.req_buf_h_addr = HIF_CMDQM_HOST_REQ_BUF_BASE_H_ADDR_MEM_ADDR; + cmd->reg.req_buf_l_addr = HIF_CMDQM_HOST_REQ_BUF_BASE_L_ADDR_MEM_ADDR; + cmd->reg.rsp_buf_h_addr = HIF_CMDQM_HOST_RSP_BUF_BASE_H_ADDR_MEM_ADDR; + cmd->reg.rsp_buf_l_addr = HIF_CMDQM_HOST_RSP_BUF_BASE_L_ADDR_MEM_ADDR; + cmd->reg.msix_vec_addr = HIF_CMDQM_VECTOR_ID_MEM_ADDR; + cmd->reg.element_sz_addr = HIF_CMDQM_Q_ELEMENT_SZ_REG_ADDR; + cmd->reg.q_depth_addr = HIF_CMDQM_HOST_Q_DEPTH_REG_ADDR; + cmd->reg.interrupt_stat_addr = HIF_CMDQM_HOST_VF_ERR_STS_MEM_ADDR; + + cmd->pool = dma_pool_create("xsc_cmd", + &xdev->pdev->dev, + size, align, 0); + if (!cmd->pool) + return -ENOMEM; + + cmd->cmd_buf = (void *)__get_free_pages(GFP_ATOMIC, 0); + if (!cmd->cmd_buf) { + err = -ENOMEM; + goto err_free_pool; + } + cmd->cq_buf = (void *)__get_free_pages(GFP_ATOMIC, 0); + if (!cmd->cq_buf) { + err = -ENOMEM; + goto err_free_cmd; + } + + cmd->dma = dma_map_single(&xdev->pdev->dev, cmd->cmd_buf, PAGE_SIZE, + DMA_BIDIRECTIONAL); + if (dma_mapping_error(&xdev->pdev->dev, cmd->dma)) { + err = -ENOMEM; + goto err_free; + } + + cmd->cq_dma = dma_map_single(&xdev->pdev->dev, cmd->cq_buf, PAGE_SIZE, + DMA_BIDIRECTIONAL); + if (dma_mapping_error(&xdev->pdev->dev, cmd->cq_dma)) { + err = -ENOMEM; + goto err_map_cmd; + } + + cmd->cmd_pid = readl(XSC_REG_ADDR(xdev, cmd->reg.req_pid_addr)); + cmd->cq_cid = readl(XSC_REG_ADDR(xdev, cmd->reg.rsp_cid_addr)); + cmd->ownerbit_learned = 0; + + xsc_cmd_handle_rsp_before_reload(cmd, xdev); + +#define ELEMENT_SIZE_LOG 6 //64B +#define Q_DEPTH_LOG 5 //32 + + cmd->log_sz = Q_DEPTH_LOG; + cmd->log_stride = readl(XSC_REG_ADDR(xdev, cmd->reg.element_sz_addr)); + writel(1 << cmd->log_sz, XSC_REG_ADDR(xdev, cmd->reg.q_depth_addr)); + if (cmd->log_stride != ELEMENT_SIZE_LOG) { + dev_err(&xdev->pdev->dev, "firmware failed to init cmdq, log_stride=(%d, %d)\n", + cmd->log_stride, ELEMENT_SIZE_LOG); + err = -ENODEV; + goto err_map; + } + + if (1 << cmd->log_sz > XSC_MAX_COMMANDS) { + dev_err(&xdev->pdev->dev, "firmware reports too many outstanding commands %d\n", + 1 << cmd->log_sz); + err = -EINVAL; + goto err_map; + } + + if (cmd->log_sz + cmd->log_stride > PAGE_SHIFT) { + dev_err(&xdev->pdev->dev, "command queue size overflow\n"); + err = -EINVAL; + goto err_map; + } + + cmd->checksum_disabled = 1; + cmd->max_reg_cmds = (1 << cmd->log_sz) - 1; + cmd->cmd_entry_mask = (1 << cmd->max_reg_cmds) - 1; + + spin_lock_init(&cmd->alloc_lock); + spin_lock_init(&cmd->token_lock); + spin_lock_init(&cmd->doorbell_lock); + for (i = 0; i < ARRAY_SIZE(cmd->stats); i++) + spin_lock_init(&cmd->stats[i].lock); + + sema_init(&cmd->sem, cmd->max_reg_cmds); + + cmd_h = (u32)((u64)(cmd->dma) >> 32); + cmd_l = (u32)(cmd->dma); + if (cmd_l & 0xfff) { + dev_err(&xdev->pdev->dev, "invalid command queue address\n"); + err = -ENOMEM; + goto err_map; + } + + writel(cmd_h, XSC_REG_ADDR(xdev, cmd->reg.req_buf_h_addr)); + 
writel(cmd_l, XSC_REG_ADDR(xdev, cmd->reg.req_buf_l_addr)); + + cmd_h = (u32)((u64)(cmd->cq_dma) >> 32); + cmd_l = (u32)(cmd->cq_dma); + if (cmd_l & 0xfff) { + dev_err(&xdev->pdev->dev, "invalid command queue address\n"); + err = -ENOMEM; + goto err_map; + } + writel(cmd_h, XSC_REG_ADDR(xdev, cmd->reg.rsp_buf_h_addr)); + writel(cmd_l, XSC_REG_ADDR(xdev, cmd->reg.rsp_buf_l_addr)); + + /* Make sure firmware sees the complete address before we proceed */ + wmb(); + + cmd->mode = XSC_CMD_MODE_POLLING; + cmd->cmd_status = XSC_CMD_STATUS_NORMAL; + + err = xsc_create_msg_cache(xdev); + if (err) { + dev_err(&xdev->pdev->dev, "failed to create command cache\n"); + goto err_map; + } + + xsc_set_wqname(xdev); + cmd->wq = create_singlethread_workqueue(cmd->wq_name); + if (!cmd->wq) { + dev_err(&xdev->pdev->dev, "failed to create command workqueue\n"); + err = -ENOMEM; + goto err_cache; + } + + cmd->cq_task = kthread_create(xsc_cmd_cq_polling, + (void *)xdev, + "xsc_cmd_cq_polling"); + if (!cmd->cq_task) { + dev_err(&xdev->pdev->dev, "failed to create cq task\n"); + err = -ENOMEM; + goto err_wq; + } + wake_up_process(cmd->cq_task); + + err = xsc_request_pid_cid_mismatch_restore(xdev); + if (err) { + dev_err(&xdev->pdev->dev, "request pid,cid wrong, restore failed\n"); + goto err_req_restore; + } + + // clear abnormal state to avoid the impact of previous error + err_stat = readl(XSC_REG_ADDR(xdev, xdev->cmd.reg.interrupt_stat_addr)); + if (err_stat) { + pci_err(xdev->pdev, "err_stat 0x%x when init, clear it\n", + err_stat); + writel(0xf, + XSC_REG_ADDR(xdev, xdev->cmd.reg.interrupt_stat_addr)); + } + + return 0; + +err_req_restore: + kthread_stop(cmd->cq_task); + +err_wq: + destroy_workqueue(cmd->wq); + +err_cache: + xsc_destroy_msg_cache(xdev); + +err_map: + dma_unmap_single(&xdev->pdev->dev, cmd->cq_dma, PAGE_SIZE, + DMA_BIDIRECTIONAL); + +err_map_cmd: + dma_unmap_single(&xdev->pdev->dev, cmd->dma, PAGE_SIZE, + DMA_BIDIRECTIONAL); +err_free: + free_pages((unsigned long)cmd->cq_buf, 0); + +err_free_cmd: + free_pages((unsigned long)cmd->cmd_buf, 0); + +err_free_pool: + dma_pool_destroy(cmd->pool); + + return err; +} + +void xsc_cmd_cleanup(struct xsc_core_device *xdev) +{ + struct xsc_cmd *cmd = &xdev->cmd; + + destroy_workqueue(cmd->wq); + if (cmd->cq_task) + kthread_stop(cmd->cq_task); + xsc_destroy_msg_cache(xdev); + dma_unmap_single(&xdev->pdev->dev, cmd->dma, PAGE_SIZE, + DMA_BIDIRECTIONAL); + free_pages((unsigned long)cmd->cq_buf, 0); + dma_unmap_single(&xdev->pdev->dev, cmd->cq_dma, PAGE_SIZE, + DMA_BIDIRECTIONAL); + free_pages((unsigned long)cmd->cmd_buf, 0); + dma_pool_destroy(cmd->pool); +} + +int xsc_cmd_version_check(struct xsc_core_device *xdev) +{ + struct xsc_cmd_query_cmdq_ver_mbox_out *out; + struct xsc_cmd_query_cmdq_ver_mbox_in in; + + int err; + + out = kzalloc(sizeof(*out), GFP_KERNEL); + if (!out) { + err = -ENOMEM; + goto no_mem_out; + } + + memset(&in, 0, sizeof(in)); + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_QUERY_CMDQ_VERSION); + + err = xsc_cmd_exec(xdev, &in, sizeof(in), out, sizeof(*out)); + if (err) + goto out_out; + + if (out->hdr.status) { + err = xsc_cmd_status_to_err(&out->hdr); + goto out_out; + } + + if (be16_to_cpu(out->cmdq_ver) != XSC_CMDQ_VERSION) { + pci_err(xdev->pdev, "cmdq version check failed, expecting %d, actual %d\n", + XSC_CMDQ_VERSION, be16_to_cpu(out->cmdq_ver)); + err = -EINVAL; + goto out_out; + } + xdev->cmdq_ver = XSC_CMDQ_VERSION; + +out_out: + kfree(out); +no_mem_out: + return err; +} + +static const char *cmd_status_str(u8 status) +{ + switch 
(status) { + case XSC_CMD_STAT_OK: + return "OK"; + case XSC_CMD_STAT_INT_ERR: + return "internal error"; + case XSC_CMD_STAT_BAD_OP_ERR: + return "bad operation"; + case XSC_CMD_STAT_BAD_PARAM_ERR: + return "bad parameter"; + case XSC_CMD_STAT_BAD_SYS_STATE_ERR: + return "bad system state"; + case XSC_CMD_STAT_BAD_RES_ERR: + return "bad resource"; + case XSC_CMD_STAT_RES_BUSY: + return "resource busy"; + case XSC_CMD_STAT_LIM_ERR: + return "limits exceeded"; + case XSC_CMD_STAT_BAD_RES_STATE_ERR: + return "bad resource state"; + case XSC_CMD_STAT_IX_ERR: + return "bad index"; + case XSC_CMD_STAT_NO_RES_ERR: + return "no resources"; + case XSC_CMD_STAT_BAD_INP_LEN_ERR: + return "bad input length"; + case XSC_CMD_STAT_BAD_OUTP_LEN_ERR: + return "bad output length"; + case XSC_CMD_STAT_BAD_QP_STATE_ERR: + return "bad QP state"; + case XSC_CMD_STAT_BAD_PKT_ERR: + return "bad packet (discarded)"; + case XSC_CMD_STAT_BAD_SIZE_OUTS_CQES_ERR: + return "bad size too many outstanding CQEs"; + default: + return "unknown status"; + } +} + +int xsc_cmd_status_to_err(struct xsc_outbox_hdr *hdr) +{ + if (!hdr->status) + return 0; + + pr_warn("command failed, status %s(0x%x)\n", + cmd_status_str(hdr->status), hdr->status); + + switch (hdr->status) { + case XSC_CMD_STAT_OK: return 0; + case XSC_CMD_STAT_INT_ERR: return -EIO; + case XSC_CMD_STAT_BAD_OP_ERR: return -EOPNOTSUPP; + case XSC_CMD_STAT_BAD_PARAM_ERR: return -EINVAL; + case XSC_CMD_STAT_BAD_SYS_STATE_ERR: return -EIO; + case XSC_CMD_STAT_BAD_RES_ERR: return -EINVAL; + case XSC_CMD_STAT_RES_BUSY: return -EBUSY; + case XSC_CMD_STAT_LIM_ERR: return -EINVAL; + case XSC_CMD_STAT_BAD_RES_STATE_ERR: return -EINVAL; + case XSC_CMD_STAT_IX_ERR: return -EINVAL; + case XSC_CMD_STAT_NO_RES_ERR: return -EAGAIN; + case XSC_CMD_STAT_BAD_INP_LEN_ERR: return -EIO; + case XSC_CMD_STAT_BAD_OUTP_LEN_ERR: return -EIO; + case XSC_CMD_STAT_BAD_QP_STATE_ERR: return -EINVAL; + case XSC_CMD_STAT_BAD_PKT_ERR: return -EINVAL; + case XSC_CMD_STAT_BAD_SIZE_OUTS_CQES_ERR: return -EINVAL; + default: return -EIO; + } +} + diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c index ec3181a8e..ead26a57a 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c @@ -4,6 +4,7 @@ */ #include "common/xsc_core.h" +#include "common/xsc_driver.h" static const struct pci_device_id xsc_pci_id_table[] = { { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MC_PF_DEV_ID) }, @@ -175,6 +176,77 @@ static void xsc_core_dev_cleanup(struct xsc_core_device *xdev) xsc_dev_res_cleanup(xdev); } +static int xsc_hw_setup(struct xsc_core_device *xdev) +{ + int err; + + err = xsc_cmd_init(xdev); + if (err) { + pci_err(xdev->pdev, "Failed initializing command interface, aborting\n"); + goto out; + } + + err = xsc_cmd_version_check(xdev); + if (err) { + pci_err(xdev->pdev, "Failed to check cmdq version\n"); + goto err_cmd_cleanup; + } + + return 0; +err_cmd_cleanup: + xsc_cmd_cleanup(xdev); +out: + return err; +} + +static int xsc_hw_cleanup(struct xsc_core_device *xdev) +{ + xsc_cmd_cleanup(xdev); + + return 0; +} + +static int xsc_load(struct xsc_core_device *xdev) +{ + int err = 0; + + mutex_lock(&xdev->intf_state_mutex); + if (test_bit(XSC_INTERFACE_STATE_UP, &xdev->intf_state)) + goto out; + + err = xsc_hw_setup(xdev); + if (err) { + pci_err(xdev->pdev, "xsc_hw_setup failed %d\n", err); + goto out; + } + + set_bit(XSC_INTERFACE_STATE_UP, &xdev->intf_state); + mutex_unlock(&xdev->intf_state_mutex); + + return 0; 
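+	/* Reached both when the interface was already up (err == 0) and
+	 * when xsc_hw_setup() failed.
+	 */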
+out: + mutex_unlock(&xdev->intf_state_mutex); + return err; +} + +static int xsc_unload(struct xsc_core_device *xdev) +{ + mutex_lock(&xdev->intf_state_mutex); + if (!test_bit(XSC_INTERFACE_STATE_UP, &xdev->intf_state)) { + xsc_hw_cleanup(xdev); + goto out; + } + + clear_bit(XSC_INTERFACE_STATE_UP, &xdev->intf_state); + + xsc_hw_cleanup(xdev); + +out: + mutex_unlock(&xdev->intf_state_mutex); + + return 0; +} + static int xsc_pci_probe(struct pci_dev *pci_dev, const struct pci_device_id *id) { @@ -201,7 +273,15 @@ static int xsc_pci_probe(struct pci_dev *pci_dev, goto err_pci_fini; } + err = xsc_load(xdev); + if (err) { + pci_err(xdev->pdev, "xsc_load failed %d\n", err); + goto err_core_dev_cleanup; + } + return 0; +err_core_dev_cleanup: + xsc_core_dev_cleanup(xdev); err_pci_fini: xsc_pci_fini(xdev); err_unset_pci_drvdata: @@ -215,6 +295,7 @@ static void xsc_pci_remove(struct pci_dev *pci_dev) { struct xsc_core_device *xdev = pci_get_drvdata(pci_dev); + xsc_unload(xdev); xsc_core_dev_cleanup(xdev); xsc_pci_fini(xdev); pci_set_drvdata(pci_dev, NULL); From patchwork Mon Feb 24 17:24:22 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xin Tian X-Patchwork-Id: 13988570 X-Patchwork-Delegate: kuba@kernel.org Received: from lf-2-53.ptr.blmpb.com (lf-2-53.ptr.blmpb.com [101.36.218.53]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4840A1537C8 for ; Mon, 24 Feb 2025 17:27:52 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=101.36.218.53 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740418076; cv=none; b=BS85McKjYs/vOJETUk3+/V5pOZmvuIOOph68ENn7E2eAJZ0wcd3umz7TaGtCZ/Q0FfuweXjfV9a9uA8PrfcI2YyR/9NjjNl3Sx5TBvah3TQvcx3hEkLfqepYuanBzoWYsvoMpyHuZxI9uwubKIjbBiZEfteSPRKFcyiiXuzgJU0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740418076; c=relaxed/simple; bh=dWejgaXnhS5wItOPaffNzM3hvLquicYCS1ymWjxhBG4=; h=Message-Id:Mime-Version:References:In-Reply-To:To:Date:Cc:From: Content-Type:Subject; b=RbZv0+D+dkKx8eSOBIBKrJuzgRCdje77zGvvjpH3ussoYzF14gpk+hnu9SBOUg6J/lFa5vplizp5AfyWsJIPJYaYGj/HAmrRt60Lri8c5VxpRPrAk8+Xkk2j7dByWqtiA9ifVQxe6almorPSs4d5rO4ttR4jvzA6ffM8HwUMzkU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=yunsilicon.com; spf=pass smtp.mailfrom=yunsilicon.com; dkim=pass (2048-bit key) header.d=yunsilicon.com header.i=@yunsilicon.com header.b=N8Uh6wx6; arc=none smtp.client-ip=101.36.218.53 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=yunsilicon.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=yunsilicon.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=yunsilicon.com header.i=@yunsilicon.com header.b="N8Uh6wx6" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; s=feishu2403070942; d=yunsilicon.com; t=1740417864; h=from:subject: mime-version:from:date:message-id:subject:to:cc:reply-to:content-type: mime-version:in-reply-to:message-id; bh=rHZbJi2I8YH9fy586Krb3MMrp1t3R9d6UkZfUaeh5Ec=; b=N8Uh6wx6WduaGBOHB5mFwPtJwftsfzqk6AZPETG9umwQ8sI9B/XKXLTDIMs4TVGi3k271A iJsAwPQldzw2hpsGqSjGwLGgIJKlWdOyKMechdQBma92ZAk0mLL39lARA/GQL4iUbEB4K9 N6Fk5urJqfAWT2hEF3LRbQAY3ZzHouaTIFeEtmd+UimqAOehvnVgu92l8htLeaMrSZZgR8 
/W5sXWWwlcWxkpZ0ihjJ6XedWrISkVLZ4rApo5APam9VSxTN9R7Ns+Pi/8TquSXV+ajPJY
	E06F0PnL+5WH9I4VjK8Zjl9sqCc9PjGb5Wy7BnCU9zaEPgY0B0vnvdAToP5SEA==
Message-Id: <20250224172421.2455751-4-tianx@yunsilicon.com>
Precedence: bulk
X-Mailing-List: netdev@vger.kernel.org
Mime-Version: 1.0
Received: from ubuntu-liun.yunsilicon.com ([58.34.192.114]) by smtp.feishu.cn
	with ESMTPS; Tue, 25 Feb 2025 01:24:22 +0800
References: <20250224172416.2455751-1-tianx@yunsilicon.com>
In-Reply-To: <20250224172416.2455751-1-tianx@yunsilicon.com>
Date: Tue, 25 Feb 2025 01:24:22 +0800
X-Mailer: git-send-email 2.25.1
From: "Xin Tian"
Subject: [PATCH net-next v5 03/14] xsc: Add hardware setup APIs
X-Original-From: Xin Tian
X-Patchwork-Delegate: kuba@kernel.org

After CMDQ is initialized, the driver can retrieve hardware information
from the firmware. This patch provides APIs to obtain the Host Channel
Adapter (HCA) capabilities and the Globally Unique Identifier (GUID) of
the board, to activate the hardware configuration, and to reset
function-specific resources.

Co-developed-by: Honggang Wei
Signed-off-by: Honggang Wei
Co-developed-by: Lei Yan
Signed-off-by: Lei Yan
Signed-off-by: Xin Tian
---
 .../ethernet/yunsilicon/xsc/common/xsc_core.h | 158 ++++++++++
 .../net/ethernet/yunsilicon/xsc/pci/Makefile  |   2 +-
 drivers/net/ethernet/yunsilicon/xsc/pci/hw.c  | 277 ++++++++++++++++++
 drivers/net/ethernet/yunsilicon/xsc/pci/hw.h  |  18 ++
 .../net/ethernet/yunsilicon/xsc/pci/main.c    |  27 ++
 5 files changed, 481 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/hw.c
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/hw.h

diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
index 1f8e4eb77..4c8b26660 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
@@ -30,6 +30,145 @@
 #define XSC_REG_ADDR(dev, offset) \
 	(((dev)->bar) + ((offset) - 0xA0000000))
 
+enum {
+	XSC_MAX_PORTS = 2,
+};
+
+enum {
+	XSC_MAX_FW_PORTS = 1,
+};
+
+enum {
+	XSC_BF_REGS_PER_PAGE = 4,
+	XSC_MAX_UAR_PAGES = 1 << 8,
+	XSC_MAX_UUARS = XSC_MAX_UAR_PAGES * XSC_BF_REGS_PER_PAGE,
+};
+
+// hw
+struct xsc_reg_addr {
+	u64 tx_db;
+	u64 rx_db;
+	u64 complete_db;
+	u64 complete_reg;
+	u64 event_db;
+	u64 cpm_get_lock;
+	u64 cpm_put_lock;
+	u64 cpm_lock_avail;
+	u64 cpm_data_mem;
+	u64 cpm_cmd;
+	u64 cpm_addr;
+	u64 cpm_busy;
+};
+
+struct xsc_board_info {
+	u32 board_id;
+	char board_sn[XSC_BOARD_SN_LEN];
+	__be64 guid;
+	u8 guid_valid;
+	u8 hw_config_activated;
+};
+
+struct xsc_port_caps {
+	int gid_table_len;
+	int pkey_table_len;
+};
+
+struct xsc_caps {
+	u8 log_max_eq;
+	u8 log_max_cq;
+	u8 log_max_qp;
+	u8 log_max_mkey;
+	u8 log_max_pd;
+	u8 log_max_srq;
+	u8 log_max_msix;
+	u32 max_cqes;
+	u32 max_wqes;
+	u32 max_sq_desc_sz;
+	u32 max_rq_desc_sz;
+	u64 flags;
+	u16 stat_rate_support;
+	u32 log_max_msg;
+	u32 num_ports;
+	u32 max_ra_res_qp;
+	u32 max_ra_req_qp;
+	u32 max_srq_wqes;
+	u32 bf_reg_size;
+	u32 bf_regs_per_page;
+	struct xsc_port_caps port[XSC_MAX_PORTS];
+	u8 ext_port_cap[XSC_MAX_PORTS];
+	u32 reserved_lkey;
+	u8 local_ca_ack_delay;
+	u8 log_max_mcg;
+	u16 max_qp_mcg;
+	u32 min_page_sz;
+	u32 send_ds_num;
+	u32 send_wqe_shift;
+	u32 recv_ds_num;
+	u32 recv_wqe_shift;
+	u32 rx_pkt_len_max;
+
+	u32 msix_enable:1;
+	u32 port_type:1;
+	u32 embedded_cpu:1;
+	u32 eswitch_manager:1;
+	u32
ecpf_vport_exists:1; + u32 vport_group_manager:1; + u32 sf:1; + u32 wqe_inline_mode:3; + u32 raweth_qp_id_base:15; + u32 rsvd0:7; + + u16 max_vfs; + u8 log_max_qp_depth; + u8 log_max_current_uc_list; + u8 log_max_current_mc_list; + u16 log_max_vlan_list; + u8 fdb_multi_path_to_table; + u8 log_esw_max_sched_depth; + + u8 max_num_sf_partitions; + u8 log_max_esw_sf; + u16 sf_base_id; + + u32 max_tc:8; + u32 ets:1; + u32 dcbx:1; + u32 dscp:1; + u32 sbcam_reg:1; + u32 qos:1; + u32 port_buf:1; + u32 rsvd1:2; + u32 raw_tpe_qp_num:16; + u32 max_num_eqs:8; + u32 mac_port:8; + u32 raweth_rss_qp_id_base:16; + u16 msix_base; + u16 msix_num; + u8 log_max_mtt; + u8 log_max_tso; + u32 hca_core_clock; + u32 max_rwq_indirection_tables;/*rss_caps*/ + u32 max_rwq_indirection_table_size;/*rss_caps*/ + u16 raweth_qp_id_end; + u32 qp_rate_limit_min; + u32 qp_rate_limit_max; + u32 hw_feature_flag; + u16 pf0_vf_funcid_base; + u16 pf0_vf_funcid_top; + u16 pf1_vf_funcid_base; + u16 pf1_vf_funcid_top; + u16 pcie0_pf_funcid_base; + u16 pcie0_pf_funcid_top; + u16 pcie1_pf_funcid_base; + u16 pcie1_pf_funcid_top; + u8 nif_port_num; + u8 pcie_host; + u8 mac_bit; + u16 funcid_to_logic_port; + u8 lag_logic_port_ofst; +}; + +// xsc_core struct xsc_dev_resource { /* protect buffer allocation according to numa node */ struct mutex alloc_mutex; @@ -53,6 +192,9 @@ struct xsc_core_device { void __iomem *bar; int bar_num; + u8 mac_port; + u16 glb_func_id; + struct xsc_cmd cmd; u16 cmdq_ver; @@ -60,6 +202,22 @@ struct xsc_core_device { enum xsc_pci_state pci_state; struct mutex intf_state_mutex; /* protect intf_state */ unsigned long intf_state; + + struct xsc_caps caps; + struct xsc_board_info *board_info; + + struct xsc_reg_addr regs; + u32 chip_ver_h; + u32 chip_ver_m; + u32 chip_ver_l; + u32 hotfix_num; + u32 feature_flag; + + u8 fw_version_major; + u8 fw_version_minor; + u16 fw_version_patch; + u32 fw_version_tweak; + u8 fw_version_extra_flag; }; #endif diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile index 5e0f0a205..fea625d54 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile @@ -6,4 +6,4 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o -xsc_pci-y := main.o cmdq.o +xsc_pci-y := main.o cmdq.o hw.o diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/hw.c b/drivers/net/ethernet/yunsilicon/xsc/pci/hw.c new file mode 100644 index 000000000..898b80216 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/hw.c @@ -0,0 +1,277 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. 
+ */ + +#include +#include +#include "common/xsc_driver.h" +#include "hw.h" + +#define MAX_BOARD_NUM 32 + +static struct xsc_board_info *board_info[MAX_BOARD_NUM]; + +static struct xsc_board_info *xsc_get_board_info(char *board_sn) +{ + int i; + + for (i = 0; i < MAX_BOARD_NUM; i++) { + if (!board_info[i]) + continue; + if (!strncmp(board_info[i]->board_sn, board_sn, + XSC_BOARD_SN_LEN)) + return board_info[i]; + } + return NULL; +} + +static struct xsc_board_info *xsc_alloc_board_info(void) +{ + int i; + + for (i = 0; i < MAX_BOARD_NUM; i++) { + if (!board_info[i]) + break; + } + if (i == MAX_BOARD_NUM) + return NULL; + board_info[i] = vmalloc(sizeof(*board_info[i])); + if (!board_info[i]) + return NULL; + memset(board_info[i], 0, sizeof(*board_info[i])); + board_info[i]->board_id = i; + return board_info[i]; +} + +void xsc_free_board_info(void) +{ + int i; + + for (i = 0; i < MAX_BOARD_NUM; i++) + vfree(board_info[i]); +} + +int xsc_cmd_query_hca_cap(struct xsc_core_device *xdev, + struct xsc_caps *caps) +{ + struct xsc_cmd_query_hca_cap_mbox_out *out; + struct xsc_cmd_query_hca_cap_mbox_in in; + struct xsc_board_info *board_info; + int err; + u16 t16; + + out = kzalloc(sizeof(*out), GFP_KERNEL); + if (!out) + return -ENOMEM; + + memset(&in, 0, sizeof(in)); + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_QUERY_HCA_CAP); + in.cpu_num = cpu_to_be16(num_online_cpus()); + + err = xsc_cmd_exec(xdev, &in, sizeof(in), out, sizeof(*out)); + if (err) + goto out_out; + + if (out->hdr.status) { + err = xsc_cmd_status_to_err(&out->hdr); + goto out_out; + } + + xdev->glb_func_id = be32_to_cpu(out->hca_cap.glb_func_id); + caps->pcie0_pf_funcid_base = + be16_to_cpu(out->hca_cap.pcie0_pf_funcid_base); + caps->pcie0_pf_funcid_top = + be16_to_cpu(out->hca_cap.pcie0_pf_funcid_top); + caps->pcie1_pf_funcid_base = + be16_to_cpu(out->hca_cap.pcie1_pf_funcid_base); + caps->pcie1_pf_funcid_top = + be16_to_cpu(out->hca_cap.pcie1_pf_funcid_top); + caps->funcid_to_logic_port = + be16_to_cpu(out->hca_cap.funcid_to_logic_port); + + caps->pcie_host = out->hca_cap.pcie_host; + caps->nif_port_num = out->hca_cap.nif_port_num; + caps->hw_feature_flag = be32_to_cpu(out->hca_cap.hw_feature_flag); + + caps->raweth_qp_id_base = be16_to_cpu(out->hca_cap.raweth_qp_id_base); + caps->raweth_qp_id_end = be16_to_cpu(out->hca_cap.raweth_qp_id_end); + caps->raweth_rss_qp_id_base = + be16_to_cpu(out->hca_cap.raweth_rss_qp_id_base); + caps->raw_tpe_qp_num = be16_to_cpu(out->hca_cap.raw_tpe_qp_num); + caps->max_cqes = 1 << out->hca_cap.log_max_cq_sz; + caps->max_wqes = 1 << out->hca_cap.log_max_qp_sz; + caps->max_sq_desc_sz = be16_to_cpu(out->hca_cap.max_desc_sz_sq); + caps->max_rq_desc_sz = be16_to_cpu(out->hca_cap.max_desc_sz_rq); + caps->flags = be64_to_cpu(out->hca_cap.flags); + caps->stat_rate_support = be16_to_cpu(out->hca_cap.stat_rate_support); + caps->log_max_msg = out->hca_cap.log_max_msg & 0x1f; + caps->num_ports = out->hca_cap.num_ports & 0xf; + caps->log_max_cq = out->hca_cap.log_max_cq & 0x1f; + caps->log_max_eq = out->hca_cap.log_max_eq & 0xf; + caps->log_max_msix = out->hca_cap.log_max_msix & 0xf; + caps->mac_port = out->hca_cap.mac_port & 0xff; + xdev->mac_port = caps->mac_port; + if (caps->num_ports > XSC_MAX_FW_PORTS) { + pci_err(xdev->pdev, "device has %d ports while the driver supports max %d ports\n", + caps->num_ports, XSC_MAX_FW_PORTS); + err = -EINVAL; + goto out_out; + } + caps->send_ds_num = out->hca_cap.send_seg_num; + caps->send_wqe_shift = out->hca_cap.send_wqe_shift; + caps->recv_ds_num = 
out->hca_cap.recv_seg_num; + caps->recv_wqe_shift = out->hca_cap.recv_wqe_shift; + + caps->embedded_cpu = 0; + caps->ecpf_vport_exists = 0; + caps->log_max_current_uc_list = 0; + caps->log_max_current_mc_list = 0; + caps->log_max_vlan_list = 8; + caps->log_max_qp = out->hca_cap.log_max_qp & 0x1f; + caps->log_max_mkey = out->hca_cap.log_max_mkey & 0x3f; + caps->log_max_pd = out->hca_cap.log_max_pd & 0x1f; + caps->log_max_srq = out->hca_cap.log_max_srqs & 0x1f; + caps->local_ca_ack_delay = out->hca_cap.local_ca_ack_delay & 0x1f; + caps->log_max_mcg = out->hca_cap.log_max_mcg; + caps->log_max_mtt = out->hca_cap.log_max_mtt; + caps->log_max_tso = out->hca_cap.log_max_tso; + caps->hca_core_clock = be32_to_cpu(out->hca_cap.hca_core_clock); + caps->max_rwq_indirection_tables = + be32_to_cpu(out->hca_cap.max_rwq_indirection_tables); + caps->max_rwq_indirection_table_size = + be32_to_cpu(out->hca_cap.max_rwq_indirection_table_size); + caps->max_qp_mcg = be16_to_cpu(out->hca_cap.max_qp_mcg); + caps->max_ra_res_qp = 1 << (out->hca_cap.log_max_ra_res_qp & 0x3f); + caps->max_ra_req_qp = 1 << (out->hca_cap.log_max_ra_req_qp & 0x3f); + caps->max_srq_wqes = 1 << out->hca_cap.log_max_srq_sz; + caps->rx_pkt_len_max = be32_to_cpu(out->hca_cap.rx_pkt_len_max); + caps->max_vfs = be16_to_cpu(out->hca_cap.max_vfs); + caps->qp_rate_limit_min = be32_to_cpu(out->hca_cap.qp_rate_limit_min); + caps->qp_rate_limit_max = be32_to_cpu(out->hca_cap.qp_rate_limit_max); + + caps->msix_enable = 1; + caps->msix_base = be16_to_cpu(out->hca_cap.msix_base); + caps->msix_num = be16_to_cpu(out->hca_cap.msix_num); + + t16 = be16_to_cpu(out->hca_cap.bf_log_bf_reg_size); + if (t16 & 0x8000) { + caps->bf_reg_size = 1 << (t16 & 0x1f); + caps->bf_regs_per_page = XSC_BF_REGS_PER_PAGE; + } else { + caps->bf_reg_size = 0; + caps->bf_regs_per_page = 0; + } + caps->min_page_sz = ~(u32)((1 << PAGE_SHIFT) - 1); + + caps->dcbx = 1; + caps->qos = 1; + caps->ets = 1; + caps->dscp = 1; + caps->max_tc = out->hca_cap.max_tc; + caps->log_max_qp_depth = out->hca_cap.log_max_qp_depth & 0xff; + caps->mac_bit = out->hca_cap.mac_bit; + caps->lag_logic_port_ofst = out->hca_cap.lag_logic_port_ofst; + + xdev->chip_ver_h = be32_to_cpu(out->hca_cap.chip_ver_h); + xdev->chip_ver_m = be32_to_cpu(out->hca_cap.chip_ver_m); + xdev->chip_ver_l = be32_to_cpu(out->hca_cap.chip_ver_l); + xdev->hotfix_num = be32_to_cpu(out->hca_cap.hotfix_num); + xdev->feature_flag = be32_to_cpu(out->hca_cap.feature_flag); + + board_info = xsc_get_board_info(out->hca_cap.board_sn); + if (!board_info) { + board_info = xsc_alloc_board_info(); + if (!board_info) + return -ENOMEM; + + memcpy(board_info->board_sn, + out->hca_cap.board_sn, + sizeof(out->hca_cap.board_sn)); + } + xdev->board_info = board_info; + + xdev->regs.tx_db = be64_to_cpu(out->hca_cap.tx_db); + xdev->regs.rx_db = be64_to_cpu(out->hca_cap.rx_db); + xdev->regs.complete_db = be64_to_cpu(out->hca_cap.complete_db); + xdev->regs.complete_reg = be64_to_cpu(out->hca_cap.complete_reg); + xdev->regs.event_db = be64_to_cpu(out->hca_cap.event_db); + + xdev->fw_version_major = out->hca_cap.fw_ver.fw_version_major; + xdev->fw_version_minor = out->hca_cap.fw_ver.fw_version_minor; + xdev->fw_version_patch = + be16_to_cpu(out->hca_cap.fw_ver.fw_version_patch); + xdev->fw_version_tweak = + be32_to_cpu(out->hca_cap.fw_ver.fw_version_tweak); + xdev->fw_version_extra_flag = out->hca_cap.fw_ver.fw_version_extra_flag; +out_out: + kfree(out); + + return err; +} + +static int xsc_cmd_query_guid(struct xsc_core_device *xdev) +{ + struct 
xsc_cmd_query_guid_mbox_out out; + struct xsc_cmd_query_guid_mbox_in in; + int err; + + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_QUERY_GUID); + err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out)); + if (err) + return err; + + if (out.hdr.status) + return xsc_cmd_status_to_err(&out.hdr); + xdev->board_info->guid = out.guid; + xdev->board_info->guid_valid = 1; + return 0; +} + +int xsc_query_guid(struct xsc_core_device *xdev) +{ + if (xdev->board_info->guid_valid) + return 0; + + return xsc_cmd_query_guid(xdev); +} + +static int xsc_cmd_activate_hw_config(struct xsc_core_device *xdev) +{ + struct xsc_cmd_activate_hw_config_mbox_out out; + struct xsc_cmd_activate_hw_config_mbox_in in; + int err = 0; + + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_ACTIVATE_HW_CONFIG); + err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out)); + if (err) + return err; + if (out.hdr.status) + return xsc_cmd_status_to_err(&out.hdr); + xdev->board_info->hw_config_activated = 1; + return 0; +} + +int xsc_activate_hw_config(struct xsc_core_device *xdev) +{ + if (xdev->board_info->hw_config_activated) + return 0; + + return xsc_cmd_activate_hw_config(xdev); +} + +int xsc_reset_function_resource(struct xsc_core_device *xdev) +{ + struct xsc_function_reset_mbox_out out; + struct xsc_function_reset_mbox_in in; + int err; + + memset(&in, 0, sizeof(in)); + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_FUNCTION_RESET); + in.glb_func_id = cpu_to_be16(xdev->glb_func_id); + err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out)); + if (err || out.hdr.status) + return -EINVAL; + + return 0; +} diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/hw.h b/drivers/net/ethernet/yunsilicon/xsc/pci/hw.h new file mode 100644 index 000000000..d1030bfde --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/hw.h @@ -0,0 +1,18 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. 
+ */ + +#ifndef __HW_H +#define __HW_H + +#include "common/xsc_core.h" + +void xsc_free_board_info(void); +int xsc_cmd_query_hca_cap(struct xsc_core_device *xdev, + struct xsc_caps *caps); +int xsc_query_guid(struct xsc_core_device *xdev); +int xsc_activate_hw_config(struct xsc_core_device *xdev); +int xsc_reset_function_resource(struct xsc_core_device *xdev); + +#endif diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c index ead26a57a..4b505fd9a 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c @@ -5,6 +5,7 @@ #include "common/xsc_core.h" #include "common/xsc_driver.h" +#include "hw.h" static const struct pci_device_id xsc_pci_id_table[] = { { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MC_PF_DEV_ID) }, @@ -192,6 +193,31 @@ static int xsc_hw_setup(struct xsc_core_device *xdev) goto err_cmd_cleanup; } + err = xsc_cmd_query_hca_cap(xdev, &xdev->caps); + if (err) { + pci_err(xdev->pdev, "Failed to query hca, err=%d\n", err); + goto err_cmd_cleanup; + } + + err = xsc_query_guid(xdev); + if (err) { + pci_err(xdev->pdev, "failed to query guid, err=%d\n", err); + goto err_cmd_cleanup; + } + + err = xsc_activate_hw_config(xdev); + if (err) { + pci_err(xdev->pdev, "failed to activate hw config, err=%d\n", + err); + goto err_cmd_cleanup; + } + + err = xsc_reset_function_resource(xdev); + if (err) { + pci_err(xdev->pdev, "Failed to reset function resource\n"); + goto err_cmd_cleanup; + } + return 0; err_cmd_cleanup: xsc_cmd_cleanup(xdev); @@ -327,6 +353,7 @@ static int __init xsc_init(void) static void __exit xsc_fini(void) { pci_unregister_driver(&xsc_pci_driver); + xsc_free_board_info(); } module_init(xsc_init); From patchwork Mon Feb 24 17:24:24 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xin Tian X-Patchwork-Id: 13988572 X-Patchwork-Delegate: kuba@kernel.org Received: from lf-1-37.ptr.blmpb.com (lf-1-37.ptr.blmpb.com [103.149.242.37]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 122A0264FA3 for ; Mon, 24 Feb 2025 17:27:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=103.149.242.37 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740418078; cv=none; b=Gm3PkNy4mQuMcbn5Hm9m7NtJRsTv9Sybb6LIRjUHVyFPsQgiRgRojnjQ3vepaGfrlFDFnItzScyjyiVf7UAr8w38/vLyeR/gkgSAAg8TDOmV56f/hfE67oGuiYlR3tiY5b1Wuh3gaWy9TEkrluBSdm0gWzJDLLBwelYEgq59SGw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740418078; c=relaxed/simple; bh=SLIK3GbTWoDDJNTFGET0aSuhu0B7NbFgjVC9A6Rq6hs=; h=Content-Type:From:Cc:Date:Mime-Version:In-Reply-To:To:Message-Id: References:Subject; b=WzwEjMm4Tkfqxi58OjRvWf/XC0nBEorhj5C7K9/6TIR5n8qtOU8v6/hvJWcUxsk7E8WJli1omvBZ8+oG3ikWNtRd09MJbXGegcViO0LX/DK5ZTIPIU0r7YyP0dWqH9FI79qRRbZc9/VR9ULmukCDCuGR6MIwrGTtnPuEq6/ND+E= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=yunsilicon.com; spf=pass smtp.mailfrom=yunsilicon.com; dkim=pass (2048-bit key) header.d=yunsilicon.com header.i=@yunsilicon.com header.b=iZUUYv0A; arc=none smtp.client-ip=103.149.242.37 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=yunsilicon.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=yunsilicon.com Authentication-Results: 
smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=yunsilicon.com header.i=@yunsilicon.com header.b="iZUUYv0A"
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	s=feishu2403070942; d=yunsilicon.com; t=1740417866; h=from:subject:
	mime-version:from:date:message-id:subject:to:cc:reply-to:content-type:
	mime-version:in-reply-to:message-id;
	bh=DJkzxV1LfORUJ07gUx3c3K+PExeUFTCJzoDxXNaZfHI=;
	b=iZUUYv0AJeDZPbGFKhEC6vJSswyFWSnv05jtXXiwR9XVAyX5Yy5QXZoZP5uA7GaH6So2cG
	WNfqgfUk91jAH3AjIpix9yLXdX3a3JQCfxXBrBVeYxM/fduy+S8oKasWFY9SUeq+bRr4Uh
	sgFOQz72ou9sw10FZF6Kzo2nyTwYraMBE15Duheegjw7OlLzUUUXlKXdSB96ZacZtHLZWx
	on82AODJh/Dauc7Vyj89uWrYI1SaBgN0e4wl7OPai90IOjT4g/+EPTJtroMhOch1jVYK6l
	vhDQ9hNj1zjduB5Q6dpplEFnx3rDFZ3o26A7CLZBbLCgmlLD8ONqSod3Hzpvtw==
From: "Xin Tian"
Date: Tue, 25 Feb 2025 01:24:24 +0800
Precedence: bulk
X-Mailing-List: netdev@vger.kernel.org
Mime-Version: 1.0
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20250224172416.2455751-1-tianx@yunsilicon.com>
Message-Id: <20250224172423.2455751-5-tianx@yunsilicon.com>
References: <20250224172416.2455751-1-tianx@yunsilicon.com>
Subject: [PATCH net-next v5 04/14] xsc: Add qp and cq management
X-Original-From: Xin Tian
Received: from ubuntu-liun.yunsilicon.com ([58.34.192.114]) by smtp.feishu.cn
	with ESMTPS; Tue, 25 Feb 2025 01:24:24 +0800
X-Patchwork-Delegate: kuba@kernel.org

Our hardware uses queue pairs (QP) and completion queues (CQ) for packet
transmission and reception. In the PCI driver, all QP and CQ resources
used by the xsc drivers (Ethernet and RDMA) are recorded with a radix
tree. This patch defines the QP and CQ structures and initializes the
QP and CQ resource management during PCI initialization.
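For orientation, a consumer such as the Ethernet driver is expected to use the new table APIs roughly as sketched here. This is illustrative only and not part of the patch; the xsc_example_* names and the qpn parameter are hypothetical, while the two xsc_core_* calls and the event callback are the ones this patch adds:

/* Hypothetical consumer of the QP table APIs added below. */
static void xsc_example_qp_event(struct xsc_core_qp *qp, int type)
{
	/* invoked from xsc_qp_event(); runs in event context, must not sleep */
}

static int xsc_example_register_qp(struct xsc_core_device *xdev,
				   struct xsc_core_qp *qp, u32 qpn)
{
	qp->qpn = qpn;
	qp->event = xsc_example_qp_event;

	/* inserts the QP into dev_res->qp_table and takes the initial
	 * reference that teardown later drops
	 */
	return xsc_core_create_resource_common(xdev, qp);
}

static void xsc_example_unregister_qp(struct xsc_core_device *xdev,
				      struct xsc_core_qp *qp)
{
	/* removes the QP from the table and blocks until concurrent
	 * event handlers have dropped their references
	 */
	xsc_core_destroy_resource_common(xdev, qp);
}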
Co-developed-by: Honggang Wei Signed-off-by: Honggang Wei Co-developed-by: Lei Yan Signed-off-by: Lei Yan Signed-off-by: Xin Tian --- .../ethernet/yunsilicon/xsc/common/xsc_core.h | 153 ++++++++++++++++++ .../net/ethernet/yunsilicon/xsc/pci/Makefile | 3 +- drivers/net/ethernet/yunsilicon/xsc/pci/cq.c | 39 +++++ drivers/net/ethernet/yunsilicon/xsc/pci/cq.h | 14 ++ drivers/net/ethernet/yunsilicon/xsc/pci/hw.c | 1 + .../net/ethernet/yunsilicon/xsc/pci/main.c | 5 + drivers/net/ethernet/yunsilicon/xsc/pci/qp.c | 80 +++++++++ drivers/net/ethernet/yunsilicon/xsc/pci/qp.h | 15 ++ 8 files changed, 309 insertions(+), 1 deletion(-) create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/cq.c create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/cq.h create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/qp.c create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/qp.h diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h index 4c8b26660..910f0f9fb 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h @@ -30,6 +30,10 @@ #define XSC_REG_ADDR(dev, offset) \ (((dev)->bar) + ((offset) - 0xA0000000)) +enum { + XSC_MAX_EQ_NAME = 20 +}; + enum { XSC_MAX_PORTS = 2, }; @@ -44,6 +48,139 @@ enum { XSC_MAX_UUARS = XSC_MAX_UAR_PAGES * XSC_BF_REGS_PER_PAGE, }; +// alloc +struct xsc_buf_list { + void *buf; + dma_addr_t map; +}; + +struct xsc_buf { + struct xsc_buf_list direct; + struct xsc_buf_list *page_list; + unsigned long nbufs; + unsigned long npages; + unsigned long page_shift; + unsigned long size; +}; + +struct xsc_frag_buf { + struct xsc_buf_list *frags; + unsigned long npages; + unsigned long size; + u8 page_shift; +}; + +struct xsc_frag_buf_ctrl { + struct xsc_buf_list *frags; + u32 sz_m1; + u16 frag_sz_m1; + u16 strides_offset; + u8 log_sz; + u8 log_stride; + u8 log_frag_strides; +}; + +// qp +struct xsc_send_wqe_ctrl_seg { + __le32 data0; +#define XSC_WQE_CTRL_SEG_MSG_OPCODE_MASK GENMASK(7, 0) +#define XSC_WQE_CTRL_SEG_WITH_IMM BIT(8) +#define XSC_WQE_CTRL_SEG_CSUM_EN_MASK GENMASK(10, 9) +#define XSC_WQE_CTRL_SEG_DS_DATA_NUM_MASK GENMASK(15, 11) +#define XSC_WQE_CTRL_SEG_WQE_ID_MASK GENMASK(31, 16) + + __le32 msg_len; + __le32 data2; +#define XSC_WQE_CTRL_SEG_HAS_PPH BIT(0) +#define XSC_WQE_CTRL_SEG_SO_TYPE BIT(1) +#define XSC_WQE_CTRL_SEG_SO_DATA_SIZE_MASK GENMASK(15, 2) +#define XSC_WQE_CTRL_SEG_SO_HDR_LEN_MASK GENMASK(31, 24) + + __le32 data3; +#define XSC_WQE_CTRL_SEG_SE BIT(0) +#define XSC_WQE_CTRL_SEG_CE BIT(1) +}; + +struct xsc_wqe_data_seg { + __le32 data0; +#define XSC_WQE_DATA_SEG_SEG_LEN_MASK GENMASK(31, 1) + + __le32 mkey; + __le64 va; +}; + +struct xsc_core_qp { + void (*event)(struct xsc_core_qp *qp, int type); + u32 qpn; + atomic_t refcount; + struct completion free; + int pid; + u16 qp_type; + u16 eth_queue_type; + u16 qp_type_internal; + u16 grp_id; + u8 mac_id; +}; + +struct xsc_qp_table { + spinlock_t lock; /* protect radix tree */ + struct radix_tree_root tree; +}; + +// cq +enum xsc_event { + XSC_EVENT_TYPE_COMP = 0x0, + XSC_EVENT_TYPE_COMM_EST = 0x02,//mad + XSC_EVENT_TYPE_CQ_ERROR = 0x04, + XSC_EVENT_TYPE_WQ_CATAS_ERROR = 0x05, + XSC_EVENT_TYPE_INTERNAL_ERROR = 0x08, + XSC_EVENT_TYPE_WQ_INVAL_REQ_ERROR = 0x10,//IBV_EVENT_QP_REQ_ERR + XSC_EVENT_TYPE_WQ_ACCESS_ERROR = 0x11,//IBV_EVENT_QP_ACCESS_ERR +}; + +struct xsc_core_cq { + u32 cqn; + u32 cqe_sz; + u64 arm_db; + u64 ci_db; + struct xsc_core_device *xdev; + atomic_t refcount; 
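+	/* signalled when @refcount drops to zero; teardown waits on it */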
+ struct completion free; + unsigned int vector; + unsigned int irqn; + u16 dim_us; + u16 dim_pkts; + void (*comp)(struct xsc_core_cq *cq); + void (*event)(struct xsc_core_cq *cq, + enum xsc_event); + u32 cons_index; + unsigned int arm_sn; + int pid; + u32 reg_next_cid; + u32 reg_done_pid; + struct xsc_eq *eq; +}; + +struct xsc_cq_table { + spinlock_t lock; /* protect radix tree */ + struct radix_tree_root tree; +}; + +struct xsc_eq { + struct xsc_core_device *dev; + struct xsc_cq_table cq_table; + u32 doorbell;//offset from bar0/2 space start + u32 cons_index; + struct xsc_buf buf; + unsigned int irqn; + u32 eqn; + u32 nent; + cpumask_var_t mask; + char name[XSC_MAX_EQ_NAME]; + struct list_head list; + u16 index; +}; + // hw struct xsc_reg_addr { u64 tx_db; @@ -170,6 +307,8 @@ struct xsc_caps { // xsc_core struct xsc_dev_resource { + struct xsc_qp_table qp_table; + struct xsc_cq_table cq_table; /* protect buffer allocation according to numa node */ struct mutex alloc_mutex; }; @@ -220,4 +359,18 @@ struct xsc_core_device { u8 fw_version_extra_flag; }; +int xsc_core_create_resource_common(struct xsc_core_device *xdev, + struct xsc_core_qp *qp); +void xsc_core_destroy_resource_common(struct xsc_core_device *xdev, + struct xsc_core_qp *qp); + +static inline void *xsc_buf_offset(struct xsc_buf *buf, unsigned long offset) +{ + if (likely(buf->nbufs == 1)) + return buf->direct.buf + offset; + else + return buf->page_list[offset >> PAGE_SHIFT].buf + + (offset & (PAGE_SIZE - 1)); +} + #endif diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile index fea625d54..9a4a6e02d 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile @@ -6,4 +6,5 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o -xsc_pci-y := main.o cmdq.o hw.o +xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o + diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/cq.c b/drivers/net/ethernet/yunsilicon/xsc/pci/cq.c new file mode 100644 index 000000000..5cff9025c --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/cq.c @@ -0,0 +1,39 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + */ + +#include "common/xsc_core.h" +#include "cq.h" + +void xsc_cq_event(struct xsc_core_device *xdev, u32 cqn, int event_type) +{ + struct xsc_cq_table *table = &xdev->dev_res->cq_table; + struct xsc_core_cq *cq; + + spin_lock(&table->lock); + + cq = radix_tree_lookup(&table->tree, cqn); + if (cq) + atomic_inc(&cq->refcount); + + spin_unlock(&table->lock); + + if (!cq) { + pci_err(xdev->pdev, "Async event for bogus CQ 0x%x\n", cqn); + return; + } + + cq->event(cq, event_type); + + if (atomic_dec_and_test(&cq->refcount)) + complete(&cq->free); +} + +void xsc_init_cq_table(struct xsc_core_device *xdev) +{ + struct xsc_cq_table *table = &xdev->dev_res->cq_table; + + spin_lock_init(&table->lock); + INIT_RADIX_TREE(&table->tree, GFP_ATOMIC); +} diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/cq.h b/drivers/net/ethernet/yunsilicon/xsc/pci/cq.h new file mode 100644 index 000000000..902a7f1f2 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/cq.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. 
+ */ + +#ifndef __CQ_H +#define __CQ_H + +#include "common/xsc_core.h" + +void xsc_cq_event(struct xsc_core_device *xdev, u32 cqn, int event_type); +void xsc_init_cq_table(struct xsc_core_device *xdev); + +#endif /* __CQ_H */ diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/hw.c b/drivers/net/ethernet/yunsilicon/xsc/pci/hw.c index 898b80216..6fb2440ab 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/hw.c +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/hw.c @@ -5,6 +5,7 @@ #include #include + #include "common/xsc_driver.h" #include "hw.h" diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c index 4b505fd9a..68ae2fe93 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c @@ -6,6 +6,8 @@ #include "common/xsc_core.h" #include "common/xsc_driver.h" #include "hw.h" +#include "qp.h" +#include "cq.h" static const struct pci_device_id xsc_pci_id_table[] = { { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MC_PF_DEV_ID) }, @@ -218,6 +220,9 @@ static int xsc_hw_setup(struct xsc_core_device *xdev) goto err_cmd_cleanup; } + xsc_init_cq_table(xdev); + xsc_init_qp_table(xdev); + return 0; err_cmd_cleanup: xsc_cmd_cleanup(xdev); diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/qp.c b/drivers/net/ethernet/yunsilicon/xsc/pci/qp.c new file mode 100644 index 000000000..cc79eaf92 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/qp.c @@ -0,0 +1,80 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + */ + +#include +#include +#include +#include + +#include "common/xsc_core.h" +#include "qp.h" + +int xsc_core_create_resource_common(struct xsc_core_device *xdev, + struct xsc_core_qp *qp) +{ + struct xsc_qp_table *table = &xdev->dev_res->qp_table; + int err; + + spin_lock_irq(&table->lock); + err = radix_tree_insert(&table->tree, qp->qpn, qp); + spin_unlock_irq(&table->lock); + if (err) + return err; + + atomic_set(&qp->refcount, 1); + init_completion(&qp->free); + qp->pid = current->pid; + + return 0; +} +EXPORT_SYMBOL(xsc_core_create_resource_common); + +void xsc_core_destroy_resource_common(struct xsc_core_device *xdev, + struct xsc_core_qp *qp) +{ + struct xsc_qp_table *table = &xdev->dev_res->qp_table; + unsigned long flags; + + spin_lock_irqsave(&table->lock, flags); + radix_tree_delete(&table->tree, qp->qpn); + spin_unlock_irqrestore(&table->lock, flags); + + if (atomic_dec_and_test(&qp->refcount)) + complete(&qp->free); + wait_for_completion(&qp->free); +} +EXPORT_SYMBOL(xsc_core_destroy_resource_common); + +void xsc_qp_event(struct xsc_core_device *xdev, u32 qpn, int event_type) +{ + struct xsc_qp_table *table = &xdev->dev_res->qp_table; + struct xsc_core_qp *qp; + + spin_lock(&table->lock); + + qp = radix_tree_lookup(&table->tree, qpn); + if (qp) + atomic_inc(&qp->refcount); + + spin_unlock(&table->lock); + + if (!qp) { + pci_err(xdev->pdev, "Async event for bogus QP 0x%x\n", qpn); + return; + } + + qp->event(qp, event_type); + + if (atomic_dec_and_test(&qp->refcount)) + complete(&qp->free); +} + +void xsc_init_qp_table(struct xsc_core_device *xdev) +{ + struct xsc_qp_table *table = &xdev->dev_res->qp_table; + + spin_lock_init(&table->lock); + INIT_RADIX_TREE(&table->tree, GFP_ATOMIC); +} diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/qp.h b/drivers/net/ethernet/yunsilicon/xsc/pci/qp.h new file mode 100644 index 000000000..52af8db7c --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/qp.h 
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#ifndef __QP_H
+#define __QP_H
+
+#include "common/xsc_core.h"
+
+void xsc_init_qp_table(struct xsc_core_device *xdev);
+void xsc_cleanup_qp_table(struct xsc_core_device *xdev);
+void xsc_qp_event(struct xsc_core_device *xdev, u32 qpn, int event_type);
+
+#endif /* __QP_H */

From patchwork Mon Feb 24 17:24:26 2025
From: "Xin Tian"
Subject: [PATCH net-next v5 05/14] xsc: Add eq and alloc
Date: Tue, 25 Feb 2025 01:24:26 +0800
Message-Id: <20250224172425.2455751-6-tianx@yunsilicon.com>
In-Reply-To: <20250224172416.2455751-1-tianx@yunsilicon.com>
References: <20250224172416.2455751-1-tianx@yunsilicon.com>
X-Patchwork-Delegate: kuba@kernel.org

Add EQ management and buffer allocation APIs

This patch, along with the next one, implements the event handling
mechanism. The hardware uses the event queue (EQ) for event reporting,
notifying the CPU through interrupts; the CPU then reads and processes
events from the EQ.

1. This patch implements the EQ API, including initialization of the
EQ table and the create/destroy and start/stop operations of an EQ.
2. Creating an event queue requires a large amount of DMA memory for
the ring buffer, which means multiple DMA page allocations that must
be managed as one unit. For this purpose we define struct xsc_buf and
use xsc_buf_alloc/xsc_buf_free to allocate and free that memory. The
QP and CQ code will also rely on these helpers in the future.
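A minimal usage sketch of these helpers (hypothetical caller, error
handling trimmed; not part of this patch):

	struct xsc_buf ring;
	struct xsc_eqe *eqe;
	unsigned int idx = 0;	/* entry index, assumed in scope */
	int err;

	/* 32 KB > max_direct (PAGE_SIZE here), so the buffer is built
	 * from individual DMA pages stitched together with vmap()
	 */
	err = xsc_buf_alloc(xdev, 32 * 1024, PAGE_SIZE, &ring);
	if (err)
		return err;

	/* entry lookup works the same for both buffer layouts */
	eqe = xsc_buf_offset(&ring, idx * sizeof(*eqe));

	xsc_buf_free(xdev, &ring);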
Co-developed-by: Honggang Wei
Signed-off-by: Honggang Wei
Co-developed-by: Lei Yan
Signed-off-by: Lei Yan
Signed-off-by: Xin Tian
---
 .../ethernet/yunsilicon/xsc/common/xsc_core.h |  41 ++-
 .../net/ethernet/yunsilicon/xsc/pci/Makefile  |   3 +-
 .../net/ethernet/yunsilicon/xsc/pci/alloc.c   | 130 +++++++
 .../net/ethernet/yunsilicon/xsc/pci/alloc.h   |  17 +
 drivers/net/ethernet/yunsilicon/xsc/pci/eq.c  | 343 ++++++++++++++++++
 drivers/net/ethernet/yunsilicon/xsc/pci/eq.h  |  46 +++
 .../net/ethernet/yunsilicon/xsc/pci/main.c    |   2 +
 7 files changed, 578 insertions(+), 4 deletions(-)
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/alloc.c
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/alloc.h
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/eq.c
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/eq.h

-- 
2.18.4

diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
index 910f0f9fb..2c897d54a 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
@@ -27,13 +27,20 @@
 #define XSC_MV_HOST_VF_DEV_ID	0x1152
 #define XSC_MV_SOC_PF_DEV_ID	0x1153
 
-#define XSC_REG_ADDR(dev, offset)	\
-	(((dev)->bar) + ((offset) - 0xA0000000))
+#define PAGE_SHIFT_4K	12
+#define PAGE_SIZE_4K	(_AC(1, UL) << PAGE_SHIFT_4K)
+#define PAGE_MASK_4K	(~(PAGE_SIZE_4K - 1))
+
+#define XSC_REG_ADDR(dev, offset)	(((dev)->bar) + ((offset) - 0xA0000000))
 
 enum {
 	XSC_MAX_EQ_NAME = 20
 };
 
+enum {
+	XSC_MAX_IRQ_NAME = 32
+};
+
 enum {
 	XSC_MAX_PORTS = 2,
 };
@@ -166,6 +173,7 @@ struct xsc_cq_table {
 	struct radix_tree_root tree;
 };
 
+// eq
 struct xsc_eq {
 	struct xsc_core_device *dev;
 	struct xsc_cq_table cq_table;
@@ -181,6 +189,25 @@ struct xsc_eq {
 	u16 index;
 };
 
+struct xsc_eq_table {
+	void __iomem *update_ci;
+	void __iomem *update_arm_ci;
+	struct list_head comp_eqs_list;
+	struct xsc_eq pages_eq;
+	struct xsc_eq async_eq;
+	struct xsc_eq cmd_eq;
+	int num_comp_vectors;
+	int eq_vec_comp_base;
+	/* protect EQs list */
+	spinlock_t lock;
+};
+
+// irq
+struct xsc_irq_info {
+	cpumask_var_t mask;
+	char name[XSC_MAX_IRQ_NAME];
+};
+
 // hw
 struct xsc_reg_addr {
 	u64 tx_db;
@@ -309,6 +336,8 @@ struct xsc_caps {
 struct xsc_dev_resource {
 	struct xsc_qp_table qp_table;
 	struct xsc_cq_table cq_table;
+	struct xsc_eq_table eq_table;
+	struct xsc_irq_info *irq_info;
 	/* protect buffer allocation according to numa node */
 	struct mutex alloc_mutex;
 };
@@ -334,6 +363,8 @@ struct xsc_core_device {
 	u8 mac_port;
 	u16 glb_func_id;
 
+	u16 msix_vec_base;
+
 	struct xsc_cmd cmd;
 	u16 cmdq_ver;
 
@@ -363,6 +394,7 @@ int
xsc_core_create_resource_common(struct xsc_core_device *xdev, struct xsc_core_qp *qp); void xsc_core_destroy_resource_common(struct xsc_core_device *xdev, struct xsc_core_qp *qp); +struct xsc_eq *xsc_core_eq_get(struct xsc_core_device *xdev, int i); static inline void *xsc_buf_offset(struct xsc_buf *buf, unsigned long offset) { @@ -373,4 +405,9 @@ static inline void *xsc_buf_offset(struct xsc_buf *buf, unsigned long offset) (offset & (PAGE_SIZE - 1)); } +static inline bool xsc_fw_is_available(struct xsc_core_device *xdev) +{ + return xdev->cmd.cmd_status == XSC_CMD_STATUS_NORMAL; +} + #endif diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile index 9a4a6e02d..667319958 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile @@ -6,5 +6,4 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o -xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o - +xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o alloc.o eq.o diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.c b/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.c new file mode 100644 index 000000000..4c561d9fa --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.c @@ -0,0 +1,130 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + */ + +#include +#include +#include +#include +#include +#include +#include + +#include "alloc.h" + +/* Handling for queue buffers -- we allocate a bunch of memory and + * register it in a memory region at HCA virtual address 0. If the + * requested size is > max_direct, we split the allocation into + * multiple pages, so we don't require too much contiguous memory. 
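+ * For example, with max_direct = PAGE_SIZE, a 32 KB request is served
+ * by eight 4 KB DMA pages (assuming PAGE_SIZE is 4 KB) that vmap()
+ * stitches into one virtually contiguous mapping.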
+ */ +int xsc_buf_alloc(struct xsc_core_device *xdev, + unsigned long size, + unsigned long max_direct, + struct xsc_buf *buf) +{ + dma_addr_t t; + + buf->size = size; + if (size <= max_direct) { + buf->nbufs = 1; + buf->npages = 1; + buf->page_shift = get_order(size) + PAGE_SHIFT; + buf->direct.buf = dma_alloc_coherent(&xdev->pdev->dev, + size, + &t, + GFP_KERNEL | __GFP_ZERO); + if (!buf->direct.buf) + return -ENOMEM; + + buf->direct.map = t; + + while (t & GENMASK(buf->page_shift - 1, 0)) { + --buf->page_shift; + buf->npages *= 2; + } + } else { + struct page **pages; + int i; + + buf->direct.buf = NULL; + buf->nbufs = DIV_ROUND_UP(size, PAGE_SIZE); + buf->npages = buf->nbufs; + buf->page_shift = PAGE_SHIFT; + buf->page_list = kcalloc(buf->nbufs, sizeof(*buf->page_list), + GFP_KERNEL); + if (!buf->page_list) + return -ENOMEM; + + for (i = 0; i < buf->nbufs; i++) { + buf->page_list[i].buf = + dma_alloc_coherent(&xdev->pdev->dev, PAGE_SIZE, + &t, GFP_KERNEL | __GFP_ZERO); + if (!buf->page_list[i].buf) + goto err_free; + + buf->page_list[i].map = t; + } + + pages = kmalloc_array(buf->nbufs, sizeof(*pages), GFP_KERNEL); + if (!pages) + goto err_free; + for (i = 0; i < buf->nbufs; i++) { + void *addr = buf->page_list[i].buf; + + if (is_vmalloc_addr(addr)) + pages[i] = vmalloc_to_page(addr); + else + pages[i] = virt_to_page(addr); + } + buf->direct.buf = vmap(pages, buf->nbufs, VM_MAP, PAGE_KERNEL); + kfree(pages); + if (!buf->direct.buf) + goto err_free; + } + + return 0; + +err_free: + xsc_buf_free(xdev, buf); + + return -ENOMEM; +} + +void xsc_buf_free(struct xsc_core_device *xdev, struct xsc_buf *buf) +{ + int i; + + if (buf->nbufs == 1) { + dma_free_coherent(&xdev->pdev->dev, buf->size, buf->direct.buf, + buf->direct.map); + } else { + if (BITS_PER_LONG == 64 && buf->direct.buf) + vunmap(buf->direct.buf); + + for (i = 0; i < buf->nbufs; i++) + if (buf->page_list[i].buf) + dma_free_coherent(&xdev->pdev->dev, PAGE_SIZE, + buf->page_list[i].buf, + buf->page_list[i].map); + kfree(buf->page_list); + } +} + +void xsc_fill_page_array(struct xsc_buf *buf, __be64 *pas, unsigned int npages) +{ + unsigned int shift = PAGE_SHIFT - PAGE_SHIFT_4K; + unsigned int mask = (1 << shift) - 1; + u64 addr; + int i; + + for (i = 0; i < npages; i++) { + if (buf->nbufs == 1) + addr = buf->direct.map + (i << PAGE_SHIFT_4K); + else + addr = buf->page_list[i >> shift].map + + ((i & mask) << PAGE_SHIFT_4K); + + pas[i] = cpu_to_be64(addr); + } +} diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.h b/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.h new file mode 100644 index 000000000..a1c4b92a5 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.h @@ -0,0 +1,17 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + */ + +#ifndef __ALLOC_H +#define __ALLOC_H + +#include "common/xsc_core.h" + +int xsc_buf_alloc(struct xsc_core_device *xdev, + unsigned long size, + unsigned long max_direct, + struct xsc_buf *buf); +void xsc_buf_free(struct xsc_core_device *xdev, struct xsc_buf *buf); +void xsc_fill_page_array(struct xsc_buf *buf, __be64 *pas, unsigned int npages); +#endif diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/eq.c b/drivers/net/ethernet/yunsilicon/xsc/pci/eq.c new file mode 100644 index 000000000..3f917c059 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/eq.c @@ -0,0 +1,343 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. 
+ * All rights reserved. + */ +#include +#include +#include + +#include "common/xsc_driver.h" +#include "common/xsc_core.h" +#include "qp.h" +#include "alloc.h" +#include "eq.h" + +enum { + XSC_EQE_SIZE = sizeof(struct xsc_eqe), +}; + +enum { + XSC_NUM_SPARE_EQE = 0x80, + XSC_NUM_ASYNC_EQE = 0x100, +}; + +static int xsc_cmd_destroy_eq(struct xsc_core_device *xdev, u32 eqn) +{ + struct xsc_destroy_eq_mbox_out out; + struct xsc_destroy_eq_mbox_in in; + int err; + + memset(&in, 0, sizeof(in)); + memset(&out, 0, sizeof(out)); + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_DESTROY_EQ); + in.eqn = cpu_to_be32(eqn); + err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out)); + if (!err) + goto ex; + + if (out.hdr.status) + err = xsc_cmd_status_to_err(&out.hdr); + +ex: + return err; +} + +static struct xsc_eqe *get_eqe(struct xsc_eq *eq, u32 entry) +{ + return xsc_buf_offset(&eq->buf, entry * XSC_EQE_SIZE); +} + +static struct xsc_eqe *next_eqe_sw(struct xsc_eq *eq) +{ + struct xsc_eqe *eqe = get_eqe(eq, eq->cons_index & (eq->nent - 1)); + + return ((eqe->owner_data & XSC_EQE_OWNER) ^ + !!(eq->cons_index & eq->nent)) ? NULL : eqe; +} + +static void eq_update_ci(struct xsc_eq *eq, int arm) +{ + u32 db_val = 0; + + db_val = FIELD_PREP(XSC_EQ_DB_NEXT_CID_MASK, eq->cons_index) | + FIELD_PREP(XSC_EQ_DB_EQ_ID_MASK, eq->eqn); + if (arm) + db_val |= XSC_EQ_DB_ARM; + /* make sure memory is written before device access */ + wmb(); + writel(db_val, XSC_REG_ADDR(eq->dev, eq->doorbell)); +} + +static void xsc_cq_completion(struct xsc_core_device *xdev, u32 cqn) +{ + struct xsc_cq_table *table = &xdev->dev_res->cq_table; + struct xsc_core_cq *cq; + + rcu_read_lock(); + cq = radix_tree_lookup(&table->tree, cqn); + if (likely(cq)) + atomic_inc(&cq->refcount); + rcu_read_unlock(); + + if (!cq) { + pci_err(xdev->pdev, "Completion event for bogus CQ, cqn=%d\n", + cqn); + return; + } + + ++cq->arm_sn; + + if (!cq->comp) + pci_err(xdev->pdev, "cq->comp is NULL\n"); + else + cq->comp(cq); + + if (atomic_dec_and_test(&cq->refcount)) + complete(&cq->free); +} + +static void xsc_eq_cq_event(struct xsc_core_device *xdev, + u32 cqn, int event_type) +{ + struct xsc_cq_table *table = &xdev->dev_res->cq_table; + struct xsc_core_cq *cq; + + spin_lock(&table->lock); + cq = radix_tree_lookup(&table->tree, cqn); + if (likely(cq)) + atomic_inc(&cq->refcount); + spin_unlock(&table->lock); + + if (unlikely(!cq)) { + pci_err(xdev->pdev, "Async event for bogus CQ, cqn=%d\n", + cqn); + return; + } + + cq->event(cq, event_type); + + if (atomic_dec_and_test(&cq->refcount)) + complete(&cq->free); +} + +static int xsc_eq_int(struct xsc_core_device *xdev, struct xsc_eq *eq) +{ + u32 cqn, qpn, queue_id; + struct xsc_eqe *eqe; + int eqes_found = 0; + int set_ci = 0; + + while ((eqe = next_eqe_sw(eq))) { + /* Make sure we read EQ entry contents after we've + * checked the ownership bit. 
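+		 * The rmb() below pairs with the device's DMA write of
+		 * the EQE: the owner bit is checked first, and only
+		 * then is the rest of the entry read.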
+ */ + rmb(); + switch (eqe->type) { + case XSC_EVENT_TYPE_COMP: + case XSC_EVENT_TYPE_INTERNAL_ERROR: + /* eqe is changing */ + queue_id = FIELD_GET(XSC_EQE_QUEUE_ID_MASK, + le16_to_cpu(eqe->queue_id_data)); + cqn = queue_id; + xsc_cq_completion(xdev, cqn); + break; + + case XSC_EVENT_TYPE_CQ_ERROR: + queue_id = FIELD_GET(XSC_EQE_QUEUE_ID_MASK, + le16_to_cpu(eqe->queue_id_data)); + cqn = queue_id; + xsc_eq_cq_event(xdev, cqn, eqe->type); + break; + case XSC_EVENT_TYPE_WQ_CATAS_ERROR: + case XSC_EVENT_TYPE_WQ_INVAL_REQ_ERROR: + case XSC_EVENT_TYPE_WQ_ACCESS_ERROR: + queue_id = FIELD_GET(XSC_EQE_QUEUE_ID_MASK, + le16_to_cpu(eqe->queue_id_data)); + qpn = queue_id; + xsc_qp_event(xdev, qpn, eqe->type); + break; + default: + break; + } + + ++eq->cons_index; + eqes_found = 1; + ++set_ci; + + /* The HCA will think the queue has overflowed if we + * don't tell it we've been processing events. We + * create our EQs with XSC_NUM_SPARE_EQE extra + * entries, so we must update our consumer index at + * least that often. + */ + if (unlikely(set_ci >= XSC_NUM_SPARE_EQE)) { + eq_update_ci(eq, 0); + set_ci = 0; + } + } + + eq_update_ci(eq, 1); + + return eqes_found; +} + +static irqreturn_t xsc_msix_handler(int irq, void *eq_ptr) +{ + struct xsc_core_device *xdev; + struct xsc_eq *eq = eq_ptr; + + xdev = eq->dev; + xsc_eq_int(xdev, eq); + + /* MSI-X vectors always belong to us */ + return IRQ_HANDLED; +} + +static void init_eq_buf(struct xsc_eq *eq) +{ + struct xsc_eqe *eqe; + int i; + + for (i = 0; i < eq->nent; i++) { + eqe = get_eqe(eq, i); + eqe->owner_data |= XSC_EQE_OWNER; + } +} + +int xsc_create_map_eq(struct xsc_core_device *xdev, + struct xsc_eq *eq, u16 vecidx, + u32 nent, const char *name) +{ + u16 msix_vec_offset = xdev->msix_vec_base + vecidx; + struct xsc_dev_resource *dev_res = xdev->dev_res; + struct xsc_create_eq_mbox_out out; + struct xsc_create_eq_mbox_in *in; + unsigned int hw_npages; + u32 inlen; + int err; + + eq->nent = roundup_pow_of_two(roundup(nent, XSC_NUM_SPARE_EQE)); + err = xsc_buf_alloc(xdev, eq->nent * XSC_EQE_SIZE, PAGE_SIZE, &eq->buf); + if (err) + return err; + + init_eq_buf(eq); + + hw_npages = DIV_ROUND_UP(eq->nent * XSC_EQE_SIZE, PAGE_SIZE_4K); + inlen = sizeof(*in) + sizeof(in->pas[0]) * hw_npages; + in = kvzalloc(inlen, GFP_KERNEL); + if (!in) { + err = -ENOMEM; + goto err_buf; + } + memset(&out, 0, sizeof(out)); + + xsc_fill_page_array(&eq->buf, in->pas, hw_npages); + + in->hdr.opcode = cpu_to_be16(XSC_CMD_OP_CREATE_EQ); + in->ctx.log_eq_sz = ilog2(eq->nent); + in->ctx.vecidx = cpu_to_be16(msix_vec_offset); + in->ctx.pa_num = cpu_to_be16(hw_npages); + in->ctx.glb_func_id = cpu_to_be16(xdev->glb_func_id); + in->ctx.is_async_eq = (vecidx == XSC_EQ_VEC_ASYNC ? 
1 : 0); + + err = xsc_cmd_exec(xdev, in, inlen, &out, sizeof(out)); + if (err) + goto err_in; + + if (out.hdr.status) { + err = -ENOSPC; + goto err_in; + } + + snprintf(dev_res->irq_info[vecidx].name, XSC_MAX_IRQ_NAME, "%s@pci:%s", + name, pci_name(xdev->pdev)); + + eq->eqn = be32_to_cpu(out.eqn); + eq->irqn = pci_irq_vector(xdev->pdev, vecidx); + eq->dev = xdev; + eq->doorbell = xdev->regs.event_db; + eq->index = vecidx; + + err = request_irq(eq->irqn, xsc_msix_handler, 0, + dev_res->irq_info[vecidx].name, eq); + if (err) + goto err_eq; + + /* EQs are created in ARMED state + */ + eq_update_ci(eq, 1); + kvfree(in); + return 0; + +err_eq: + xsc_cmd_destroy_eq(xdev, eq->eqn); + +err_in: + kvfree(in); + +err_buf: + xsc_buf_free(xdev, &eq->buf); + return err; +} + +int xsc_destroy_unmap_eq(struct xsc_core_device *xdev, struct xsc_eq *eq) +{ + int err; + + if (!xsc_fw_is_available(xdev)) + return 0; + + free_irq(eq->irqn, eq); + err = xsc_cmd_destroy_eq(xdev, eq->eqn); + if (err) + pci_err(xdev->pdev, "failed to destroy a previously created eq: eqn %d\n", + eq->eqn); + xsc_buf_free(xdev, &eq->buf); + + return err; +} + +void xsc_eq_init(struct xsc_core_device *xdev) +{ + spin_lock_init(&xdev->dev_res->eq_table.lock); +} + +int xsc_start_eqs(struct xsc_core_device *xdev) +{ + struct xsc_eq_table *table = &xdev->dev_res->eq_table; + int err; + + err = xsc_create_map_eq(xdev, &table->async_eq, XSC_EQ_VEC_ASYNC, + XSC_NUM_ASYNC_EQE, "xsc_async_eq"); + if (err) + pci_err(xdev->pdev, "failed to create async EQ %d\n", err); + + return err; +} + +void xsc_stop_eqs(struct xsc_core_device *xdev) +{ + struct xsc_eq_table *table = &xdev->dev_res->eq_table; + + xsc_destroy_unmap_eq(xdev, &table->async_eq); +} + +struct xsc_eq *xsc_core_eq_get(struct xsc_core_device *xdev, int i) +{ + struct xsc_eq_table *table = &xdev->dev_res->eq_table; + struct xsc_eq *eq_ret = NULL; + struct xsc_eq *eq, *n; + + spin_lock(&table->lock); + list_for_each_entry_safe(eq, n, &table->comp_eqs_list, list) { + if (eq->index == i) { + eq_ret = eq; + break; + } + } + spin_unlock(&table->lock); + + return eq_ret; +} +EXPORT_SYMBOL(xsc_core_eq_get); diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/eq.h b/drivers/net/ethernet/yunsilicon/xsc/pci/eq.h new file mode 100644 index 000000000..c434976e3 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/eq.h @@ -0,0 +1,46 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. 
+ */
+
+#ifndef __EQ_H
+#define __EQ_H
+
+#include "common/xsc_core.h"
+
+enum {
+	XSC_EQ_VEC_ASYNC = 0,
+	XSC_VEC_CMD = 1,
+	XSC_VEC_CMD_EVENT = 2,
+	XSC_DMA_READ_DONE_VEC = 3,
+	XSC_EQ_VEC_COMP_BASE,
+};
+
+struct xsc_eqe {
+	u8 type;
+	u8 sub_type;
+	__le16 queue_id_data;
+#define XSC_EQE_QUEUE_ID_MASK	GENMASK(14, 0)
+
+	u8 err_code;
+	u8 rsvd[2];
+	u8 owner_data;
+#define XSC_EQE_OWNER	BIT(7)
+};
+
+// eq doorbell bitfields
+#define XSC_EQ_DB_NEXT_CID_SHIFT	0
+#define XSC_EQ_DB_NEXT_CID_MASK		GENMASK(10, 0)
+#define XSC_EQ_DB_EQ_ID_SHIFT		11
+#define XSC_EQ_DB_EQ_ID_MASK		GENMASK(21, 11)
+#define XSC_EQ_DB_ARM			BIT(22)
+
+int xsc_create_map_eq(struct xsc_core_device *xdev,
+		      struct xsc_eq *eq, u16 vecidx,
+		      u32 nent, const char *name);
+int xsc_destroy_unmap_eq(struct xsc_core_device *xdev, struct xsc_eq *eq);
+void xsc_eq_init(struct xsc_core_device *xdev);
+int xsc_start_eqs(struct xsc_core_device *xdev);
+void xsc_stop_eqs(struct xsc_core_device *xdev);
+
+#endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
index 68ae2fe93..9b185e2d5 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
@@ -8,6 +8,7 @@
 #include "hw.h"
 #include "qp.h"
 #include "cq.h"
+#include "eq.h"
 
 static const struct pci_device_id xsc_pci_id_table[] = {
 	{ PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MC_PF_DEV_ID) },
@@ -222,6 +223,7 @@ static int xsc_hw_setup(struct xsc_core_device *xdev)
 
 	xsc_init_cq_table(xdev);
 	xsc_init_qp_table(xdev);
+	xsc_eq_init(xdev);
 
 	return 0;
 err_cmd_cleanup:

From patchwork Mon Feb 24 17:24:28 2025
From: "Xin Tian"
Subject: [PATCH net-next v5 06/14] xsc: Init pci irq
Date: Tue, 25 Feb 2025 01:24:28 +0800
Message-Id: <20250224172427.2455751-7-tianx@yunsilicon.com>
In-Reply-To: <20250224172416.2455751-1-tianx@yunsilicon.com>
References: <20250224172416.2455751-1-tianx@yunsilicon.com>
X-Patchwork-Delegate: kuba@kernel.org

This patch implements the initialization of PCI MSI-X interrupts. It
handles the allocation of MSI-X interrupt vectors, registration of
interrupt requests (IRQs), configuration of interrupt affinity, and
installation of interrupt handlers for the different event types:
- Completion events (handled via completion queues).
- Command queue (cmdq) events (via xsc_cmd_handler).
- Asynchronous events (via xsc_event_handler).
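For illustration, consumers can map a completion vector back to its EQ
and IRQ with xsc_core_vector2eqn() (hypothetical caller, assuming
MSI-X is enabled):

	u32 eqn;
	unsigned int irqn;

	if (!xsc_core_vector2eqn(xdev, 0, &eqn, &irqn))
		pci_info(xdev->pdev, "vector 0 -> eqn %u, irq %u\n",
			 eqn, irqn);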
Co-developed-by: Honggang Wei
Signed-off-by: Honggang Wei
Co-developed-by: Lei Yan
Signed-off-by: Lei Yan
Signed-off-by: Xin Tian
---
 .../ethernet/yunsilicon/xsc/common/xsc_core.h |   6 +
 .../net/ethernet/yunsilicon/xsc/pci/Makefile  |   2 +-
 .../net/ethernet/yunsilicon/xsc/pci/main.c    |  11 +-
 .../net/ethernet/yunsilicon/xsc/pci/pci_irq.c | 429 ++++++++++++++++++
 .../net/ethernet/yunsilicon/xsc/pci/pci_irq.h |  14 +
 5 files changed, 460 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.c
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.h

diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
index 2c897d54a..a0270937f 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
@@ -354,9 +354,12 @@ enum xsc_interface_state {
 struct xsc_core_device {
 	struct pci_dev *pdev;
 	struct device *device;
+	void *eth_priv;
 	struct xsc_dev_resource *dev_res;
 	int numa_node;
 
+	void (*event_handler)(void *adapter);
+
 	void __iomem *bar;
 	int bar_num;
 
@@ -388,6 +391,7 @@ struct xsc_core_device {
 	u16 fw_version_patch;
 	u32 fw_version_tweak;
 	u8 fw_version_extra_flag;
+	cpumask_var_t xps_cpumask;
 };
 
 int xsc_core_create_resource_common(struct xsc_core_device *xdev,
@@ -395,6 +399,8 @@ int xsc_core_create_resource_common(struct xsc_core_device *xdev,
 void xsc_core_destroy_resource_common(struct xsc_core_device *xdev,
 				      struct xsc_core_qp *qp);
 struct xsc_eq *xsc_core_eq_get(struct xsc_core_device *xdev, int i);
+int xsc_core_vector2eqn(struct xsc_core_device *xdev, int vector, u32 *eqn,
+			unsigned int *irqn);
 
 static inline void *xsc_buf_offset(struct xsc_buf *buf, unsigned long offset)
 {
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile index 667319958..3525d1c74 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile @@ -6,4 +6,4 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o -xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o alloc.o eq.o +xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o alloc.o eq.o pci_irq.o diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c index 9b185e2d5..72eb2a37d 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c @@ -9,6 +9,7 @@ #include "qp.h" #include "cq.h" #include "eq.h" +#include "pci_irq.h" static const struct pci_device_id xsc_pci_id_table[] = { { PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MC_PF_DEV_ID) }, @@ -253,10 +254,18 @@ static int xsc_load(struct xsc_core_device *xdev) goto out; } + err = xsc_irq_eq_create(xdev); + if (err) { + pci_err(xdev->pdev, "xsc_irq_eq_create failed %d\n", err); + goto err_hw_cleanup; + } + set_bit(XSC_INTERFACE_STATE_UP, &xdev->intf_state); mutex_unlock(&xdev->intf_state_mutex); return 0; +err_hw_cleanup: + xsc_hw_cleanup(xdev); out: mutex_unlock(&xdev->intf_state_mutex); return err; @@ -271,7 +280,7 @@ static int xsc_unload(struct xsc_core_device *xdev) } clear_bit(XSC_INTERFACE_STATE_UP, &xdev->intf_state); - + xsc_irq_eq_destroy(xdev); xsc_hw_cleanup(xdev); out: diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.c b/drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.c new file mode 100644 index 000000000..c66cc9387 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.c @@ -0,0 +1,429 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. 
+ */ + +#include +#include +#include +#include +#include +#ifdef CONFIG_RFS_ACCEL +#include +#endif + +#include "common/xsc_driver.h" +#include "common/xsc_core.h" +#include "eq.h" +#include "pci_irq.h" + +enum { + XSC_COMP_EQ_SIZE = 1024, +}; + +enum xsc_eq_type { + XSC_EQ_TYPE_COMP, + XSC_EQ_TYPE_ASYNC, +}; + +struct xsc_irq { + struct atomic_notifier_head nh; + cpumask_var_t mask; + char name[XSC_MAX_IRQ_NAME]; +}; + +struct xsc_irq_table { + struct xsc_irq *irq; + int nvec; +#ifdef CONFIG_RFS_ACCEL + struct cpu_rmap *rmap; +#endif +}; + +static void xsc_free_irq(struct xsc_core_device *xdev, unsigned int vector) +{ + unsigned int irqn = 0; + + irqn = pci_irq_vector(xdev->pdev, vector); + disable_irq(irqn); + + if (xsc_fw_is_available(xdev)) + free_irq(irqn, xdev); +} + +static int set_comp_irq_affinity_hint(struct xsc_core_device *xdev, int i) +{ + struct xsc_eq_table *table = &xdev->dev_res->eq_table; + struct xsc_eq *eq = xsc_core_eq_get(xdev, i); + int vecidx = table->eq_vec_comp_base + i; + unsigned int irqn; + int ret; + + irqn = pci_irq_vector(xdev->pdev, vecidx); + if (!zalloc_cpumask_var(&eq->mask, GFP_KERNEL)) { + pci_err(xdev->pdev, "zalloc_cpumask_var rx cpumask failed"); + return -ENOMEM; + } + + if (!zalloc_cpumask_var(&xdev->xps_cpumask, GFP_KERNEL)) { + pci_err(xdev->pdev, "zalloc_cpumask_var tx cpumask failed"); + return -ENOMEM; + } + + cpumask_set_cpu(cpumask_local_spread(i, xdev->numa_node), + xdev->xps_cpumask); + ret = irq_set_affinity_hint(irqn, eq->mask); + + return ret; +} + +static void clear_comp_irq_affinity_hint(struct xsc_core_device *xdev, int i) +{ + struct xsc_eq_table *table = &xdev->dev_res->eq_table; + struct xsc_eq *eq = xsc_core_eq_get(xdev, i); + int vecidx = table->eq_vec_comp_base + i; + int irqn; + + irqn = pci_irq_vector(xdev->pdev, vecidx); + irq_set_affinity_hint(irqn, NULL); + free_cpumask_var(eq->mask); +} + +static int set_comp_irq_affinity_hints(struct xsc_core_device *xdev) +{ + struct xsc_eq_table *table = &xdev->dev_res->eq_table; + int nvec = table->num_comp_vectors; + int err; + int i; + + for (i = 0; i < nvec; i++) { + err = set_comp_irq_affinity_hint(xdev, i); + if (err) + goto err_out; + } + + return 0; + +err_out: + for (i--; i >= 0; i--) + clear_comp_irq_affinity_hint(xdev, i); + free_cpumask_var(xdev->xps_cpumask); + + return err; +} + +static void clear_comp_irq_affinity_hints(struct xsc_core_device *xdev) +{ + struct xsc_eq_table *table = &xdev->dev_res->eq_table; + int nvec = table->num_comp_vectors; + int i; + + for (i = 0; i < nvec; i++) + clear_comp_irq_affinity_hint(xdev, i); + free_cpumask_var(xdev->xps_cpumask); +} + +static int xsc_alloc_irq_vectors(struct xsc_core_device *xdev) +{ + struct xsc_dev_resource *dev_res = xdev->dev_res; + struct xsc_eq_table *table = &dev_res->eq_table; + int nvec = xdev->caps.msix_num; + int nvec_base; + int err; + + nvec_base = XSC_EQ_VEC_COMP_BASE; + if (nvec <= nvec_base) { + pci_err(xdev->pdev, "failed to alloc irq vector(%d)\n", nvec); + return -ENOMEM; + } + + dev_res->irq_info = kcalloc(nvec, sizeof(*dev_res->irq_info), + GFP_KERNEL); + if (!dev_res->irq_info) + return -ENOMEM; + + nvec = pci_alloc_irq_vectors(xdev->pdev, nvec_base + 1, nvec, + PCI_IRQ_MSIX); + if (nvec < 0) { + err = nvec; + goto err_free_irq_info; + } + + table->eq_vec_comp_base = nvec_base; + table->num_comp_vectors = nvec - nvec_base; + xdev->msix_vec_base = xdev->caps.msix_base; + + return 0; + +err_free_irq_info: + pci_free_irq_vectors(xdev->pdev); + kfree(dev_res->irq_info); + return err; +} + +static void 
xsc_free_irq_vectors(struct xsc_core_device *xdev) +{ + struct xsc_dev_resource *dev_res = xdev->dev_res; + + if (!xsc_fw_is_available(xdev)) + return; + + pci_free_irq_vectors(xdev->pdev); + kfree(dev_res->irq_info); +} + +int xsc_core_vector2eqn(struct xsc_core_device *xdev, int vector, u32 *eqn, + unsigned int *irqn) +{ + struct xsc_eq_table *table = &xdev->dev_res->eq_table; + struct xsc_eq *eq, *n; + int err = -ENOENT; + + if (!xdev->caps.msix_enable) + return 0; + + spin_lock(&table->lock); + list_for_each_entry_safe(eq, n, &table->comp_eqs_list, list) { + if (eq->index == vector) { + *eqn = eq->eqn; + *irqn = eq->irqn; + err = 0; + break; + } + } + spin_unlock(&table->lock); + + return err; +} +EXPORT_SYMBOL(xsc_core_vector2eqn); + +static void free_comp_eqs(struct xsc_core_device *xdev) +{ + struct xsc_eq_table *table = &xdev->dev_res->eq_table; + struct xsc_eq *eq, *n; + + spin_lock(&table->lock); + list_for_each_entry_safe(eq, n, &table->comp_eqs_list, list) { + list_del(&eq->list); + spin_unlock(&table->lock); + if (xsc_destroy_unmap_eq(xdev, eq)) + pci_err(xdev->pdev, "failed to destroy EQ 0x%x\n", + eq->eqn); + kfree(eq); + spin_lock(&table->lock); + } + spin_unlock(&table->lock); +} + +static int alloc_comp_eqs(struct xsc_core_device *xdev) +{ + struct xsc_eq_table *table = &xdev->dev_res->eq_table; + char name[XSC_MAX_IRQ_NAME]; + struct xsc_eq *eq; + int ncomp_vec; + u32 nent; + int err; + int i; + + INIT_LIST_HEAD(&table->comp_eqs_list); + ncomp_vec = table->num_comp_vectors; + nent = XSC_COMP_EQ_SIZE; + + for (i = 0; i < ncomp_vec; i++) { + eq = kzalloc(sizeof(*eq), GFP_KERNEL); + if (!eq) { + err = -ENOMEM; + goto clean; + } + + snprintf(name, XSC_MAX_IRQ_NAME, "xsc_comp%d", i); + err = xsc_create_map_eq(xdev, eq, + i + table->eq_vec_comp_base, + nent, name); + if (err) { + kfree(eq); + goto clean; + } + + eq->index = i; + spin_lock(&table->lock); + list_add_tail(&eq->list, &table->comp_eqs_list); + spin_unlock(&table->lock); + } + + return 0; + +clean: + free_comp_eqs(xdev); + return err; +} + +static irqreturn_t xsc_cmd_handler(int irq, void *arg) +{ + struct xsc_core_device *xdev = (struct xsc_core_device *)arg; + int err; + + disable_irq_nosync(xdev->cmd.irqn); + err = xsc_cmd_err_handler(xdev); + if (!err) + xsc_cmd_resp_handler(xdev); + enable_irq(xdev->cmd.irqn); + + return IRQ_HANDLED; +} + +static int xsc_request_irq_for_cmdq(struct xsc_core_device *xdev, u8 vecidx) +{ + struct xsc_dev_resource *dev_res = xdev->dev_res; + + writel(xdev->msix_vec_base + vecidx, + XSC_REG_ADDR(xdev, xdev->cmd.reg.msix_vec_addr)); + + snprintf(dev_res->irq_info[vecidx].name, XSC_MAX_IRQ_NAME, "%s@pci:%s", + "xsc_cmd", pci_name(xdev->pdev)); + xdev->cmd.irqn = pci_irq_vector(xdev->pdev, vecidx); + return request_irq(xdev->cmd.irqn, xsc_cmd_handler, 0, + dev_res->irq_info[vecidx].name, xdev); +} + +static void xsc_free_irq_for_cmdq(struct xsc_core_device *xdev) +{ + xsc_free_irq(xdev, XSC_VEC_CMD); +} + +static irqreturn_t xsc_event_handler(int irq, void *arg) +{ + struct xsc_core_device *xdev = (struct xsc_core_device *)arg; + + if (!xdev->eth_priv) + return IRQ_NONE; + + if (!xdev->event_handler) + return IRQ_NONE; + + xdev->event_handler(xdev->eth_priv); + + return IRQ_HANDLED; +} + +static int xsc_request_irq_for_event(struct xsc_core_device *xdev) +{ + struct xsc_dev_resource *dev_res = xdev->dev_res; + + snprintf(dev_res->irq_info[XSC_VEC_CMD_EVENT].name, + XSC_MAX_IRQ_NAME, "%s@pci:%s", + "xsc_eth_event", pci_name(xdev->pdev)); + return request_irq(pci_irq_vector(xdev->pdev, 
XSC_VEC_CMD_EVENT), + xsc_event_handler, 0, + dev_res->irq_info[XSC_VEC_CMD_EVENT].name, xdev); +} + +static void xsc_free_irq_for_event(struct xsc_core_device *xdev) +{ + xsc_free_irq(xdev, XSC_VEC_CMD_EVENT); +} + +static int xsc_cmd_enable_msix(struct xsc_core_device *xdev) +{ + struct xsc_msix_table_info_mbox_out out; + struct xsc_msix_table_info_mbox_in in; + int err; + + memset(&in, 0, sizeof(in)); + memset(&out, 0, sizeof(out)); + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_ENABLE_MSIX); + + err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out)); + if (err) { + pci_err(xdev->pdev, "xsc_cmd_exec enable msix failed %d\n", + err); + return err; + } + + return 0; +} + +int xsc_irq_eq_create(struct xsc_core_device *xdev) +{ + int err; + + if (xdev->caps.msix_enable == 0) + return 0; + + err = xsc_alloc_irq_vectors(xdev); + if (err) { + pci_err(xdev->pdev, "enable msix failed, err=%d\n", err); + goto out; + } + + err = xsc_start_eqs(xdev); + if (err) { + pci_err(xdev->pdev, "failed to start EQs, err=%d\n", err); + goto err_free_irq_vectors; + } + + err = alloc_comp_eqs(xdev); + if (err) { + pci_err(xdev->pdev, "failed to alloc comp EQs, err=%d\n", err); + goto err_stop_eqs; + } + + err = xsc_request_irq_for_cmdq(xdev, XSC_VEC_CMD); + if (err) { + pci_err(xdev->pdev, "failed to request irq for cmdq, err=%d\n", + err); + goto err_free_comp_eqs; + } + + err = xsc_request_irq_for_event(xdev); + if (err) { + pci_err(xdev->pdev, "failed to request irq for event, err=%d\n", + err); + goto err_free_irq_cmdq; + } + + err = set_comp_irq_affinity_hints(xdev); + if (err) { + pci_err(xdev->pdev, "failed to alloc affinity hint cpumask, err=%d\n", + err); + goto err_free_irq_evnt; + } + + xsc_cmd_use_events(xdev); + err = xsc_cmd_enable_msix(xdev); + if (err) { + pci_err(xdev->pdev, "xsc_cmd_enable_msix failed %d.\n", err); + xsc_cmd_use_polling(xdev); + goto err_free_irq_evnt; + } + return 0; + +err_free_irq_evnt: + xsc_free_irq_for_event(xdev); +err_free_irq_cmdq: + xsc_free_irq_for_cmdq(xdev); +err_free_comp_eqs: + free_comp_eqs(xdev); +err_stop_eqs: + xsc_stop_eqs(xdev); +err_free_irq_vectors: + xsc_free_irq_vectors(xdev); +out: + return err; +} + +int xsc_irq_eq_destroy(struct xsc_core_device *xdev) +{ + if (xdev->caps.msix_enable == 0) + return 0; + + xsc_stop_eqs(xdev); + clear_comp_irq_affinity_hints(xdev); + free_comp_eqs(xdev); + + xsc_free_irq_for_event(xdev); + xsc_free_irq_for_cmdq(xdev); + xsc_free_irq_vectors(xdev); + + return 0; +} diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.h b/drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.h new file mode 100644 index 000000000..7b0aae349 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/pci_irq.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. 
+ */
+
+#ifndef __PCI_IRQ_H
+#define __PCI_IRQ_H
+
+#include "common/xsc_core.h"
+
+int xsc_irq_eq_create(struct xsc_core_device *xdev);
+int xsc_irq_eq_destroy(struct xsc_core_device *xdev);
+
+#endif

From patchwork Mon Feb 24 17:24:31 2025
From: "Xin Tian"
Subject: [PATCH net-next v5 07/14] xsc: Init auxiliary device
Date: Tue, 25 Feb 2025 01:24:31 +0800
Message-Id: <20250224172429.2455751-8-tianx@yunsilicon.com>
In-Reply-To: <20250224172416.2455751-1-tianx@yunsilicon.com>
References: <20250224172416.2455751-1-tianx@yunsilicon.com>
X-Patchwork-Delegate: kuba@kernel.org

Our device supports both Ethernet and RDMA functionality, and the
auxiliary bus is a natural fit for managing these distinct features.
This patch uses an auxiliary device to handle the Ethernet
functionality, while defining xsc_adev_list to reserve expansion space
for future RDMA capabilities.
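For reference, an auxiliary driver binds to this device by matching
the "<module>.<device>" name; the Ethernet driver added later in this
series does exactly that:

	static const struct auxiliary_device_id xsc_eth_id_table[] = {
		/* matches "xsc_pci.eth" */
		{ .name = XSC_PCI_DRV_NAME "." XSC_ETH_ADEV_NAME },
		{},
	};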
Co-developed-by: Honggang Wei
Signed-off-by: Honggang Wei
Co-developed-by: Lei Yan
Signed-off-by: Lei Yan
Signed-off-by: Xin Tian
---
 .../ethernet/yunsilicon/xsc/common/xsc_core.h |  14 +++
 .../net/ethernet/yunsilicon/xsc/pci/Makefile  |   3 +-
 .../net/ethernet/yunsilicon/xsc/pci/adev.c    | 112 ++++++++++++++++++
 .../net/ethernet/yunsilicon/xsc/pci/adev.h    |  14 +++
 .../net/ethernet/yunsilicon/xsc/pci/main.c    |  10 ++
 5 files changed, 152 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/adev.c
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/adev.h

diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
index a0270937f..31a018413 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
@@ -7,6 +7,7 @@
 #define __XSC_CORE_H
 
 #include
+#include
 
 #include "common/xsc_cmdq.h"
 
@@ -208,6 +209,17 @@ struct xsc_irq_info {
 	char name[XSC_MAX_IRQ_NAME];
 };
 
+// adev
+#define XSC_PCI_DRV_NAME	"xsc_pci"
+#define XSC_ETH_ADEV_NAME	"eth"
+
+struct xsc_adev {
+	struct auxiliary_device adev;
+	struct xsc_core_device *xdev;
+
+	int idx;
+};
+
 // hw
 struct xsc_reg_addr {
 	u64 tx_db;
@@ -354,6 +366,8 @@ enum xsc_interface_state {
 struct xsc_core_device {
 	struct pci_dev *pdev;
 	struct device *device;
+	int adev_id;
+	struct xsc_adev **xsc_adev_list;
 	void *eth_priv;
 	struct xsc_dev_resource *dev_res;
 	int numa_node;
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
index 3525d1c74..ad0ecc122 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile
@@ -6,4 +6,5 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc
 
 obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o
 
-xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o alloc.o eq.o pci_irq.o
+xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o alloc.o eq.o pci_irq.o adev.o
+
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/adev.c b/drivers/net/ethernet/yunsilicon/xsc/pci/adev.c
new file mode 100644
index 000000000..94db3893a
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/adev.c
@@ -0,0 +1,112 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/*
+ * Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */ + +#include +#include + +#include "adev.h" + +static DEFINE_IDA(xsc_adev_ida); + +enum xsc_adev_idx { + XSC_ADEV_IDX_ETH, + XSC_ADEV_IDX_MAX +}; + +static const char * const xsc_adev_name[] = { + [XSC_ADEV_IDX_ETH] = XSC_ETH_ADEV_NAME, +}; + +static void xsc_release_adev(struct device *dev) +{ + struct xsc_adev *xsc_adev = + container_of(dev, struct xsc_adev, adev.dev); + struct xsc_core_device *xdev = xsc_adev->xdev; + int idx = xsc_adev->idx; + + kfree(xsc_adev); + xdev->xsc_adev_list[idx] = NULL; +} + +static int xsc_reg_adev(struct xsc_core_device *xdev, int idx) +{ + struct auxiliary_device *adev; + struct xsc_adev *xsc_adev; + int ret; + + xsc_adev = kzalloc(sizeof(*xsc_adev), GFP_KERNEL); + if (!xsc_adev) + return -ENOMEM; + + adev = &xsc_adev->adev; + adev->name = xsc_adev_name[idx]; + adev->id = xdev->adev_id; + adev->dev.parent = &xdev->pdev->dev; + adev->dev.release = xsc_release_adev; + xsc_adev->xdev = xdev; + xsc_adev->idx = idx; + + ret = auxiliary_device_init(adev); + if (ret) { + kfree(xsc_adev); + return ret; + } + + ret = auxiliary_device_add(adev); + if (ret) { + auxiliary_device_uninit(adev); + return ret; + } + + xdev->xsc_adev_list[idx] = xsc_adev; + + return 0; +} + +static void xsc_unreg_adev(struct xsc_core_device *xdev, int idx) +{ + struct xsc_adev *xsc_adev = xdev->xsc_adev_list[idx]; + struct auxiliary_device *adev = &xsc_adev->adev; + + auxiliary_device_delete(adev); + auxiliary_device_uninit(adev); +} + +int xsc_adev_init(struct xsc_core_device *xdev) +{ + struct xsc_adev **xsc_adev_list; + int adev_id; + int ret; + + xsc_adev_list = kzalloc(sizeof(void *) * XSC_ADEV_IDX_MAX, GFP_KERNEL); + if (!xsc_adev_list) + return -ENOMEM; + xdev->xsc_adev_list = xsc_adev_list; + + adev_id = ida_alloc(&xsc_adev_ida, GFP_KERNEL); + if (adev_id < 0) + goto err_free_adev_list; + xdev->adev_id = adev_id; + + ret = xsc_reg_adev(xdev, XSC_ADEV_IDX_ETH); + if (ret) + goto err_dalloc_adev_id; + + return 0; +err_dalloc_adev_id: + ida_free(&xsc_adev_ida, xdev->adev_id); +err_free_adev_list: + kfree(xsc_adev_list); + + return ret; +} + +void xsc_adev_uninit(struct xsc_core_device *xdev) +{ + xsc_unreg_adev(xdev, XSC_ADEV_IDX_ETH); + ida_free(&xsc_adev_ida, xdev->adev_id); + kfree(xdev->xsc_adev_list); +} diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/adev.h b/drivers/net/ethernet/yunsilicon/xsc/pci/adev.h new file mode 100644 index 000000000..3de4dd26f --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/adev.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. 
+ */
+
+#ifndef __ADEV_H
+#define __ADEV_H
+
+#include "common/xsc_core.h"
+
+int xsc_adev_init(struct xsc_core_device *xdev);
+void xsc_adev_uninit(struct xsc_core_device *xdev);
+
+#endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
index 72eb2a37d..48c4a2a7f 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/pci/main.c
@@ -10,6 +10,7 @@
 #include "cq.h"
 #include "eq.h"
 #include "pci_irq.h"
+#include "adev.h"
 
 static const struct pci_device_id xsc_pci_id_table[] = {
 	{ PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_MC_PF_DEV_ID) },
@@ -260,10 +261,18 @@ static int xsc_load(struct xsc_core_device *xdev)
 		goto err_hw_cleanup;
 	}
 
+	err = xsc_adev_init(xdev);
+	if (err) {
+		pci_err(xdev->pdev, "xsc_adev_init failed %d\n", err);
+		goto err_irq_eq_destroy;
+	}
+
 	set_bit(XSC_INTERFACE_STATE_UP, &xdev->intf_state);
 	mutex_unlock(&xdev->intf_state_mutex);
 	return 0;
 
+err_irq_eq_destroy:
+	xsc_irq_eq_destroy(xdev);
 err_hw_cleanup:
 	xsc_hw_cleanup(xdev);
 out:
@@ -273,6 +282,7 @@ static int xsc_load(struct xsc_core_device *xdev)
 
 static int xsc_unload(struct xsc_core_device *xdev)
 {
+	xsc_adev_uninit(xdev);
 	mutex_lock(&xdev->intf_state_mutex);
 	if (!test_bit(XSC_INTERFACE_STATE_UP, &xdev->intf_state)) {
 		xsc_hw_cleanup(xdev);

From patchwork Mon Feb 24 17:24:33 2025
From: "Xin Tian"
Subject: [PATCH net-next v5 08/14] xsc: Add ethernet interface
Date: Tue, 25 Feb 2025 01:24:33 +0800
Message-Id: <20250224172432.2455751-9-tianx@yunsilicon.com>
In-Reply-To: <20250224172416.2455751-1-tianx@yunsilicon.com>
References: <20250224172416.2455751-1-tianx@yunsilicon.com>
X-Patchwork-Delegate: kuba@kernel.org

Implement an auxiliary driver for Ethernet and perform a minimal
netdevice initialization.
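As an illustration of the queue sizing (example numbers, not taken
from the hardware): with eq_table.num_comp_vectors = 8 and
caps.max_tc = 4, the netdev is allocated with 8 * 4 = 32 TX queues and
8 RX queues:

	netdev = alloc_etherdev_mqs(sizeof(struct xsc_adapter),
				    8 * 4,	/* TX queues */
				    8);		/* RX queues */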
Co-developed-by: Honggang Wei
Signed-off-by: Honggang Wei
Co-developed-by: Lei Yan
Signed-off-by: Lei Yan
Signed-off-by: Xin Tian
---
 drivers/net/ethernet/yunsilicon/Makefile      |   2 +-
 .../net/ethernet/yunsilicon/xsc/net/main.c    | 100 ++++++++++++++++++
 .../net/ethernet/yunsilicon/xsc/net/xsc_eth.h |  16 +++
 .../yunsilicon/xsc/net/xsc_eth_common.h       |  15 +++
 4 files changed, 132 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/main.c
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h

diff --git a/drivers/net/ethernet/yunsilicon/Makefile b/drivers/net/ethernet/yunsilicon/Makefile
index 6fc8259a7..65b9a6265 100644
--- a/drivers/net/ethernet/yunsilicon/Makefile
+++ b/drivers/net/ethernet/yunsilicon/Makefile
@@ -4,5 +4,5 @@
 # Makefile for the Yunsilicon device drivers.
 #
 
-# obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc/net/
+obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc/net/
 obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc/pci/
\ No newline at end of file
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/main.c b/drivers/net/ethernet/yunsilicon/xsc/net/main.c
new file mode 100644
index 000000000..da309dc63
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/main.c
@@ -0,0 +1,100 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#include
+#include
+#include
+
+#include "common/xsc_core.h"
+#include "xsc_eth_common.h"
+#include "xsc_eth.h"
+
+static int xsc_get_max_num_channels(struct xsc_core_device *xdev)
+{
+	return min_t(int, xdev->dev_res->eq_table.num_comp_vectors,
+		     XSC_ETH_MAX_NUM_CHANNELS);
+}
+
+static int xsc_eth_probe(struct auxiliary_device *adev,
+			 const struct auxiliary_device_id *adev_id)
+{
+	struct xsc_adev *xsc_adev = container_of(adev, struct xsc_adev, adev);
+	struct xsc_core_device *xdev = xsc_adev->xdev;
+	struct xsc_adapter *adapter;
+	struct net_device *netdev;
+	int num_chl, num_tc;
+	int err;
+
+	num_chl = xsc_get_max_num_channels(xdev);
+	num_tc = xdev->caps.max_tc;
+
+	netdev = alloc_etherdev_mqs(sizeof(struct xsc_adapter),
+				    num_chl * num_tc, num_chl);
+	if (!netdev) {
+		pr_err("alloc_etherdev_mqs failed, txq=%d, rxq=%d\n",
+		       (num_chl * num_tc), num_chl);
+		return -ENOMEM;
+	}
+
+	netdev->dev.parent = &xdev->pdev->dev;
+	adapter = netdev_priv(netdev);
+	adapter->netdev = netdev;
+	adapter->pdev = xdev->pdev;
+	adapter->dev = &adapter->pdev->dev;
+	adapter->xdev = xdev;
+	xdev->eth_priv = adapter;
+
+	err = register_netdev(netdev);
+	if (err) {
+		netdev_err(netdev, "register_netdev failed, err=%d\n", err);
+		goto err_free_netdev;
+	}
+
+	return 0;
+
+err_free_netdev:
+	free_netdev(netdev);
+
+	return err;
+}
+
+static void xsc_eth_remove(struct auxiliary_device *adev)
+{
+	struct xsc_adev *xsc_adev = container_of(adev, struct xsc_adev, adev);
+	struct xsc_core_device *xdev = xsc_adev->xdev;
+	struct xsc_adapter *adapter;
+
+	if (!xdev)
+		return;
+
+	adapter = xdev->eth_priv;
+	if (!adapter) {
+		/* adapter is NULL here, so report via the PCI device */
+		pci_err(xdev->pdev, "failed! adapter is null\n");
+		return;
+	}
+
+	unregister_netdev(adapter->netdev);
+
+	free_netdev(adapter->netdev);
+
+	xdev->eth_priv = NULL;
+}
+
+static const struct auxiliary_device_id xsc_eth_id_table[] = {
+	{ .name = XSC_PCI_DRV_NAME "." XSC_ETH_ADEV_NAME },
+	{},
+};
+MODULE_DEVICE_TABLE(auxiliary, xsc_eth_id_table);
+
+static struct auxiliary_driver xsc_eth_driver = {
+	.name = "eth",
+	.probe = xsc_eth_probe,
+	.remove = xsc_eth_remove,
+	.id_table = xsc_eth_id_table,
+};
+module_auxiliary_driver(xsc_eth_driver);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Yunsilicon XSC ethernet driver");
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
new file mode 100644
index 000000000..0c70c0d59
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#ifndef __XSC_ETH_H
+#define __XSC_ETH_H
+
+struct xsc_adapter {
+	struct net_device *netdev;
+	struct pci_dev *pdev;
+	struct device *dev;
+	struct xsc_core_device *xdev;
+};
+
+#endif /* __XSC_ETH_H */
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
new file mode 100644
index 000000000..b5640f05d
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#ifndef __XSC_ETH_COMMON_H
+#define __XSC_ETH_COMMON_H
+
+#define XSC_LOG_INDIR_RQT_SIZE 0x8
+
+#define XSC_INDIR_RQT_SIZE BIT(XSC_LOG_INDIR_RQT_SIZE)
+#define XSC_ETH_MIN_NUM_CHANNELS 2
+#define XSC_ETH_MAX_NUM_CHANNELS XSC_INDIR_RQT_SIZE
+
+#endif

From patchwork Mon Feb 24 17:24:35 2025
X-Patchwork-Submitter: Xin Tian
X-Patchwork-Id: 13988569
X-Patchwork-Delegate: kuba@kernel.org
Subject: [PATCH net-next v5 09/14] xsc: Init net device
From: "Xin Tian"
Date: Tue, 25 Feb 2025 01:24:35 +0800
Message-Id: <20250224172434.2455751-10-tianx@yunsilicon.com>
In-Reply-To: <20250224172416.2455751-1-tianx@yunsilicon.com>
References: <20250224172416.2455751-1-tianx@yunsilicon.com>

Initialize the network device:
1. Prepare xsc_eth_params: set up network parameters such as the MTU,
   the number of channels, and the RSS configuration.
2. Configure the netdev: set its MTU and feature flags.
3. Program the MAC address into hardware and attach the netif.

Co-developed-by: Honggang Wei
Signed-off-by: Honggang Wei
Co-developed-by: Lei Yan
Signed-off-by: Lei Yan
Signed-off-by: Xin Tian
---
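[Reviewer note, not part of the patch: the SW-to-HW MTU translation
macros introduced in xsc_eth_common.h below work out as follows. This is
a worked example using the standard kernel constants ETH_HLEN = 14,
VLAN_HLEN = 4 and ETH_FCS_LEN = 4.]

/* XSC_ETH_HARD_MTU = ETH_HLEN + VLAN_HLEN * 2 + ETH_FCS_LEN
 *                  = 14 + 2 * 4 + 4 = 26 bytes of L2 overhead
 *
 * For the default SW MTU of 1500 (SW_DEFAULT_MTU == ETH_DATA_LEN):
 *   XSC_SW2HW_MTU(1500)        = 1500 + 26       = 1526
 *   XSC_SW2HW_RX_PKT_LEN(1500) = 1500 + 14 + 256 = 1770
 *     (ETH_HLEN plus XSC_ETH_RX_MAX_HEAD_ROOM of headroom)
 */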
 .../ethernet/yunsilicon/xsc/common/xsc_core.h |   1 +
 .../yunsilicon/xsc/common/xsc_device.h        |  42 +++
 .../ethernet/yunsilicon/xsc/common/xsc_pp.h   |  38 ++
 .../net/ethernet/yunsilicon/xsc/net/main.c    | 333 +++++++++++++++++-
 .../net/ethernet/yunsilicon/xsc/net/xsc_eth.h |  28 ++
 .../yunsilicon/xsc/net/xsc_eth_common.h       |  46 +++
 .../net/ethernet/yunsilicon/xsc/net/xsc_pph.h | 180 ++++++++++
 .../ethernet/yunsilicon/xsc/net/xsc_queue.h   |  49 +++
 8 files changed, 716 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_device.h
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/common/xsc_pp.h
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_pph.h
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h

diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
index 31a018413..a9b22c6e8 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
@@ -8,6 +8,7 @@
 
 #include
 #include
+#include
 
 #include "common/xsc_cmdq.h"
 
diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_device.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_device.h
new file mode 100644
index 000000000..45ea8d2a0
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_device.h
@@ -0,0 +1,42 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#ifndef __XSC_DEVICE_H
+#define __XSC_DEVICE_H
+
+enum xsc_traffic_types {
+	XSC_TT_IPV4,
+	XSC_TT_IPV4_TCP,
+	XSC_TT_IPV4_UDP,
+	XSC_TT_IPV6,
+	XSC_TT_IPV6_TCP,
+	XSC_TT_IPV6_UDP,
+	XSC_TT_IPV4_IPSEC_AH,
+	XSC_TT_IPV6_IPSEC_AH,
+	XSC_TT_IPV4_IPSEC_ESP,
+	XSC_TT_IPV6_IPSEC_ESP,
+	XSC_TT_ANY,
+	XSC_NUM_TT,
+};
+
+#define XSC_NUM_INDIR_TIRS XSC_NUM_TT
+
+enum {
+	XSC_L3_PROT_TYPE_IPV4 = BIT(0),
+	XSC_L3_PROT_TYPE_IPV6 = BIT(1),
+};
+
+enum {
+	XSC_L4_PROT_TYPE_TCP = BIT(0),
+	XSC_L4_PROT_TYPE_UDP = BIT(1),
+};
+
+struct xsc_tirc_config {
+	u8 l3_prot_type;
+	u8 l4_prot_type;
+	u32 rx_hash_fields;
+};
+
+#endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_pp.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_pp.h
new file mode 100644
index 000000000..582f99d8c
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_pp.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */ + +#ifndef __XSC_PP_H +#define __XSC_PP_H + +enum { + XSC_HASH_FIELD_SEL_SRC_IP = BIT(0), + XSC_HASH_FIELD_SEL_PROTO = BIT(1), + XSC_HASH_FIELD_SEL_DST_IP = BIT(2), + XSC_HASH_FIELD_SEL_SPORT = BIT(3), + XSC_HASH_FIELD_SEL_DPORT = BIT(4), + XSC_HASH_FIELD_SEL_SRC_IPV6 = BIT(5), + XSC_HASH_FIELD_SEL_DST_IPV6 = BIT(6), + XSC_HASH_FIELD_SEL_SPORT_V6 = BIT(7), + XSC_HASH_FIELD_SEL_DPORT_V6 = BIT(8), +}; + +#define XSC_HASH_IP (XSC_HASH_FIELD_SEL_SRC_IP |\ + XSC_HASH_FIELD_SEL_DST_IP |\ + XSC_HASH_FIELD_SEL_PROTO) +#define XSC_HASH_IP_PORTS (XSC_HASH_FIELD_SEL_SRC_IP |\ + XSC_HASH_FIELD_SEL_DST_IP |\ + XSC_HASH_FIELD_SEL_SPORT |\ + XSC_HASH_FIELD_SEL_DPORT |\ + XSC_HASH_FIELD_SEL_PROTO) +#define XSC_HASH_IP6 (XSC_HASH_FIELD_SEL_SRC_IPV6 |\ + XSC_HASH_FIELD_SEL_DST_IPV6 |\ + XSC_HASH_FIELD_SEL_PROTO) +#define XSC_HASH_IP6_PORTS (XSC_HASH_FIELD_SEL_SRC_IPV6 |\ + XSC_HASH_FIELD_SEL_DST_IPV6 |\ + XSC_HASH_FIELD_SEL_SPORT_V6 |\ + XSC_HASH_FIELD_SEL_DPORT_V6 |\ + XSC_HASH_FIELD_SEL_PROTO) + +#endif /* __XSC_PP_H */ diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/main.c b/drivers/net/ethernet/yunsilicon/xsc/net/main.c index da309dc63..dd1617a84 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/main.c +++ b/drivers/net/ethernet/yunsilicon/xsc/net/main.c @@ -6,17 +6,331 @@ #include #include #include +#include #include "common/xsc_core.h" +#include "common/xsc_driver.h" +#include "common/xsc_device.h" +#include "common/xsc_pp.h" #include "xsc_eth_common.h" #include "xsc_eth.h" +static const struct xsc_tirc_config tirc_default_config[XSC_NUM_INDIR_TIRS] = { + [XSC_TT_IPV4] = { + .l3_prot_type = XSC_L3_PROT_TYPE_IPV4, + .l4_prot_type = 0, + .rx_hash_fields = XSC_HASH_IP, + }, + [XSC_TT_IPV4_TCP] = { + .l3_prot_type = XSC_L3_PROT_TYPE_IPV4, + .l4_prot_type = XSC_L4_PROT_TYPE_TCP, + .rx_hash_fields = XSC_HASH_IP_PORTS, + }, + [XSC_TT_IPV4_UDP] = { + .l3_prot_type = XSC_L3_PROT_TYPE_IPV4, + .l4_prot_type = XSC_L4_PROT_TYPE_UDP, + .rx_hash_fields = XSC_HASH_IP_PORTS, + }, + [XSC_TT_IPV6] = { + .l3_prot_type = XSC_L3_PROT_TYPE_IPV6, + .l4_prot_type = 0, + .rx_hash_fields = XSC_HASH_IP6, + }, + [XSC_TT_IPV6_TCP] = { + .l3_prot_type = XSC_L3_PROT_TYPE_IPV6, + .l4_prot_type = XSC_L4_PROT_TYPE_TCP, + .rx_hash_fields = XSC_HASH_IP6_PORTS, + }, + [XSC_TT_IPV6_UDP] = { + .l3_prot_type = XSC_L3_PROT_TYPE_IPV6, + .l4_prot_type = XSC_L4_PROT_TYPE_UDP, + .rx_hash_fields = XSC_HASH_IP6_PORTS, + }, +}; + static int xsc_get_max_num_channels(struct xsc_core_device *xdev) { return min_t(int, xdev->dev_res->eq_table.num_comp_vectors, XSC_ETH_MAX_NUM_CHANNELS); } +static void xsc_build_default_indir_rqt(u32 *indirection_rqt, int len, + int num_channels) +{ + int i; + + for (i = 0; i < len; i++) + indirection_rqt[i] = i % num_channels; +} + +static void xsc_build_rss_param(struct xsc_rss_params *rss_param, + u16 num_channels) +{ + enum xsc_traffic_types tt; + + rss_param->hfunc = ETH_RSS_HASH_TOP; + netdev_rss_key_fill(rss_param->toeplitz_hash_key, + sizeof(rss_param->toeplitz_hash_key)); + + xsc_build_default_indir_rqt(rss_param->indirection_rqt, + XSC_INDIR_RQT_SIZE, num_channels); + + for (tt = 0; tt < XSC_NUM_INDIR_TIRS; tt++) { + rss_param->rx_hash_fields[tt] = + tirc_default_config[tt].rx_hash_fields; + } + rss_param->rss_hash_tmpl = XSC_HASH_IP_PORTS | XSC_HASH_IP6_PORTS; +} + +static void xsc_eth_build_nic_params(struct xsc_adapter *adapter, + u32 ch_num, u32 tc_num) +{ + struct xsc_eth_params *params = &adapter->nic_param; + struct xsc_core_device *xdev = adapter->xdev; + + params->mtu = SW_DEFAULT_MTU; + 
params->num_tc = tc_num; + + params->comp_vectors = xdev->dev_res->eq_table.num_comp_vectors; + params->max_num_ch = ch_num; + params->num_channels = ch_num; + + params->rq_max_size = BIT(xdev->caps.log_max_qp_depth); + params->sq_max_size = BIT(xdev->caps.log_max_qp_depth); + xsc_build_rss_param(&adapter->rss_param, + adapter->nic_param.num_channels); +} + +static int xsc_eth_netdev_init(struct xsc_adapter *adapter) +{ + unsigned int node, tc, nch; + + tc = adapter->nic_param.num_tc; + nch = adapter->nic_param.max_num_ch; + node = dev_to_node(adapter->dev); + adapter->txq2sq = kcalloc_node(nch * tc, + sizeof(*adapter->txq2sq), + GFP_KERNEL, node); + if (!adapter->txq2sq) + goto err_out; + + adapter->workq = create_singlethread_workqueue("xsc_eth"); + if (!adapter->workq) + goto err_free_priv; + + netif_carrier_off(adapter->netdev); + + return 0; + +err_free_priv: + kfree(adapter->txq2sq); +err_out: + return -ENOMEM; +} + +static int xsc_eth_close(struct net_device *netdev) +{ + return 0; +} + +static int xsc_eth_set_hw_mtu(struct xsc_core_device *xdev, + u16 mtu, u16 rx_buf_sz) +{ + struct xsc_set_mtu_mbox_out out; + struct xsc_set_mtu_mbox_in in; + int ret; + + memset(&in, 0, sizeof(struct xsc_set_mtu_mbox_in)); + memset(&out, 0, sizeof(struct xsc_set_mtu_mbox_out)); + + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_SET_MTU); + in.mtu = cpu_to_be16(mtu); + in.rx_buf_sz_min = cpu_to_be16(rx_buf_sz); + in.mac_port = xdev->mac_port; + + ret = xsc_cmd_exec(xdev, &in, sizeof(struct xsc_set_mtu_mbox_in), &out, + sizeof(struct xsc_set_mtu_mbox_out)); + if (ret || out.hdr.status) { + netdev_err(((struct xsc_adapter *)xdev->eth_priv)->netdev, + "failed to set hw_mtu=%u rx_buf_sz=%u, err=%d, status=%d\n", + mtu, rx_buf_sz, ret, out.hdr.status); + ret = -ENOEXEC; + } + + return ret; +} + +static const struct net_device_ops xsc_netdev_ops = { + // TBD +}; + +static void xsc_eth_build_nic_netdev(struct xsc_adapter *adapter) +{ + struct net_device *netdev = adapter->netdev; + + /* Set up network device as normal. 
*/ + netdev->priv_flags |= IFF_UNICAST_FLT | IFF_LIVE_ADDR_CHANGE; + netdev->netdev_ops = &xsc_netdev_ops; + + netdev->min_mtu = SW_MIN_MTU; + netdev->max_mtu = SW_MAX_MTU; + /*mtu - macheaderlen - ipheaderlen should be aligned in 8B*/ + netdev->mtu = SW_DEFAULT_MTU; + + netdev->vlan_features |= NETIF_F_SG; + netdev->vlan_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM; + netdev->vlan_features |= NETIF_F_GRO; + netdev->vlan_features |= NETIF_F_TSO; + netdev->vlan_features |= NETIF_F_TSO6; + + netdev->vlan_features |= NETIF_F_RXCSUM; + netdev->vlan_features |= NETIF_F_RXHASH; + netdev->vlan_features |= NETIF_F_GSO_PARTIAL; + + netdev->hw_features = netdev->vlan_features; + + netdev->features |= netdev->hw_features; + netdev->features |= NETIF_F_HIGHDMA; +} + +static int xsc_eth_nic_init(struct xsc_adapter *adapter, + void *rep_priv, u32 ch_num, u32 tc_num) +{ + int err; + + xsc_eth_build_nic_params(adapter, ch_num, tc_num); + + err = xsc_eth_netdev_init(adapter); + if (err) + return err; + + xsc_eth_build_nic_netdev(adapter); + + return 0; +} + +static void xsc_eth_nic_cleanup(struct xsc_adapter *adapter) +{ + destroy_workqueue(adapter->workq); + kfree(adapter->txq2sq); +} + +static int xsc_eth_get_mac(struct xsc_core_device *xdev, char *mac) +{ + struct xsc_query_eth_mac_mbox_out *out; + struct xsc_query_eth_mac_mbox_in in; + int err; + + out = kzalloc(sizeof(*out), GFP_KERNEL); + if (!out) + return -ENOMEM; + + memset(&in, 0, sizeof(in)); + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_QUERY_ETH_MAC); + + err = xsc_cmd_exec(xdev, &in, sizeof(in), out, sizeof(*out)); + if (err || out->hdr.status) { + netdev_err(((struct xsc_adapter *)xdev->eth_priv)->netdev, + "get mac failed! err=%d, out.status=%u\n", + err, out->hdr.status); + err = -ENOEXEC; + goto exit; + } + + memcpy(mac, out->mac, 6); + +exit: + kfree(out); + + return err; +} + +static void xsc_eth_l2_addr_init(struct xsc_adapter *adapter) +{ + struct net_device *netdev = adapter->netdev; + char mac[6] = {0}; + int ret = 0; + + ret = xsc_eth_get_mac(adapter->xdev, mac); + if (ret) { + netdev_err(netdev, "get mac failed %d, generate random mac...", + ret); + eth_random_addr(mac); + } + dev_addr_mod(netdev, 0, mac, 6); + + if (!is_valid_ether_addr(netdev->perm_addr)) + memcpy(netdev->perm_addr, netdev->dev_addr, netdev->addr_len); +} + +static int xsc_eth_nic_enable(struct xsc_adapter *adapter) +{ + struct xsc_core_device *xdev = adapter->xdev; + + xsc_eth_l2_addr_init(adapter); + + xsc_eth_set_hw_mtu(xdev, XSC_SW2HW_MTU(adapter->nic_param.mtu), + XSC_SW2HW_RX_PKT_LEN(adapter->nic_param.mtu)); + + rtnl_lock(); + netif_device_attach(adapter->netdev); + rtnl_unlock(); + + return 0; +} + +static void xsc_eth_nic_disable(struct xsc_adapter *adapter) +{ + rtnl_lock(); + if (netif_running(adapter->netdev)) + xsc_eth_close(adapter->netdev); + netif_device_detach(adapter->netdev); + rtnl_unlock(); +} + +static int xsc_attach_netdev(struct xsc_adapter *adapter) +{ + int err = -1; + + err = xsc_eth_nic_enable(adapter); + if (err) + return err; + + return 0; +} + +static void xsc_detach_netdev(struct xsc_adapter *adapter) +{ + xsc_eth_nic_disable(adapter); + + flush_workqueue(adapter->workq); + adapter->status = XSCALE_ETH_DRIVER_DETACH; +} + +static int xsc_eth_attach(struct xsc_core_device *xdev, + struct xsc_adapter *adapter) +{ + int err = -1; + + if (netif_device_present(adapter->netdev)) + return 0; + + err = xsc_attach_netdev(adapter); + if (err) + return err; + + return 0; +} + +static void xsc_eth_detach(struct xsc_core_device *xdev, + struct 
xsc_adapter *adapter) +{ + if (!netif_device_present(adapter->netdev)) + return; + + xsc_detach_netdev(adapter); +} + static int xsc_eth_probe(struct auxiliary_device *adev, const struct auxiliary_device_id *adev_id) { @@ -24,6 +338,7 @@ static int xsc_eth_probe(struct auxiliary_device *adev, struct xsc_core_device *xdev = xsc_adev->xdev; struct xsc_adapter *adapter; struct net_device *netdev; + void *rep_priv = NULL; int num_chl, num_tc; int err; @@ -46,14 +361,30 @@ static int xsc_eth_probe(struct auxiliary_device *adev, adapter->xdev = xdev; xdev->eth_priv = adapter; + err = xsc_eth_nic_init(adapter, rep_priv, num_chl, num_tc); + if (err) { + netdev_err(netdev, "xsc_eth_nic_init failed, err=%d\n", err); + goto err_free_netdev; + } + + err = xsc_eth_attach(xdev, adapter); + if (err) { + netdev_err(netdev, "xsc_eth_attach failed, err=%d\n", err); + goto err_nic_cleanup; + } + err = register_netdev(netdev); if (err) { netdev_err(netdev, "register_netdev failed, err=%d\n", err); - goto err_free_netdev; + goto err_detach; } return 0; +err_detach: + xsc_eth_detach(xdev, adapter); +err_nic_cleanup: + xsc_eth_nic_cleanup(adapter); err_free_netdev: free_netdev(netdev); diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h index 0c70c0d59..1f9bae10b 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h @@ -6,11 +6,39 @@ #ifndef __XSC_ETH_H #define __XSC_ETH_H +#include "common/xsc_device.h" +#include "xsc_eth_common.h" + +enum { + XSCALE_ETH_DRIVER_INIT, + XSCALE_ETH_DRIVER_OK, + XSCALE_ETH_DRIVER_CLOSE, + XSCALE_ETH_DRIVER_DETACH, +}; + +struct xsc_rss_params { + u32 indirection_rqt[XSC_INDIR_RQT_SIZE]; + u32 rx_hash_fields[XSC_NUM_INDIR_TIRS]; + u8 toeplitz_hash_key[52]; + u8 hfunc; + u32 rss_hash_tmpl; +}; + struct xsc_adapter { struct net_device *netdev; struct pci_dev *pdev; struct device *dev; struct xsc_core_device *xdev; + + struct xsc_eth_params nic_param; + struct xsc_rss_params rss_param; + + struct workqueue_struct *workq; + + struct xsc_sq **txq2sq; + + u32 status; + struct mutex status_lock; // protect status }; #endif /* __XSC_ETH_H */ diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h index b5640f05d..d18feacd4 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h @@ -6,10 +6,56 @@ #ifndef __XSC_ETH_COMMON_H #define __XSC_ETH_COMMON_H +#include "xsc_pph.h" + +#define SW_MIN_MTU ETH_MIN_MTU +#define SW_DEFAULT_MTU ETH_DATA_LEN +#define SW_MAX_MTU 9600 + +#define XSC_ETH_HW_MTU_SEND 9800 +#define XSC_ETH_HW_MTU_RECV 9800 +#define XSC_ETH_HARD_MTU (ETH_HLEN + VLAN_HLEN * 2 + ETH_FCS_LEN) +#define XSC_SW2HW_MTU(mtu) ((mtu) + XSC_ETH_HARD_MTU) +#define XSC_SW2HW_FRAG_SIZE(mtu) ((mtu) + XSC_ETH_HARD_MTU) +#define XSC_ETH_RX_MAX_HEAD_ROOM 256 +#define XSC_SW2HW_RX_PKT_LEN(mtu) \ + ((mtu) + ETH_HLEN + XSC_ETH_RX_MAX_HEAD_ROOM) + #define XSC_LOG_INDIR_RQT_SIZE 0x8 #define XSC_INDIR_RQT_SIZE BIT(XSC_LOG_INDIR_RQT_SIZE) #define XSC_ETH_MIN_NUM_CHANNELS 2 #define XSC_ETH_MAX_NUM_CHANNELS XSC_INDIR_RQT_SIZE +struct xsc_eth_params { + u16 num_channels; + u16 max_num_ch; + u8 num_tc; + u32 mtu; + u32 hard_mtu; + u32 comp_vectors; + u32 sq_size; + u32 sq_max_size; + u8 rq_wq_type; + u32 rq_size; + u32 rq_max_size; + u32 rq_frags_size; + + u16 num_rl_txqs; + u8 rx_cqe_compress_def; + u8 tunneled_offload_en; + u8 lro_en; 
+ u8 tx_min_inline_mode; + u8 vlan_strip_disable; + u8 scatter_fcs_en; + u8 rx_dim_enabled; + u8 tx_dim_enabled; + u32 rx_dim_usecs_low; + u32 rx_dim_frames_low; + u32 tx_dim_usecs_low; + u32 tx_dim_frames_low; + u32 lro_timeout; + u32 pflags; +}; + #endif diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_pph.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_pph.h new file mode 100644 index 000000000..5aae86add --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_pph.h @@ -0,0 +1,180 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + */ + +#ifndef __XSC_PPH_H +#define __XSC_PPH_H + +#define XSC_PPH_HEAD_LEN 64 + +enum { + L4_PROTO_NONE = 0, + L4_PROTO_TCP = 1, + L4_PROTO_UDP = 2, + L4_PROTO_ICMP = 3, + L4_PROTO_GRE = 4, +}; + +enum { + L3_PROTO_NONE = 0, + L3_PROTO_IP = 2, + L3_PROTO_IP6 = 3, +}; + +struct epp_pph { + u16 outer_eth_type; //2 bytes + u16 inner_eth_type; //4 bytes + + u16 rsv1:1; + u16 outer_vlan_flag:2; + u16 outer_ip_type:2; + u16 outer_ip_ofst:5; + u16 outer_ip_len:6; //6 bytes + + u16 rsv2:1; + u16 outer_tp_type:3; + u16 outer_tp_csum_flag:1; + u16 outer_tp_ofst:7; + u16 ext_tunnel_type:4; //8 bytes + + u8 tunnel_ofst; //9 bytes + u8 inner_mac_ofst; //10 bytes + + u32 rsv3:2; + u32 inner_mac_flag:1; + u32 inner_vlan_flag:2; + u32 inner_ip_type:2; + u32 inner_ip_ofst:8; + u32 inner_ip_len:6; + u32 inner_tp_type:2; + u32 inner_tp_csum_flag:1; + u32 inner_tp_ofst:8; //14 bytees + + u16 rsv4:1; + u16 payload_type:4; + u16 payload_ofst:8; + u16 pkt_type:3; //16 bytes + + u16 rsv5:2; + u16 pri:3; + u16 logical_in_port:11; + u16 vlan_info; + u8 error_bitmap:8; //21 bytes + + u8 rsv6:7; + u8 recirc_id_vld:1; + u16 recirc_id; //24 bytes + + u8 rsv7:7; + u8 recirc_data_vld:1; + u32 recirc_data; //29 bytes + + u8 rsv8:6; + u8 mark_tag_vld:2; + u16 mark_tag; //32 bytes + + u8 rsv9:4; + u8 upa_to_soc:1; + u8 upa_from_soc:1; + u8 upa_re_up_call:1; + u8 upa_pkt_drop:1; //33 bytes + + u8 ucdv; + u16 rsv10:2; + u16 pkt_len:14; //36 bytes + + u16 rsv11:2; + u16 pkt_hdr_ptr:14; //38 bytes + + u64 rsv12:5; + u64 csum_ofst:8; + u64 csum_val:29; + u64 csum_plen:14; + u64 rsv11_0:8; //46 bytes + + u64 rsv11_1; + u64 rsv11_2; + u16 rsv11_3; +}; + +#define OUTER_L3_BIT BIT(3) +#define OUTER_L4_BIT BIT(2) +#define INNER_L3_BIT BIT(1) +#define INNER_L4_BIT BIT(0) +#define OUTER_BIT (OUTER_L3_BIT | OUTER_L4_BIT) +#define INNER_BIT (INNER_L3_BIT | INNER_L4_BIT) +#define OUTER_AND_INNER (OUTER_BIT | INNER_BIT) + +#define PACKET_UNKNOWN BIT(4) + +#define EPP2SOC_PPH_EXT_TUNNEL_TYPE_OFFSET (6UL) +#define EPP2SOC_PPH_EXT_TUNNEL_TYPE_BIT_MASK (0XF00) +#define EPP2SOC_PPH_EXT_TUNNEL_TYPE_BIT_OFFSET (8) + +#define EPP2SOC_PPH_EXT_ERROR_BITMAP_OFFSET (20UL) +#define EPP2SOC_PPH_EXT_ERROR_BITMAP_BIT_MASK (0XFF) +#define EPP2SOC_PPH_EXT_ERROR_BITMAP_BIT_OFFSET (0) + +#define XSC_GET_EPP2SOC_PPH_EXT_TUNNEL_TYPE(PPH_BASE_ADDR) \ + ((*(u16 *)((u8 *)(PPH_BASE_ADDR) + \ + EPP2SOC_PPH_EXT_TUNNEL_TYPE_OFFSET) & \ + EPP2SOC_PPH_EXT_TUNNEL_TYPE_BIT_MASK) >> \ + EPP2SOC_PPH_EXT_TUNNEL_TYPE_BIT_OFFSET) + +#define XSC_GET_EPP2SOC_PPH_ERROR_BITMAP(PPH_BASE_ADDR) \ + ((*(u8 *)((u8 *)(PPH_BASE_ADDR) + \ + EPP2SOC_PPH_EXT_ERROR_BITMAP_OFFSET) & \ + EPP2SOC_PPH_EXT_ERROR_BITMAP_BIT_MASK) >> \ + EPP2SOC_PPH_EXT_ERROR_BITMAP_BIT_OFFSET) + +#define PPH_OUTER_IP_TYPE_OFF (4UL) +#define PPH_OUTER_IP_TYPE_MASK (0x3) +#define PPH_OUTER_IP_TYPE_SHIFT (11) +#define PPH_OUTER_IP_TYPE(base) \ + ((ntohs(*(u16 *)((u8 *)(base) + 
PPH_OUTER_IP_TYPE_OFF)) >> \ + PPH_OUTER_IP_TYPE_SHIFT) & PPH_OUTER_IP_TYPE_MASK) + +#define PPH_OUTER_IP_OFST_OFF (4UL) +#define PPH_OUTER_IP_OFST_MASK (0x1f) +#define PPH_OUTER_IP_OFST_SHIFT (6) +#define PPH_OUTER_IP_OFST(base) \ + ((ntohs(*(u16 *)((u8 *)(base) + PPH_OUTER_IP_OFST_OFF)) >> \ + PPH_OUTER_IP_OFST_SHIFT) & PPH_OUTER_IP_OFST_MASK) + +#define PPH_OUTER_IP_LEN_OFF (4UL) +#define PPH_OUTER_IP_LEN_MASK (0x3f) +#define PPH_OUTER_IP_LEN_SHIFT (0) +#define PPH_OUTER_IP_LEN(base) \ + ((ntohs(*(u16 *)((u8 *)(base) + PPH_OUTER_IP_LEN_OFF)) >> \ + PPH_OUTER_IP_LEN_SHIFT) & PPH_OUTER_IP_LEN_MASK) + +#define PPH_OUTER_TP_TYPE_OFF (6UL) +#define PPH_OUTER_TP_TYPE_MASK (0x7) +#define PPH_OUTER_TP_TYPE_SHIFT (12) +#define PPH_OUTER_TP_TYPE(base) \ + ((ntohs(*(u16 *)((u8 *)(base) + PPH_OUTER_TP_TYPE_OFF)) >> \ + PPH_OUTER_TP_TYPE_SHIFT) & PPH_OUTER_TP_TYPE_MASK) + +#define PPH_PAYLOAD_OFST_OFF (14UL) +#define PPH_PAYLOAD_OFST_MASK (0xff) +#define PPH_PAYLOAD_OFST_SHIFT (3) +#define PPH_PAYLOAD_OFST(base) \ + ((ntohs(*(u16 *)((u8 *)(base) + PPH_PAYLOAD_OFST_OFF)) >> \ + PPH_PAYLOAD_OFST_SHIFT) & PPH_PAYLOAD_OFST_MASK) + +#define PPH_CSUM_OFST_OFF (38UL) +#define PPH_CSUM_OFST_MASK (0xff) +#define PPH_CSUM_OFST_SHIFT (51) +#define PPH_CSUM_OFST(base) \ + ((be64_to_cpu(*(u64 *)((u8 *)(base) + PPH_CSUM_OFST_OFF)) >> \ + PPH_CSUM_OFST_SHIFT) & PPH_CSUM_OFST_MASK) + +#define PPH_CSUM_VAL_OFF (38UL) +#define PPH_CSUM_VAL_MASK (0xeffffff) +#define PPH_CSUM_VAL_SHIFT (22) +#define PPH_CSUM_VAL(base) \ + ((be64_to_cpu(*(u64 *)((u8 *)(base) + PPH_CSUM_VAL_OFF)) >> \ + PPH_CSUM_VAL_SHIFT) & PPH_CSUM_VAL_MASK) +#endif /* __XSC_TBM_H */ + diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h new file mode 100644 index 000000000..8f33c78d8 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h @@ -0,0 +1,49 @@ +/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */ +/* + * Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + * Copyright (c) 2015-2016, Mellanox Technologies. All rights reserved. 
+ */
+
+#ifndef __XSC_QUEUE_H
+#define __XSC_QUEUE_H
+
+#include "common/xsc_core.h"
+
+struct xsc_sq {
+	struct xsc_core_qp cqp;
+	/* dirtied @completion */
+	u16 cc;
+	u32 dma_fifo_cc;
+
+	/* dirtied @xmit */
+	u16 pc ____cacheline_aligned_in_smp;
+	u32 dma_fifo_pc;
+
+	struct xsc_cq cq;
+
+	/* read only */
+	struct xsc_wq_cyc wq;
+	u32 dma_fifo_mask;
+	struct {
+		struct xsc_sq_dma *dma_fifo;
+		struct xsc_tx_wqe_info *wqe_info;
+	} db;
+	void __iomem *uar_map;
+	struct netdev_queue *txq;
+	u32 sqn;
+	u16 stop_room;
+
+	__be32 mkey_be;
+	unsigned long state;
+	unsigned int hw_mtu;
+
+	/* control path */
+	struct xsc_wq_ctrl wq_ctrl;
+	struct xsc_channel *channel;
+	int ch_ix;
+	int txq_ix;
+	struct work_struct recover_work;
+} ____cacheline_aligned_in_smp;
+
+#endif /* __XSC_QUEUE_H */

From patchwork Mon Feb 24 17:24:37 2025
X-Patchwork-Submitter: Xin Tian
X-Patchwork-Id: 13988562
X-Patchwork-Delegate: kuba@kernel.org
Subject: [PATCH net-next v5 10/14] xsc: Add eth needed qp and cq apis
From: "Xin Tian"
Date: Tue, 25 Feb 2025 01:24:37 +0800
Message-Id: <20250224172436.2455751-11-tianx@yunsilicon.com>
In-Reply-To: <20250224172416.2455751-1-tianx@yunsilicon.com>
References: <20250224172416.2455751-1-tianx@yunsilicon.com>

1. For Ethernet data transmission and reception, this patch adds APIs
   for working with QPs and CQs. For QPs it covers Create QP, Modify QP
   Status and Destroy QP; for CQs, Create CQ and Destroy CQ. Since these
   operations are common to both Ethernet and RDMA, they are added to
   the xsc_pci driver. In the xsc_eth driver, Ethernet-specific
   operations are added, including creating RSS RQs and modifying QPs.
2. Ethernet QP and CQ ring buffer allocation functions are added:
   xsc_eth_wq_cyc_create for QPs and xsc_eth_cqwq_create for CQs.
   The corresponding DMA buffer allocation functions are added in
   alloc.c.

Co-developed-by: Honggang Wei
Signed-off-by: Honggang Wei
Co-developed-by: Lei Yan
Signed-off-by: Lei Yan
Signed-off-by: Xin Tian
---
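[Reviewer note, not part of the patch: a hedged sketch of how the ETH
side is expected to call the exported CQ helpers below. The prototypes
match xsc_core.h in this patch; the layout of xsc_create_cq_mbox_in
beyond its command header is not visible in this excerpt, so that setup
is elided.]

/* Illustrative CQ create/destroy round trip (sketch only). */
static int xsc_example_cq_round_trip(struct xsc_core_device *xdev,
				     struct xsc_core_cq *xcq)
{
	struct xsc_create_cq_mbox_in *in;
	int inlen = sizeof(*in);	/* a real caller appends pas[] pages */
	int err;

	in = kvzalloc(inlen, GFP_KERNEL);
	if (!in)
		return -ENOMEM;

	/* ... fill CQ depth, EQ number and the page list here ... */

	err = xsc_core_eth_create_cq(xdev, xcq, in, inlen);
	kvfree(in);
	if (err)
		return err;

	/* teardown mirrors creation */
	return xsc_core_eth_destroy_cq(xdev, xcq);
}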
 .../ethernet/yunsilicon/xsc/common/xsc_core.h |  27 +++
 .../net/ethernet/yunsilicon/xsc/net/Makefile  |   2 +-
 .../ethernet/yunsilicon/xsc/net/xsc_eth_wq.c  |  91 +++++++++
 .../ethernet/yunsilicon/xsc/net/xsc_eth_wq.h  | 187 ++++++++++++++++++
 .../net/ethernet/yunsilicon/xsc/pci/alloc.c   | 104 ++++++++++
 drivers/net/ethernet/yunsilicon/xsc/pci/cq.c  | 116 +++++++++++
 drivers/net/ethernet/yunsilicon/xsc/pci/qp.c  | 114 +++++++++++
 7 files changed, 640 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.c
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.h

diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
index a9b22c6e8..595e2562e 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
@@ -413,9 +413,36 @@ int xsc_core_create_resource_common(struct xsc_core_device *xdev,
 				    struct xsc_core_qp *qp);
 void xsc_core_destroy_resource_common(struct xsc_core_device *xdev,
 				      struct xsc_core_qp *qp);
+int xsc_core_eth_create_qp(struct xsc_core_device *xdev,
+			   struct xsc_create_qp_mbox_in *in,
+			   int insize, u32 *p_qpn);
+int xsc_core_eth_modify_qp_status(struct xsc_core_device *xdev,
+				  u32 qpn, u16 status);
+int xsc_core_eth_destroy_qp(struct xsc_core_device *xdev, u32 qpn);
+int xsc_core_eth_create_rss_qp_rqs(struct xsc_core_device *xdev,
+				   struct xsc_create_multiqp_mbox_in *in,
+				   int insize,
+				   u32 *p_qpn_base);
+int xsc_core_eth_modify_raw_qp(struct xsc_core_device *xdev,
+			       struct xsc_modify_raw_qp_mbox_in *in);
+int xsc_core_eth_create_cq(struct xsc_core_device *xdev,
+			   struct xsc_core_cq *xcq,
+			   struct xsc_create_cq_mbox_in *in,
+			   int insize);
+int xsc_core_eth_destroy_cq(struct xsc_core_device *xdev,
+			    struct xsc_core_cq *xcq);
+
 struct xsc_eq *xsc_core_eq_get(struct xsc_core_device *xdev, int i);
 int xsc_core_vector2eqn(struct xsc_core_device *xdev, int vector, u32 *eqn,
 			unsigned int *irqn);
+void xsc_core_fill_page_frag_array(struct xsc_frag_buf *buf,
+				   __be64 *pas, unsigned int npages);
+int xsc_core_frag_buf_alloc_node(struct xsc_core_device *xdev,
+				 unsigned long size,
+				 struct xsc_frag_buf *buf,
+				 int node);
+void xsc_core_frag_buf_free(struct xsc_core_device *xdev,
+			    struct
xsc_frag_buf *buf); static inline void *xsc_buf_offset(struct xsc_buf *buf, unsigned long offset) { diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile index 2811433af..697046979 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile +++ b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile @@ -6,4 +6,4 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc_eth.o -xsc_eth-y := main.o \ No newline at end of file +xsc_eth-y := main.o xsc_eth_wq.o \ No newline at end of file diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.c new file mode 100644 index 000000000..df20bafe0 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.c @@ -0,0 +1,91 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* + * Copyright (c) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. All + * rights reserved. + * Copyright (c) 2013-2015, Mellanox Technologies, Ltd. All rights reserved. + */ + +#include "xsc_eth_wq.h" +#include "xsc_eth.h" + +u32 xsc_wq_cyc_get_size(struct xsc_wq_cyc *wq) +{ + return (u32)wq->fbc.sz_m1 + 1; +} + +static u32 wq_get_byte_sz(u8 log_sz, u8 log_stride) +{ + return ((u32)1 << log_sz) << log_stride; +} + +int xsc_eth_cqwq_create(struct xsc_core_device *xdev, + struct xsc_wq_param *param, + u8 q_log_size, + u8 ele_log_size, + struct xsc_cqwq *wq, + struct xsc_wq_ctrl *wq_ctrl) +{ + u8 log_wq_stride = ele_log_size; + u8 log_wq_sz = q_log_size; + int err; + + err = xsc_core_frag_buf_alloc_node(xdev, + wq_get_byte_sz(log_wq_sz, + log_wq_stride), + &wq_ctrl->buf, + param->buf_numa_node); + if (err) { + netdev_err(((struct xsc_adapter *)xdev->eth_priv)->netdev, + "xsc_core_frag_buf_alloc_node failed, %d\n", err); + goto err; + } + + xsc_init_fbc(wq_ctrl->buf.frags, log_wq_stride, log_wq_sz, &wq->fbc); + + wq_ctrl->xdev = xdev; + + return 0; + +err: + return err; +} + +int xsc_eth_wq_cyc_create(struct xsc_core_device *xdev, + struct xsc_wq_param *param, + u8 q_log_size, + u8 ele_log_size, + struct xsc_wq_cyc *wq, + struct xsc_wq_ctrl *wq_ctrl) +{ + struct xsc_frag_buf_ctrl *fbc = &wq->fbc; + u8 log_wq_stride = ele_log_size; + u8 log_wq_sz = q_log_size; + int err; + + err = xsc_core_frag_buf_alloc_node(xdev, + wq_get_byte_sz(log_wq_sz, + log_wq_stride), + &wq_ctrl->buf, + param->buf_numa_node); + if (err) { + netdev_err(((struct xsc_adapter *)xdev->eth_priv)->netdev, + "xsc_core_frag_buf_alloc_node failed, %d\n", err); + goto err; + } + + xsc_init_fbc(wq_ctrl->buf.frags, log_wq_stride, log_wq_sz, fbc); + wq->sz = xsc_wq_cyc_get_size(wq); + + wq_ctrl->xdev = xdev; + + return 0; + +err: + return err; +} + +void xsc_eth_wq_destroy(struct xsc_wq_ctrl *wq_ctrl) +{ + xsc_core_frag_buf_free(wq_ctrl->xdev, &wq_ctrl->buf); +} + diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.h new file mode 100644 index 000000000..5abe155c0 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_wq.h @@ -0,0 +1,187 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (c) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. All + * rights reserved. + * Copyright (c) 2013-2015, Mellanox Technologies, Ltd. All rights reserved. 
+ */ + +#ifndef __XSC_WQ_H +#define __XSC_WQ_H + +#include "common/xsc_core.h" + +struct xsc_wq_param { + int buf_numa_node; + int db_numa_node; +}; + +struct xsc_wq_ctrl { + struct xsc_core_device *xdev; + struct xsc_frag_buf buf; +}; + +struct xsc_wq_cyc { + struct xsc_frag_buf_ctrl fbc; + u16 sz; + u16 wqe_ctr; + u16 cur_sz; +}; + +struct xsc_cqwq { + struct xsc_frag_buf_ctrl fbc; + __be32 *db; + u32 cc; /* consumer counter */ +}; + +enum xsc_res_type { + XSC_RES_UND = 0, + XSC_RES_RQ, + XSC_RES_SQ, + XSC_RES_MAX, +}; + +u32 xsc_wq_cyc_get_size(struct xsc_wq_cyc *wq); + +/*api for eth driver*/ +int xsc_eth_cqwq_create(struct xsc_core_device *xdev, + struct xsc_wq_param *param, + u8 q_log_size, + u8 ele_log_size, + struct xsc_cqwq *wq, + struct xsc_wq_ctrl *wq_ctrl); + +int xsc_eth_wq_cyc_create(struct xsc_core_device *xdev, + struct xsc_wq_param *param, + u8 q_log_size, + u8 ele_log_size, + struct xsc_wq_cyc *wq, + struct xsc_wq_ctrl *wq_ctrl); +void xsc_eth_wq_destroy(struct xsc_wq_ctrl *wq_ctrl); + +static inline void xsc_init_fbc_offset(struct xsc_buf_list *frags, + u8 log_stride, u8 log_sz, + u16 strides_offset, + struct xsc_frag_buf_ctrl *fbc) +{ + fbc->frags = frags; + fbc->log_stride = log_stride; + fbc->log_sz = log_sz; + fbc->sz_m1 = (1 << fbc->log_sz) - 1; + fbc->log_frag_strides = PAGE_SHIFT - fbc->log_stride; + fbc->frag_sz_m1 = (1 << fbc->log_frag_strides) - 1; + fbc->strides_offset = strides_offset; +} + +static inline void xsc_init_fbc(struct xsc_buf_list *frags, + u8 log_stride, u8 log_sz, + struct xsc_frag_buf_ctrl *fbc) +{ + xsc_init_fbc_offset(frags, log_stride, log_sz, 0, fbc); +} + +static inline void *xsc_frag_buf_get_wqe(struct xsc_frag_buf_ctrl *fbc, + u32 ix) +{ + unsigned int frag; + + ix += fbc->strides_offset; + frag = ix >> fbc->log_frag_strides; + + return fbc->frags[frag].buf + + ((fbc->frag_sz_m1 & ix) << fbc->log_stride); +} + +static inline u32 +xsc_frag_buf_get_idx_last_contig_stride(struct xsc_frag_buf_ctrl *fbc, u32 ix) +{ + u32 last_frag_stride_idx = (ix + fbc->strides_offset) | fbc->frag_sz_m1; + + return min_t(u32, last_frag_stride_idx - fbc->strides_offset, + fbc->sz_m1); +} + +static inline int xsc_wq_cyc_missing(struct xsc_wq_cyc *wq) +{ + return wq->sz - wq->cur_sz; +} + +static inline int xsc_wq_cyc_is_empty(struct xsc_wq_cyc *wq) +{ + return !wq->cur_sz; +} + +static inline void xsc_wq_cyc_push(struct xsc_wq_cyc *wq) +{ + wq->wqe_ctr++; + wq->cur_sz++; +} + +static inline void xsc_wq_cyc_push_n(struct xsc_wq_cyc *wq, u8 n) +{ + wq->wqe_ctr += n; + wq->cur_sz += n; +} + +static inline void xsc_wq_cyc_pop(struct xsc_wq_cyc *wq) +{ + wq->cur_sz--; +} + +static inline u16 xsc_wq_cyc_ctr2ix(struct xsc_wq_cyc *wq, u16 ctr) +{ + return ctr & wq->fbc.sz_m1; +} + +static inline u16 xsc_wq_cyc_get_head(struct xsc_wq_cyc *wq) +{ + return xsc_wq_cyc_ctr2ix(wq, wq->wqe_ctr); +} + +static inline u16 xsc_wq_cyc_get_tail(struct xsc_wq_cyc *wq) +{ + return xsc_wq_cyc_ctr2ix(wq, wq->wqe_ctr - wq->cur_sz); +} + +static inline void *xsc_wq_cyc_get_wqe(struct xsc_wq_cyc *wq, u16 ix) +{ + return xsc_frag_buf_get_wqe(&wq->fbc, ix); +} + +static inline u32 xsc_cqwq_ctr2ix(struct xsc_cqwq *wq, u32 ctr) +{ + return ctr & wq->fbc.sz_m1; +} + +static inline u32 xsc_cqwq_get_ci(struct xsc_cqwq *wq) +{ + return xsc_cqwq_ctr2ix(wq, wq->cc); +} + +static inline u32 xsc_cqwq_get_ctr_wrap_cnt(struct xsc_cqwq *wq, u32 ctr) +{ + return ctr >> wq->fbc.log_sz; +} + +static inline u32 xsc_cqwq_get_wrap_cnt(struct xsc_cqwq *wq) +{ + return xsc_cqwq_get_ctr_wrap_cnt(wq, wq->cc); +} 
+ +static inline void xsc_cqwq_pop(struct xsc_cqwq *wq) +{ + wq->cc++; +} + +static inline u32 xsc_cqwq_get_size(struct xsc_cqwq *wq) +{ + return wq->fbc.sz_m1 + 1; +} + +static inline struct xsc_cqe *xsc_cqwq_get_wqe(struct xsc_cqwq *wq, u32 ix) +{ + struct xsc_cqe *cqe = xsc_frag_buf_get_wqe(&wq->fbc, ix); + + return cqe; +} + +#endif /* __XSC_WQ_H */ diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.c b/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.c index 4c561d9fa..eeb684654 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.c +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/alloc.c @@ -128,3 +128,107 @@ void xsc_fill_page_array(struct xsc_buf *buf, __be64 *pas, unsigned int npages) pas[i] = cpu_to_be64(addr); } } + +void xsc_core_fill_page_frag_array(struct xsc_frag_buf *buf, + __be64 *pas, + unsigned int npages) +{ + int i; + dma_addr_t addr; + unsigned int shift = PAGE_SHIFT - PAGE_SHIFT_4K; + unsigned int mask = (1 << shift) - 1; + + for (i = 0; i < npages; i++) { + addr = buf->frags[i >> shift].map + + ((i & mask) << PAGE_SHIFT_4K); + pas[i] = cpu_to_be64(addr); + } +} +EXPORT_SYMBOL(xsc_core_fill_page_frag_array); + +static void *xsc_dma_zalloc_coherent_node(struct xsc_core_device *xdev, + size_t size, dma_addr_t *dma_handle, + int node) +{ + struct xsc_dev_resource *dev_res = xdev->dev_res; + struct device *device = &xdev->pdev->dev; + int original_node; + void *cpu_handle; + + /* WA for kernels that don't use numa_mem_id in alloc_pages_node */ + if (node == NUMA_NO_NODE) + node = numa_mem_id(); + + mutex_lock(&dev_res->alloc_mutex); + original_node = dev_to_node(device); + set_dev_node(device, node); + cpu_handle = dma_alloc_coherent(device, size, dma_handle, + GFP_KERNEL); + set_dev_node(device, original_node); + mutex_unlock(&dev_res->alloc_mutex); + return cpu_handle; +} + +int xsc_core_frag_buf_alloc_node(struct xsc_core_device *xdev, + unsigned long size, + struct xsc_frag_buf *buf, + int node) +{ + int i; + + buf->size = size; + buf->npages = DIV_ROUND_UP(size, PAGE_SIZE); + buf->page_shift = PAGE_SHIFT; + buf->frags = kcalloc(buf->npages, sizeof(struct xsc_buf_list), + GFP_KERNEL); + if (!buf->frags) + goto err_out; + + for (i = 0; i < buf->npages; i++) { + unsigned long frag_sz = min_t(unsigned long, size, PAGE_SIZE); + struct xsc_buf_list *frag = &buf->frags[i]; + + frag->buf = xsc_dma_zalloc_coherent_node(xdev, frag_sz, + &frag->map, node); + if (!frag->buf) + goto err_free_buf; + if (frag->map & ((1 << buf->page_shift) - 1)) { + dma_free_coherent(&xdev->pdev->dev, frag_sz, + buf->frags[i].buf, buf->frags[i].map); + pci_err(xdev->pdev, "unexpected map alignment: %pad, page_shift=%d\n", + &frag->map, buf->page_shift); + goto err_free_buf; + } + size -= frag_sz; + } + + return 0; + +err_free_buf: + while (i--) + dma_free_coherent(&xdev->pdev->dev, + PAGE_SIZE, + buf->frags[i].buf, + buf->frags[i].map); + kfree(buf->frags); +err_out: + return -ENOMEM; +} +EXPORT_SYMBOL(xsc_core_frag_buf_alloc_node); + +void xsc_core_frag_buf_free(struct xsc_core_device *xdev, + struct xsc_frag_buf *buf) +{ + unsigned long size = buf->size; + int i; + + for (i = 0; i < buf->npages; i++) { + unsigned long frag_sz = min_t(unsigned long, size, PAGE_SIZE); + + dma_free_coherent(&xdev->pdev->dev, frag_sz, buf->frags[i].buf, + buf->frags[i].map); + size -= frag_sz; + } + kfree(buf->frags); +} +EXPORT_SYMBOL(xsc_core_frag_buf_free); diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/cq.c b/drivers/net/ethernet/yunsilicon/xsc/pci/cq.c index 5cff9025c..493d846cf 100644 --- 
a/drivers/net/ethernet/yunsilicon/xsc/pci/cq.c +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/cq.c @@ -4,6 +4,7 @@ */ #include "common/xsc_core.h" +#include "common/xsc_driver.h" #include "cq.h" void xsc_cq_event(struct xsc_core_device *xdev, u32 cqn, int event_type) @@ -37,3 +38,118 @@ void xsc_init_cq_table(struct xsc_core_device *xdev) spin_lock_init(&table->lock); INIT_RADIX_TREE(&table->tree, GFP_ATOMIC); } + +static int xsc_create_cq(struct xsc_core_device *xdev, u32 *p_cqn, + struct xsc_create_cq_mbox_in *in, int insize) +{ + struct xsc_create_cq_mbox_out out; + int ret; + + memset(&out, 0, sizeof(out)); + in->hdr.opcode = cpu_to_be16(XSC_CMD_OP_CREATE_CQ); + ret = xsc_cmd_exec(xdev, in, insize, &out, sizeof(out)); + if (ret || out.hdr.status) { + pci_err(xdev->pdev, "failed to create cq, err=%d out.status=%u\n", + ret, out.hdr.status); + return -ENOEXEC; + } + + *p_cqn = be32_to_cpu(out.cqn) & 0xffffff; + return 0; +} + +static int xsc_destroy_cq(struct xsc_core_device *xdev, u32 cqn) +{ + struct xsc_destroy_cq_mbox_out out; + struct xsc_destroy_cq_mbox_in in; + int ret; + + memset(&in, 0, sizeof(in)); + memset(&out, 0, sizeof(out)); + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_DESTROY_CQ); + in.cqn = cpu_to_be32(cqn); + ret = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out)); + if (ret || out.hdr.status) { + pci_err(xdev->pdev, "failed to destroy cq, err=%d out.status=%u\n", + ret, out.hdr.status); + return -ENOEXEC; + } + + return 0; +} + +int xsc_core_eth_create_cq(struct xsc_core_device *xdev, + struct xsc_core_cq *xcq, + struct xsc_create_cq_mbox_in *in, + int insize) +{ + struct xsc_cq_table *table = &xdev->dev_res->cq_table; + u32 cqn; + int ret; + int err; + + ret = xsc_create_cq(xdev, &cqn, in, insize); + if (ret) { + pci_err(xdev->pdev, "xsc_create_cq failed\n"); + return -ENOEXEC; + } + xcq->cqn = cqn; + xcq->cons_index = 0; + xcq->arm_sn = 0; + atomic_set(&xcq->refcount, 1); + init_completion(&xcq->free); + + spin_lock_irq(&table->lock); + ret = radix_tree_insert(&table->tree, xcq->cqn, xcq); + spin_unlock_irq(&table->lock); + if (ret) + goto err_insert_cq; + return 0; +err_insert_cq: + err = xsc_destroy_cq(xdev, cqn); + if (err) + pci_err(xdev->pdev, "failed to destroy cqn=%d, err=%d\n", + xcq->cqn, err); + return ret; +} +EXPORT_SYMBOL(xsc_core_eth_create_cq); + +int xsc_core_eth_destroy_cq(struct xsc_core_device *xdev, + struct xsc_core_cq *xcq) +{ + struct xsc_cq_table *table = &xdev->dev_res->cq_table; + struct xsc_core_cq *tmp; + int err; + + spin_lock_irq(&table->lock); + tmp = radix_tree_delete(&table->tree, xcq->cqn); + spin_unlock_irq(&table->lock); + if (!tmp) { + err = -ENOENT; + goto err_delete_cq; + } + + if (tmp != xcq) { + err = -EINVAL; + goto err_delete_cq; + } + + err = xsc_destroy_cq(xdev, xcq->cqn); + if (err) + goto err_destroy_cq; + + if (atomic_dec_and_test(&xcq->refcount)) + complete(&xcq->free); + wait_for_completion(&xcq->free); + return 0; + +err_destroy_cq: + pci_err(xdev->pdev, "failed to destroy cqn=%d, err=%d\n", + xcq->cqn, err); + return err; +err_delete_cq: + pci_err(xdev->pdev, "cqn=%d not found in tree, err=%d\n", + xcq->cqn, err); + return err; +} +EXPORT_SYMBOL(xsc_core_eth_destroy_cq); diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/qp.c b/drivers/net/ethernet/yunsilicon/xsc/pci/qp.c index cc79eaf92..61746e057 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/qp.c +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/qp.c @@ -9,6 +9,7 @@ #include #include "common/xsc_core.h" +#include "common/xsc_driver.h" #include "qp.h" int 
xsc_core_create_resource_common(struct xsc_core_device *xdev, @@ -78,3 +79,116 @@ void xsc_init_qp_table(struct xsc_core_device *xdev) spin_lock_init(&table->lock); INIT_RADIX_TREE(&table->tree, GFP_ATOMIC); } + +int xsc_core_eth_create_qp(struct xsc_core_device *xdev, + struct xsc_create_qp_mbox_in *in, + int insize, u32 *p_qpn) +{ + struct xsc_create_qp_mbox_out out; + int ret; + + in->hdr.opcode = cpu_to_be16(XSC_CMD_OP_CREATE_QP); + ret = xsc_cmd_exec(xdev, in, insize, &out, sizeof(out)); + if (ret || out.hdr.status) { + pci_err(xdev->pdev, "failed to create sq, err=%d out.status=%u\n", + ret, out.hdr.status); + return -ENOEXEC; + } + + *p_qpn = be32_to_cpu(out.qpn) & 0xffffff; + + return 0; +} +EXPORT_SYMBOL(xsc_core_eth_create_qp); + +int xsc_core_eth_modify_qp_status(struct xsc_core_device *xdev, + u32 qpn, u16 status) +{ + struct xsc_modify_qp_mbox_out out; + struct xsc_modify_qp_mbox_in in; + int ret = 0; + + in.hdr.opcode = cpu_to_be16(status); + in.qpn = cpu_to_be32(qpn); + in.no_need_wait = 1; + + ret = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out)); + if (ret || out.hdr.status != 0) { + pci_err(xdev->pdev, "failed to modify qp %u status=%u, err=%d out.status %u\n", + qpn, status, ret, out.hdr.status); + ret = -ENOEXEC; + } + + return ret; +} +EXPORT_SYMBOL_GPL(xsc_core_eth_modify_qp_status); + +int xsc_core_eth_destroy_qp(struct xsc_core_device *xdev, u32 qpn) +{ + struct xsc_destroy_qp_mbox_out out; + struct xsc_destroy_qp_mbox_in in; + int err; + + err = xsc_core_eth_modify_qp_status(xdev, qpn, XSC_CMD_OP_2RST_QP); + if (err) { + pci_err(xdev->pdev, "failed to set sq%d status=rst, err=%d\n", + qpn, err); + return err; + } + + memset(&in, 0, sizeof(in)); + memset(&out, 0, sizeof(out)); + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_DESTROY_QP); + in.qpn = cpu_to_be32(qpn); + err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out)); + if (err || out.hdr.status) { + pci_err(xdev->pdev, "failed to destroy sq%d, err=%d out.status=%u\n", + qpn, err, out.hdr.status); + return -ENOEXEC; + } + + return 0; +} +EXPORT_SYMBOL(xsc_core_eth_destroy_qp); + +int xsc_core_eth_modify_raw_qp(struct xsc_core_device *xdev, + struct xsc_modify_raw_qp_mbox_in *in) +{ + struct xsc_modify_raw_qp_mbox_out out; + int ret; + + in->hdr.opcode = cpu_to_be16(XSC_CMD_OP_MODIFY_RAW_QP); + + ret = xsc_cmd_exec(xdev, in, sizeof(struct xsc_modify_raw_qp_mbox_in), + &out, sizeof(struct xsc_modify_raw_qp_mbox_out)); + if (ret || out.hdr.status) { + pci_err(xdev->pdev, "failed to modify sq, err=%d out.status=%u\n", + ret, out.hdr.status); + return -ENOEXEC; + } + + return 0; +} +EXPORT_SYMBOL(xsc_core_eth_modify_raw_qp); + +int xsc_core_eth_create_rss_qp_rqs(struct xsc_core_device *xdev, + struct xsc_create_multiqp_mbox_in *in, + int insize, + u32 *p_qpn_base) +{ + struct xsc_create_multiqp_mbox_out out; + int ret; + + in->hdr.opcode = cpu_to_be16(XSC_CMD_OP_CREATE_MULTI_QP); + ret = xsc_cmd_exec(xdev, in, insize, &out, sizeof(out)); + if (ret || out.hdr.status) { + pci_err(xdev->pdev, + "failed to create rss rq, qp_num=%d, type=%d, err=%d out.status=%u\n", + in->qp_num, in->qp_type, ret, out.hdr.status); + return -ENOEXEC; + } + + *p_qpn_base = be32_to_cpu(out.qpn_base) & 0xffffff; + return 0; +} +EXPORT_SYMBOL(xsc_core_eth_create_rss_qp_rqs); From patchwork Mon Feb 24 17:24:39 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xin Tian X-Patchwork-Id: 13988563 X-Patchwork-Delegate: kuba@kernel.org Received: from lf-2-45.ptr.blmpb.com 
(lf-2-45.ptr.blmpb.com [101.36.218.45]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0140D1537C8 for ; Mon, 24 Feb 2025 17:25:56 +0000 (UTC)
From: "Xin Tian"
Date: Tue, 25 Feb 2025 01:24:39 +0800
Subject: [PATCH net-next v5 11/14] xsc: ndo_open and ndo_stop
Message-Id: <20250224172438.2455751-12-tianx@yunsilicon.com>
In-Reply-To: <20250224172416.2455751-1-tianx@yunsilicon.com>
References: <20250224172416.2455751-1-tianx@yunsilicon.com>

This patch implements ndo_open and ndo_stop. The main task is to
initialize the channel resources. A channel is a data transmission
path. Each CPU is bound to a channel, and each channel is associated
with an interrupt. A channel consists of one RX queue and multiple TX
queues, with the number of TX queues corresponding to the number of
traffic classes. The TX queues are used only for transmission, and the
RX queue only for reception. Each queue pair (TX queue and RX queue) is
bound to a completion queue (CQ). Each channel also has its own NAPI
context, as sketched below.
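[Reviewer note, not part of the patch: a condensed, illustrative sketch
of the per-channel layout described above. The real structures
(struct xsc_channel, referenced from xsc_queue.h, plus the RQ/SQ types)
are introduced by this patch and its follow-ups; field names not visible
in this excerpt are assumptions.]

#include <linux/netdevice.h>	/* struct napi_struct */

struct xsc_channel_sketch {
	int chl_idx;			/* one channel per CPU / IRQ vector */
	struct napi_struct napi;	/* per-channel NAPI context */
	struct xsc_rq *rq;		/* one RX queue, receive only */
	struct xsc_sq *sq;		/* num_tc TX queues, one per TC */
	u8 num_tc;
	/* each SQ and the RQ is bound to its own completion queue (CQ) */
};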
This patch also adds event handling functionality: a workqueue and an
event handler are added to process asynchronous events from the
hardware.

Co-developed-by: Honggang Wei
Signed-off-by: Honggang Wei
Co-developed-by: Lei Yan
Signed-off-by: Lei Yan
Signed-off-by: Xin Tian
---
 .../ethernet/yunsilicon/xsc/common/xsc_core.h |   44 +
 .../yunsilicon/xsc/common/xsc_device.h        |   35 +
 .../net/ethernet/yunsilicon/xsc/net/Makefile  |    2 +-
 .../net/ethernet/yunsilicon/xsc/net/main.c    | 1541 ++++++++++++++++-
 .../net/ethernet/yunsilicon/xsc/net/xsc_eth.h |    8 +
 .../yunsilicon/xsc/net/xsc_eth_common.h       |  144 ++
 .../ethernet/yunsilicon/xsc/net/xsc_eth_rx.c  |   48 +
 .../yunsilicon/xsc/net/xsc_eth_txrx.c         |  100 ++
 .../yunsilicon/xsc/net/xsc_eth_txrx.h         |   26 +
 .../ethernet/yunsilicon/xsc/net/xsc_queue.h   |  148 ++
 .../net/ethernet/yunsilicon/xsc/pci/Makefile  |    2 +-
 .../net/ethernet/yunsilicon/xsc/pci/vport.c   |   32 +
 12 files changed, 2126 insertions(+), 4 deletions(-)
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.c
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h
 create mode 100644 drivers/net/ethernet/yunsilicon/xsc/pci/vport.c

diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
index 595e2562e..c5c7608be 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
@@ -147,6 +147,34 @@ enum xsc_event {
 	XSC_EVENT_TYPE_WQ_ACCESS_ERROR = 0x11,//IBV_EVENT_QP_ACCESS_ERR
 };
 
+struct xsc_cqe {
+	__le32 data0;
+#define XSC_CQE_MSG_OPCD_MASK	GENMASK(7, 0)
+#define XSC_CQE_ERR_CODE_MASK	GENMASK(6, 0)
+#define XSC_CQE_IS_ERR		BIT(7)
+#define XSC_CQE_QP_ID_MASK	GENMASK(22, 8)
+#define XSC_CQE_SE		BIT(24)
+#define XSC_CQE_HAS_PPH		BIT(25)
+#define XSC_CQE_TYPE		BIT(26)
+#define XSC_CQE_WITH_IMM	BIT(27)
+#define XSC_CQE_CSUM_ERR_MASK	GENMASK(31, 28)
+
+	__le32 imm_data;
+	__le32 msg_len;
+	__le32 vni;
+	__le32 ts_l;
+	__le16 ts_h;
+	__le16 wqe_id;
+	__le32 rsv;
+	__le32 data1;
+#define XSC_CQE_OWNER	BIT(31)
+};
+
+// cq doorbell bitfields
+#define XSC_CQ_DB_NEXT_CID_MASK	GENMASK(15, 0)
+#define XSC_CQ_DB_CQ_ID_MASK	GENMASK(30, 16)
+#define XSC_CQ_DB_ARM		BIT(31)
+
 struct xsc_core_cq {
 	u32 cqn;
 	u32 cqe_sz;
@@ -379,6 +407,8 @@ struct xsc_core_device {
 	int bar_num;
 
 	u8 mac_port;
+	u8 pcie_no;
+	u8 pf_id;
 	u16 glb_func_id;
 
 	u16 msix_vec_base;
@@ -407,6 +437,8 @@ struct xsc_core_device {
 	u32 fw_version_tweak;
 	u8 fw_version_extra_flag;
 	cpumask_var_t xps_cpumask;
+
+	u8 user_mode;
 };
 
 int xsc_core_create_resource_common(struct xsc_core_device *xdev,
@@ -444,6 +476,8 @@ int xsc_core_frag_buf_alloc_node(struct xsc_core_device *xdev,
 void xsc_core_frag_buf_free(struct xsc_core_device *xdev,
 			    struct xsc_frag_buf *buf);
 
+u8 xsc_core_query_vport_state(struct xsc_core_device *xdev, u16 vport);
+
 static inline void *xsc_buf_offset(struct xsc_buf *buf, unsigned long offset)
 {
 	if (likely(buf->nbufs == 1))
@@ -458,4 +492,14 @@ static inline bool xsc_fw_is_available(struct xsc_core_device *xdev)
 	return xdev->cmd.cmd_status == XSC_CMD_STATUS_NORMAL;
 }
 
+static inline void xsc_set_user_mode(struct xsc_core_device *xdev, u8 mode)
+{
+	xdev->user_mode = mode;
+}
+
+static inline u8 xsc_get_user_mode(struct xsc_core_device *xdev)
+{
+	return xdev->user_mode;
+}
+
 #endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_device.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_device.h
index 45ea8d2a0
100644 --- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_device.h +++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_device.h @@ -6,6 +6,22 @@ #ifndef __XSC_DEVICE_H #define __XSC_DEVICE_H +#include +#include + +/* QP type */ +enum { + XSC_QUEUE_TYPE_RDMA_RC = 0, + XSC_QUEUE_TYPE_RDMA_MAD = 1, + XSC_QUEUE_TYPE_RAW = 2, + XSC_QUEUE_TYPE_VIRTIO_NET = 3, + XSC_QUEUE_TYPE_VIRTIO_BLK = 4, + XSC_QUEUE_TYPE_RAW_TPE = 5, + XSC_QUEUE_TYPE_RAW_TSO = 6, + XSC_QUEUE_TYPE_RAW_TX = 7, + XSC_QUEUE_TYPE_INVALID = 0xFF, +}; + enum xsc_traffic_types { XSC_TT_IPV4, XSC_TT_IPV4_TCP, @@ -39,4 +55,23 @@ struct xsc_tirc_config { u32 rx_hash_fields; }; +enum { + XSC_HASH_FUNC_XOR = 0, + XSC_HASH_FUNC_TOP = 1, + XSC_HASH_FUNC_TOP_SYM = 2, + XSC_HASH_FUNC_RSV = 3, +}; + +static inline u8 xsc_hash_func_type(u8 hash_func) +{ + switch (hash_func) { + case ETH_RSS_HASH_TOP: + return XSC_HASH_FUNC_TOP; + case ETH_RSS_HASH_XOR: + return XSC_HASH_FUNC_XOR; + default: + return XSC_HASH_FUNC_TOP; + } +} + #endif diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile index 697046979..104ef5330 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile +++ b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile @@ -6,4 +6,4 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc_eth.o -xsc_eth-y := main.o xsc_eth_wq.o \ No newline at end of file +xsc_eth-y := main.o xsc_eth_wq.o xsc_eth_txrx.o xsc_eth_rx.o diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/main.c b/drivers/net/ethernet/yunsilicon/xsc/net/main.c index dd1617a84..72d91057f 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/main.c +++ b/drivers/net/ethernet/yunsilicon/xsc/net/main.c @@ -7,6 +7,7 @@ #include #include #include +#include #include "common/xsc_core.h" #include "common/xsc_driver.h" @@ -14,6 +15,7 @@ #include "common/xsc_pp.h" #include "xsc_eth_common.h" #include "xsc_eth.h" +#include "xsc_eth_txrx.h" static const struct xsc_tirc_config tirc_default_config[XSC_NUM_INDIR_TIRS] = { [XSC_TT_IPV4] = { @@ -114,6 +116,8 @@ static int xsc_eth_netdev_init(struct xsc_adapter *adapter) if (!adapter->txq2sq) goto err_out; + mutex_init(&adapter->status_lock); + adapter->workq = create_singlethread_workqueue("xsc_eth"); if (!adapter->workq) goto err_free_priv; @@ -128,11 +132,1543 @@ static int xsc_eth_netdev_init(struct xsc_adapter *adapter) return -ENOMEM; } -static int xsc_eth_close(struct net_device *netdev) +static void xsc_eth_build_queue_param(struct xsc_adapter *adapter, + struct xsc_queue_attr *attr, u8 type) +{ + struct xsc_core_device *xdev = adapter->xdev; + + if (adapter->nic_param.sq_size == 0) + adapter->nic_param.sq_size = BIT(xdev->caps.log_max_qp_depth); + if (adapter->nic_param.rq_size == 0) + adapter->nic_param.rq_size = BIT(xdev->caps.log_max_qp_depth); + + if (type == XSC_QUEUE_TYPE_EQ) { + attr->q_type = XSC_QUEUE_TYPE_EQ; + attr->ele_num = XSC_EQ_ELE_NUM; + attr->ele_size = XSC_EQ_ELE_SZ; + attr->ele_log_size = order_base_2(XSC_EQ_ELE_SZ); + attr->q_log_size = order_base_2(XSC_EQ_ELE_NUM); + } else if (type == XSC_QUEUE_TYPE_RQCQ) { + attr->q_type = XSC_QUEUE_TYPE_RQCQ; + attr->ele_num = min_t(int, XSC_RQCQ_ELE_NUM, + xdev->caps.max_cqes); + attr->ele_size = XSC_RQCQ_ELE_SZ; + attr->ele_log_size = order_base_2(XSC_RQCQ_ELE_SZ); + attr->q_log_size = order_base_2(attr->ele_num); + } else if (type == XSC_QUEUE_TYPE_SQCQ) { + attr->q_type = XSC_QUEUE_TYPE_SQCQ; + attr->ele_num = min_t(int, XSC_SQCQ_ELE_NUM, + xdev->caps.max_cqes); + 
attr->ele_size = XSC_SQCQ_ELE_SZ; + attr->ele_log_size = order_base_2(XSC_SQCQ_ELE_SZ); + attr->q_log_size = order_base_2(attr->ele_num); + } else if (type == XSC_QUEUE_TYPE_RQ) { + attr->q_type = XSC_QUEUE_TYPE_RQ; + attr->ele_num = adapter->nic_param.rq_size; + attr->ele_size = xdev->caps.recv_ds_num * XSC_RECV_WQE_DS; + attr->ele_log_size = order_base_2(attr->ele_size); + attr->q_log_size = order_base_2(attr->ele_num); + } else if (type == XSC_QUEUE_TYPE_SQ) { + attr->q_type = XSC_QUEUE_TYPE_SQ; + attr->ele_num = adapter->nic_param.sq_size; + attr->ele_size = xdev->caps.send_ds_num * XSC_SEND_WQE_DS; + attr->ele_log_size = order_base_2(attr->ele_size); + attr->q_log_size = order_base_2(attr->ele_num); + } +} + +static u32 xsc_rx_get_linear_frag_sz(u32 mtu) +{ + u32 byte_count = XSC_SW2HW_FRAG_SIZE(mtu); + + return XSC_SKB_FRAG_SZ(byte_count); +} + +static bool xsc_rx_is_linear_skb(u32 mtu) +{ + u32 linear_frag_sz = xsc_rx_get_linear_frag_sz(mtu); + + return linear_frag_sz <= PAGE_SIZE; +} + +static u32 xsc_get_rq_frag_info(struct xsc_rq_frags_info *frags_info, u32 mtu) +{ + u32 byte_count = XSC_SW2HW_FRAG_SIZE(mtu); + int frag_stride; + int i = 0; + + if (xsc_rx_is_linear_skb(mtu)) { + frag_stride = xsc_rx_get_linear_frag_sz(mtu); + frag_stride = roundup_pow_of_two(frag_stride); + + frags_info->arr[0].frag_size = byte_count; + frags_info->arr[0].frag_stride = frag_stride; + frags_info->num_frags = 1; + frags_info->wqe_bulk = PAGE_SIZE / frag_stride; + frags_info->wqe_bulk_min = frags_info->wqe_bulk; + goto out; + } + + if (byte_count <= DEFAULT_FRAG_SIZE) { + frags_info->arr[0].frag_size = DEFAULT_FRAG_SIZE; + frags_info->arr[0].frag_stride = DEFAULT_FRAG_SIZE; + frags_info->num_frags = 1; + } else if (byte_count <= PAGE_SIZE_4K) { + frags_info->arr[0].frag_size = PAGE_SIZE_4K; + frags_info->arr[0].frag_stride = PAGE_SIZE_4K; + frags_info->num_frags = 1; + } else if (byte_count <= (PAGE_SIZE_4K + DEFAULT_FRAG_SIZE)) { + if (PAGE_SIZE < 2 * PAGE_SIZE_4K) { + frags_info->arr[0].frag_size = PAGE_SIZE_4K; + frags_info->arr[0].frag_stride = PAGE_SIZE_4K; + frags_info->arr[1].frag_size = PAGE_SIZE_4K; + frags_info->arr[1].frag_stride = PAGE_SIZE_4K; + frags_info->num_frags = 2; + } else { + frags_info->arr[0].frag_size = 2 * PAGE_SIZE_4K; + frags_info->arr[0].frag_stride = 2 * PAGE_SIZE_4K; + frags_info->num_frags = 1; + } + } else if (byte_count <= 2 * PAGE_SIZE_4K) { + if (PAGE_SIZE < 2 * PAGE_SIZE_4K) { + frags_info->arr[0].frag_size = PAGE_SIZE_4K; + frags_info->arr[0].frag_stride = PAGE_SIZE_4K; + frags_info->arr[1].frag_size = PAGE_SIZE_4K; + frags_info->arr[1].frag_stride = PAGE_SIZE_4K; + frags_info->num_frags = 2; + } else { + frags_info->arr[0].frag_size = 2 * PAGE_SIZE_4K; + frags_info->arr[0].frag_stride = 2 * PAGE_SIZE_4K; + frags_info->num_frags = 1; + } + } else { + if (PAGE_SIZE < 4 * PAGE_SIZE_4K) { + frags_info->num_frags = + roundup(byte_count, + PAGE_SIZE_4K) / PAGE_SIZE_4K; + for (i = 0; i < frags_info->num_frags; i++) { + frags_info->arr[i].frag_size = PAGE_SIZE_4K; + frags_info->arr[i].frag_stride = PAGE_SIZE_4K; + } + } else { + frags_info->arr[0].frag_size = 4 * PAGE_SIZE_4K; + frags_info->arr[0].frag_stride = 4 * PAGE_SIZE_4K; + frags_info->num_frags = 1; + } + } + + if (PAGE_SIZE <= PAGE_SIZE_4K) { + frags_info->wqe_bulk_min = 4; + frags_info->wqe_bulk = max_t(u8, frags_info->wqe_bulk_min, 8); + } else if (PAGE_SIZE <= 2 * PAGE_SIZE_4K) { + frags_info->wqe_bulk = 2; + frags_info->wqe_bulk_min = frags_info->wqe_bulk; + } else { + frags_info->wqe_bulk = + PAGE_SIZE / 
+ (frags_info->num_frags * frags_info->arr[0].frag_size); + frags_info->wqe_bulk_min = frags_info->wqe_bulk; + } + +out: + frags_info->log_num_frags = order_base_2(frags_info->num_frags); + + return frags_info->num_frags * frags_info->arr[0].frag_size; +} + +static void xsc_build_rq_frags_info(struct xsc_queue_attr *attr, + struct xsc_rq_frags_info *frags_info, + struct xsc_eth_params *params) +{ + params->rq_frags_size = xsc_get_rq_frag_info(frags_info, params->mtu); + frags_info->frags_max_num = attr->ele_size / XSC_RECV_WQE_DS; +} + +static void xsc_eth_build_channel_param(struct xsc_adapter *adapter, + struct xsc_channel_param *chl_param) +{ + xsc_eth_build_queue_param(adapter, &chl_param->rqcq_param.cq_attr, + XSC_QUEUE_TYPE_RQCQ); + chl_param->rqcq_param.wq.buf_numa_node = dev_to_node(adapter->dev); + + xsc_eth_build_queue_param(adapter, &chl_param->sqcq_param.cq_attr, + XSC_QUEUE_TYPE_SQCQ); + chl_param->sqcq_param.wq.buf_numa_node = dev_to_node(adapter->dev); + + xsc_eth_build_queue_param(adapter, &chl_param->sq_param.sq_attr, + XSC_QUEUE_TYPE_SQ); + chl_param->sq_param.wq.buf_numa_node = dev_to_node(adapter->dev); + + xsc_eth_build_queue_param(adapter, &chl_param->rq_param.rq_attr, + XSC_QUEUE_TYPE_RQ); + chl_param->rq_param.wq.buf_numa_node = dev_to_node(adapter->dev); + + xsc_build_rq_frags_info(&chl_param->rq_param.rq_attr, + &chl_param->rq_param.frags_info, + &adapter->nic_param); +} + +static void xsc_eth_cq_error_event(struct xsc_core_cq *xcq, + enum xsc_event event) +{ + struct xsc_cq *xsc_cq = container_of(xcq, struct xsc_cq, xcq); + struct xsc_core_device *xdev = xsc_cq->xdev; + + if (event != XSC_EVENT_TYPE_CQ_ERROR) { + netdev_err(((struct xsc_adapter *)xdev->eth_priv)->netdev, + "Unexpected event type %d on CQ %06x\n", + event, xcq->cqn); + return; + } + + netdev_err(((struct xsc_adapter *)xdev->eth_priv)->netdev, + "Eth catch CQ ERROR: %x, cqn: %d\n", event, xcq->cqn); +} + +static void xsc_eth_completion_event(struct xsc_core_cq *xcq) +{ + struct xsc_cq *cq = container_of(xcq, struct xsc_cq, xcq); + struct xsc_core_device *xdev = cq->xdev; + struct xsc_rq *rq = NULL; + + if (unlikely(!cq->channel)) { + netdev_err(((struct xsc_adapter *)xdev->eth_priv)->netdev, + "cq%d->channel is null\n", xcq->cqn); + return; + } + + rq = &cq->channel->qp.rq[0]; + + set_bit(XSC_CHANNEL_NAPI_SCHED, &cq->channel->flags); + + if (!test_bit(XSC_ETH_RQ_STATE_ENABLED, &rq->state)) + netdev_err(((struct xsc_adapter *)xdev->eth_priv)->netdev, + "ch%d_cq%d, napi_flag=0x%lx\n", + cq->channel->chl_idx, xcq->cqn, + cq->napi->state); + + napi_schedule(cq->napi); + cq->event_ctr++; +} + +static int xsc_eth_alloc_cq(struct xsc_channel *c, struct xsc_cq *pcq, + struct xsc_cq_param *pcq_param) +{ + u8 ele_log_size = pcq_param->cq_attr.ele_log_size; + struct xsc_core_device *xdev = c->adapter->xdev; + u8 q_log_size = pcq_param->cq_attr.q_log_size; + struct xsc_core_cq *core_cq = &pcq->xcq; + int ret; + u32 i; + + pcq_param->wq.db_numa_node = cpu_to_node(c->cpu); + pcq_param->wq.buf_numa_node = cpu_to_node(c->cpu); + + ret = xsc_eth_cqwq_create(xdev, &pcq_param->wq, + q_log_size, ele_log_size, &pcq->wq, + &pcq->wq_ctrl); + if (ret) + return ret; + + core_cq->cqe_sz = pcq_param->cq_attr.ele_num; + core_cq->comp = xsc_eth_completion_event; + core_cq->event = xsc_eth_cq_error_event; + core_cq->vector = c->chl_idx; + + for (i = 0; i < xsc_cqwq_get_size(&pcq->wq); i++) { + struct xsc_cqe *cqe = xsc_cqwq_get_wqe(&pcq->wq, i); + + cqe->data1 |= cpu_to_le32(XSC_CQE_OWNER); + } + pcq->xdev = xdev; + + return 
ret; +} + +static int xsc_eth_set_cq(struct xsc_channel *c, + struct xsc_cq *pcq, + struct xsc_cq_param *pcq_param) +{ + struct xsc_core_device *xdev = c->adapter->xdev; + struct xsc_create_cq_mbox_in *in; + int ret = XSCALE_RET_SUCCESS; + unsigned int hw_npages; + unsigned int irqn; + int inlen; + u32 eqn; + + hw_npages = DIV_ROUND_UP(pcq->wq_ctrl.buf.size, PAGE_SIZE_4K); + /*mbox size + pas size*/ + inlen = sizeof(struct xsc_create_cq_mbox_in) + + sizeof(__be64) * hw_npages; + + in = kvzalloc(inlen, GFP_KERNEL); + if (!in) + return -ENOMEM; + + /*construct param of in struct*/ + ret = xsc_core_vector2eqn(xdev, c->chl_idx, &eqn, &irqn); + if (ret) + goto err; + + in->ctx.eqn = cpu_to_be16(eqn); + in->ctx.log_cq_sz = pcq_param->cq_attr.q_log_size; + in->ctx.pa_num = cpu_to_be16(hw_npages); + in->ctx.glb_func_id = cpu_to_be16(xdev->glb_func_id); + + xsc_core_fill_page_frag_array(&pcq->wq_ctrl.buf, + &in->pas[0], + hw_npages); + + ret = xsc_core_eth_create_cq(c->adapter->xdev, &pcq->xcq, in, inlen); + if (ret == 0) { + pcq->xcq.irqn = irqn; + pcq->xcq.eq = xsc_core_eq_get(xdev, pcq->xcq.vector); + } + +err: + kvfree(in); + return ret; +} + +static void xsc_eth_free_cq(struct xsc_cq *cq) +{ + xsc_eth_wq_destroy(&cq->wq_ctrl); +} + +static int xsc_eth_open_cq(struct xsc_channel *c, + struct xsc_cq *pcq, + struct xsc_cq_param *pcq_param) +{ + int ret; + + ret = xsc_eth_alloc_cq(c, pcq, pcq_param); + if (ret) + return ret; + + ret = xsc_eth_set_cq(c, pcq, pcq_param); + if (ret) + goto err_set_cq; + + xsc_cq_notify_hw_rearm(pcq); + + pcq->napi = &c->napi; + pcq->channel = c; + pcq->rx = (pcq_param->cq_attr.q_type == XSC_QUEUE_TYPE_RQCQ) ? 1 : 0; + + return 0; + +err_set_cq: + xsc_eth_free_cq(pcq); + return ret; +} + +static int xsc_eth_close_cq(struct xsc_channel *c, struct xsc_cq *pcq) +{ + struct xsc_core_device *xdev = c->adapter->xdev; + int ret; + + ret = xsc_core_eth_destroy_cq(xdev, &pcq->xcq); + if (ret) { + netdev_err(((struct xsc_adapter *)xdev->eth_priv)->netdev, "failed to close ch%d cq%d, ret=%d\n", + c->chl_idx, pcq->xcq.cqn, ret); + return ret; + } + + xsc_eth_free_cq(pcq); + + return 0; +} + +static void xsc_free_qp_sq_db(struct xsc_sq *sq) +{ + kvfree(sq->db.wqe_info); + kvfree(sq->db.dma_fifo); +} + +static void xsc_free_qp_sq(struct xsc_sq *sq) +{ + xsc_free_qp_sq_db(sq); + xsc_eth_wq_destroy(&sq->wq_ctrl); +} + +static int xsc_eth_alloc_qp_sq_db(struct xsc_sq *sq, int numa) +{ + struct xsc_core_device *xdev = sq->cq.xdev; + int wq_sz; + int df_sz; + + wq_sz = xsc_wq_cyc_get_size(&sq->wq); + df_sz = wq_sz * xdev->caps.send_ds_num; + sq->db.dma_fifo = kvzalloc_node(array_size(df_sz, + sizeof(*sq->db.dma_fifo)), + GFP_KERNEL, + numa); + sq->db.wqe_info = kvzalloc_node(array_size(wq_sz, + sizeof(*sq->db.wqe_info)), + GFP_KERNEL, + numa); + + if (!sq->db.dma_fifo || !sq->db.wqe_info) { + xsc_free_qp_sq_db(sq); + return -ENOMEM; + } + + sq->dma_fifo_mask = df_sz - 1; + + return 0; +} + +static void xsc_eth_qp_event(struct xsc_core_qp *qp, int type) +{ + struct xsc_core_device *xdev; + struct xsc_rq *rq; + struct xsc_sq *sq; + + if (qp->eth_queue_type == XSC_RES_RQ) { + rq = container_of(qp, struct xsc_rq, cqp); + xdev = rq->cq.xdev; + } else if (qp->eth_queue_type == XSC_RES_SQ) { + sq = container_of(qp, struct xsc_sq, cqp); + xdev = sq->cq.xdev; + } else { + pr_err("%s:Unknown eth qp type %d\n", __func__, type); + return; + } + + switch (type) { + case XSC_EVENT_TYPE_WQ_CATAS_ERROR: + case XSC_EVENT_TYPE_WQ_INVAL_REQ_ERROR: + case XSC_EVENT_TYPE_WQ_ACCESS_ERROR: + 
netdev_err(((struct xsc_adapter *)xdev->eth_priv)->netdev, + "%s:Async event %x on QP %d\n", + __func__, type, qp->qpn); + break; + default: + netdev_err(((struct xsc_adapter *)xdev->eth_priv)->netdev, + "%s: Unexpected event type %d on QP %d\n", + __func__, type, qp->qpn); + return; + } +} + +static int xsc_eth_open_qp_sq(struct xsc_channel *c, + struct xsc_sq *psq, + struct xsc_sq_param *psq_param, + u32 sq_idx) +{ + u8 ele_log_size = psq_param->sq_attr.ele_log_size; + u8 q_log_size = psq_param->sq_attr.q_log_size; + struct xsc_modify_raw_qp_mbox_in *modify_in; + struct xsc_create_qp_mbox_in *in; + struct xsc_core_device *xdev; + struct xsc_adapter *adapter; + unsigned int hw_npages; + int inlen; + int ret; + + adapter = c->adapter; + xdev = adapter->xdev; + psq_param->wq.db_numa_node = cpu_to_node(c->cpu); + + ret = xsc_eth_wq_cyc_create(xdev, &psq_param->wq, + q_log_size, ele_log_size, &psq->wq, + &psq->wq_ctrl); + if (ret) + return ret; + + hw_npages = DIV_ROUND_UP(psq->wq_ctrl.buf.size, PAGE_SIZE_4K); + inlen = sizeof(struct xsc_create_qp_mbox_in) + + sizeof(__be64) * hw_npages; + + in = kvzalloc(inlen, GFP_KERNEL); + if (!in) { + ret = -ENOMEM; + goto err_sq_wq_destroy; + } + in->req.input_qpn = cpu_to_be16(XSC_QPN_SQN_STUB); /*no use for eth*/ + in->req.qp_type = XSC_QUEUE_TYPE_RAW_TSO; /*default sq is tso qp*/ + in->req.log_sq_sz = ilog2(xdev->caps.send_ds_num) + q_log_size; + in->req.pa_num = cpu_to_be16(hw_npages); + in->req.cqn_send = cpu_to_be16(psq->cq.xcq.cqn); + in->req.cqn_recv = in->req.cqn_send; + in->req.glb_funcid = cpu_to_be16(xdev->glb_func_id); + + xsc_core_fill_page_frag_array(&psq->wq_ctrl.buf, + &in->req.pas[0], hw_npages); + + ret = xsc_core_eth_create_qp(xdev, in, inlen, &psq->sqn); + if (ret) + goto err_sq_in_destroy; + + psq->cqp.qpn = psq->sqn; + psq->cqp.event = xsc_eth_qp_event; + psq->cqp.eth_queue_type = XSC_RES_SQ; + + ret = xsc_core_create_resource_common(xdev, &psq->cqp); + if (ret) { + netdev_err(adapter->netdev, "%s:error qp:%d errno:%d\n", + __func__, psq->sqn, ret); + goto err_sq_destroy; + } + + psq->channel = c; + psq->ch_ix = c->chl_idx; + psq->txq_ix = psq->ch_ix + sq_idx * adapter->channels.num_chl; + + /*need to querify from hardware*/ + psq->hw_mtu = XSC_ETH_HW_MTU_SEND; + psq->stop_room = 1; + + ret = xsc_eth_alloc_qp_sq_db(psq, psq_param->wq.db_numa_node); + if (ret) + goto err_sq_common_destroy; + + inlen = sizeof(struct xsc_modify_raw_qp_mbox_in); + modify_in = kvzalloc(inlen, GFP_KERNEL); + if (!modify_in) { + ret = -ENOMEM; + goto err_sq_common_destroy; + } + + modify_in->req.qp_out_port = xdev->pf_id; + modify_in->pcie_no = xdev->pcie_no; + modify_in->req.qpn = cpu_to_be16((u16)(psq->sqn)); + modify_in->req.func_id = cpu_to_be16(xdev->glb_func_id); + modify_in->req.dma_direct = XSC_DMA_DIR_TO_MAC; + modify_in->req.prio = sq_idx; + ret = xsc_core_eth_modify_raw_qp(xdev, modify_in); + if (ret) + goto err_sq_modify_in_destroy; + + kvfree(modify_in); + kvfree(in); + + return 0; + +err_sq_modify_in_destroy: + kvfree(modify_in); + +err_sq_common_destroy: + xsc_core_destroy_resource_common(xdev, &psq->cqp); + +err_sq_destroy: + xsc_core_eth_destroy_qp(xdev, psq->cqp.qpn); + +err_sq_in_destroy: + kvfree(in); + +err_sq_wq_destroy: + xsc_eth_wq_destroy(&psq->wq_ctrl); + return ret; +} + +static int xsc_eth_close_qp_sq(struct xsc_channel *c, struct xsc_sq *psq) { + struct xsc_core_device *xdev = c->adapter->xdev; + int ret; + + xsc_core_destroy_resource_common(xdev, &psq->cqp); + + ret = xsc_core_eth_destroy_qp(xdev, psq->cqp.qpn); + if (ret) + 
return ret; + + xsc_free_qp_sq(psq); + return 0; } +static int xsc_eth_open_channel(struct xsc_adapter *adapter, + int idx, + struct xsc_channel *c, + struct xsc_channel_param *chl_param) +{ + struct xsc_core_device *xdev = adapter->xdev; + struct net_device *netdev = adapter->netdev; + const struct cpumask *aff; + unsigned int irqn; + int i, j; + u32 eqn; + int ret; + + c->adapter = adapter; + c->netdev = adapter->netdev; + c->chl_idx = idx; + c->num_tc = adapter->nic_param.num_tc; + + /*1rq per channel, and may have multi sqs per channel*/ + c->qp.rq_num = 1; + c->qp.sq_num = c->num_tc; + + if (xdev->caps.msix_enable) { + ret = xsc_core_vector2eqn(xdev, c->chl_idx, &eqn, &irqn); + if (ret) + goto err; + aff = irq_get_affinity_mask(irqn); + c->aff_mask = aff; + c->cpu = cpumask_first(aff); + } + + if (c->qp.sq_num > XSC_MAX_NUM_TC || c->qp.rq_num > XSC_MAX_NUM_TC) { + ret = -EINVAL; + goto err; + } + + for (i = 0; i < c->qp.rq_num; i++) { + ret = xsc_eth_open_cq(c, &c->qp.rq[i].cq, + &chl_param->rqcq_param); + if (ret) { + j = i - 1; + goto err_open_rq_cq; + } + } + + for (i = 0; i < c->qp.sq_num; i++) { + ret = xsc_eth_open_cq(c, &c->qp.sq[i].cq, + &chl_param->sqcq_param); + if (ret) { + j = i - 1; + goto err_open_sq_cq; + } + } + + for (i = 0; i < c->qp.sq_num; i++) { + ret = xsc_eth_open_qp_sq(c, &c->qp.sq[i], + &chl_param->sq_param, i); + if (ret) { + j = i - 1; + goto err_open_sq; + } + } + netif_napi_add(netdev, &c->napi, xsc_eth_napi_poll); + + netdev_dbg(adapter->netdev, "open channel%d ok\n", idx); + return 0; + +err_open_sq: + for (; j >= 0; j--) + xsc_eth_close_qp_sq(c, &c->qp.sq[j]); + j = (c->qp.rq_num - 1); +err_open_sq_cq: + for (; j >= 0; j--) + xsc_eth_close_cq(c, &c->qp.sq[j].cq); + j = (c->qp.rq_num - 1); +err_open_rq_cq: + for (; j >= 0; j--) + xsc_eth_close_cq(c, &c->qp.rq[j].cq); +err: + netdev_err(adapter->netdev, + "failed to open channel: ch%d, sq_num=%d, rq_num=%d, err=%d\n", + idx, c->qp.sq_num, c->qp.rq_num, ret); + return ret; +} + +static int xsc_eth_modify_qps_channel(struct xsc_adapter *adapter, + struct xsc_channel *c) +{ + int ret = 0; + int i; + + for (i = 0; i < c->qp.rq_num; i++) { + c->qp.rq[i].post_wqes(&c->qp.rq[i]); + ret = xsc_core_eth_modify_qp_status(adapter->xdev, + c->qp.rq[i].rqn, + XSC_CMD_OP_RTR2RTS_QP); + if (ret) + return ret; + } + + for (i = 0; i < c->qp.sq_num; i++) { + ret = xsc_core_eth_modify_qp_status(adapter->xdev, + c->qp.sq[i].sqn, + XSC_CMD_OP_RTR2RTS_QP); + if (ret) + return ret; + } + return 0; +} + +static int xsc_eth_modify_qps(struct xsc_adapter *adapter, + struct xsc_eth_channels *chls) +{ + int ret; + int i; + + for (i = 0; i < chls->num_chl; i++) { + struct xsc_channel *c = &chls->c[i]; + + ret = xsc_eth_modify_qps_channel(adapter, c); + if (ret) + return ret; + } + + return 0; +} + +static void xsc_eth_init_frags_partition(struct xsc_rq *rq) +{ + struct xsc_wqe_frag_info next_frag = {}; + struct xsc_wqe_frag_info *prev; + int i; + + next_frag.di = &rq->wqe.di[0]; + next_frag.offset = 0; + prev = NULL; + + for (i = 0; i < xsc_wq_cyc_get_size(&rq->wqe.wq); i++) { + struct xsc_rq_frag_info *frag_info = &rq->wqe.info.arr[0]; + struct xsc_wqe_frag_info *frag = + &rq->wqe.frags[i << rq->wqe.info.log_num_frags]; + int f; + + for (f = 0; f < rq->wqe.info.num_frags; f++, frag++) { + if (next_frag.offset + frag_info[f].frag_stride > + XSC_RX_FRAG_SZ) { + next_frag.di++; + next_frag.offset = 0; + if (prev) + prev->last_in_page = 1; + } + *frag = next_frag; + + /* prepare next */ + next_frag.offset += frag_info[f].frag_stride; + 
prev = frag; + } + } + + if (prev) + prev->last_in_page = 1; +} + +static int xsc_eth_init_di_list(struct xsc_rq *rq, u32 wq_sz, int cpu) +{ + int len = wq_sz << rq->wqe.info.log_num_frags; + + rq->wqe.di = kvzalloc_node(array_size(len, sizeof(*rq->wqe.di)), + GFP_KERNEL, cpu_to_node(cpu)); + if (!rq->wqe.di) + return -ENOMEM; + + xsc_eth_init_frags_partition(rq); + + return 0; +} + +static void xsc_eth_free_di_list(struct xsc_rq *rq) +{ + kvfree(rq->wqe.di); +} + +static int xsc_eth_alloc_rq(struct xsc_channel *c, + struct xsc_rq *prq, + struct xsc_rq_param *prq_param) +{ + u8 ele_log_size = prq_param->rq_attr.ele_log_size; + u8 q_log_size = prq_param->rq_attr.q_log_size; + struct page_pool_params pagepool_params = {0}; + struct xsc_adapter *adapter = c->adapter; + u32 pool_size = 1 << q_log_size; + int ret = 0; + u32 wq_sz; + int i, f; + + prq_param->wq.db_numa_node = cpu_to_node(c->cpu); + + ret = xsc_eth_wq_cyc_create(c->adapter->xdev, &prq_param->wq, + q_log_size, ele_log_size, &prq->wqe.wq, + &prq->wq_ctrl); + if (ret) + return ret; + + wq_sz = xsc_wq_cyc_get_size(&prq->wqe.wq); + + prq->wqe.info = prq_param->frags_info; + prq->wqe.frags = + kvzalloc_node(array_size((wq_sz << prq->wqe.info.log_num_frags), + sizeof(*prq->wqe.frags)), + GFP_KERNEL, + cpu_to_node(c->cpu)); + if (!prq->wqe.frags) { + ret = -ENOMEM; + goto err_alloc_frags; + } + + ret = xsc_eth_init_di_list(prq, wq_sz, c->cpu); + if (ret) + goto err_init_di; + + prq->buff.map_dir = DMA_FROM_DEVICE; + + /* Create a page_pool and register it with rxq */ + pool_size = wq_sz << prq->wqe.info.log_num_frags; + pagepool_params.order = XSC_RX_FRAG_SZ_ORDER; + /* No-internal DMA mapping in page_pool */ + pagepool_params.flags = 0; + pagepool_params.pool_size = pool_size; + pagepool_params.nid = cpu_to_node(c->cpu); + pagepool_params.dev = c->adapter->dev; + pagepool_params.dma_dir = prq->buff.map_dir; + + prq->page_pool = page_pool_create(&pagepool_params); + if (IS_ERR(prq->page_pool)) { + ret = PTR_ERR(prq->page_pool); + prq->page_pool = NULL; + goto err_create_pool; + } + + if (c->chl_idx == 0) + netdev_dbg(adapter->netdev, + "page pool: size=%d, cpu=%d, pool_numa=%d, mtu=%d, wqe_numa=%d\n", + pool_size, c->cpu, pagepool_params.nid, + adapter->nic_param.mtu, + prq_param->wq.buf_numa_node); + + for (i = 0; i < wq_sz; i++) { + struct xsc_eth_rx_wqe_cyc *wqe = + xsc_wq_cyc_get_wqe(&prq->wqe.wq, i); + + for (f = 0; f < prq->wqe.info.num_frags; f++) { + u32 frag_size = prq->wqe.info.arr[f].frag_size; + u32 val = 0; + + val |= FIELD_PREP(XSC_WQE_DATA_SEG_SEG_LEN_MASK, + frag_size); + wqe->data[f].data0 = cpu_to_le32(val); + wqe->data[f].mkey = cpu_to_le32(XSC_INVALID_LKEY); + } + + for (; f < prq->wqe.info.frags_max_num; f++) { + wqe->data[f].data0 = 0; + wqe->data[f].mkey = cpu_to_le32(XSC_INVALID_LKEY); + wqe->data[f].va = 0; + } + } + + prq->post_wqes = xsc_eth_post_rx_wqes; + prq->handle_rx_cqe = xsc_eth_handle_rx_cqe; + prq->dealloc_wqe = xsc_eth_dealloc_rx_wqe; + prq->wqe.skb_from_cqe = xsc_rx_is_linear_skb(adapter->nic_param.mtu) ? 
+ xsc_skb_from_cqe_linear : + xsc_skb_from_cqe_nonlinear; + prq->ix = c->chl_idx; + prq->frags_sz = adapter->nic_param.rq_frags_size; + + return 0; + +err_create_pool: + xsc_eth_free_di_list(prq); +err_init_di: + kvfree(prq->wqe.frags); +err_alloc_frags: + xsc_eth_wq_destroy(&prq->wq_ctrl); + return ret; +} + +static void xsc_free_qp_rq(struct xsc_rq *rq) +{ + kvfree(rq->wqe.frags); + kvfree(rq->wqe.di); + + if (rq->page_pool) + page_pool_destroy(rq->page_pool); + + xsc_eth_wq_destroy(&rq->wq_ctrl); +} + +static int xsc_eth_open_rss_qp_rqs(struct xsc_adapter *adapter, + struct xsc_rq_param *prq_param, + struct xsc_eth_channels *chls, + unsigned int num_chl) +{ + u8 q_log_size = prq_param->rq_attr.q_log_size; + struct xsc_create_multiqp_mbox_in *in; + struct xsc_create_qp_request *req; + unsigned int hw_npages; + struct xsc_channel *c; + int ret = 0, err = 0; + struct xsc_rq *prq; + int paslen = 0; + int entry_len; + u32 rqn_base; + int i, j, n; + int inlen; + + for (i = 0; i < num_chl; i++) { + c = &chls->c[i]; + + for (j = 0; j < c->qp.rq_num; j++) { + prq = &c->qp.rq[j]; + ret = xsc_eth_alloc_rq(c, prq, prq_param); + if (ret) + goto err_alloc_rqs; + + hw_npages = DIV_ROUND_UP(prq->wq_ctrl.buf.size, + PAGE_SIZE_4K); + /*support different npages number smoothly*/ + entry_len = sizeof(struct xsc_create_qp_request) + + sizeof(__be64) * hw_npages; + + paslen += entry_len; + } + } + + inlen = sizeof(struct xsc_create_multiqp_mbox_in) + paslen; + in = kvzalloc(inlen, GFP_KERNEL); + if (!in) { + ret = -ENOMEM; + goto err_create_rss_rqs; + } + + in->qp_num = cpu_to_be16(num_chl); + in->qp_type = XSC_QUEUE_TYPE_RAW; + in->req_len = cpu_to_be32(inlen); + + req = (struct xsc_create_qp_request *)&in->data[0]; + n = 0; + for (i = 0; i < num_chl; i++) { + c = &chls->c[i]; + for (j = 0; j < c->qp.rq_num; j++) { + prq = &c->qp.rq[j]; + + hw_npages = DIV_ROUND_UP(prq->wq_ctrl.buf.size, + PAGE_SIZE_4K); + /* no use for eth */ + req->input_qpn = cpu_to_be16(0); + req->qp_type = XSC_QUEUE_TYPE_RAW; + req->log_rq_sz = ilog2(adapter->xdev->caps.recv_ds_num) + + q_log_size; + req->pa_num = cpu_to_be16(hw_npages); + req->cqn_recv = cpu_to_be16(prq->cq.xcq.cqn); + req->cqn_send = req->cqn_recv; + req->glb_funcid = + cpu_to_be16(adapter->xdev->glb_func_id); + + xsc_core_fill_page_frag_array(&prq->wq_ctrl.buf, + &req->pas[0], + hw_npages); + n++; + req = (struct xsc_create_qp_request *) + (&in->data[0] + entry_len * n); + } + } + + ret = xsc_core_eth_create_rss_qp_rqs(adapter->xdev, in, inlen, + &rqn_base); + kvfree(in); + if (ret) + goto err_create_rss_rqs; + + n = 0; + for (i = 0; i < num_chl; i++) { + c = &chls->c[i]; + for (j = 0; j < c->qp.rq_num; j++) { + prq = &c->qp.rq[j]; + prq->rqn = rqn_base + n; + prq->cqp.qpn = prq->rqn; + prq->cqp.event = xsc_eth_qp_event; + prq->cqp.eth_queue_type = XSC_RES_RQ; + ret = xsc_core_create_resource_common(adapter->xdev, + &prq->cqp); + if (ret) { + err = ret; + netdev_err(adapter->netdev, + "create resource common error qp:%d errno:%d\n", + prq->rqn, ret); + continue; + } + + n++; + } + } + if (err) + return err; + + adapter->channels.rqn_base = rqn_base; + return 0; + +err_create_rss_rqs: + i = num_chl; +err_alloc_rqs: + for (--i; i >= 0; i--) { + c = &chls->c[i]; + for (j = 0; j < c->qp.rq_num; j++) { + prq = &c->qp.rq[j]; + xsc_free_qp_rq(prq); + } + } + return ret; +} + +static void xsc_eth_free_rx_wqe(struct xsc_rq *rq) +{ + struct xsc_wq_cyc *wq = &rq->wqe.wq; + u16 wqe_ix; + + while (!xsc_wq_cyc_is_empty(wq)) { + wqe_ix = xsc_wq_cyc_get_tail(wq); + 
rq->dealloc_wqe(rq, wqe_ix); + xsc_wq_cyc_pop(wq); + } +} + +static int xsc_eth_close_qp_rq(struct xsc_channel *c, struct xsc_rq *prq) +{ + struct xsc_core_device *xdev = c->adapter->xdev; + int ret; + + xsc_core_destroy_resource_common(xdev, &prq->cqp); + + ret = xsc_core_eth_destroy_qp(xdev, prq->cqp.qpn); + if (ret) + return ret; + + xsc_eth_free_rx_wqe(prq); + xsc_free_qp_rq(prq); + + return 0; +} + +static void xsc_eth_close_channel(struct xsc_channel *c, bool free_rq) +{ + int i; + + for (i = 0; i < c->qp.rq_num; i++) { + if (free_rq) + xsc_eth_close_qp_rq(c, &c->qp.rq[i]); + xsc_eth_close_cq(c, &c->qp.rq[i].cq); + memset(&c->qp.rq[i], 0, sizeof(struct xsc_rq)); + } + + for (i = 0; i < c->qp.sq_num; i++) { + xsc_eth_close_qp_sq(c, &c->qp.sq[i]); + xsc_eth_close_cq(c, &c->qp.sq[i].cq); + } + + netif_napi_del(&c->napi); +} + +static int xsc_eth_open_channels(struct xsc_adapter *adapter) +{ + struct xsc_eth_channels *chls = &adapter->channels; + struct xsc_core_device *xdev = adapter->xdev; + struct xsc_channel_param *chl_param; + bool free_rq = false; + int ret = 0; + int i; + + chls->num_chl = adapter->nic_param.num_channels; + chls->c = kcalloc_node(chls->num_chl, sizeof(struct xsc_channel), + GFP_KERNEL, xdev->numa_node); + if (!chls->c) { + ret = -ENOMEM; + goto err; + } + + chl_param = kvzalloc(sizeof(*chl_param), GFP_KERNEL); + if (!chl_param) { + ret = -ENOMEM; + goto err_free_ch; + } + + xsc_eth_build_channel_param(adapter, chl_param); + + for (i = 0; i < chls->num_chl; i++) { + ret = xsc_eth_open_channel(adapter, i, &chls->c[i], chl_param); + if (ret) + goto err_open_channel; + } + + ret = xsc_eth_open_rss_qp_rqs(adapter, + &chl_param->rq_param, + chls, + chls->num_chl); + if (ret) + goto err_open_channel; + free_rq = true; + + for (i = 0; i < chls->num_chl; i++) + napi_enable(&chls->c[i].napi); + + /* flush cache to memory before interrupt and napi_poll running */ + smp_wmb(); + + ret = xsc_eth_modify_qps(adapter, chls); + if (ret) + goto err_modify_qps; + + kvfree(chl_param); + return 0; + +err_modify_qps: + i = chls->num_chl; +err_open_channel: + for (--i; i >= 0; i--) + xsc_eth_close_channel(&chls->c[i], free_rq); + + kvfree(chl_param); +err_free_ch: + kfree(chls->c); +err: + chls->num_chl = 0; + netdev_err(adapter->netdev, "failed to open %d channels, err=%d\n", + chls->num_chl, ret); + return ret; +} + +static void xsc_eth_close_channels(struct xsc_adapter *adapter) +{ + struct xsc_channel *c; + int i; + + for (i = 0; i < adapter->channels.num_chl; i++) { + c = &adapter->channels.c[i]; + + xsc_eth_close_channel(c, true); + } + + kfree(adapter->channels.c); + adapter->channels.num_chl = 0; +} + +static void xsc_netdev_set_tcs(struct xsc_adapter *priv, u16 nch, u8 ntc) +{ + int tc; + + netdev_reset_tc(priv->netdev); + + if (ntc == 1) + return; + + netdev_set_num_tc(priv->netdev, ntc); + + /* Map netdev TCs to offset 0 + * We have our own UP to TXQ mapping for QoS + */ + for (tc = 0; tc < ntc; tc++) + netdev_set_tc_queue(priv->netdev, tc, nch, 0); +} + +static void xsc_eth_build_tx2sq_maps(struct xsc_adapter *adapter) +{ + struct xsc_channel *c; + struct xsc_sq *psq; + int i, tc; + + for (i = 0; i < adapter->channels.num_chl; i++) { + c = &adapter->channels.c[i]; + for (tc = 0; tc < c->num_tc; tc++) { + psq = &c->qp.sq[tc]; + adapter->txq2sq[psq->txq_ix] = psq; + } + } +} + +static void xsc_eth_activate_txqsq(struct xsc_channel *c) +{ + int tc = c->num_tc; + struct xsc_sq *psq; + + for (tc = 0; tc < c->num_tc; tc++) { + psq = &c->qp.sq[tc]; + psq->txq = 
netdev_get_tx_queue(psq->channel->netdev, + psq->txq_ix); + set_bit(XSC_ETH_SQ_STATE_ENABLED, &psq->state); + netdev_tx_reset_queue(psq->txq); + netif_tx_start_queue(psq->txq); + } +} + +static void xsc_eth_deactivate_txqsq(struct xsc_channel *c) +{ + int tc = c->num_tc; + struct xsc_sq *psq; + + for (tc = 0; tc < c->num_tc; tc++) { + psq = &c->qp.sq[tc]; + clear_bit(XSC_ETH_SQ_STATE_ENABLED, &psq->state); + } +} + +static void xsc_activate_rq(struct xsc_channel *c) +{ + int i; + + for (i = 0; i < c->qp.rq_num; i++) + set_bit(XSC_ETH_RQ_STATE_ENABLED, &c->qp.rq[i].state); +} + +static void xsc_deactivate_rq(struct xsc_channel *c) +{ + int i; + + for (i = 0; i < c->qp.rq_num; i++) + clear_bit(XSC_ETH_RQ_STATE_ENABLED, &c->qp.rq[i].state); +} + +static void xsc_eth_activate_channel(struct xsc_channel *c) +{ + xsc_eth_activate_txqsq(c); + xsc_activate_rq(c); +} + +static void xsc_eth_deactivate_channel(struct xsc_channel *c) +{ + xsc_deactivate_rq(c); + xsc_eth_deactivate_txqsq(c); +} + +static void xsc_eth_activate_channels(struct xsc_eth_channels *chs) +{ + int i; + + for (i = 0; i < chs->num_chl; i++) + xsc_eth_activate_channel(&chs->c[i]); +} + +static void xsc_eth_deactivate_channels(struct xsc_eth_channels *chs) +{ + int i; + + for (i = 0; i < chs->num_chl; i++) + xsc_eth_deactivate_channel(&chs->c[i]); + + /* Sync with all NAPIs to wait until they stop using queues. */ + synchronize_net(); + + for (i = 0; i < chs->num_chl; i++) + /* last doorbell out */ + napi_disable(&chs->c[i].napi); +} + +static void xsc_eth_activate_priv_channels(struct xsc_adapter *adapter) +{ + struct net_device *netdev = adapter->netdev; + int num_txqs; + + num_txqs = adapter->channels.num_chl * adapter->nic_param.num_tc; + xsc_netdev_set_tcs(adapter, + adapter->channels.num_chl, + adapter->nic_param.num_tc); + netif_set_real_num_tx_queues(netdev, num_txqs); + netif_set_real_num_rx_queues(netdev, adapter->channels.num_chl); + + xsc_eth_build_tx2sq_maps(adapter); + xsc_eth_activate_channels(&adapter->channels); + netif_tx_start_all_queues(adapter->netdev); +} + +static void xsc_eth_deactivate_priv_channels(struct xsc_adapter *adapter) +{ + netif_tx_disable(adapter->netdev); + xsc_eth_deactivate_channels(&adapter->channels); +} + +static int xsc_eth_sw_init(struct xsc_adapter *adapter) +{ + int ret; + + ret = xsc_eth_open_channels(adapter); + if (ret) + return ret; + + xsc_eth_activate_priv_channels(adapter); + + return 0; +} + +static void xsc_eth_sw_deinit(struct xsc_adapter *adapter) +{ + xsc_eth_deactivate_priv_channels(adapter); + + return xsc_eth_close_channels(adapter); +} + +static bool xsc_eth_get_link_status(struct xsc_adapter *adapter) +{ + struct xsc_core_device *xdev = adapter->xdev; + u16 vport = 0; + bool link_up; + + link_up = xsc_core_query_vport_state(xdev, vport); + + return link_up ? 
true : false; +} + +static int xsc_eth_change_link_status(struct xsc_adapter *adapter) +{ + bool link_up; + + link_up = xsc_eth_get_link_status(adapter); + + if (link_up && !netif_carrier_ok(adapter->netdev)) { + netdev_info(adapter->netdev, "Link up\n"); + netif_carrier_on(adapter->netdev); + } else if (!link_up && netif_carrier_ok(adapter->netdev)) { + netdev_info(adapter->netdev, "Link down\n"); + netif_carrier_off(adapter->netdev); + } + + return 0; +} + +static void xsc_eth_event_work(struct work_struct *work) +{ + struct xsc_adapter *adapter = container_of(work, + struct xsc_adapter, + event_work); + struct xsc_event_query_type_mbox_out out; + struct xsc_event_query_type_mbox_in in; + int err; + + if (adapter->status != XSCALE_ETH_DRIVER_OK) + return; + + /*query cmd_type cmd*/ + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_QUERY_EVENT_TYPE); + + err = xsc_cmd_exec(adapter->xdev, &in, sizeof(in), &out, sizeof(out)); + if (err || out.hdr.status) { + netdev_err(adapter->netdev, "failed to query event type, err=%d, status=%d\n", + err, out.hdr.status); + goto failed; + } + + switch (out.ctx.resp_cmd_type) { + case XSC_CMD_EVENT_RESP_CHANGE_LINK: + err = xsc_eth_change_link_status(adapter); + if (err) { + netdev_err(adapter->netdev, "failed to change linkstatus, err=%d\n", + err); + goto failed; + } + break; + case XSC_CMD_EVENT_RESP_TEMP_WARN: + netdev_err(adapter->netdev, "[Minor]nic chip temperature high warning\n"); + break; + case XSC_CMD_EVENT_RESP_OVER_TEMP_PROTECTION: + netdev_err(adapter->netdev, "[Critical]nic chip was over-temperature\n"); + break; + default: + break; + } + +failed: + return; +} + +static void xsc_eth_event_handler(void *arg) +{ + struct xsc_adapter *adapter = (struct xsc_adapter *)arg; + + queue_work(adapter->workq, &adapter->event_work); +} + +static bool xsc_get_pct_drop_config(struct xsc_core_device *xdev) +{ + return (xdev->pdev->device == XSC_MC_PF_DEV_ID) || + (xdev->pdev->device == XSC_MF_SOC_PF_DEV_ID) || + (xdev->pdev->device == XSC_MS_PF_DEV_ID) || + (xdev->pdev->device == XSC_MV_SOC_PF_DEV_ID); +} + +static int xsc_eth_enable_nic_hca(struct xsc_adapter *adapter) +{ + struct xsc_cmd_enable_nic_hca_mbox_out out = {}; + struct xsc_cmd_enable_nic_hca_mbox_in in = {}; + struct xsc_core_device *xdev = adapter->xdev; + struct net_device *netdev = adapter->netdev; + u16 caps_mask = 0; + u16 caps = 0; + int err; + + if (xsc_get_user_mode(xdev)) + return 0; + + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_ENABLE_NIC_HCA); + + in.rss.rss_en = 1; + in.rss.rqn_base = cpu_to_be16(adapter->channels.rqn_base - + xdev->caps.raweth_rss_qp_id_base); + in.rss.rqn_num = cpu_to_be16(adapter->channels.num_chl); + in.rss.hash_tmpl = cpu_to_be32(adapter->rss_param.rss_hash_tmpl); + in.rss.hfunc = xsc_hash_func_type(adapter->rss_param.hfunc); + caps_mask |= BIT(XSC_TBM_CAP_RSS); + + if (netdev->features & NETIF_F_RXCSUM) + caps |= BIT(XSC_TBM_CAP_HASH_PPH); + caps_mask |= BIT(XSC_TBM_CAP_HASH_PPH); + + if (xsc_get_pct_drop_config(xdev) && !(netdev->flags & IFF_SLAVE)) + caps |= BIT(XSC_TBM_CAP_PCT_DROP_CONFIG); + caps_mask |= BIT(XSC_TBM_CAP_PCT_DROP_CONFIG); + + memcpy(in.nic.mac_addr, netdev->dev_addr, ETH_ALEN); + + in.nic.caps = cpu_to_be16(caps); + in.nic.caps_mask = cpu_to_be16(caps_mask); + + err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out)); + if (err || out.hdr.status) { + netdev_err(netdev, "failed!! 
err=%d, status=%d\n", + err, out.hdr.status); + return -ENOEXEC; + } + + return 0; +} + +static int xsc_eth_disable_nic_hca(struct xsc_adapter *adapter) +{ + struct xsc_cmd_disable_nic_hca_mbox_out out = {}; + struct xsc_cmd_disable_nic_hca_mbox_in in = {}; + struct xsc_core_device *xdev = adapter->xdev; + struct net_device *netdev = adapter->netdev; + u16 caps = 0; + int err; + + if (xsc_get_user_mode(xdev)) + return 0; + + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_DISABLE_NIC_HCA); + + if (xsc_get_pct_drop_config(xdev) && + !(netdev->priv_flags & IFF_BONDING)) + caps |= BIT(XSC_TBM_CAP_PCT_DROP_CONFIG); + + in.nic.caps = cpu_to_be16(caps); + err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out)); + if (err || out.hdr.status) { + netdev_err(netdev, "failed!! err=%d, status=%d\n", + err, out.hdr.status); + return -ENOEXEC; + } + + return 0; +} + +static void xsc_set_default_xps_cpumasks(struct xsc_adapter *priv, + struct xsc_eth_params *params) +{ + struct xsc_core_device *xdev = priv->xdev; + int num_comp_vectors, irq; + + num_comp_vectors = priv->nic_param.comp_vectors; + cpumask_clear(xdev->xps_cpumask); + + for (irq = 0; irq < num_comp_vectors; irq++) { + cpumask_set_cpu(cpumask_local_spread(irq, xdev->numa_node), + xdev->xps_cpumask); + netif_set_xps_queue(priv->netdev, xdev->xps_cpumask, irq); + } +} + +static int xsc_set_port_admin_status(struct xsc_adapter *adapter, + enum xsc_port_status status) +{ + struct xsc_event_set_port_admin_status_mbox_out out; + struct xsc_event_set_port_admin_status_mbox_in in; + int ret = 0; + + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_SET_PORT_ADMIN_STATUS); + in.status = cpu_to_be16(status); + + ret = xsc_cmd_exec(adapter->xdev, &in, sizeof(in), &out, sizeof(out)); + if (ret || out.hdr.status) { + netdev_err(adapter->netdev, "failed to set port admin status, err=%d, status=%d\n", + ret, out.hdr.status); + return -ENOEXEC; + } + + return ret; +} + +static int xsc_eth_open(struct net_device *netdev) +{ + struct xsc_adapter *adapter = netdev_priv(netdev); + struct xsc_core_device *xdev = adapter->xdev; + int ret = XSCALE_RET_SUCCESS; + + mutex_lock(&adapter->status_lock); + if (adapter->status == XSCALE_ETH_DRIVER_OK) { + netdev_err(adapter->netdev, "unnormal ndo_open when status=%d\n", + adapter->status); + goto ret; + } + + ret = xsc_eth_sw_init(adapter); + if (ret) + goto ret; + + ret = xsc_eth_enable_nic_hca(adapter); + if (ret) + goto sw_deinit; + + /*INIT_WORK*/ + INIT_WORK(&adapter->event_work, xsc_eth_event_work); + xdev->event_handler = xsc_eth_event_handler; + + if (xsc_eth_get_link_status(adapter)) { + netdev_info(netdev, "Link up\n"); + netif_carrier_on(adapter->netdev); + } else { + netdev_info(netdev, "Link down\n"); + } + + adapter->status = XSCALE_ETH_DRIVER_OK; + + xsc_set_default_xps_cpumasks(adapter, &adapter->nic_param); + + xsc_set_port_admin_status(adapter, XSC_PORT_UP); + + goto ret; + +sw_deinit: + xsc_eth_sw_deinit(adapter); + +ret: + mutex_unlock(&adapter->status_lock); + if (ret) + return XSCALE_RET_ERROR; + else + return XSCALE_RET_SUCCESS; +} + +static int xsc_eth_close(struct net_device *netdev) +{ + struct xsc_adapter *adapter = netdev_priv(netdev); + int ret = 0; + + mutex_lock(&adapter->status_lock); + + if (!netif_device_present(netdev)) { + ret = -ENODEV; + goto ret; + } + + if (adapter->status != XSCALE_ETH_DRIVER_OK) + goto ret; + + adapter->status = XSCALE_ETH_DRIVER_CLOSE; + + netif_carrier_off(adapter->netdev); + + xsc_eth_sw_deinit(adapter); + + ret = xsc_eth_disable_nic_hca(adapter); + if (ret) + 
netdev_err(adapter->netdev, "failed to disable nic hca, err=%d\n", + ret); + + xsc_set_port_admin_status(adapter, XSC_PORT_DOWN); + +ret: + mutex_unlock(&adapter->status_lock); + + return ret; +} + static int xsc_eth_set_hw_mtu(struct xsc_core_device *xdev, u16 mtu, u16 rx_buf_sz) { @@ -161,7 +1697,8 @@ static int xsc_eth_set_hw_mtu(struct xsc_core_device *xdev, } static const struct net_device_ops xsc_netdev_ops = { - // TBD + .ndo_open = xsc_eth_open, + .ndo_stop = xsc_eth_close, }; static void xsc_eth_build_nic_netdev(struct xsc_adapter *adapter) diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h index 1f9bae10b..09af22d92 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h @@ -9,6 +9,12 @@ #include "common/xsc_device.h" #include "xsc_eth_common.h" +#define XSC_INVALID_LKEY 0x100 + +#define XSCALE_DRIVER_NAME "xsc_eth" +#define XSCALE_RET_SUCCESS 0 +#define XSCALE_RET_ERROR 1 + enum { XSCALE_ETH_DRIVER_INIT, XSCALE_ETH_DRIVER_OK, @@ -34,7 +40,9 @@ struct xsc_adapter { struct xsc_rss_params rss_param; struct workqueue_struct *workq; + struct work_struct event_work; + struct xsc_eth_channels channels; struct xsc_sq **txq2sq; u32 status; diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h index d18feacd4..f13b0eef4 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h @@ -6,6 +6,7 @@ #ifndef __XSC_ETH_COMMON_H #define __XSC_ETH_COMMON_H +#include "xsc_queue.h" #include "xsc_pph.h" #define SW_MIN_MTU ETH_MIN_MTU @@ -21,12 +22,131 @@ #define XSC_SW2HW_RX_PKT_LEN(mtu) \ ((mtu) + ETH_HLEN + XSC_ETH_RX_MAX_HEAD_ROOM) +#define XSC_QPN_SQN_STUB 1025 +#define XSC_QPN_RQN_STUB 1024 + #define XSC_LOG_INDIR_RQT_SIZE 0x8 #define XSC_INDIR_RQT_SIZE BIT(XSC_LOG_INDIR_RQT_SIZE) #define XSC_ETH_MIN_NUM_CHANNELS 2 #define XSC_ETH_MAX_NUM_CHANNELS XSC_INDIR_RQT_SIZE +#define XSC_TX_NUM_TC 1 +#define XSC_MAX_NUM_TC 8 +#define XSC_ETH_MAX_TC_TOTAL \ + (XSC_ETH_MAX_NUM_CHANNELS * XSC_MAX_NUM_TC) +#define XSC_ETH_MAX_QP_NUM_PER_CH (XSC_MAX_NUM_TC + 1) + +#define XSC_SKB_FRAG_SZ(len) (SKB_DATA_ALIGN(len) + \ + SKB_DATA_ALIGN(sizeof(struct skb_shared_info))) + +#define XSC_RQCQ_ELE_SZ 32 //size of a rqcq entry +#define XSC_SQCQ_ELE_SZ 32 //size of a sqcq entry +#define XSC_RQ_ELE_SZ XSC_RECV_WQE_BB +#define XSC_SQ_ELE_SZ XSC_SEND_WQE_BB +#define XSC_EQ_ELE_SZ 8 //size of a eq entry + +#define XSC_SKB_FRAG_SZ(len) (SKB_DATA_ALIGN(len) + \ + SKB_DATA_ALIGN(sizeof(struct skb_shared_info))) +#define XSC_MIN_SKB_FRAG_SZ (XSC_SKB_FRAG_SZ(XSC_RX_HEADROOM)) +#define XSC_LOG_MAX_RX_WQE_BULK \ + (ilog2(PAGE_SIZE / roundup_pow_of_two(XSC_MIN_SKB_FRAG_SZ))) + +#define XSC_MIN_LOG_RQ_SZ (1 + XSC_LOG_MAX_RX_WQE_BULK) +#define XSC_DEF_LOG_RQ_SZ 0xa +#define XSC_MAX_LOG_RQ_SZ 0xd + +#define XSC_MIN_LOG_SQ_SZ 0x6 +#define XSC_DEF_LOG_SQ_SZ 0xa +#define XSC_MAX_LOG_SQ_SZ 0xd + +#define XSC_SQ_ELE_NUM_DEF BIT(XSC_DEF_LOG_SQ_SZ) +#define XSC_RQ_ELE_NUM_DEF BIT(XSC_DEF_LOG_RQ_SZ) + +#define XSC_SQ_ELE_NUM_DEF BIT(XSC_DEF_LOG_SQ_SZ) +#define XSC_RQ_ELE_NUM_DEF BIT(XSC_DEF_LOG_RQ_SZ) + +#define XSC_LOG_RQCQ_SZ 0xb +#define XSC_LOG_SQCQ_SZ 0xa + +#define XSC_RQCQ_ELE_NUM BIT(XSC_LOG_RQCQ_SZ) +#define XSC_SQCQ_ELE_NUM BIT(XSC_LOG_SQCQ_SZ) +#define XSC_RQ_ELE_NUM XSC_RQ_ELE_NUM_DEF //ds number of a wqebb +#define XSC_SQ_ELE_NUM XSC_SQ_ELE_NUM_DEF //DS 
number +#define XSC_EQ_ELE_NUM XSC_SQ_ELE_NUM_DEF //number of eq entry??? + +enum xsc_port_status { + XSC_PORT_DOWN = 0, + XSC_PORT_UP = 1, +}; + +enum xsc_queue_type { + XSC_QUEUE_TYPE_EQ = 0, + XSC_QUEUE_TYPE_RQCQ, + XSC_QUEUE_TYPE_SQCQ, + XSC_QUEUE_TYPE_RQ, + XSC_QUEUE_TYPE_SQ, + XSC_QUEUE_TYPE_MAX, +}; + +struct xsc_queue_attr { + u8 q_type; + u32 ele_num; + u32 ele_size; + u8 ele_log_size; + u8 q_log_size; +}; + +struct xsc_eth_rx_wqe_cyc { + DECLARE_FLEX_ARRAY(struct xsc_wqe_data_seg, data); +}; + +struct xsc_eq_param { + struct xsc_queue_attr eq_attr; +}; + +struct xsc_cq_param { + struct xsc_wq_param wq; + struct cq_cmd { + u8 abc[16]; + } cqc; + struct xsc_queue_attr cq_attr; +}; + +struct xsc_rq_param { + struct xsc_wq_param wq; + struct xsc_queue_attr rq_attr; + struct xsc_rq_frags_info frags_info; +}; + +struct xsc_sq_param { + struct xsc_wq_param wq; + struct xsc_queue_attr sq_attr; +}; + +struct xsc_qp_param { + struct xsc_queue_attr qp_attr; +}; + +struct xsc_channel_param { + struct xsc_cq_param rqcq_param; + struct xsc_cq_param sqcq_param; + struct xsc_rq_param rq_param; + struct xsc_sq_param sq_param; + struct xsc_qp_param qp_param; +}; + +struct xsc_eth_qp { + u16 rq_num; + u16 sq_num; + struct xsc_rq rq[XSC_MAX_NUM_TC]; /*may be use one only*/ + struct xsc_sq sq[XSC_MAX_NUM_TC]; /*reserved to tc*/ +}; + +enum channel_flags { + XSC_CHANNEL_NAPI_SCHED = 1, +}; + struct xsc_eth_params { u16 num_channels; u16 max_num_ch; @@ -58,4 +178,28 @@ struct xsc_eth_params { u32 pflags; }; +struct xsc_channel { + /* data path */ + struct xsc_eth_qp qp; + struct napi_struct napi; + u8 num_tc; + int chl_idx; + + /*relationship*/ + struct xsc_adapter *adapter; + struct net_device *netdev; + int cpu; + unsigned long flags; + + /* data path - accessed per napi poll */ + const struct cpumask *aff_mask; + struct irq_desc *irq_desc; +} ____cacheline_aligned_in_smp; + +struct xsc_eth_channels { + struct xsc_channel *c; + unsigned int num_chl; + u32 rqn_base; +}; + #endif diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c new file mode 100644 index 000000000..72f33bb53 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c @@ -0,0 +1,48 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. All + * rights reserved. + * Copyright (c) 2015-2016, Mellanox Technologies. All rights reserved. + */ + +#include "xsc_eth_txrx.h" + +struct sk_buff *xsc_skb_from_cqe_linear(struct xsc_rq *rq, + struct xsc_wqe_frag_info *wi, + u32 cqe_bcnt, u8 has_pph) +{ + // TBD + return NULL; +} + +struct sk_buff *xsc_skb_from_cqe_nonlinear(struct xsc_rq *rq, + struct xsc_wqe_frag_info *wi, + u32 cqe_bcnt, u8 has_pph) +{ + // TBD + return NULL; +} + +void xsc_eth_handle_rx_cqe(struct xsc_cqwq *cqwq, + struct xsc_rq *rq, struct xsc_cqe *cqe) +{ + // TBD +} + +int xsc_poll_rx_cq(struct xsc_cq *cq, int budget) +{ + // TBD + return 0; +} + +void xsc_eth_dealloc_rx_wqe(struct xsc_rq *rq, u16 ix) +{ + // TBD +} + +bool xsc_eth_post_rx_wqes(struct xsc_rq *rq) +{ + // TBD + return true; +} + diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.c new file mode 100644 index 000000000..9ed67f95b --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.c @@ -0,0 +1,100 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. 
+ */ + +#include "xsc_eth_common.h" +#include "xsc_eth_txrx.h" + +void xsc_cq_notify_hw_rearm(struct xsc_cq *cq) +{ + u32 db_val = 0; + + db_val |= FIELD_PREP(XSC_CQ_DB_NEXT_CID_MASK, cq->wq.cc) | + FIELD_PREP(XSC_CQ_DB_CQ_ID_MASK, cq->xcq.cqn); + + /* ensure doorbell record is visible to device + * before ringing the doorbell + */ + wmb(); + writel(db_val, XSC_REG_ADDR(cq->xdev, cq->xdev->regs.complete_db)); +} + +void xsc_cq_notify_hw(struct xsc_cq *cq) +{ + struct xsc_core_device *xdev = cq->xdev; + u32 db_val = 0; + + db_val |= FIELD_PREP(XSC_CQ_DB_NEXT_CID_MASK, cq->wq.cc) | + FIELD_PREP(XSC_CQ_DB_CQ_ID_MASK, cq->xcq.cqn); + + /* ensure doorbell record is visible to device + * before ringing the doorbell + */ + wmb(); + writel(db_val, XSC_REG_ADDR(xdev, xdev->regs.complete_reg)); +} + +static bool xsc_channel_no_affinity_change(struct xsc_channel *c) +{ + int current_cpu = smp_processor_id(); + + return cpumask_test_cpu(current_cpu, c->aff_mask); +} + +static bool xsc_poll_tx_cq(struct xsc_cq *cq, int napi_budget) +{ + // TBD + return true; +} + +int xsc_eth_napi_poll(struct napi_struct *napi, int budget) +{ + struct xsc_channel *c = container_of(napi, struct xsc_channel, napi); + struct xsc_eth_params *params = &c->adapter->nic_param; + struct xsc_rq *rq = &c->qp.rq[0]; + struct xsc_sq *sq = NULL; + bool busy = false; + int work_done = 0; + int tx_budget = 0; + int i; + + rcu_read_lock(); + + clear_bit(XSC_CHANNEL_NAPI_SCHED, &c->flags); + + tx_budget = params->sq_size >> 2; + for (i = 0; i < c->num_tc; i++) + busy |= xsc_poll_tx_cq(&c->qp.sq[i].cq, tx_budget); + + /* budget=0 means: don't poll rx rings */ + if (likely(budget)) { + work_done = xsc_poll_rx_cq(&rq->cq, budget); + busy |= work_done == budget; + } + + busy |= rq->post_wqes(rq); + + if (busy) { + if (likely(xsc_channel_no_affinity_change(c))) { + rcu_read_unlock(); + return budget; + } + if (budget && work_done == budget) + work_done--; + } + + if (unlikely(!napi_complete_done(napi, work_done))) + goto out; + + for (i = 0; i < c->num_tc; i++) { + sq = &c->qp.sq[i]; + xsc_cq_notify_hw_rearm(&sq->cq); + } + + xsc_cq_notify_hw_rearm(&rq->cq); +out: + rcu_read_unlock(); + return work_done; +} + diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h new file mode 100644 index 000000000..116019a9a --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h @@ -0,0 +1,26 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. 
+ */ + +#ifndef __XSC_RXTX_H +#define __XSC_RXTX_H + +#include "xsc_eth.h" + +void xsc_cq_notify_hw_rearm(struct xsc_cq *cq); +void xsc_cq_notify_hw(struct xsc_cq *cq); +int xsc_eth_napi_poll(struct napi_struct *napi, int budget); +bool xsc_eth_post_rx_wqes(struct xsc_rq *rq); +void xsc_eth_handle_rx_cqe(struct xsc_cqwq *cqwq, + struct xsc_rq *rq, struct xsc_cqe *cqe); +void xsc_eth_dealloc_rx_wqe(struct xsc_rq *rq, u16 ix); +struct sk_buff *xsc_skb_from_cqe_linear(struct xsc_rq *rq, + struct xsc_wqe_frag_info *wi, + u32 cqe_bcnt, u8 has_pph); +struct sk_buff *xsc_skb_from_cqe_nonlinear(struct xsc_rq *rq, + struct xsc_wqe_frag_info *wi, + u32 cqe_bcnt, u8 has_pph); +int xsc_poll_rx_cq(struct xsc_cq *cq, int budget); + +#endif /* XSC_RXTX_H */ diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h index 8f33c78d8..649d7524a 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h @@ -8,7 +8,155 @@ #ifndef __XSC_QUEUE_H #define __XSC_QUEUE_H +#include +#include + #include "common/xsc_core.h" +#include "xsc_eth_wq.h" + +enum { + XSC_SEND_WQE_DS = 16, + XSC_SEND_WQE_BB = 64, +}; + +enum { + XSC_RECV_WQE_DS = 16, + XSC_RECV_WQE_BB = 16, +}; + +enum { + XSC_ETH_RQ_STATE_ENABLED, + XSC_ETH_RQ_STATE_AM, + XSC_ETH_RQ_STATE_CACHE_REDUCE_PENDING, +}; + +#define XSC_SEND_WQEBB_NUM_DS (XSC_SEND_WQE_BB / XSC_SEND_WQE_DS) +#define XSC_LOG_SEND_WQEBB_NUM_DS ilog2(XSC_SEND_WQEBB_NUM_DS) + +#define XSC_RECV_WQEBB_NUM_DS (XSC_RECV_WQE_BB / XSC_RECV_WQE_DS) +#define XSC_LOG_RECV_WQEBB_NUM_DS ilog2(XSC_RECV_WQEBB_NUM_DS) + +/* each ds holds one fragment in skb */ +#define XSC_MAX_RX_FRAGS 4 +#define XSC_RX_FRAG_SZ_ORDER 0 +#define XSC_RX_FRAG_SZ (PAGE_SIZE << XSC_RX_FRAG_SZ_ORDER) +#define DEFAULT_FRAG_SIZE (2048) + +enum { + XSC_ETH_SQ_STATE_ENABLED, + XSC_ETH_SQ_STATE_AM, +}; + +struct xsc_dma_info { + struct page *page; + dma_addr_t addr; +}; + +struct xsc_page_cache { + struct xsc_dma_info *page_cache; + u32 head; + u32 tail; + u32 sz; + u32 resv; +}; + +struct xsc_cq { + /* data path - accessed per cqe */ + struct xsc_cqwq wq; + + /* data path - accessed per napi poll */ + u16 event_ctr; + struct napi_struct *napi; + struct xsc_core_cq xcq; + struct xsc_channel *channel; + + /* control */ + struct xsc_core_device *xdev; + struct xsc_wq_ctrl wq_ctrl; + u8 rx; +} ____cacheline_aligned_in_smp; + +struct xsc_wqe_frag_info { + struct xsc_dma_info *di; + u32 offset; + u8 last_in_page; + u8 is_available; +}; + +struct xsc_rq_frag_info { + int frag_size; + int frag_stride; +}; + +struct xsc_rq_frags_info { + struct xsc_rq_frag_info arr[XSC_MAX_RX_FRAGS]; + u8 num_frags; + u8 log_num_frags; + u8 wqe_bulk; + u8 wqe_bulk_min; + u8 frags_max_num; +}; + +struct xsc_rq; +typedef void (*xsc_fp_handle_rx_cqe)(struct xsc_cqwq *cqwq, struct xsc_rq *rq, + struct xsc_cqe *cqe); +typedef bool (*xsc_fp_post_rx_wqes)(struct xsc_rq *rq); +typedef void (*xsc_fp_dealloc_wqe)(struct xsc_rq *rq, u16 ix); +typedef struct sk_buff * (*xsc_fp_skb_from_cqe)(struct xsc_rq *rq, + struct xsc_wqe_frag_info *wi, + u32 cqe_bcnt, + u8 has_pph); + +struct xsc_rq { + struct xsc_core_qp cqp; + struct { + struct xsc_wq_cyc wq; + struct xsc_wqe_frag_info *frags; + struct xsc_dma_info *di; + struct xsc_rq_frags_info info; + xsc_fp_skb_from_cqe skb_from_cqe; + } wqe; + + struct { + u16 headroom; + u8 map_dir; /* dma map direction */ + } buff; + + struct page_pool *page_pool; + struct xsc_wq_ctrl wq_ctrl; + struct xsc_cq cq; 
+ u32 rqn; + int ix; + + unsigned long state; + struct work_struct recover_work; + + u32 hw_mtu; + u32 frags_sz; + + xsc_fp_handle_rx_cqe handle_rx_cqe; + xsc_fp_post_rx_wqes post_wqes; + xsc_fp_dealloc_wqe dealloc_wqe; + struct xsc_page_cache page_cache; +} ____cacheline_aligned_in_smp; + +enum xsc_dma_map_type { + XSC_DMA_MAP_SINGLE, + XSC_DMA_MAP_PAGE +}; + +struct xsc_sq_dma { + dma_addr_t addr; + u32 size; + enum xsc_dma_map_type type; +}; + +struct xsc_tx_wqe_info { + struct sk_buff *skb; + u32 num_bytes; + u8 num_wqebbs; + u8 num_dma; +}; struct xsc_sq { struct xsc_core_qp cqp; diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile index ad0ecc122..e8c13c6fd 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/Makefile @@ -6,5 +6,5 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc obj-$(CONFIG_YUNSILICON_XSC_PCI) += xsc_pci.o -xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o alloc.o eq.o pci_irq.o adev.o +xsc_pci-y := main.o cmdq.o hw.o qp.o cq.o alloc.o eq.o pci_irq.o adev.o vport.o diff --git a/drivers/net/ethernet/yunsilicon/xsc/pci/vport.c b/drivers/net/ethernet/yunsilicon/xsc/pci/vport.c new file mode 100644 index 000000000..edac8a6bd --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/pci/vport.c @@ -0,0 +1,32 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2021 - 2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. + */ + +#include "common/xsc_core.h" +#include "common/xsc_driver.h" + +u8 xsc_core_query_vport_state(struct xsc_core_device *xdev, u16 vport) +{ + struct xsc_query_vport_state_out out; + struct xsc_query_vport_state_in in; + u32 val = 0; + int err; + + memset(&in, 0, sizeof(in)); + memset(&out, 0, sizeof(out)); + + in.hdr.opcode = cpu_to_be16(XSC_CMD_OP_QUERY_VPORT_STATE); + val |= FIELD_PREP(XSC_QUERY_VPORT_VPORT_NUM_MASK, vport); + if (vport) + val |= XSC_QUERY_VPORT_OTHER_VPORT; + in.data0 = cpu_to_be32(val); + + err = xsc_cmd_exec(xdev, &in, sizeof(in), &out, sizeof(out)); + if (err || out.hdr.status) + pci_err(xdev->pdev, "failed to query vport state, err=%d, status=%d\n", + err, out.hdr.status); + + return out.state; +} +EXPORT_SYMBOL(xsc_core_query_vport_state); From patchwork Mon Feb 24 17:24:42 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xin Tian X-Patchwork-Id: 13988565 X-Patchwork-Delegate: kuba@kernel.org Received: from lf-1-37.ptr.blmpb.com (lf-1-37.ptr.blmpb.com [103.149.242.37]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 07FC7266197 for ; Mon, 24 Feb 2025 17:26:03 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=103.149.242.37 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740417966; cv=none; b=ch/fpaf8hQdOILrchAGiGZRXSSl3kiz9waGAFkzDr3e2piKqDwiYV0fFq96g/RzHdBlA5jQMdnejIXKF7fWPNed7GtLGKpNsh126VkocTyBqQ0oPBq4uGx4g1BIRhygCTzXILJeFqTvYKgIKRswQoyjgGT6aWKG/qkMRgQeXFKQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740417966; c=relaxed/simple; bh=/UnGjIpp6dFAMaIgcp2OtsUgndvGr+cWa1nlE7uXpuA=; h=Cc:From:In-Reply-To:Content-Type:To:Mime-Version:Subject:Date: Message-Id:References; 
b=lH0UfiYCHC+GaR64xlNY63PkKSULtnA7KyG7erjd4SK4j1sc9xDxvGONnGKeGf6+a4OHo++LTtL/ic3Gt2U3lvwxO+1xkPZAO+MwqhF+bguZvFcWr6j9fMTjYqEIDTGYd/KS3wCYkAIOSWxzm172GuX5IZaHX1dLALLooxdH+sg=
ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=yunsilicon.com; spf=pass smtp.mailfrom=yunsilicon.com; dkim=pass (2048-bit key) header.d=yunsilicon.com header.i=@yunsilicon.com header.b=PP86kArF; arc=none smtp.client-ip=103.149.242.37
Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=yunsilicon.com
Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=yunsilicon.com
Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=yunsilicon.com header.i=@yunsilicon.com header.b="PP86kArF"
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	s=feishu2403070942; d=yunsilicon.com; t=1740417884; h=from:subject:
	mime-version:from:date:message-id:subject:to:cc:reply-to:content-type:
	mime-version:in-reply-to:message-id;
	bh=yiNPDXpuuKfHbLLvSmVPKc9X2vDINNrx3x8LKfrebcY=;
	b=PP86kArFM0FXwISbm4pRSzMWRrSV9e2xi++jN0IqtiK7YgvbhxXninoPFfF2iD812oSPRt
	 gSs0EwTui1pRmM1wDjmFHHWky0frFFUADp+UuY6yota368Yrl8S2rmx+Cuhi8FRc1kgELu
	 p0q8gNXvRwkMnDDK4bRueseauFU7nf4WGTlSPBD+wjRhZse+WB5fxN97AMfAPUjby+ePOI
	 3PsoNvIaEjJJ/Py5Fa1YIk6/Tfpezb7Os1c+LvkfzX39wT0aBD2VTPP24qbOAdr1LHEMIh
	 O0RlyFQImIyhKavRa94sD+3HaOqoM42q9/cvdbdAC3DcuDVLIonbDVSiMPFNdg==
Cc: , , , , , , , , , , , ,
From: "Xin Tian"
In-Reply-To: <20250224172416.2455751-1-tianx@yunsilicon.com>
X-Lms-Return-Path:
To:
Precedence: bulk
X-Mailing-List: netdev@vger.kernel.org
List-Id:
List-Subscribe:
List-Unsubscribe:
Mime-Version: 1.0
X-Mailer: git-send-email 2.25.1
Subject: [PATCH net-next v5 12/14] xsc: Add ndo_start_xmit
Date: Tue, 25 Feb 2025 01:24:42 +0800
Message-Id: <20250224172441.2455751-13-tianx@yunsilicon.com>
Received: from ubuntu-liun.yunsilicon.com ([58.34.192.114]) by smtp.feishu.cn with ESMTPS; Tue, 25 Feb 2025 01:24:42 +0800
X-Original-From: Xin Tian
References: <20250224172416.2455751-1-tianx@yunsilicon.com>
X-Patchwork-Delegate: kuba@kernel.org

This patch adds the core data transmission functionality, centered on
the ndo_start_xmit interface. The main steps are:

1. Transmission entry
   The entry point selects the appropriate transmit queue (SQ) and
   verifies hardware readiness before calling xsc_eth_xmit_frame for
   packet transmission.
2. Packet processing
   Supports TCP/UDP GSO and calculates the MSS and IHS; performs SKB
   linearization when necessary and handles checksum offload; maps
   data for DMA using dma_map_single and skb_frag_dma_map.
3. Descriptor generation
   Constructs control (cseg) and data (dseg) segments, including
   setting operation codes, segment counts, and DMA addresses.
4. Hardware notification and queue management
   Notifies hardware through a doorbell register and manages queue
   flow to avoid overloading.
5. Doorbell batching
   Combines small packets using netdev_xmit_more to reduce doorbell
   writes and supports zero-copy transmission for efficiency (see the
   sketch below).
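A minimal sketch of the doorbell-batching pattern from step 5, for
readers unfamiliar with the xmit_more convention. The my_sq structure
and its fields are illustrative stand-ins, not this driver's types;
netdev_xmit_more() and netif_xmit_stopped() are the actual kernel
helpers involved.

#include <linux/netdevice.h>
#include <linux/io.h>

struct my_sq {
	u16 pc;			/* producer counter, bumped per posted WQE */
	void __iomem *db_reg;	/* memory-mapped doorbell register */
	struct netdev_queue *txq;
};

static void my_sq_ring_doorbell(struct my_sq *sq)
{
	/* descriptor writes must be visible before the doorbell write */
	wmb();
	writel(sq->pc, sq->db_reg);
}

static void my_sq_post_wqe(struct my_sq *sq, unsigned int num_wqebbs)
{
	sq->pc += num_wqebbs;

	/* Ring the doorbell only for the last packet of a batch: while
	 * the stack signals xmit_more and the queue is still running,
	 * more packets are guaranteed to follow, so one doorbell write
	 * covers the whole burst.
	 */
	if (!netdev_xmit_more() || netif_xmit_stopped(sq->txq))
		my_sq_ring_doorbell(sq);
}

In the patch below, xsc_txwqe_complete() follows the same shape, with
xsc_sq_notify_hw() performing the wmb()/writel() pair.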
Co-developed-by: Honggang Wei Signed-off-by: Honggang Wei Co-developed-by: Lei Yan Signed-off-by: Lei Yan Signed-off-by: Xin Tian --- .../net/ethernet/yunsilicon/xsc/net/Makefile | 2 +- .../net/ethernet/yunsilicon/xsc/net/main.c | 1 + .../net/ethernet/yunsilicon/xsc/net/xsc_eth.h | 2 + .../yunsilicon/xsc/net/xsc_eth_common.h | 8 + .../ethernet/yunsilicon/xsc/net/xsc_eth_tx.c | 315 ++++++++++++++++++ .../yunsilicon/xsc/net/xsc_eth_txrx.h | 37 ++ .../ethernet/yunsilicon/xsc/net/xsc_queue.h | 7 + 7 files changed, 371 insertions(+), 1 deletion(-) create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_tx.c diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile index 104ef5330..7cfc2aaa2 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile +++ b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile @@ -6,4 +6,4 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc_eth.o -xsc_eth-y := main.o xsc_eth_wq.o xsc_eth_txrx.o xsc_eth_rx.o +xsc_eth-y := main.o xsc_eth_wq.o xsc_eth_txrx.o xsc_eth_tx.o xsc_eth_rx.o diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/main.c b/drivers/net/ethernet/yunsilicon/xsc/net/main.c index 72d91057f..cd9797ed7 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/main.c +++ b/drivers/net/ethernet/yunsilicon/xsc/net/main.c @@ -1699,6 +1699,7 @@ static int xsc_eth_set_hw_mtu(struct xsc_core_device *xdev, static const struct net_device_ops xsc_netdev_ops = { .ndo_open = xsc_eth_open, .ndo_stop = xsc_eth_close, + .ndo_start_xmit = xsc_eth_xmit_start, }; static void xsc_eth_build_nic_netdev(struct xsc_adapter *adapter) diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h index 09af22d92..2e11d1c68 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h @@ -6,6 +6,8 @@ #ifndef __XSC_ETH_H #define __XSC_ETH_H +#include + #include "common/xsc_device.h" #include "xsc_eth_common.h" diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h index f13b0eef4..04f9630a0 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h @@ -202,4 +202,12 @@ struct xsc_eth_channels { u32 rqn_base; }; +union xsc_send_doorbell { + struct{ + s32 next_pid : 16; + u32 qp_num : 15; + }; + u32 send_data; +}; + #endif diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_tx.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_tx.c new file mode 100644 index 000000000..e41ff0d77 --- /dev/null +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_tx.c @@ -0,0 +1,315 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd. + * All rights reserved. 
+ */
+
+#include
+
+#include "xsc_eth.h"
+#include "xsc_eth_txrx.h"
+
+#define XSC_OPCODE_RAW 7
+
+static void xsc_dma_push(struct xsc_sq *sq, dma_addr_t addr, u32 size,
+			 enum xsc_dma_map_type map_type)
+{
+	struct xsc_sq_dma *dma = xsc_dma_get(sq, sq->dma_fifo_pc++);
+
+	dma->addr = addr;
+	dma->size = size;
+	dma->type = map_type;
+}
+
+static void xsc_dma_unmap_wqe_err(struct xsc_sq *sq, u8 num_dma)
+{
+	struct xsc_adapter *adapter = sq->channel->adapter;
+	struct device *dev = adapter->dev;
+	int i;
+
+	for (i = 0; i < num_dma; i++) {
+		struct xsc_sq_dma *last_pushed_dma;
+
+		last_pushed_dma = xsc_dma_get(sq, --sq->dma_fifo_pc);
+		xsc_tx_dma_unmap(dev, last_pushed_dma);
+	}
+}
+
+static void *xsc_sq_fetch_wqe(struct xsc_sq *sq, size_t size, u16 *pi)
+{
+	struct xsc_wq_cyc *wq = &sq->wq;
+	void *wqe;
+
+	/* caution: sq->pc defaults to zero */
+	*pi = xsc_wq_cyc_ctr2ix(wq, sq->pc);
+	wqe = xsc_wq_cyc_get_wqe(wq, *pi);
+	memset(wqe, 0, size);
+
+	return wqe;
+}
+
+static u16 xsc_tx_get_gso_ihs(struct xsc_sq *sq, struct sk_buff *skb)
+{
+	u16 ihs;
+
+	if (skb->encapsulation) {
+		ihs = skb_inner_transport_offset(skb) + inner_tcp_hdrlen(skb);
+	} else {
+		if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4)
+			ihs = skb_transport_offset(skb) + sizeof(struct udphdr);
+		else
+			ihs = skb_transport_offset(skb) + tcp_hdrlen(skb);
+	}
+
+	return ihs;
+}
+
+static void xsc_txwqe_build_cseg_csum(struct xsc_sq *sq,
+				      struct sk_buff *skb,
+				      struct xsc_send_wqe_ctrl_seg *cseg)
+{
+	u32 val = le32_to_cpu(cseg->data0);
+
+	if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
+		if (skb->encapsulation)
+			val |= FIELD_PREP(XSC_WQE_CTRL_SEG_CSUM_EN_MASK,
+					  XSC_ETH_WQE_INNER_AND_OUTER_CSUM);
+		else
+			val |= FIELD_PREP(XSC_WQE_CTRL_SEG_CSUM_EN_MASK,
+					  XSC_ETH_WQE_OUTER_CSUM);
+	} else {
+		val |= FIELD_PREP(XSC_WQE_CTRL_SEG_CSUM_EN_MASK,
+				  XSC_ETH_WQE_NONE_CSUM);
+	}
+	cseg->data0 = cpu_to_le32(val);
+}
+
+static void xsc_txwqe_build_csegs(struct xsc_sq *sq, struct sk_buff *skb,
+				  u16 mss, u16 ihs, u16 headlen,
+				  u8 opcode, u16 ds_cnt, u32 num_bytes,
+				  struct xsc_send_wqe_ctrl_seg *cseg)
+{
+	struct xsc_core_device *xdev = sq->cq.xdev;
+	int send_wqe_ds_num_log;
+	u32 val = 0;
+
+	send_wqe_ds_num_log = ilog2(xdev->caps.send_ds_num);
+	xsc_txwqe_build_cseg_csum(sq, skb, cseg);
+
+	if (mss != 0) {
+		val |= XSC_WQE_CTRL_SEG_HAS_PPH |
+		       XSC_WQE_CTRL_SEG_SO_TYPE |
+		       FIELD_PREP(XSC_WQE_CTRL_SEG_SO_HDR_LEN_MASK, ihs) |
+		       FIELD_PREP(XSC_WQE_CTRL_SEG_SO_DATA_SIZE_MASK, mss);
+		cseg->data2 = cpu_to_le32(val);
+	}
+
+	val = le32_to_cpu(cseg->data0);
+	val |= FIELD_PREP(XSC_WQE_CTRL_SEG_MSG_OPCODE_MASK, opcode) |
+	       FIELD_PREP(XSC_WQE_CTRL_SEG_WQE_ID_MASK,
+			  sq->pc << send_wqe_ds_num_log) |
+	       FIELD_PREP(XSC_WQE_CTRL_SEG_DS_DATA_NUM_MASK,
+			  ds_cnt - XSC_SEND_WQEBB_CTRL_NUM_DS);
+	cseg->data0 = cpu_to_le32(val);
+	cseg->msg_len = cpu_to_le32(num_bytes);
+	cseg->data3 = cpu_to_le32(XSC_WQE_CTRL_SEG_CE);
+}
+
+static int xsc_txwqe_build_dsegs(struct xsc_sq *sq, struct sk_buff *skb,
+				 u16 ihs, u16 headlen,
+				 struct xsc_wqe_data_seg *dseg)
+{
+	struct xsc_adapter *adapter = sq->channel->adapter;
+	struct device *dev = adapter->dev;
+	dma_addr_t dma_addr = 0;
+	u8 num_dma = 0;
+	int i;
+
+	if (headlen) {
+		dma_addr = dma_map_single(dev,
+					  skb->data,
+					  headlen,
+					  DMA_TO_DEVICE);
+		if (unlikely(dma_mapping_error(dev, dma_addr)))
+			goto dma_unmap_wqe_err;
+
+		dseg->va = cpu_to_le64(dma_addr);
+		dseg->mkey = cpu_to_le32(be32_to_cpu(sq->mkey_be));
+		dseg->data0 |=
+			cpu_to_le32(FIELD_PREP(XSC_WQE_DATA_SEG_SEG_LEN_MASK,
+					       headlen));
+
+		xsc_dma_push(sq, dma_addr, headlen, XSC_DMA_MAP_SINGLE);
+		num_dma++;
+		dseg++;
+	}
+
+	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+		int fsz = skb_frag_size(frag);
+
+		dma_addr = skb_frag_dma_map(dev, frag, 0, fsz, DMA_TO_DEVICE);
+		if (unlikely(dma_mapping_error(dev, dma_addr)))
+			goto dma_unmap_wqe_err;
+
+		dseg->va = cpu_to_le64(dma_addr);
+		dseg->mkey = cpu_to_le32(be32_to_cpu(sq->mkey_be));
+		dseg->data0 |=
+			cpu_to_le32(FIELD_PREP(XSC_WQE_DATA_SEG_SEG_LEN_MASK,
+					       fsz));
+
+		xsc_dma_push(sq, dma_addr, fsz, XSC_DMA_MAP_PAGE);
+		num_dma++;
+		dseg++;
+	}
+
+	return num_dma;
+
+dma_unmap_wqe_err:
+	xsc_dma_unmap_wqe_err(sq, num_dma);
+	return -ENOMEM;
+}
+
+static void xsc_sq_notify_hw(struct xsc_wq_cyc *wq, u16 pc,
+			     struct xsc_sq *sq)
+{
+	struct xsc_adapter *adapter = sq->channel->adapter;
+	struct xsc_core_device *xdev = adapter->xdev;
+	union xsc_send_doorbell doorbell_value;
+	int send_ds_num_log;
+
+	send_ds_num_log = ilog2(xdev->caps.send_ds_num);
+	/* convert the wqe index to a ds index */
+	doorbell_value.next_pid = pc << send_ds_num_log;
+	doorbell_value.qp_num = sq->sqn;
+
+	/* Make sure that descriptors are written before
+	 * updating doorbell record and ringing the doorbell
+	 */
+	wmb();
+	writel(doorbell_value.send_data, XSC_REG_ADDR(xdev, xdev->regs.tx_db));
+}
+
+static void xsc_txwqe_complete(struct xsc_sq *sq, struct sk_buff *skb,
+			       u8 opcode, u16 ds_cnt,
+			       u8 num_wqebbs, u32 num_bytes, u8 num_dma,
+			       struct xsc_tx_wqe_info *wi)
+{
+	struct xsc_wq_cyc *wq = &sq->wq;
+
+	wi->num_bytes = num_bytes;
+	wi->num_dma = num_dma;
+	wi->num_wqebbs = num_wqebbs;
+	wi->skb = skb;
+
+	netdev_tx_sent_queue(sq->txq, num_bytes);
+
+	if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP))
+		skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+
+	sq->pc += wi->num_wqebbs;
+
+	if (unlikely(!xsc_wqc_has_room_for(wq, sq->cc, sq->pc, sq->stop_room)))
+		netif_tx_stop_queue(sq->txq);
+
+	if (!netdev_xmit_more() || netif_xmit_stopped(sq->txq))
+		xsc_sq_notify_hw(wq, sq->pc, sq);
+}
+
+static uint32_t xsc_eth_xmit_frame(struct sk_buff *skb,
+				   struct xsc_sq *sq,
+				   struct xsc_tx_wqe *wqe,
+				   u16 pi)
+{
+	struct xsc_core_device *xdev = sq->cq.xdev;
+	struct xsc_send_wqe_ctrl_seg *cseg;
+	struct xsc_wqe_data_seg *dseg;
+	struct xsc_tx_wqe_info *wi;
+	u32 num_bytes;
+	int num_dma;
+	u16 mss, ihs, headlen;
+	u8 num_wqebbs;
+	u16 ds_cnt;
+	u8 opcode;
+
+retry_send:
+	/* Calc ihs and ds cnt, no writes to wqe yet */
+	/* the ctrl ds is subtracted again when ds_data_num is filled in */
+	ds_cnt = XSC_SEND_WQEBB_CTRL_NUM_DS;
+
+	/* on andes hardware, inline is coupled with gso */
+	if (skb_is_gso(skb)) {
+		opcode = XSC_OPCODE_RAW;
+		mss = skb_shinfo(skb)->gso_size;
+		ihs = xsc_tx_get_gso_ihs(sq, skb);
+		num_bytes = skb->len;
+	} else {
+		opcode = XSC_OPCODE_RAW;
+		mss = 0;
+		ihs = 0;
+		num_bytes = skb->len;
+	}
+
+	/* linear data in skb */
+	headlen = skb->len - skb->data_len;
+	ds_cnt += !!headlen;
+	ds_cnt += skb_shinfo(skb)->nr_frags;
+
+	/* Check packet size.
+	 */
+	if (unlikely(mss == 0 && num_bytes > sq->hw_mtu))
+		goto err_drop;
+
+	num_wqebbs = DIV_ROUND_UP(ds_cnt, xdev->caps.send_ds_num);
+	/* if ds_cnt would exceed one wqe, linearize the skb and retry */
+	if (num_wqebbs != 1) {
+		if (skb_linearize(skb))
+			goto err_drop;
+		goto retry_send;
+	}
+
+	/* fill wqe */
+	wi = (struct xsc_tx_wqe_info *)&sq->db.wqe_info[pi];
+	cseg = &wqe->ctrl;
+	dseg = &wqe->data[0];
+
+	if (unlikely(num_bytes == 0))
+		goto err_drop;
+
+	xsc_txwqe_build_csegs(sq, skb, mss, ihs, headlen,
+			      opcode, ds_cnt, num_bytes, cseg);
+
+	/* the inline header is also transferred via dma */
+	num_dma = xsc_txwqe_build_dsegs(sq, skb, ihs, headlen, dseg);
+	if (unlikely(num_dma < 0))
+		goto err_drop;
+
+	xsc_txwqe_complete(sq, skb, opcode, ds_cnt, num_wqebbs, num_bytes,
+			   num_dma, wi);
+
+	return NETDEV_TX_OK;
+
+err_drop:
+	dev_kfree_skb_any(skb);
+
+	return NETDEV_TX_OK;
+}
+
+netdev_tx_t xsc_eth_xmit_start(struct sk_buff *skb, struct net_device *netdev)
+{
+	struct xsc_adapter *adapter = netdev_priv(netdev);
+	int ds_num = adapter->xdev->caps.send_ds_num;
+	struct xsc_tx_wqe *wqe;
+	struct xsc_sq *sq;
+	u16 pi;
+
+	if (!adapter ||
+	    !adapter->xdev ||
+	    adapter->status != XSCALE_ETH_DRIVER_OK)
+		return NETDEV_TX_BUSY;
+
+	sq = adapter->txq2sq[skb_get_queue_mapping(skb)];
+	if (unlikely(!sq))
+		return NETDEV_TX_BUSY;
+
+	wqe = xsc_sq_fetch_wqe(sq, ds_num * XSC_SEND_WQE_DS, &pi);
+
+	return xsc_eth_xmit_frame(skb, sq, wqe, pi);
+}
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h
index 116019a9a..68f3347e4 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h
@@ -6,8 +6,18 @@
 #ifndef __XSC_RXTX_H
 #define __XSC_RXTX_H
 
+#include
+#include
+
 #include "xsc_eth.h"
 
+enum {
+	XSC_ETH_WQE_NONE_CSUM,
+	XSC_ETH_WQE_INNER_CSUM,
+	XSC_ETH_WQE_OUTER_CSUM,
+	XSC_ETH_WQE_INNER_AND_OUTER_CSUM,
+};
+
 void xsc_cq_notify_hw_rearm(struct xsc_cq *cq);
 void xsc_cq_notify_hw(struct xsc_cq *cq);
 int xsc_eth_napi_poll(struct napi_struct *napi, int budget);
@@ -23,4 +33,31 @@ struct sk_buff *xsc_skb_from_cqe_nonlinear(struct xsc_rq *rq,
 					   u32 cqe_bcnt, u8 has_pph);
 int xsc_poll_rx_cq(struct xsc_cq *cq, int budget);
 
+netdev_tx_t xsc_eth_xmit_start(struct sk_buff *skb, struct net_device *netdev);
+
+static inline void xsc_tx_dma_unmap(struct device *dev, struct xsc_sq_dma *dma)
+{
+	switch (dma->type) {
+	case XSC_DMA_MAP_SINGLE:
+		dma_unmap_single(dev, dma->addr, dma->size, DMA_TO_DEVICE);
+		break;
+	case XSC_DMA_MAP_PAGE:
+		dma_unmap_page(dev, dma->addr, dma->size, DMA_TO_DEVICE);
+		break;
+	default:
+		break;
+	}
+}
+
+static inline struct xsc_sq_dma *xsc_dma_get(struct xsc_sq *sq, u32 i)
+{
+	return &sq->db.dma_fifo[i & sq->dma_fifo_mask];
+}
+
+static inline bool xsc_wqc_has_room_for(struct xsc_wq_cyc *wq,
+					u16 cc, u16 pc, u16 n)
+{
+	return (xsc_wq_cyc_ctr2ix(wq, cc - pc) >= n) || (cc == pc);
+}
+
 #endif /* XSC_RXTX_H */
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h
index 649d7524a..623df4a51 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h
@@ -36,6 +36,8 @@ enum {
 #define XSC_RECV_WQEBB_NUM_DS	(XSC_RECV_WQE_BB / XSC_RECV_WQE_DS)
 #define XSC_LOG_RECV_WQEBB_NUM_DS ilog2(XSC_RECV_WQEBB_NUM_DS)
 
+#define XSC_SEND_WQEBB_CTRL_NUM_DS 1
+
 /* each ds holds one fragment in skb */
 #define XSC_MAX_RX_FRAGS	4
 #define XSC_RX_FRAG_SZ_ORDER	0
@@ -158,6 +160,11
@@ struct xsc_tx_wqe_info { u8 num_dma; }; +struct xsc_tx_wqe { + struct xsc_send_wqe_ctrl_seg ctrl; + struct xsc_wqe_data_seg data[]; +}; + struct xsc_sq { struct xsc_core_qp cqp; /* dirtied @completion */ From patchwork Mon Feb 24 17:24:44 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xin Tian X-Patchwork-Id: 13988559 X-Patchwork-Delegate: kuba@kernel.org Received: from va-1-16.ptr.blmpb.com (va-1-16.ptr.blmpb.com [209.127.230.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 020FE264A76 for ; Mon, 24 Feb 2025 17:24:51 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.127.230.16 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740417894; cv=none; b=fgM9Hg15ww6Qwqp60cRZi7BH9b4GMeeBaajuBv+zgQDl1yd+zkJmees1WBofLmxtNCHjJHNQ0NU1DyjLru2Ss9ZT9gIDRnFGMDkVriueg+Im9J2upjqHcWY7/ergeaApja6QWvsqpAnFKcJcvJNVTWQO2drPIjG96Jalifr+dR0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740417894; c=relaxed/simple; bh=hve91B3Ouwpgw03txH+Ovnj5RM1S5uc6OSmbkYwLKFw=; h=Cc:From:Subject:To:Message-Id:Date:In-Reply-To:Content-Type: Mime-Version:References; b=d+5tBuJdVmBTsfWBKnGLJbOT7kCj4TL7f1a/Uq6oB10+D30VTqbn/wkdF0fbIn2bv2uP5QsQ9IhCK9YySHuT0y9F6Sr+AsSKmRAz510jv5GYN5Hl3ZhhuwP/8TZEDvy7ZGbnU9Y46CQyqDGGR7gN4Mi3vCtaV4KV6ZRywF6Kaog= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=yunsilicon.com; spf=pass smtp.mailfrom=yunsilicon.com; dkim=pass (2048-bit key) header.d=yunsilicon.com header.i=@yunsilicon.com header.b=K2P7We4G; arc=none smtp.client-ip=209.127.230.16 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=yunsilicon.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=yunsilicon.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=yunsilicon.com header.i=@yunsilicon.com header.b="K2P7We4G" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; s=feishu2403070942; d=yunsilicon.com; t=1740417886; h=from:subject: mime-version:from:date:message-id:subject:to:cc:reply-to:content-type: mime-version:in-reply-to:message-id; bh=oBN7kGZziwW/2qP9h8A0vrLTVBOgQTnflgb2rmL4VrA=; b=K2P7We4GkY3eYncQlRxVn3CovhKyVBvHeQJe+8YiVWEE3KzmoTLwnIQSmcVzavBpQXD5Z2 hc2y218kFIZ/fd73PREDWvD/Qkoy8fz2Dg4MFcAYlvkan+84wtP9sDpMJ2TiF7OvY4mzJ3 M4+IFoqK1rKsX2QGlA2CvKKLtO2wjpmuwe2zWBqoK/Xvom1LgEThkc7Roq7r3Q1dPbiNpf fsAodgr8vIqC0SMLketmLJd3PnH/RNkkPh8qaGen5eglnYcgijV95vuabaYux7Tc8dVCt1 spt9W15cWQuC02tSDr/dwbbcNnjU7mqXBBvBE5dZOFcVNjd2xg4ihe7P0NDAEw== X-Lms-Return-Path: Cc: , , , , , , , , , , , , From: "Xin Tian" Subject: [PATCH net-next v5 13/14] xsc: Add eth reception data path X-Original-From: Xin Tian To: Message-Id: <20250224172443.2455751-14-tianx@yunsilicon.com> X-Mailer: git-send-email 2.25.1 Date: Tue, 25 Feb 2025 01:24:44 +0800 In-Reply-To: <20250224172416.2455751-1-tianx@yunsilicon.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250224172416.2455751-1-tianx@yunsilicon.com> Received: from ubuntu-liun.yunsilicon.com ([58.34.192.114]) by smtp.feishu.cn with ESMTPS; Tue, 25 Feb 2025 01:24:44 +0800 X-Patchwork-Delegate: kuba@kernel.org rx data path: 1. 
The hardware writes incoming packets into the RQ ring buffer and
   generates an event queue entry.
2. The event handler function (xsc_eth_completion_event in
   xsc_eth_events.c) is triggered and invokes napi_schedule() to
   schedule a softirq.
3. The kernel runs the softirq handler net_rx_action, which calls the
   driver's NAPI poll function (xsc_eth_napi_poll in xsc_eth_txrx.c).
   The driver retrieves CQEs from the completion queue (CQ) via
   xsc_poll_rx_cq.
4. xsc_eth_build_rx_skb constructs an sk_buff structure and submits
   the SKB to the kernel network stack via napi_gro_receive.
5. The driver recycles the RX buffer and notifies the NIC via
   xsc_eth_post_rx_wqes to prepare for new packets.

Co-developed-by: Honggang Wei
Signed-off-by: Honggang Wei
Co-developed-by: Lei Yan
Signed-off-by: Lei Yan
Signed-off-by: Xin Tian
---
 .../ethernet/yunsilicon/xsc/common/xsc_core.h |   5 +
 .../yunsilicon/xsc/net/xsc_eth_common.h       |  28 +
 .../ethernet/yunsilicon/xsc/net/xsc_eth_rx.c  | 570 +++++++++++++++++-
 .../yunsilicon/xsc/net/xsc_eth_txrx.c         |  92 ++-
 .../yunsilicon/xsc/net/xsc_eth_txrx.h         |  28 +
 5 files changed, 711 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
index c5c7608be..10a5c870f 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/common/xsc_core.h
@@ -502,4 +502,9 @@ static inline u8 xsc_get_user_mode(struct xsc_core_device *xdev)
 	return xdev->user_mode;
 }
 
+static inline u8 get_cqe_opcode(struct xsc_cqe *cqe)
+{
+	return FIELD_GET(XSC_CQE_MSG_OPCD_MASK, le32_to_cpu(cqe->data0));
+}
+
 #endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
index 04f9630a0..7d3d751ff 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_common.h
@@ -22,6 +22,8 @@
 #define XSC_SW2HW_RX_PKT_LEN(mtu) \
 	((mtu) + ETH_HLEN + XSC_ETH_RX_MAX_HEAD_ROOM)
 
+#define XSC_RX_MAX_HEAD (256)
+
 #define XSC_QPN_SQN_STUB	1025
 #define XSC_QPN_RQN_STUB	1024
 
@@ -147,6 +149,24 @@ enum channel_flags {
 	XSC_CHANNEL_NAPI_SCHED = 1,
 };
 
+enum xsc_eth_priv_flag {
+	XSC_PFLAG_RX_NO_CSUM_COMPLETE,
+	XSC_PFLAG_SNIFFER,
+	XSC_PFLAG_DROPLESS_RQ,
+	XSC_PFLAG_RX_COPY_BREAK,
+	XSC_NUM_PFLAGS, /* Keep last */
+};
+
+#define XSC_SET_PFLAG(params, pflag, enable) \
+	do { \
+		if (enable) \
+			(params)->pflags |= BIT(pflag); \
+		else \
+			(params)->pflags &= ~(BIT(pflag)); \
+	} while (0)
+
+#define XSC_GET_PFLAG(params, pflag) (!!((params)->pflags & (BIT(pflag))))
+
 struct xsc_eth_params {
 	u16 num_channels;
 	u16 max_num_ch;
@@ -210,4 +230,12 @@ union xsc_send_doorbell {
 	u32 send_data;
 };
 
+union xsc_recv_doorbell {
+	struct {
+		s32 next_pid : 13;
+		u32 qp_num : 15;
+	};
+	u32 recv_data;
+};
+
 #endif
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
index 72f33bb53..b87105c26 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
@@ -5,44 +5,594 @@
 * Copyright (c) 2015-2016, Mellanox Technologies. All rights reserved.
 */
+#include
+#include
+
+#include "common/xsc_pp.h"
+#include "xsc_eth.h"
 #include "xsc_eth_txrx.h"
+#include "xsc_eth_common.h"
+#include "xsc_pph.h"
+
+#define PAGE_REF_ELEV (U16_MAX)
+/* Upper bound on number of packets that share a single page */
+#define PAGE_REF_THRSD (PAGE_SIZE / 64)
+
+static void xsc_rq_notify_hw(struct xsc_rq *rq)
+{
+	struct xsc_core_device *xdev = rq->cq.xdev;
+	union xsc_recv_doorbell doorbell_value;
+	struct xsc_wq_cyc *wq = &rq->wqe.wq;
+	u64 rqwqe_id;
+
+	rqwqe_id = wq->wqe_ctr << (ilog2(xdev->caps.recv_ds_num));
+	/* convert the wqe index to a ds index */
+	doorbell_value.next_pid = rqwqe_id;
+	doorbell_value.qp_num = rq->rqn;
+
+	/* Make sure that descriptors are written before
+	 * updating doorbell record and ringing the doorbell
+	 */
+	wmb();
+	writel(doorbell_value.recv_data, XSC_REG_ADDR(xdev, xdev->regs.rx_db));
+}
+
+static void xsc_skb_set_hash(struct xsc_adapter *adapter,
+			     struct xsc_cqe *cqe,
+			     struct sk_buff *skb)
+{
+	struct xsc_rss_params *rss = &adapter->rss_param;
+	bool l3_hash = false;
+	bool l4_hash = false;
+	u32 hash_field;
+	int ht = 0;
+
+	if (adapter->netdev->features & NETIF_F_RXHASH) {
+		if (skb->protocol == htons(ETH_P_IP)) {
+			hash_field = rss->rx_hash_fields[XSC_TT_IPV4_TCP];
+			if (hash_field & XSC_HASH_FIELD_SEL_SRC_IP ||
+			    hash_field & XSC_HASH_FIELD_SEL_DST_IP)
+				l3_hash = true;
+
+			if (hash_field & XSC_HASH_FIELD_SEL_SPORT ||
+			    hash_field & XSC_HASH_FIELD_SEL_DPORT)
+				l4_hash = true;
+		} else if (skb->protocol == htons(ETH_P_IPV6)) {
+			hash_field = rss->rx_hash_fields[XSC_TT_IPV6_TCP];
+			if (hash_field & XSC_HASH_FIELD_SEL_SRC_IPV6 ||
+			    hash_field & XSC_HASH_FIELD_SEL_DST_IPV6)
+				l3_hash = true;
+
+			if (hash_field & XSC_HASH_FIELD_SEL_SPORT_V6 ||
+			    hash_field & XSC_HASH_FIELD_SEL_DPORT_V6)
+				l4_hash = true;
+		}
+
+		if (l3_hash && l4_hash)
+			ht = PKT_HASH_TYPE_L4;
+		else if (l3_hash)
+			ht = PKT_HASH_TYPE_L3;
+		if (ht)
+			skb_set_hash(skb, le32_to_cpu(cqe->vni), ht);
+	}
+}
+
+static void xsc_handle_csum(struct xsc_cqe *cqe, struct xsc_rq *rq,
+			    struct sk_buff *skb, struct xsc_wqe_frag_info *wi)
+{
+	struct xsc_dma_info *dma_info;
+	struct net_device *netdev;
+	struct epp_pph *hw_pph;
+	struct xsc_channel *c;
+	int offset_from;
+
+	c = rq->cq.channel;
+	netdev = c->adapter->netdev;
+	dma_info = wi->di;
+	offset_from = wi->offset;
+	hw_pph = page_address(dma_info->page) + offset_from;
+
+	if (unlikely((netdev->features & NETIF_F_RXCSUM) == 0))
+		goto csum_none;
+
+	if (unlikely(XSC_GET_EPP2SOC_PPH_ERROR_BITMAP(hw_pph) & PACKET_UNKNOWN))
+		goto csum_none;
+
+	if (XSC_GET_EPP2SOC_PPH_EXT_TUNNEL_TYPE(hw_pph) &&
+	    (!(FIELD_GET(XSC_CQE_CSUM_ERR_MASK, le32_to_cpu(cqe->data0)) &
+	       OUTER_AND_INNER))) {
+		skb->ip_summed = CHECKSUM_UNNECESSARY;
+		skb->csum_level = 1;
+		skb->encapsulation = 1;
+	} else if (XSC_GET_EPP2SOC_PPH_EXT_TUNNEL_TYPE(hw_pph) &&
+		   (!(FIELD_GET(XSC_CQE_CSUM_ERR_MASK,
+				le32_to_cpu(cqe->data0)) &
+		      OUTER_BIT) &&
+		    (FIELD_GET(XSC_CQE_CSUM_ERR_MASK,
+			       le32_to_cpu(cqe->data0)) &
+		     INNER_BIT))) {
+		skb->ip_summed = CHECKSUM_UNNECESSARY;
+		skb->csum_level = 0;
+		skb->encapsulation = 1;
+	} else if (!XSC_GET_EPP2SOC_PPH_EXT_TUNNEL_TYPE(hw_pph) &&
+		   (!(FIELD_GET(XSC_CQE_CSUM_ERR_MASK,
+				le32_to_cpu(cqe->data0)) &
+		      OUTER_BIT))) {
+		skb->ip_summed = CHECKSUM_UNNECESSARY;
+	}
+
+	goto out;
+
+csum_none:
+	skb->csum = 0;
+	skb->ip_summed = CHECKSUM_NONE;
+out:
+	return;
+}
+
+static void xsc_build_rx_skb(struct xsc_cqe *cqe,
+			     u32 cqe_bcnt,
+			     struct xsc_rq *rq,
+			     struct sk_buff *skb,
+			     struct xsc_wqe_frag_info *wi)
+{
+	struct
xsc_adapter *adapter; + struct net_device *netdev; + struct xsc_channel *c; + + c = rq->cq.channel; + adapter = c->adapter; + netdev = c->netdev; + + skb->mac_len = ETH_HLEN; + + skb_record_rx_queue(skb, rq->ix); + xsc_handle_csum(cqe, rq, skb, wi); + + skb->protocol = eth_type_trans(skb, netdev); + xsc_skb_set_hash(adapter, cqe, skb); +} + +static void xsc_complete_rx_cqe(struct xsc_rq *rq, + struct xsc_cqe *cqe, + u32 cqe_bcnt, + struct sk_buff *skb, + struct xsc_wqe_frag_info *wi) +{ + xsc_build_rx_skb(cqe, cqe_bcnt, rq, skb, wi); +} + +static void xsc_add_skb_frag(struct xsc_rq *rq, + struct sk_buff *skb, + struct xsc_dma_info *di, + u32 frag_offset, u32 len, + unsigned int truesize) +{ + struct xsc_channel *c = rq->cq.channel; + struct device *dev = c->adapter->dev; + + dma_sync_single_for_cpu(dev, di->addr + frag_offset, + len, DMA_FROM_DEVICE); + page_ref_inc(di->page); + skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, + di->page, frag_offset, len, truesize); +} + +static void xsc_copy_skb_header(struct device *dev, + struct sk_buff *skb, + struct xsc_dma_info *dma_info, + int offset_from, u32 headlen) +{ + void *from = page_address(dma_info->page) + offset_from; + /* Aligning len to sizeof(long) optimizes memcpy performance */ + unsigned int len = ALIGN(headlen, sizeof(long)); + + dma_sync_single_for_cpu(dev, dma_info->addr + offset_from, len, + DMA_FROM_DEVICE); + skb_copy_to_linear_data(skb, from, len); +} + +static struct sk_buff *xsc_build_linear_skb(struct xsc_rq *rq, void *va, + u32 frag_size, u16 headroom, + u32 cqe_bcnt) +{ + struct sk_buff *skb = build_skb(va, frag_size); + + if (unlikely(!skb)) + return NULL; + + skb_reserve(skb, headroom); + skb_put(skb, cqe_bcnt); + + return skb; +} struct sk_buff *xsc_skb_from_cqe_linear(struct xsc_rq *rq, struct xsc_wqe_frag_info *wi, u32 cqe_bcnt, u8 has_pph) { - // TBD - return NULL; + int pph_len = has_pph ? 
XSC_PPH_HEAD_LEN : 0;
+	u16 rx_headroom = rq->buff.headroom;
+	struct xsc_dma_info *di = wi->di;
+	struct sk_buff *skb;
+	void *va, *data;
+	u32 frag_size;
+
+	va = page_address(di->page) + wi->offset;
+	data = va + rx_headroom + pph_len;
+	frag_size = XSC_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);
+
+	dma_sync_single_range_for_cpu(rq->cq.xdev->device, di->addr, wi->offset,
+				      frag_size, DMA_FROM_DEVICE);
+	prefetchw(va); /* xdp_frame data area */
+	prefetch(data);
+
+	skb = xsc_build_linear_skb(rq, va, frag_size, (rx_headroom + pph_len),
+				   (cqe_bcnt - pph_len));
+	if (unlikely(!skb))
+		return NULL;
+
+	/* queue up for recycling/reuse */
+	page_ref_inc(di->page);
+
+	return skb;
 }
 
 struct sk_buff *xsc_skb_from_cqe_nonlinear(struct xsc_rq *rq,
 					   struct xsc_wqe_frag_info *wi,
 					   u32 cqe_bcnt, u8 has_pph)
 {
-	// TBD
-	return NULL;
+	struct xsc_rq_frag_info *frag_info = &rq->wqe.info.arr[0];
+	u16 headlen = min_t(u32, XSC_RX_MAX_HEAD, cqe_bcnt);
+	struct xsc_wqe_frag_info *head_wi = wi;
+	struct xsc_wqe_frag_info *rx_wi = wi;
+	u16 head_offset = head_wi->offset;
+	u16 byte_cnt = cqe_bcnt - headlen;
+	u16 frag_consumed_bytes = 0;
+	u16 frag_headlen = headlen;
+	struct net_device *netdev;
+	struct xsc_channel *c;
+	struct sk_buff *skb;
+	struct device *dev;
+	u8 fragcnt = 0;
+	int i = 0;
+
+	c = rq->cq.channel;
+	dev = c->adapter->dev;
+	netdev = c->adapter->netdev;
+
+	skb = napi_alloc_skb(rq->cq.napi, ALIGN(XSC_RX_MAX_HEAD, sizeof(long)));
+	if (unlikely(!skb))
+		return NULL;
+
+	prefetchw(skb->data);
+
+	if (likely(has_pph)) {
+		headlen = min_t(u32, XSC_RX_MAX_HEAD,
+				(cqe_bcnt - XSC_PPH_HEAD_LEN));
+		frag_headlen = headlen + XSC_PPH_HEAD_LEN;
+		byte_cnt = cqe_bcnt - headlen - XSC_PPH_HEAD_LEN;
+		head_offset += XSC_PPH_HEAD_LEN;
+	}
+
+	if (byte_cnt == 0 &&
+	    (XSC_GET_PFLAG(&c->adapter->nic_param, XSC_PFLAG_RX_COPY_BREAK))) {
+		for (i = 0; i < rq->wqe.info.num_frags; i++, wi++)
+			wi->is_available = 1;
+		goto ret;
+	}
+
+	for (i = 0; i < rq->wqe.info.num_frags; i++, rx_wi++)
+		rx_wi->is_available = 0;
+
+	while (byte_cnt) {
+		/* consume as much of this fragment as the payload needs */
+		frag_consumed_bytes =
+			min_t(u16, frag_info->frag_size - frag_headlen,
+			      byte_cnt);
+
+		xsc_add_skb_frag(rq, skb, wi->di, wi->offset + frag_headlen,
+				 frag_consumed_bytes, frag_info->frag_stride);
+		byte_cnt -= frag_consumed_bytes;
+
+		/* drop bytes that exceed the last fragment, to avoid
+		 * reading past the wqe
+		 */
+		frag_headlen = 0;
+		fragcnt++;
+		if (fragcnt == rq->wqe.info.num_frags) {
+			if (byte_cnt) {
+				netdev_warn(netdev,
+					    "large packet reached the maximum number of recv wqe frags\n");
+				netdev_warn(netdev,
+					    "%u bytes dropped: frag_num=%d, headlen=%d, cqe_cnt=%d, frag0_bytes=%d, frag_size=%d\n",
+					    byte_cnt, fragcnt,
+					    headlen, cqe_bcnt,
+					    frag_consumed_bytes,
+					    frag_info->frag_size);
+			}
+			break;
+		}
+
+		frag_info++;
+		wi++;
+	}
+
+ret:
+	/* copy header */
+	xsc_copy_skb_header(dev, skb, head_wi->di, head_offset, headlen);
+
+	/* skb linear part was allocated with headlen and aligned to long */
+	skb->tail += headlen;
+	skb->len += headlen;
+
+	return skb;
+}
+
+static void xsc_page_dma_unmap(struct xsc_rq *rq, struct xsc_dma_info *dma_info)
+{
+	struct xsc_channel *c = rq->cq.channel;
+	struct device *dev = c->adapter->dev;
+
+	dma_unmap_page(dev, dma_info->addr, XSC_RX_FRAG_SZ, rq->buff.map_dir);
+}
+
+static void xsc_page_release_dynamic(struct xsc_rq *rq,
+				     struct xsc_dma_info *dma_info,
+				     bool recycle)
+{
+	xsc_page_dma_unmap(rq, dma_info);
+	page_pool_recycle_direct(rq->page_pool, dma_info->page);
+}
+
+static void xsc_put_rx_frag(struct xsc_rq *rq,
+
struct xsc_wqe_frag_info *frag, bool recycle) +{ + if (frag->last_in_page) + xsc_page_release_dynamic(rq, frag->di, recycle); +} + +static struct xsc_wqe_frag_info *get_frag(struct xsc_rq *rq, u16 ix) +{ + return &rq->wqe.frags[ix << rq->wqe.info.log_num_frags]; +} + +static void xsc_free_rx_wqe(struct xsc_rq *rq, + struct xsc_wqe_frag_info *wi, bool recycle) +{ + int i; + + for (i = 0; i < rq->wqe.info.num_frags; i++, wi++) { + if (wi->is_available && recycle) + continue; + xsc_put_rx_frag(rq, wi, recycle); + } +} + +static void xsc_dump_error_rqcqe(struct xsc_rq *rq, + struct xsc_cqe *cqe) +{ + struct net_device *netdev; + struct xsc_channel *c; + u32 ci; + + c = rq->cq.channel; + netdev = c->adapter->netdev; + ci = xsc_cqwq_get_ci(&rq->cq.wq); + + net_err_ratelimited("Error cqe on dev=%s, cqn=%d, ci=%d, rqn=%d, qpn=%ld, error_code=0x%x\n", + netdev->name, rq->cq.xcq.cqn, ci, + rq->rqn, + FIELD_GET(XSC_CQE_QP_ID_MASK, + le32_to_cpu(cqe->data0)), + get_cqe_opcode(cqe)); } void xsc_eth_handle_rx_cqe(struct xsc_cqwq *cqwq, struct xsc_rq *rq, struct xsc_cqe *cqe) { - // TBD + struct xsc_channel *c = rq->cq.channel; + struct xsc_wq_cyc *wq = &rq->wqe.wq; + u8 cqe_opcode = get_cqe_opcode(cqe); + struct xsc_wqe_frag_info *wi; + struct sk_buff *skb; + u32 cqe_bcnt; + u16 ci; + + ci = xsc_wq_cyc_ctr2ix(wq, cqwq->cc); + wi = get_frag(rq, ci); + if (unlikely(cqe_opcode & BIT(7))) { + xsc_dump_error_rqcqe(rq, cqe); + goto free_wqe; + } + + cqe_bcnt = le32_to_cpu(cqe->msg_len); + if ((le32_to_cpu(cqe->data0) & XSC_CQE_HAS_PPH) && + cqe_bcnt <= XSC_PPH_HEAD_LEN) + goto free_wqe; + + if (unlikely(cqe_bcnt > rq->frags_sz)) { + if (!XSC_GET_PFLAG(&c->adapter->nic_param, + XSC_PFLAG_DROPLESS_RQ)) + goto free_wqe; + } + + cqe_bcnt = min_t(u32, cqe_bcnt, rq->frags_sz); + skb = rq->wqe.skb_from_cqe(rq, wi, cqe_bcnt, + !!(le32_to_cpu(cqe->data0) & + XSC_CQE_HAS_PPH)); + if (!skb) + goto free_wqe; + + xsc_complete_rx_cqe(rq, cqe, + (le32_to_cpu(cqe->data0) & XSC_CQE_HAS_PPH) ? + cqe_bcnt - XSC_PPH_HEAD_LEN : cqe_bcnt, + skb, wi); + + napi_gro_receive(rq->cq.napi, skb); + +free_wqe: + xsc_free_rx_wqe(rq, wi, true); + xsc_wq_cyc_pop(wq); } int xsc_poll_rx_cq(struct xsc_cq *cq, int budget) { - // TBD + struct xsc_rq *rq = container_of(cq, struct xsc_rq, cq); + struct xsc_cqwq *cqwq = &cq->wq; + struct xsc_cqe *cqe; + int work_done = 0; + + if (!test_bit(XSC_ETH_RQ_STATE_ENABLED, &rq->state)) + return 0; + + while ((work_done < budget) && (cqe = xsc_cqwq_get_cqe(cqwq))) { + rq->handle_rx_cqe(cqwq, rq, cqe); + ++work_done; + + xsc_cqwq_pop(cqwq); + } + + if (!work_done) + goto out; + + xsc_cq_notify_hw(cq); + /* ensure cq space is freed before enabling more cqes */ + wmb(); + +out: + + return work_done; +} + +static int xsc_page_alloc_mapped(struct xsc_rq *rq, + struct xsc_dma_info *dma_info) +{ + struct xsc_channel *c = rq->cq.channel; + struct device *dev = c->adapter->dev; + + dma_info->page = page_pool_dev_alloc_pages(rq->page_pool); + if (unlikely(!dma_info->page)) + return -ENOMEM; + + dma_info->addr = dma_map_page(dev, dma_info->page, 0, + XSC_RX_FRAG_SZ, rq->buff.map_dir); + if (unlikely(dma_mapping_error(dev, dma_info->addr))) { + page_pool_recycle_direct(rq->page_pool, dma_info->page); + dma_info->page = NULL; + return -ENOMEM; + } + + return 0; +} + +static int xsc_get_rx_frag(struct xsc_rq *rq, + struct xsc_wqe_frag_info *frag) +{ + int err = 0; + + if (!frag->offset && !frag->is_available) + /* On first frag (offset == 0), replenish page (dma_info + * actually). 
Other frags that point to the same dma_info + * (with a different offset) should just use the new one + * without replenishing again by themselves. + */ + err = xsc_page_alloc_mapped(rq, frag->di); + + return err; +} + +static int xsc_alloc_rx_wqe(struct xsc_rq *rq, + struct xsc_eth_rx_wqe_cyc *wqe, + u16 ix) +{ + struct xsc_wqe_frag_info *frag = get_frag(rq, ix); + u64 addr; + int err; + int i; + + for (i = 0; i < rq->wqe.info.num_frags; i++, frag++) { + err = xsc_get_rx_frag(rq, frag); + if (unlikely(err)) + goto free_frags; + + addr = frag->di->addr + frag->offset + rq->buff.headroom; + wqe->data[i].va = cpu_to_le64(addr); + } + return 0; + +free_frags: + while (--i >= 0) + xsc_put_rx_frag(rq, --frag, true); + + return err; } void xsc_eth_dealloc_rx_wqe(struct xsc_rq *rq, u16 ix) { - // TBD + struct xsc_wqe_frag_info *wi = get_frag(rq, ix); + + xsc_free_rx_wqe(rq, wi, false); } -bool xsc_eth_post_rx_wqes(struct xsc_rq *rq) +static int xsc_alloc_rx_wqes(struct xsc_rq *rq, u16 ix, u8 wqe_bulk) { - // TBD - return true; + struct xsc_wq_cyc *wq = &rq->wqe.wq; + struct xsc_eth_rx_wqe_cyc *wqe; + int err; + int idx; + int i; + + for (i = 0; i < wqe_bulk; i++) { + idx = xsc_wq_cyc_ctr2ix(wq, (ix + i)); + wqe = xsc_wq_cyc_get_wqe(wq, idx); + + err = xsc_alloc_rx_wqe(rq, wqe, idx); + if (unlikely(err)) + goto free_wqes; + } + + return 0; + +free_wqes: + while (--i >= 0) + xsc_eth_dealloc_rx_wqe(rq, ix + i); + + return err; } +bool xsc_eth_post_rx_wqes(struct xsc_rq *rq) +{ + struct xsc_wq_cyc *wq = &rq->wqe.wq; + u8 wqe_bulk, wqe_bulk_min; + int alloc; + u16 head; + int err; + + wqe_bulk = rq->wqe.info.wqe_bulk; + wqe_bulk_min = rq->wqe.info.wqe_bulk_min; + if (xsc_wq_cyc_missing(wq) < wqe_bulk) + return false; + + do { + head = xsc_wq_cyc_get_head(wq); + + alloc = min_t(int, wqe_bulk, xsc_wq_cyc_missing(wq)); + if (alloc < wqe_bulk && alloc >= wqe_bulk_min) + alloc = alloc & 0xfffffffe; + + if (alloc > 0) { + err = xsc_alloc_rx_wqes(rq, head, alloc); + if (unlikely(err)) + break; + + xsc_wq_cyc_push_n(wq, alloc); + } + } while (xsc_wq_cyc_missing(wq) >= wqe_bulk_min); + + dma_wmb(); + + /* ensure wqes are visible to device before updating doorbell record */ + xsc_rq_notify_hw(rq); + + return !!err; +} diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.c index 9ed67f95b..b3e09768c 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.c +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.c @@ -42,10 +42,98 @@ static bool xsc_channel_no_affinity_change(struct xsc_channel *c) return cpumask_test_cpu(current_cpu, c->aff_mask); } +static void xsc_dump_error_sqcqe(struct xsc_sq *sq, + struct xsc_cqe *cqe) +{ + struct net_device *netdev = sq->channel->netdev; + u32 ci = xsc_cqwq_get_ci(&sq->cq.wq); + + net_err_ratelimited("Err cqe on dev %s cqn=0x%x ci=0x%x sqn=0x%x err_code=0x%x qpid=0x%lx\n", + netdev->name, sq->cq.xcq.cqn, ci, + sq->sqn, get_cqe_opcode(cqe), + FIELD_GET(XSC_CQE_QP_ID_MASK, + le32_to_cpu(cqe->data0))); +} + static bool xsc_poll_tx_cq(struct xsc_cq *cq, int napi_budget) { - // TBD - return true; + struct xsc_adapter *adapter; + struct xsc_cqe *cqe; + struct device *dev; + struct xsc_sq *sq; + u32 dma_fifo_cc; + u32 nbytes = 0; + u16 npkts = 0; + int i = 0; + u16 sqcc; + + sq = container_of(cq, struct xsc_sq, cq); + if (!test_bit(XSC_ETH_SQ_STATE_ENABLED, &sq->state)) + return false; + + adapter = sq->channel->adapter; + dev = adapter->dev; + + cqe = xsc_cqwq_get_cqe(&cq->wq); + if (!cqe) + goto 
out;
+
+	if (unlikely(get_cqe_opcode(cqe) & BIT(7))) {
+		xsc_dump_error_sqcqe(sq, cqe);
+		return false;
+	}
+
+	sqcc = sq->cc;
+
+	/* avoid dirtying sq cache line every cqe */
+	dma_fifo_cc = sq->dma_fifo_cc;
+	i = 0;
+	do {
+		struct xsc_tx_wqe_info *wi;
+		struct sk_buff *skb;
+		int j;
+		u16 ci;
+
+		xsc_cqwq_pop(&cq->wq);
+
+		ci = xsc_wq_cyc_ctr2ix(&sq->wq, sqcc);
+		wi = &sq->db.wqe_info[ci];
+		skb = wi->skb;
+
+		/* a cqe without an skb has been seen in real tests (not
+		 * caused by a nop as in other drivers), so skip it
+		 */
+		if (unlikely(!skb))
+			continue;
+
+		for (j = 0; j < wi->num_dma; j++) {
+			struct xsc_sq_dma *dma = xsc_dma_get(sq, dma_fifo_cc++);
+
+			xsc_tx_dma_unmap(dev, dma);
+		}
+
+		npkts++;
+		nbytes += wi->num_bytes;
+		sqcc += wi->num_wqebbs;
+		napi_consume_skb(skb, 0);
+
+	} while ((++i <= napi_budget) && (cqe = xsc_cqwq_get_cqe(&cq->wq)));
+
+	xsc_cq_notify_hw(cq);
+
+	/* ensure cq space is freed before enabling more cqes */
+	wmb();
+
+	sq->dma_fifo_cc = dma_fifo_cc;
+	sq->cc = sqcc;
+
+	netdev_tx_completed_queue(sq->txq, npkts, nbytes);
+
+	if (netif_tx_queue_stopped(sq->txq) &&
+	    xsc_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, sq->stop_room)) {
+		netif_tx_wake_queue(sq->txq);
+	}
+
+out:
+	return (i == napi_budget);
 }
 
 int xsc_eth_napi_poll(struct napi_struct *napi, int budget)
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h
index 68f3347e4..d0d303efa 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_txrx.h
@@ -60,4 +60,32 @@ static inline bool xsc_wqc_has_room_for(struct xsc_wq_cyc *wq,
 	return (xsc_wq_cyc_ctr2ix(wq, cc - pc) >= n) || (cc == pc);
 }
 
+static inline struct xsc_cqe *xsc_cqwq_get_cqe_buff(struct xsc_cqwq *wq, u32 ix)
+{
+	struct xsc_cqe *cqe = xsc_frag_buf_get_wqe(&wq->fbc, ix);
+
+	return cqe;
+}
+
+static inline struct xsc_cqe *xsc_cqwq_get_cqe(struct xsc_cqwq *wq)
+{
+	u32 ci = xsc_cqwq_get_ci(wq);
+	u8 cqe_ownership_bit;
+	struct xsc_cqe *cqe;
+	u8 sw_ownership_val;
+
+	cqe = xsc_cqwq_get_cqe_buff(wq, ci);
+
+	cqe_ownership_bit = !!(le32_to_cpu(cqe->data1) & XSC_CQE_OWNER);
+	sw_ownership_val = xsc_cqwq_get_wrap_cnt(wq) & 1;
+
+	if (cqe_ownership_bit != sw_ownership_val)
+		return NULL;
+
+	/* ensure cqe content is read after cqe ownership bit */
+	dma_rmb();
+
+	return cqe;
+}
+
 #endif /* XSC_RXTX_H */

From patchwork Mon Feb 24 17:24:46 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Xin Tian
X-Patchwork-Id: 13988564
X-Patchwork-Delegate: kuba@kernel.org
Received: from lf-2-14.ptr.blmpb.com (lf-2-14.ptr.blmpb.com [101.36.218.14]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 206E8264F81 for ; Mon, 24 Feb 2025 17:26:01 +0000 (UTC)
Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=101.36.218.14
ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740417964; cv=none; b=m4/aD5y2oI52y9Q49epxn2nWbGSVf1GRMAZ7QrntJT7COne3nviwIy6Nub0idmu+hcY9oIpkWIUTRqA7GjCiNjf7g0o5H/cD/87wRJ3K4rIW0AXCqu7W9DzMOy+cOk1Bd00tXnbFvqWNuJKWJzBFkxKsn7uj3aPFOAys2Lb/v9k=
ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740417964; c=relaxed/simple; bh=FuUEz1FEo4YK61WCDTMi9ukv8pBG3tZ9R3acS51ncIQ=; h=Date:References:In-Reply-To:Cc:Message-Id:Subject:Mime-Version: Content-Type:To:From;
b=UaFS3whpYwkpsNRdbOBQpNe2dMxD6XEKOcvWf24t5+XQxsUPU+o5qOchoXvoUzsm310Hy8LYVS9jFLLuQuFMv4LAFS8WyrVFoq7bFx6AbH1s4lMu8MsVBoW6tT6lmrzsQgoB8yGwzfb3SgvbuTiWBy3ZIivcnVVT7EECSsjG20A= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=yunsilicon.com; spf=pass smtp.mailfrom=yunsilicon.com; dkim=pass (2048-bit key) header.d=yunsilicon.com header.i=@yunsilicon.com header.b=IGMJXgo4; arc=none smtp.client-ip=101.36.218.14 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=yunsilicon.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=yunsilicon.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=yunsilicon.com header.i=@yunsilicon.com header.b="IGMJXgo4" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; s=feishu2403070942; d=yunsilicon.com; t=1740417888; h=from:subject: mime-version:from:date:message-id:subject:to:cc:reply-to:content-type: mime-version:in-reply-to:message-id; bh=692La8fOVdGmd1+VxH9dakL0Tp+wnjex879VegRBH1c=; b=IGMJXgo4yX6tLUvA1c4AOalPqhaeUnAVimpMy4VQD3ygSPhVmonnkafJvA71cfJA477ANP yEt8LyC1j1nLQYNeX2mYsPtu8KgNlN64Y36uoLt7QN/S/lwwYKC4q2Dicf1Ey41AMVuyfS 4bqNIs7lRX+Wgk30SpkT1uD43L1n0R6msuUkJm1ZJksmyxRz8LavpH9HDrjn2qyG2dwse9 AyYFSrU5yKfQYHq3UbveSnsjma3GecdQMgm7Xc3vvHqnqyl7UODTyKJF7P7yyzMHa3Vbvw 1xOI9e2+S4LLouKaEOesxO/EBCOLqkV9WR6Jkjg8XzstZXoye1Km36DB9mD02A== Date: Tue, 25 Feb 2025 01:24:46 +0800 Received: from ubuntu-liun.yunsilicon.com ([58.34.192.114]) by smtp.feishu.cn with ESMTPS; Tue, 25 Feb 2025 01:24:46 +0800 X-Lms-Return-Path: References: <20250224172416.2455751-1-tianx@yunsilicon.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20250224172416.2455751-1-tianx@yunsilicon.com> Cc: , , , , , , , , , , , , Message-Id: <20250224172445.2455751-15-tianx@yunsilicon.com> Subject: [PATCH net-next v5 14/14] xsc: add ndo_get_stats64 Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 X-Original-From: Xin Tian To: From: "Xin Tian" X-Patchwork-Delegate: kuba@kernel.org Added basic network interface statistics. 
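The counters are the usual per-queue software statistics: each RQ/SQ
updates its own u64 counters on the hot path, and ndo_get_stats64
folds them into the rtnl_link_stats64 snapshot on demand. A minimal
sketch of this pattern follows; my_priv and my_ring_stats are
hypothetical stand-ins, not this driver's types.

#include <linux/netdevice.h>

struct my_ring_stats {
	u64 packets;
	u64 bytes;
};

struct my_priv {
	struct my_ring_stats *rx;	/* one entry per RX queue */
	unsigned int num_rx;
};

static void my_get_stats64(struct net_device *dev,
			   struct rtnl_link_stats64 *s)
{
	struct my_priv *priv = netdev_priv(dev);
	unsigned int i;

	/* fold per-queue counters into the aggregate snapshot */
	for (i = 0; i < priv->num_rx; i++) {
		s->rx_packets += priv->rx[i].packets;
		s->rx_bytes += priv->rx[i].bytes;
	}
}

static const struct net_device_ops my_netdev_ops = {
	.ndo_get_stats64 = my_get_stats64,
};

In the patch below, xsc_eth_fold_sw_stats64() implements this loop over
the per-channel stats, including the per-TC SQ counters.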
Co-developed-by: Honggang Wei Signed-off-by: Honggang Wei Co-developed-by: Lei Yan Signed-off-by: Lei Yan Signed-off-by: Xin Tian --- .../net/ethernet/yunsilicon/xsc/net/Makefile | 2 +- .../net/ethernet/yunsilicon/xsc/net/main.c | 29 +++++++++++- .../net/ethernet/yunsilicon/xsc/net/xsc_eth.h | 3 ++ .../ethernet/yunsilicon/xsc/net/xsc_eth_rx.c | 4 ++ .../yunsilicon/xsc/net/xsc_eth_stats.c | 46 +++++++++++++++++++ .../yunsilicon/xsc/net/xsc_eth_stats.h | 34 ++++++++++++++ .../ethernet/yunsilicon/xsc/net/xsc_eth_tx.c | 5 ++ .../ethernet/yunsilicon/xsc/net/xsc_queue.h | 2 + 8 files changed, 122 insertions(+), 3 deletions(-) create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.c create mode 100644 drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.h diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile index 7cfc2aaa2..e1cfa3cdf 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/Makefile +++ b/drivers/net/ethernet/yunsilicon/xsc/net/Makefile @@ -6,4 +6,4 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/yunsilicon/xsc obj-$(CONFIG_YUNSILICON_XSC_ETH) += xsc_eth.o -xsc_eth-y := main.o xsc_eth_wq.o xsc_eth_txrx.o xsc_eth_tx.o xsc_eth_rx.o +xsc_eth-y := main.o xsc_eth_wq.o xsc_eth_txrx.o xsc_eth_tx.o xsc_eth_rx.o xsc_eth_stats.o diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/main.c b/drivers/net/ethernet/yunsilicon/xsc/net/main.c index cd9797ed7..43db3e7b1 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/main.c +++ b/drivers/net/ethernet/yunsilicon/xsc/net/main.c @@ -560,15 +560,20 @@ static int xsc_eth_open_qp_sq(struct xsc_channel *c, u8 ele_log_size = psq_param->sq_attr.ele_log_size; u8 q_log_size = psq_param->sq_attr.q_log_size; struct xsc_modify_raw_qp_mbox_in *modify_in; + struct xsc_channel_stats *channel_stats; struct xsc_create_qp_mbox_in *in; struct xsc_core_device *xdev; struct xsc_adapter *adapter; + struct xsc_stats *stats; unsigned int hw_npages; int inlen; int ret; adapter = c->adapter; xdev = adapter->xdev; + stats = adapter->stats; + channel_stats = &stats->channel_stats[c->chl_idx]; + psq->stats = &channel_stats->sq[sq_idx]; psq_param->wq.db_numa_node = cpu_to_node(c->cpu); ret = xsc_eth_wq_cyc_create(xdev, &psq_param->wq, @@ -869,11 +874,16 @@ static int xsc_eth_alloc_rq(struct xsc_channel *c, u8 q_log_size = prq_param->rq_attr.q_log_size; struct page_pool_params pagepool_params = {0}; struct xsc_adapter *adapter = c->adapter; + struct xsc_channel_stats *channel_stats; u32 pool_size = 1 << q_log_size; + struct xsc_stats *stats; int ret = 0; u32 wq_sz; int i, f; + stats = c->adapter->stats; + channel_stats = &stats->channel_stats[c->chl_idx]; + prq->stats = &channel_stats->rq; prq_param->wq.db_numa_node = cpu_to_node(c->cpu); ret = xsc_eth_wq_cyc_create(c->adapter->xdev, &prq_param->wq, @@ -1669,6 +1679,14 @@ static int xsc_eth_close(struct net_device *netdev) return ret; } +static void xsc_eth_get_stats(struct net_device *netdev, + struct rtnl_link_stats64 *stats) +{ + struct xsc_adapter *adapter = netdev_priv(netdev); + + xsc_eth_fold_sw_stats64(adapter, stats); +} + static int xsc_eth_set_hw_mtu(struct xsc_core_device *xdev, u16 mtu, u16 rx_buf_sz) { @@ -1700,6 +1718,7 @@ static const struct net_device_ops xsc_netdev_ops = { .ndo_open = xsc_eth_open, .ndo_stop = xsc_eth_close, .ndo_start_xmit = xsc_eth_xmit_start, + .ndo_get_stats64 = xsc_eth_get_stats, }; static void xsc_eth_build_nic_netdev(struct xsc_adapter *adapter) @@ -1911,14 +1930,20 @@ static int xsc_eth_probe(struct 
auxiliary_device *adev,
 		goto err_nic_cleanup;
 	}
 
+	adapter->stats = kvzalloc(sizeof(*adapter->stats), GFP_KERNEL);
+	if (!adapter->stats)
+		goto err_detach;
+
 	err = register_netdev(netdev);
 	if (err) {
 		netdev_err(netdev, "register_netdev failed, err=%d\n", err);
-		goto err_detach;
+		goto err_free_stats;
 	}
 
 	return 0;
 
+err_free_stats:
+	kvfree(adapter->stats);
 err_detach:
 	xsc_eth_detach(xdev, adapter);
 err_nic_cleanup:
@@ -1945,7 +1970,7 @@ static void xsc_eth_remove(struct auxiliary_device *adev)
 	}
 
 	unregister_netdev(adapter->netdev);
-
+	kvfree(adapter->stats);
 	free_netdev(adapter->netdev);
 
 	xdev->eth_priv = NULL;
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
index 2e11d1c68..4ffa81f0c 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth.h
@@ -10,6 +10,7 @@
 
 #include "common/xsc_device.h"
 #include "xsc_eth_common.h"
+#include "xsc_eth_stats.h"
 
 #define XSC_INVALID_LKEY	0x100
 
@@ -49,6 +50,8 @@ struct xsc_adapter {
 
 	u32 status;
 	struct mutex status_lock; // protect status
+
+	struct xsc_stats *stats;
 };
 
 #endif /* __XSC_ETH_H */
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
index b87105c26..9e55cb763 100644
--- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
@@ -159,6 +159,10 @@ static void xsc_complete_rx_cqe(struct xsc_rq *rq,
 				struct sk_buff *skb,
 				struct xsc_wqe_frag_info *wi)
 {
+	struct xsc_rq_stats *stats = rq->stats;
+
+	stats->packets++;
+	stats->bytes += cqe_bcnt;
 	xsc_build_rx_skb(cqe, cqe_bcnt, rq, skb, wi);
 }
 
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.c
new file mode 100644
index 000000000..38bab39b9
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.c
@@ -0,0 +1,46 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */
+
+#include "xsc_eth_stats.h"
+#include "xsc_eth.h"
+
+static int xsc_get_netdev_max_channels(struct xsc_adapter *adapter)
+{
+	struct net_device *netdev = adapter->netdev;
+
+	return min_t(unsigned int, netdev->num_rx_queues,
+		     netdev->num_tx_queues);
+}
+
+static int xsc_get_netdev_max_tc(struct xsc_adapter *adapter)
+{
+	return adapter->nic_param.num_tc;
+}
+
+void xsc_eth_fold_sw_stats64(struct xsc_adapter *adapter,
+			     struct rtnl_link_stats64 *s)
+{
+	int i, j;
+
+	for (i = 0; i < xsc_get_netdev_max_channels(adapter); i++) {
+		struct xsc_channel_stats *channel_stats;
+		struct xsc_rq_stats *rq_stats;
+
+		channel_stats = &adapter->stats->channel_stats[i];
+		rq_stats = &channel_stats->rq;
+
+		s->rx_packets += rq_stats->packets;
+		s->rx_bytes += rq_stats->bytes;
+
+		for (j = 0; j < xsc_get_netdev_max_tc(adapter); j++) {
+			struct xsc_sq_stats *sq_stats = &channel_stats->sq[j];
+
+			s->tx_packets += sq_stats->packets;
+			s->tx_bytes += sq_stats->bytes;
+			s->tx_dropped += sq_stats->dropped;
+		}
+	}
+}
+
diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.h
new file mode 100644
index 000000000..1828f4608
--- /dev/null
+++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_stats.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021-2025, Shanghai Yunsilicon Technology Co., Ltd.
+ * All rights reserved.
+ */ + +#ifndef __XSC_EN_STATS_H +#define __XSC_EN_STATS_H + +#include "xsc_eth_common.h" + +struct xsc_rq_stats { + u64 packets; + u64 bytes; +}; + +struct xsc_sq_stats { + u64 packets; + u64 bytes; + u64 dropped; +}; + +struct xsc_channel_stats { + struct xsc_sq_stats sq[XSC_MAX_NUM_TC]; + struct xsc_rq_stats rq; +} ____cacheline_aligned_in_smp; + +struct xsc_stats { + struct xsc_channel_stats channel_stats[XSC_ETH_MAX_NUM_CHANNELS]; +}; + +void xsc_eth_fold_sw_stats64(struct xsc_adapter *adapter, + struct rtnl_link_stats64 *s); + +#endif /* XSC_EN_STATS_H */ diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_tx.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_tx.c index e41ff0d77..b16bff45c 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_tx.c +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_tx.c @@ -221,6 +221,7 @@ static uint32_t xsc_eth_xmit_frame(struct sk_buff *skb, u16 pi) { struct xsc_core_device *xdev = sq->cq.xdev; + struct xsc_sq_stats *stats = sq->stats; struct xsc_send_wqe_ctrl_seg *cseg; struct xsc_wqe_data_seg *dseg; struct xsc_tx_wqe_info *wi; @@ -241,11 +242,13 @@ static uint32_t xsc_eth_xmit_frame(struct sk_buff *skb, mss = skb_shinfo(skb)->gso_size; ihs = xsc_tx_get_gso_ihs(sq, skb); num_bytes = skb->len; + stats->packets += skb_shinfo(skb)->gso_segs; } else { opcode = XSC_OPCODE_RAW; mss = 0; ihs = 0; num_bytes = skb->len; + stats->packets++; } /*linear data in skb*/ @@ -283,10 +286,12 @@ static uint32_t xsc_eth_xmit_frame(struct sk_buff *skb, xsc_txwqe_complete(sq, skb, opcode, ds_cnt, num_wqebbs, num_bytes, num_dma, wi); + stats->bytes += num_bytes; return NETDEV_TX_OK; err_drop: + stats->dropped++; dev_kfree_skb_any(skb); return NETDEV_TX_OK; diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h index 623df4a51..e883a3da1 100644 --- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_queue.h @@ -132,6 +132,7 @@ struct xsc_rq { unsigned long state; struct work_struct recover_work; + struct xsc_rq_stats *stats; u32 hw_mtu; u32 frags_sz; @@ -180,6 +181,7 @@ struct xsc_sq { /* read only */ struct xsc_wq_cyc wq; u32 dma_fifo_mask; + struct xsc_sq_stats *stats; struct { struct xsc_sq_dma *dma_fifo; struct xsc_tx_wqe_info *wqe_info;