From patchwork Fri Aug 2 09:59:31 2024
X-Patchwork-Submitter: Louis Peens
X-Patchwork-Id: 13751373
X-Patchwork-Delegate: kuba@kernel.org
From: Louis Peens
To: David Miller , Jakub Kicinski , "Michael S. Tsirkin" , Jason Wang
Cc: eperezma@redhat.com, Kyle Xu , netdev@vger.kernel.org,
 virtualization@lists.linux.dev, oss-drivers@corigine.com
Subject: [RFC net-next 3/3] drivers/vdpa: add NFP devices vDPA driver
Date: Fri, 2 Aug 2024 11:59:31 +0200
Message-Id: <20240802095931.24376-4-louis.peens@corigine.com>
In-Reply-To: <20240802095931.24376-1-louis.peens@corigine.com>
References: <20240802095931.24376-1-louis.peens@corigine.com>
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-State: RFC

From: Kyle Xu

Add a new kernel module 'nfp_vdpa' for the NFP vDPA networking driver.

The vDPA driver initializes the necessary resources on the VF and the
data path will be offloaded. It also implements the 'vdpa_config_ops'
and the corresponding callback interfaces according to the requirements
of the kernel vDPA framework.

Signed-off-by: Kyle Xu
Signed-off-by: Louis Peens
---
 MAINTAINERS                            |   1 +
 drivers/vdpa/Kconfig                   |  10 +
 drivers/vdpa/Makefile                  |   1 +
 drivers/vdpa/netronome/Makefile        |   5 +
 drivers/vdpa/netronome/nfp_vdpa_main.c | 821 +++++++++++++++++++++++++
 5 files changed, 838 insertions(+)
 create mode 100644 drivers/vdpa/netronome/Makefile
 create mode 100644 drivers/vdpa/netronome/nfp_vdpa_main.c

diff --git a/MAINTAINERS b/MAINTAINERS
index c0a3d9e93689..3231b80af331 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -15836,6 +15836,7 @@ R:	Jakub Kicinski
 L:	oss-drivers@corigine.com
 S:	Maintained
 F:	drivers/net/ethernet/netronome/
+F:	drivers/vdpa/netronome/
 
 NETWORK BLOCK DEVICE (NBD)
 M:	Josef Bacik
diff --git a/drivers/vdpa/Kconfig b/drivers/vdpa/Kconfig
index 5265d09fc1c4..da5a8461359e 100644
--- a/drivers/vdpa/Kconfig
+++ b/drivers/vdpa/Kconfig
@@ -137,4 +137,14 @@ config OCTEONEP_VDPA
 	  Please note that this driver must be built as a module and it
 	  cannot be loaded until the Octeon emulation software is running.
 
+config NFP_VDPA
+	tristate "vDPA driver for NFP devices"
+	depends on NFP
+	help
+	  vDPA network driver for NFP4000, NFP5000, NFP6000 and newer.
+	  Provides offloading of virtio_net datapath such that descriptors
+	  put on the ring will be executed by the hardware.
+	  It also supports a variety of stateless offloads depending on
+	  the actual device used and firmware version.
+
 endif # VDPA
diff --git a/drivers/vdpa/Makefile b/drivers/vdpa/Makefile
index 5654d36707af..a8e335756829 100644
--- a/drivers/vdpa/Makefile
+++ b/drivers/vdpa/Makefile
@@ -9,3 +9,4 @@ obj-$(CONFIG_ALIBABA_ENI_VDPA) += alibaba/
 obj-$(CONFIG_SNET_VDPA) += solidrun/
 obj-$(CONFIG_PDS_VDPA) += pds/
 obj-$(CONFIG_OCTEONEP_VDPA) += octeon_ep/
+obj-$(CONFIG_NFP_VDPA) += netronome/
diff --git a/drivers/vdpa/netronome/Makefile b/drivers/vdpa/netronome/Makefile
new file mode 100644
index 000000000000..ccba4ead3e4f
--- /dev/null
+++ b/drivers/vdpa/netronome/Makefile
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: GPL-2.0
+ccflags-y += -I$(srctree)/drivers/net/ethernet/netronome/nfp
+ccflags-y += -I$(srctree)/drivers/net/ethernet/netronome/nfp/nfpcore
+obj-$(CONFIG_NFP_VDPA) += nfp_vdpa.o
+nfp_vdpa-$(CONFIG_NFP_VDPA) += nfp_vdpa_main.o
diff --git a/drivers/vdpa/netronome/nfp_vdpa_main.c b/drivers/vdpa/netronome/nfp_vdpa_main.c
new file mode 100644
index 000000000000..a60905848094
--- /dev/null
+++ b/drivers/vdpa/netronome/nfp_vdpa_main.c
@@ -0,0 +1,821 @@
+// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+/* Copyright (C) 2023 Corigine, Inc. */
+/*
+ * nfp_vdpa_main.c
+ * Main entry point for vDPA device driver.
+ * Author: Xinying Yu
+ *         Zhenbing Xu
+ */
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+
+#include "nfp_net.h"
+#include "nfp_dev.h"
+
+/* Only one queue pair for now. */
+#define NFP_VDPA_NUM_QUEUES	2
+
+/* RX queue index in queue pair */
+#define NFP_VDPA_RX_QUEUE	0
+
+/* TX queue index in queue pair */
+#define NFP_VDPA_TX_QUEUE	1
+
+/* Max MTU supported */
+#define NFP_VDPA_MTU_MAX	9216
+
+/* Default freelist buffer size */
+#define NFP_VDPA_FL_BUF_SZ	10240
+
+/* Max queue supported */
+#define NFP_VDPA_QUEUE_MAX	256
+
+/* Queue space stride */
+#define NFP_VDPA_QUEUE_SPACE_STRIDE	4
+
+/* Notification area base on VF CFG BAR */
+#define NFP_VDPA_NOTIFY_AREA_BASE	0x4000
+
+/* Notification area offset of each queue */
+#define NFP_VDPA_QUEUE_NOTIFY_OFFSET	0x1000
+
+/* Maximum number of rings supported */
+#define NFP_VDPA_QUEUE_RING_MAX	1
+
+/* VF auxiliary device name */
+#define NFP_NET_VF_ADEV_NAME	"nfp"
+
+#define NFP_NET_SUPPORTED_FEATURES \
+	((1ULL << VIRTIO_F_ANY_LAYOUT) | \
+	 (1ULL << VIRTIO_F_VERSION_1) | \
+	 (1ULL << VIRTIO_F_ACCESS_PLATFORM) | \
+	 (1ULL << VIRTIO_NET_F_MTU) | \
+	 (1ULL << VIRTIO_NET_F_MAC) | \
+	 (1ULL << VIRTIO_NET_F_STATUS))
+
+struct nfp_vdpa_virtqueue {
+	u64 desc;
+	u64 avail;
+	u64 used;
+	u16 size;
+	u16 last_avail_idx;
+	u16 last_used_idx;
+	bool ready;
+
+	void __iomem *kick_addr;
+	struct vdpa_callback cb;
+};
+
+struct nfp_vdpa_net {
+	struct vdpa_device vdpa;
+
+	void __iomem *ctrl_bar;
+	void __iomem *q_bar;
+	void __iomem *qcp_cfg;
+
+	struct nfp_vdpa_virtqueue vring[NFP_VDPA_NUM_QUEUES];
+
+	u32 ctrl;
+	u32 ctrl_w1;
+
+	u32 reconfig_in_progress_update;
+	struct semaphore bar_lock;
+
+	u8 status;
+	u64 features;
+	struct virtio_net_config config;
+
+	struct msix_entry vdpa_rx_irq;
+	struct nfp_net_r_vector vdpa_rx_vec;
+
+	struct msix_entry vdpa_tx_irq;
+	struct nfp_net_r_vector vdpa_tx_vec;
+};
+
+struct nfp_vdpa_mgmt_dev {
+	struct vdpa_mgmt_dev mdev;
+	struct nfp_vdpa_net *ndev;
+	struct pci_dev *pdev;
+	const struct nfp_dev_info *dev_info;
+};
+
+static uint16_t vdpa_cfg_readw(struct nfp_vdpa_net *ndev, int off)
+{
+	return readw(ndev->ctrl_bar + off);
+}
+
+static u32 vdpa_cfg_readl(struct nfp_vdpa_net *ndev, int off)
+{
+	return readl(ndev->ctrl_bar + off);
+}
+
+static void vdpa_cfg_writeb(struct nfp_vdpa_net *ndev, int off, uint8_t val)
+{
+	writeb(val, ndev->ctrl_bar + off);
+}
+
+static void vdpa_cfg_writel(struct nfp_vdpa_net *ndev, int off, u32 val)
+{
+	writel(val, ndev->ctrl_bar + off);
+}
+
+static void vdpa_cfg_writeq(struct nfp_vdpa_net *ndev, int off, u64 val)
+{
+	writeq(val, ndev->ctrl_bar + off);
+}
+
+static bool nfp_vdpa_is_little_endian(struct nfp_vdpa_net *ndev)
+{
+	return virtio_legacy_is_little_endian() ||
+	       (ndev->features & BIT_ULL(VIRTIO_F_VERSION_1));
+}
+
+static __virtio16 cpu_to_nfpvdpa16(struct nfp_vdpa_net *ndev, u16 val)
+{
+	return __cpu_to_virtio16(nfp_vdpa_is_little_endian(ndev), val);
+}
+
+static void nfp_vdpa_net_reconfig_start(struct nfp_vdpa_net *ndev, u32 update)
+{
+	vdpa_cfg_writel(ndev, NFP_NET_CFG_UPDATE, update);
+	/* Flush posted PCI writes by reading something without side effects */
+	vdpa_cfg_readl(ndev, NFP_NET_CFG_VERSION);
+	/* Write a non-zero value to the QCP pointer for configuration notification */
+	writel(1, ndev->qcp_cfg + NFP_QCP_QUEUE_ADD_WPTR);
+	ndev->reconfig_in_progress_update |= update;
+}
+
+static bool nfp_vdpa_net_reconfig_check_done(struct nfp_vdpa_net *ndev, bool last_check)
+{
+	u32 reg;
+
+	reg = vdpa_cfg_readl(ndev, NFP_NET_CFG_UPDATE);
+	if (reg == 0)
+		return true;
+	if (reg & NFP_NET_CFG_UPDATE_ERR) {
+		dev_err(ndev->vdpa.dma_dev, "Reconfig error (status: 0x%08x update: 0x%08x ctrl: 0x%08x)\n",
+			reg, ndev->reconfig_in_progress_update,
+			vdpa_cfg_readl(ndev, NFP_NET_CFG_CTRL));
+		return true;
+	} else if (last_check) {
+		dev_err(ndev->vdpa.dma_dev, "Reconfig timeout (status: 0x%08x update: 0x%08x ctrl: 0x%08x)\n",
+			reg, ndev->reconfig_in_progress_update,
+			vdpa_cfg_readl(ndev, NFP_NET_CFG_CTRL));
+		return true;
+	}
+
+	return false;
+}
+
+static bool __nfp_vdpa_net_reconfig_wait(struct nfp_vdpa_net *ndev, unsigned long deadline)
+{
+	bool timed_out = false;
+	int i;
+
+	/* Poll update field, waiting for NFP to ack the config.
+	 * Do an opportunistic wait-busy loop, afterward sleep.
+	 */
+	for (i = 0; i < 50; i++) {
+		if (nfp_vdpa_net_reconfig_check_done(ndev, false))
+			return false;
+		udelay(4);
+	}
+
+	while (!nfp_vdpa_net_reconfig_check_done(ndev, timed_out)) {
+		usleep_range(250, 500);
+		timed_out = time_is_before_eq_jiffies(deadline);
+	}
+
+	return timed_out;
+}
+
+static int nfp_vdpa_net_reconfig_wait(struct nfp_vdpa_net *ndev, unsigned long deadline)
+{
+	if (__nfp_vdpa_net_reconfig_wait(ndev, deadline))
+		return -EIO;
+
+	if (vdpa_cfg_readl(ndev, NFP_NET_CFG_UPDATE) & NFP_NET_CFG_UPDATE_ERR)
+		return -EIO;
+
+	return 0;
+}
+
+static int nfp_vdpa_net_reconfig(struct nfp_vdpa_net *ndev, u32 update)
+{
+	int ret;
+
+	down(&ndev->bar_lock);
+
+	nfp_vdpa_net_reconfig_start(ndev, update);
+	ret = nfp_vdpa_net_reconfig_wait(ndev, jiffies + HZ * NFP_NET_POLL_TIMEOUT);
+	ndev->reconfig_in_progress_update = 0;
+
+	up(&ndev->bar_lock);
+	return ret;
+}
+
+static irqreturn_t nfp_vdpa_irq_rx(int irq, void *data)
+{
+	struct nfp_net_r_vector *r_vec = data;
+	struct nfp_vdpa_net *ndev;
+
+	ndev = container_of(r_vec, struct nfp_vdpa_net, vdpa_rx_vec);
+
+	ndev->vring[NFP_VDPA_RX_QUEUE].cb.callback(ndev->vring[NFP_VDPA_RX_QUEUE].cb.private);
+
+	vdpa_cfg_writeq(ndev, NFP_NET_CFG_ICR(ndev->vdpa_rx_irq.entry), NFP_NET_CFG_ICR_UNMASKED);
+
+	/* The FW auto-masks any interrupt, either via the MASK bit in
+	 * the MSI-X table or via the per entry ICR field. So there
+	 * is no need to disable interrupts here.
+	 */
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t nfp_vdpa_irq_tx(int irq, void *data)
+{
+	struct nfp_net_r_vector *r_vec = data;
+	struct nfp_vdpa_net *ndev;
+
+	ndev = container_of(r_vec, struct nfp_vdpa_net, vdpa_tx_vec);
+
+	/* This memory barrier is needed to make sure the used ring and index
+	 * has been written back before we notify the frontend driver.
+	 */
+	dma_rmb();
+
+	ndev->vring[NFP_VDPA_TX_QUEUE].cb.callback(ndev->vring[NFP_VDPA_TX_QUEUE].cb.private);
+
+	vdpa_cfg_writeq(ndev, NFP_NET_CFG_ICR(ndev->vdpa_tx_irq.entry), NFP_NET_CFG_ICR_UNMASKED);
+
+	/* The FW auto-masks any interrupt, either via the MASK bit in
+	 * the MSI-X table or via the per entry ICR field. So there
+	 * is no need to disable interrupts here.
+	 */
+	return IRQ_HANDLED;
+}
+
+static struct nfp_vdpa_net *vdpa_to_ndev(struct vdpa_device *vdpa_dev)
+{
+	return container_of(vdpa_dev, struct nfp_vdpa_net, vdpa);
+}
+
+static void nfp_vdpa_ring_addr_cfg(struct nfp_vdpa_net *ndev)
+{
+	vdpa_cfg_writeq(ndev, NFP_NET_CFG_TXR_ADDR(0), ndev->vring[NFP_VDPA_TX_QUEUE].desc);
+	vdpa_cfg_writeb(ndev, NFP_NET_CFG_TXR_SZ(0), ilog2(ndev->vring[NFP_VDPA_TX_QUEUE].size));
+	vdpa_cfg_writeq(ndev, NFP_NET_CFG_TXR_ADDR(1), ndev->vring[NFP_VDPA_TX_QUEUE].avail);
+	vdpa_cfg_writeq(ndev, NFP_NET_CFG_TXR_ADDR(2), ndev->vring[NFP_VDPA_TX_QUEUE].used);
+
+	vdpa_cfg_writeq(ndev, NFP_NET_CFG_RXR_ADDR(0), ndev->vring[NFP_VDPA_RX_QUEUE].desc);
+	vdpa_cfg_writeb(ndev, NFP_NET_CFG_RXR_SZ(0), ilog2(ndev->vring[NFP_VDPA_RX_QUEUE].size));
+	vdpa_cfg_writeq(ndev, NFP_NET_CFG_RXR_ADDR(1), ndev->vring[NFP_VDPA_RX_QUEUE].avail);
+	vdpa_cfg_writeq(ndev, NFP_NET_CFG_RXR_ADDR(2), ndev->vring[NFP_VDPA_RX_QUEUE].used);
+}
+
+static int nfp_vdpa_setup_driver(struct vdpa_device *vdpa_dev)
+{
+	struct nfp_vdpa_net *ndev = vdpa_to_ndev(vdpa_dev);
+	u32 new_ctrl, new_ctrl_w1, update = 0;
+
+	nfp_vdpa_ring_addr_cfg(ndev);
+
+	vdpa_cfg_writeb(ndev, NFP_NET_CFG_TXR_VEC(1), ndev->vdpa_tx_vec.irq_entry);
+	vdpa_cfg_writeb(ndev, NFP_NET_CFG_RXR_VEC(0), ndev->vdpa_rx_vec.irq_entry);
+
+	vdpa_cfg_writeq(ndev, NFP_NET_CFG_TXRS_ENABLE, 1);
+	vdpa_cfg_writeq(ndev, NFP_NET_CFG_RXRS_ENABLE, 1);
+
+	vdpa_cfg_writel(ndev, NFP_NET_CFG_MTU, NFP_VDPA_MTU_MAX);
+	vdpa_cfg_writel(ndev, NFP_NET_CFG_FLBUFSZ, NFP_VDPA_FL_BUF_SZ);
+
+	/* Enable device */
+	new_ctrl = NFP_NET_CFG_CTRL_ENABLE;
+	new_ctrl_w1 = NFP_NET_CFG_CTRL_VIRTIO | NFP_NET_CFG_CTRL_ENABLE_VNET;
+	update |= NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING | NFP_NET_CFG_UPDATE_MSIX;
+
+	vdpa_cfg_writel(ndev, NFP_NET_CFG_CTRL, new_ctrl);
+	vdpa_cfg_writel(ndev, NFP_NET_CFG_CTRL_WORD1, new_ctrl_w1);
+	if (nfp_vdpa_net_reconfig(ndev, update) < 0)
+		return -EINVAL;
+
+	ndev->ctrl = new_ctrl;
+	ndev->ctrl_w1 = new_ctrl_w1;
+	return 0;
+}
+
+static void nfp_reset_vring(struct nfp_vdpa_net *ndev)
+{
+	unsigned int i;
+
+	for (i = 0; i < NFP_VDPA_NUM_QUEUES; i++) {
+		ndev->vring[i].last_avail_idx = 0;
+		ndev->vring[i].desc = 0;
+		ndev->vring[i].avail = 0;
+		ndev->vring[i].used = 0;
+		ndev->vring[i].ready = 0;
+		ndev->vring[i].cb.callback = NULL;
+		ndev->vring[i].cb.private = NULL;
+	}
+}
+
+static int nfp_vdpa_set_vq_address(struct vdpa_device *vdpa_dev, u16 qid,
+				   u64 desc_area, u64 driver_area,
+				   u64 device_area)
+{
+	struct nfp_vdpa_net *ndev = vdpa_to_ndev(vdpa_dev);
+
+	ndev->vring[qid].desc = desc_area;
+	ndev->vring[qid].avail = driver_area;
+	ndev->vring[qid].used = device_area;
+
+	return 0;
+}
+
+static void nfp_vdpa_set_vq_num(struct vdpa_device *vdpa_dev, u16 qid, u32 num)
+{
+	struct nfp_vdpa_net *ndev = vdpa_to_ndev(vdpa_dev);
+
+	ndev->vring[qid].size = num;
+}
+
+static void nfp_vdpa_kick_vq(struct vdpa_device *vdpa_dev, u16 qid)
+{
+	struct nfp_vdpa_net *ndev = vdpa_to_ndev(vdpa_dev);
+
+	if (!ndev->vring[qid].ready)
+		return;
+
+	writel(qid, ndev->vring[qid].kick_addr);
+}
+
+static void nfp_vdpa_set_vq_cb(struct vdpa_device *vdpa_dev, u16 qid,
+			       struct vdpa_callback *cb)
+{
+	struct nfp_vdpa_net *ndev = vdpa_to_ndev(vdpa_dev);
+
+	ndev->vring[qid].cb = *cb;
+}
+
+static void nfp_vdpa_set_vq_ready(struct vdpa_device *vdpa_dev, u16 qid,
+				  bool ready)
+{
+	struct nfp_vdpa_net *ndev = vdpa_to_ndev(vdpa_dev);
+
+	ndev->vring[qid].ready = ready;
+}
+
+static bool nfp_vdpa_get_vq_ready(struct vdpa_device *vdpa_dev, u16 qid)
+{
+	struct nfp_vdpa_net *ndev = vdpa_to_ndev(vdpa_dev);
+
+	return ndev->vring[qid].ready;
+}
+
+static int nfp_vdpa_set_vq_state(struct vdpa_device *vdev, u16 idx,
+				 const struct vdpa_vq_state *state)
+{
+	/* Required by live migration, leave for future work */
+	return 0;
+}
+
+static int nfp_vdpa_get_vq_state(struct vdpa_device *vdev, u16 idx,
+				 struct vdpa_vq_state *state)
+{
+	/* Required by live migration, leave for future work */
+	return 0;
+}
+
+static u32 nfp_vdpa_get_vq_align(struct vdpa_device *vdpa_dev)
+{
+	return PAGE_SIZE;
+}
+
+static u64 nfp_vdpa_get_features(struct vdpa_device *vdpa_dev)
+{
+	struct nfp_vdpa_net *ndev = vdpa_to_ndev(vdpa_dev);
+
+	return ndev->features;
+}
+
+static int nfp_vdpa_set_features(struct vdpa_device *vdpa_dev, u64 features)
+{
+	struct nfp_vdpa_net *ndev = vdpa_to_ndev(vdpa_dev);
+
+	/* DMA mapping must be done by driver */
+	if (!(features & BIT_ULL(VIRTIO_F_ACCESS_PLATFORM)))
+		return -EINVAL;
+
+	ndev->features = features & NFP_NET_SUPPORTED_FEATURES;
+
+	return 0;
+}
+
+static void nfp_vdpa_set_config_cb(struct vdpa_device *vdpa_dev,
+				   struct vdpa_callback *cb)
+{
+	/* Don't support config interrupt yet */
+}
+
+static u16 nfp_vdpa_get_vq_num_max(struct vdpa_device *vdpa)
+{
+	/* Currently the firmware for kernel vDPA only supports ring size 256 */
+	return NFP_VDPA_QUEUE_MAX;
+}
+
+static u32 nfp_vdpa_get_device_id(struct vdpa_device *vdpa_dev)
+{
+	return VIRTIO_ID_NET;
+}
+
+static u32 nfp_vdpa_get_vendor_id(struct vdpa_device *vdpa_dev)
+{
+	struct nfp_vdpa_mgmt_dev *mgmt;
+
+	mgmt = container_of(vdpa_dev->mdev, struct nfp_vdpa_mgmt_dev, mdev);
+	return mgmt->pdev->vendor;
+}
+
+static u8 nfp_vdpa_get_status(struct vdpa_device *vdpa_dev)
+{
+	struct nfp_vdpa_net *ndev = vdpa_to_ndev(vdpa_dev);
+
+	return ndev->status;
+}
+
+static void nfp_vdpa_set_status(struct vdpa_device *vdpa_dev, u8 status)
+{
+	struct nfp_vdpa_net *ndev = vdpa_to_ndev(vdpa_dev);
+
+	if ((status ^ ndev->status) & VIRTIO_CONFIG_S_DRIVER_OK) {
+		if ((status & VIRTIO_CONFIG_S_DRIVER_OK) == 0) {
+			dev_err(ndev->vdpa.dma_dev,
+				"Did not expect DRIVER_OK to be cleared\n");
+			return;
+		}
+
+		if (nfp_vdpa_setup_driver(vdpa_dev)) {
+			ndev->status |= VIRTIO_CONFIG_S_FAILED;
+			dev_err(ndev->vdpa.dma_dev,
+				"Failed to setup driver\n");
+			return;
+		}
+	}
+
+	ndev->status = status;
+}
+
+static int nfp_vdpa_reset(struct vdpa_device *vdpa_dev)
+{
+	struct nfp_vdpa_net *ndev = vdpa_to_ndev(vdpa_dev);
+	u32 new_ctrl, new_ctrl_w1, update = 0;
+
+	if (ndev->status == 0)
+		return 0;
+
+	vdpa_cfg_writeb(ndev, NFP_NET_CFG_TXR_VEC(1), 0);
+	vdpa_cfg_writeb(ndev, NFP_NET_CFG_RXR_VEC(0), 0);
+
+	nfp_vdpa_ring_addr_cfg(ndev);
+
+	vdpa_cfg_writeq(ndev, NFP_NET_CFG_TXRS_ENABLE, 0);
+	vdpa_cfg_writeq(ndev, NFP_NET_CFG_RXRS_ENABLE, 0);
+
+	new_ctrl = ndev->ctrl & ~NFP_NET_CFG_CTRL_ENABLE;
+	update = NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING | NFP_NET_CFG_UPDATE_MSIX;
+	vdpa_cfg_writel(ndev, NFP_NET_CFG_CTRL, new_ctrl);
+
+	new_ctrl_w1 = ndev->ctrl_w1 & ~NFP_NET_CFG_CTRL_VIRTIO;
+	vdpa_cfg_writel(ndev, NFP_NET_CFG_CTRL_WORD1, new_ctrl_w1);
+
+	if (nfp_vdpa_net_reconfig(ndev, update) < 0)
+		return -EINVAL;
+
+	nfp_reset_vring(ndev);
+
+	ndev->ctrl = new_ctrl;
+	ndev->ctrl_w1 = new_ctrl_w1;
+
+	ndev->status = 0;
+	return 0;
+}
+
+static size_t nfp_vdpa_get_config_size(struct vdpa_device *vdev)
+{
+	struct nfp_vdpa_net *ndev = vdpa_to_ndev(vdev);
+
+	return sizeof(ndev->config);
+}
+
+static void nfp_vdpa_get_config(struct vdpa_device *vdev, unsigned int offset,
+				void *buf, unsigned int len)
+{
+	struct nfp_vdpa_net *ndev = vdpa_to_ndev(vdev);
+
+	if (offset + len > sizeof(ndev->config))
+		return;
+
+	memcpy(buf, (void *)&ndev->config + offset, len);
+}
+
+static void nfp_vdpa_set_config(struct vdpa_device *vdev, unsigned int offset,
+				const void *buf, unsigned int len)
+{
+	struct nfp_vdpa_net *ndev = vdpa_to_ndev(vdev);
+
+	if (offset + len > sizeof(ndev->config))
+		return;
+
+	memcpy((void *)&ndev->config + offset, buf, len);
+}
+
+static const struct vdpa_config_ops nfp_vdpa_ops = {
+	.set_vq_address = nfp_vdpa_set_vq_address,
+	.set_vq_num = nfp_vdpa_set_vq_num,
+	.kick_vq = nfp_vdpa_kick_vq,
+	.set_vq_cb = nfp_vdpa_set_vq_cb,
+	.set_vq_ready = nfp_vdpa_set_vq_ready,
+	.get_vq_ready = nfp_vdpa_get_vq_ready,
+	.set_vq_state = nfp_vdpa_set_vq_state,
+	.get_vq_state = nfp_vdpa_get_vq_state,
+	.get_vq_align = nfp_vdpa_get_vq_align,
+	.get_device_features = nfp_vdpa_get_features,
+	.get_driver_features = nfp_vdpa_get_features,
+	.set_driver_features = nfp_vdpa_set_features,
+	.set_config_cb = nfp_vdpa_set_config_cb,
+	.get_vq_num_max = nfp_vdpa_get_vq_num_max,
+	.get_device_id = nfp_vdpa_get_device_id,
+	.get_vendor_id = nfp_vdpa_get_vendor_id,
+	.get_status = nfp_vdpa_get_status,
+	.set_status = nfp_vdpa_set_status,
+	.reset = nfp_vdpa_reset,
+	.get_config_size = nfp_vdpa_get_config_size,
+	.get_config = nfp_vdpa_get_config,
+	.set_config = nfp_vdpa_set_config,
+};
+
+static int nfp_vdpa_map_resources(struct nfp_vdpa_net *ndev,
+				  struct pci_dev *pdev,
+				  const struct nfp_dev_info *dev_info)
+{
+	unsigned int bar_off, bar_sz, tx_bar_sz, rx_bar_sz;
+	unsigned int max_tx_rings, max_rx_rings, txq, rxq;
+	u64 tx_bar_off, rx_bar_off;
+	resource_size_t map_addr;
+	void __iomem *tx_bar;
+	void __iomem *rx_bar;
+	int err;
+
+	/* Map CTRL BAR */
+	ndev->ctrl_bar = ioremap(pci_resource_start(pdev, NFP_NET_CTRL_BAR),
+				 NFP_NET_CFG_BAR_SZ);
+	if (!ndev->ctrl_bar)
+		return -EIO;
+
+	/* Find out how many rings are supported */
+	max_tx_rings = readl(ndev->ctrl_bar + NFP_NET_CFG_MAX_TXRINGS);
+	max_rx_rings = readl(ndev->ctrl_bar + NFP_NET_CFG_MAX_RXRINGS);
+	/* Currently, only one ring is supported */
+	if (max_tx_rings != NFP_VDPA_QUEUE_RING_MAX || max_rx_rings != NFP_VDPA_QUEUE_RING_MAX) {
+		err = -EINVAL;
+		goto ctrl_bar_unmap;
+	}
+
+	/* Map Q0_BAR as a single overlapping BAR mapping */
+	tx_bar_sz = NFP_QCP_QUEUE_ADDR_SZ * max_tx_rings * NFP_VDPA_QUEUE_SPACE_STRIDE;
+	rx_bar_sz = NFP_QCP_QUEUE_ADDR_SZ * max_rx_rings * NFP_VDPA_QUEUE_SPACE_STRIDE;
+
+	txq = readl(ndev->ctrl_bar + NFP_NET_CFG_START_TXQ);
+	tx_bar_off = nfp_qcp_queue_offset(dev_info, txq);
+	rxq = readl(ndev->ctrl_bar + NFP_NET_CFG_START_RXQ);
+	rx_bar_off = nfp_qcp_queue_offset(dev_info, rxq);
+
+	bar_off = min(tx_bar_off, rx_bar_off);
+	bar_sz = max(tx_bar_off + tx_bar_sz, rx_bar_off + rx_bar_sz);
+	bar_sz -= bar_off;
+
+	map_addr = pci_resource_start(pdev, NFP_NET_Q0_BAR) + bar_off;
+	ndev->q_bar = ioremap(map_addr, bar_sz);
+	if (!ndev->q_bar) {
+		err = -EIO;
+		goto ctrl_bar_unmap;
+	}
+
+	tx_bar = ndev->q_bar + (tx_bar_off - bar_off);
+	rx_bar = ndev->q_bar + (rx_bar_off - bar_off);
+
+	/* TX queues */
+	ndev->vring[txq].kick_addr = ndev->ctrl_bar + NFP_VDPA_NOTIFY_AREA_BASE +
+				     txq * NFP_VDPA_QUEUE_NOTIFY_OFFSET;
+	/* RX queues */
+	ndev->vring[rxq].kick_addr = ndev->ctrl_bar + NFP_VDPA_NOTIFY_AREA_BASE +
+				     rxq * NFP_VDPA_QUEUE_NOTIFY_OFFSET;
+	/* Stash the re-configuration queue away. First odd queue in TX Bar */
+	ndev->qcp_cfg = tx_bar + NFP_QCP_QUEUE_ADDR_SZ;
+
+	return 0;
+
+ctrl_bar_unmap:
+	iounmap(ndev->ctrl_bar);
+	return err;
+}
+
+static int nfp_vdpa_init_ndev(struct nfp_vdpa_net *ndev)
+{
+	ndev->features = NFP_NET_SUPPORTED_FEATURES;
+
+	ndev->config.mtu = cpu_to_nfpvdpa16(ndev, NFP_NET_DEFAULT_MTU);
+	ndev->config.status = cpu_to_nfpvdpa16(ndev, VIRTIO_NET_S_LINK_UP);
+
+	put_unaligned_be32(vdpa_cfg_readl(ndev, NFP_NET_CFG_MACADDR + 0), &ndev->config.mac[0]);
+	put_unaligned_be16(vdpa_cfg_readw(ndev, NFP_NET_CFG_MACADDR + 6), &ndev->config.mac[4]);
+
+	return 0;
+}
+
+static int nfp_vdpa_mgmt_dev_add(struct vdpa_mgmt_dev *mdev,
+				 const char *name,
+				 const struct vdpa_dev_set_config *add_config)
+{
+	struct nfp_vdpa_mgmt_dev *mgmt = container_of(mdev, struct nfp_vdpa_mgmt_dev, mdev);
+	struct msix_entry vdpa_irq[NFP_VDPA_NUM_QUEUES];
+	struct device *dev = &mgmt->pdev->dev;
+	struct nfp_vdpa_net *ndev;
+	int ret;
+
+	/* Only allow one ndev at a time. */
+	if (mgmt->ndev)
+		return -EOPNOTSUPP;
+
+	ndev = vdpa_alloc_device(struct nfp_vdpa_net, vdpa, dev, &nfp_vdpa_ops, 1, 1, name, false);
+
+	if (IS_ERR(ndev))
+		return PTR_ERR(ndev);
+
+	mgmt->ndev = ndev;
+
+	ret = nfp_net_irqs_alloc(mgmt->pdev, (struct msix_entry *)&vdpa_irq, 2, 2);
+	if (!ret) {
+		ret = -ENOMEM;
+		goto free_dev;
+	}
+
+	ndev->vdpa_rx_irq.entry = vdpa_irq[NFP_VDPA_RX_QUEUE].entry;
+	ndev->vdpa_rx_irq.vector = vdpa_irq[NFP_VDPA_RX_QUEUE].vector;
+
+	snprintf(ndev->vdpa_rx_vec.name, sizeof(ndev->vdpa_rx_vec.name), "nfp-vdpa-rx0");
+	ndev->vdpa_rx_vec.irq_entry = ndev->vdpa_rx_irq.entry;
+	ndev->vdpa_rx_vec.irq_vector = ndev->vdpa_rx_irq.vector;
+
+	ndev->vdpa_tx_irq.entry = vdpa_irq[NFP_VDPA_TX_QUEUE].entry;
+	ndev->vdpa_tx_irq.vector = vdpa_irq[NFP_VDPA_TX_QUEUE].vector;
+
+	snprintf(ndev->vdpa_tx_vec.name, sizeof(ndev->vdpa_tx_vec.name), "nfp-vdpa-tx0");
+	ndev->vdpa_tx_vec.irq_entry = ndev->vdpa_tx_irq.entry;
+	ndev->vdpa_tx_vec.irq_vector = ndev->vdpa_tx_irq.vector;
+
+	ret = request_irq(ndev->vdpa_tx_vec.irq_vector, nfp_vdpa_irq_tx,
+			  0, ndev->vdpa_tx_vec.name, &ndev->vdpa_tx_vec);
+	if (ret)
+		goto disable_irq;
+
+	ret = request_irq(ndev->vdpa_rx_vec.irq_vector, nfp_vdpa_irq_rx,
+			  0, ndev->vdpa_rx_vec.name, &ndev->vdpa_rx_vec);
+	if (ret)
+		goto free_tx_irq;
+
+	ret = nfp_vdpa_map_resources(mgmt->ndev, mgmt->pdev, mgmt->dev_info);
+	if (ret)
+		goto free_rx_irq;
+
+	ret = nfp_vdpa_init_ndev(mgmt->ndev);
+	if (ret)
+		goto unmap_resources;
+
+	sema_init(&ndev->bar_lock, 1);
+
+	ndev->vdpa.dma_dev = dev;
+	ndev->vdpa.mdev = &mgmt->mdev;
+
+	mdev->supported_features = NFP_NET_SUPPORTED_FEATURES;
+	mdev->max_supported_vqs = NFP_VDPA_QUEUE_MAX;
+
+	ret = _vdpa_register_device(&ndev->vdpa, NFP_VDPA_NUM_QUEUES);
+	if (ret)
+		goto unmap_resources;
+
+	return 0;
+
+unmap_resources:
+	iounmap(ndev->ctrl_bar);
+	iounmap(ndev->q_bar);
+free_rx_irq:
+	free_irq(ndev->vdpa_rx_vec.irq_vector, &ndev->vdpa_rx_vec);
+free_tx_irq:
+	free_irq(ndev->vdpa_tx_vec.irq_vector, &ndev->vdpa_tx_vec);
+disable_irq:
+	nfp_net_irqs_disable(mgmt->pdev);
+free_dev:
+	put_device(&ndev->vdpa.dev);
+	return ret;
+}
+
+static void nfp_vdpa_mgmt_dev_del(struct vdpa_mgmt_dev *mdev,
+				  struct vdpa_device *dev)
+{
+	struct nfp_vdpa_mgmt_dev *mgmt = container_of(mdev, struct nfp_vdpa_mgmt_dev, mdev);
+	struct nfp_vdpa_net *ndev = vdpa_to_ndev(dev);
+
+	free_irq(ndev->vdpa_rx_vec.irq_vector, &ndev->vdpa_rx_vec);
+	free_irq(ndev->vdpa_tx_vec.irq_vector, &ndev->vdpa_tx_vec);
+	nfp_net_irqs_disable(mgmt->pdev);
+	_vdpa_unregister_device(dev);
+
+	iounmap(ndev->ctrl_bar);
+	iounmap(ndev->q_bar);
+
+	mgmt->ndev = NULL;
+}
+
+static const struct vdpa_mgmtdev_ops nfp_vdpa_mgmt_dev_ops = {
+	.dev_add = nfp_vdpa_mgmt_dev_add,
+	.dev_del = nfp_vdpa_mgmt_dev_del,
+};
+
+static struct virtio_device_id nfp_vdpa_mgmt_id_table[] = {
+	{ VIRTIO_ID_NET, VIRTIO_DEV_ANY_ID },
+	{ 0 },
+};
+
+static int nfp_vdpa_probe(struct auxiliary_device *adev, const struct auxiliary_device_id *id)
+{
+	struct nfp_net_vf_aux_dev *nfp_vf_aux_dev;
+	struct nfp_vdpa_mgmt_dev *mgmt;
+	int ret;
+
+	nfp_vf_aux_dev = container_of(adev, struct nfp_net_vf_aux_dev, aux_dev);
+
+	mgmt = kzalloc(sizeof(*mgmt), GFP_KERNEL);
+	if (!mgmt)
+		return -ENOMEM;
+
+	mgmt->pdev = nfp_vf_aux_dev->pdev;
+
+	mgmt->mdev.device = &nfp_vf_aux_dev->pdev->dev;
+	mgmt->mdev.ops = &nfp_vdpa_mgmt_dev_ops;
+	mgmt->mdev.id_table = nfp_vdpa_mgmt_id_table;
+	mgmt->dev_info = nfp_vf_aux_dev->dev_info;
+
+	ret = vdpa_mgmtdev_register(&mgmt->mdev);
+	if (ret)
+		goto err_free_mgmt;
+
+	auxiliary_set_drvdata(adev, mgmt);
+
+	return 0;
+
+err_free_mgmt:
+	kfree(mgmt);
+
+	return ret;
+}
+
+static void nfp_vdpa_remove(struct auxiliary_device *adev)
+{
+	struct nfp_vdpa_mgmt_dev *mgmt;
+
+	mgmt = auxiliary_get_drvdata(adev);
+	if (!mgmt)
+		return;
+
+	vdpa_mgmtdev_unregister(&mgmt->mdev);
+	kfree(mgmt);
+
+	auxiliary_set_drvdata(adev, NULL);
+}
+
+static const struct auxiliary_device_id nfp_vdpa_id_table[] = {
+	{ .name = NFP_NET_VF_ADEV_NAME "." NFP_NET_VF_ADEV_DRV_MATCH_NAME, },
+	{},
+};
+
+MODULE_DEVICE_TABLE(auxiliary, nfp_vdpa_id_table);
+
+static struct auxiliary_driver nfp_vdpa_driver = {
+	.name = NFP_NET_VF_ADEV_DRV_MATCH_NAME,
+	.probe = nfp_vdpa_probe,
+	.remove = nfp_vdpa_remove,
+	.id_table = nfp_vdpa_id_table,
+};
+
+module_auxiliary_driver(nfp_vdpa_driver);
+
+MODULE_AUTHOR("Corigine, Inc. ");
+MODULE_DESCRIPTION("NFP vDPA driver");
+MODULE_LICENSE("GPL");
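---

A note for reviewers on the Q0_BAR mapping in nfp_vdpa_map_resources(): the TX and RX queue regions are folded into one ioremap() window (min of the two offsets, max of the two end addresses), and the per-region pointers are then derived as offsets into that window. The arithmetic can be sketched as a standalone userspace program (not part of the patch; the `combine()` helper and `struct window` are hypothetical names for illustration only):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper mirroring the bar_off/bar_sz computation:
 * choose one mapping window that covers both queue regions, then
 * express each region as an offset relative to the window start.
 */
struct window {
	uint64_t off;    /* start of the combined mapping */
	uint64_t sz;     /* size of the combined mapping */
	uint64_t tx_rel; /* TX region offset inside the window */
	uint64_t rx_rel; /* RX region offset inside the window */
};

static struct window combine(uint64_t tx_off, uint64_t tx_sz,
			     uint64_t rx_off, uint64_t rx_sz)
{
	uint64_t tx_end = tx_off + tx_sz;
	uint64_t rx_end = rx_off + rx_sz;
	struct window w;

	/* bar_off = min(tx_bar_off, rx_bar_off) */
	w.off = tx_off < rx_off ? tx_off : rx_off;
	/* bar_sz = max(tx end, rx end) - bar_off */
	w.sz = (tx_end > rx_end ? tx_end : rx_end) - w.off;
	/* tx_bar/rx_bar = q_bar + (region offset - bar_off) */
	w.tx_rel = tx_off - w.off;
	w.rx_rel = rx_off - w.off;
	return w;
}
```

A single mapping like this covers adjacent or overlapping queue regions without calling ioremap() twice on the same pages; for example, regions at 0x1000 and 0x2000 of 0x400 bytes each yield one window at 0x1000 of size 0x1400.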