From patchwork Thu Mar 9 01:30:40 2023
X-Patchwork-Submitter: "Nelson, Shannon"
X-Patchwork-Id: 13166706
From: Shannon Nelson
Subject: [PATCH RFC v2 virtio 1/7] pds_vdpa: Add new vDPA driver for AMD/Pensando DSC
Date: Wed, 8 Mar 2023 17:30:40 -0800
Message-ID: <20230309013046.23523-2-shannon.nelson@amd.com>
In-Reply-To: <20230309013046.23523-1-shannon.nelson@amd.com>
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-State: RFC

This is the initial auxiliary driver framework for a new vDPA device
driver, an auxiliary_bus client of the pds_core driver.  The pds_core
driver supplies the PCI services for the VF device and for accessing
the adminq in the PF device.

This patch adds the very basics of registering for the auxiliary
device, setting up debugfs entries, and registering with devlink.

Signed-off-by: Shannon Nelson
---
 drivers/vdpa/Makefile        |  1 +
 drivers/vdpa/pds/Makefile    |  8 +++
 drivers/vdpa/pds/aux_drv.c   | 99 ++++++++++++++++++++++++++++++++++++
 drivers/vdpa/pds/aux_drv.h   | 15 ++++++
 drivers/vdpa/pds/debugfs.c   | 25 +++++++++
 drivers/vdpa/pds/debugfs.h   | 18 +++++++
 include/linux/pds/pds_vdpa.h | 12 +++++
 7 files changed, 178 insertions(+)
 create mode 100644 drivers/vdpa/pds/Makefile
 create mode 100644 drivers/vdpa/pds/aux_drv.c
 create mode 100644 drivers/vdpa/pds/aux_drv.h
 create mode 100644 drivers/vdpa/pds/debugfs.c
 create mode 100644 drivers/vdpa/pds/debugfs.h
 create mode 100644 include/linux/pds/pds_vdpa.h

diff --git a/drivers/vdpa/Makefile b/drivers/vdpa/Makefile
index 59396ff2a318..8f53c6f3cca7 100644
--- a/drivers/vdpa/Makefile
+++ b/drivers/vdpa/Makefile
@@ -7,3 +7,4 @@ obj-$(CONFIG_MLX5_VDPA) += mlx5/
 obj-$(CONFIG_VP_VDPA)    += virtio_pci/
 obj-$(CONFIG_ALIBABA_ENI_VDPA) += alibaba/
 obj-$(CONFIG_SNET_VDPA) += solidrun/
+obj-$(CONFIG_PDS_VDPA) += pds/
diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile
new file mode 100644
index 000000000000..a9cd2f450ae1
--- /dev/null
+++ b/drivers/vdpa/pds/Makefile
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0-only
+# Copyright(c) 2023 Advanced Micro Devices, Inc
+
+obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o
+
+pds_vdpa-y := aux_drv.o
+
+pds_vdpa-$(CONFIG_DEBUG_FS) += debugfs.o
diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c
new file mode 100644
index 000000000000..b3f36170253c
--- /dev/null
+++ b/drivers/vdpa/pds/aux_drv.c
@@ -0,0 +1,99 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Advanced Micro Devices, Inc */
+
+#include
+
+#include
+#include
+#include
+
+#include "aux_drv.h"
+#include "debugfs.h"
+
+static const struct auxiliary_device_id pds_vdpa_id_table[] = {
+        { .name = PDS_VDPA_DEV_NAME, },
+        {},
+};
+
+static int pds_vdpa_probe(struct auxiliary_device *aux_dev,
+                          const struct auxiliary_device_id *id)
+
+{
+        struct pds_auxiliary_dev *padev =
+                container_of(aux_dev, struct pds_auxiliary_dev, aux_dev);
+        struct device *dev = &aux_dev->dev;
+        struct pds_vdpa_aux *vdpa_aux;
+        int err;
+
+        vdpa_aux = kzalloc(sizeof(*vdpa_aux), GFP_KERNEL);
+        if (!vdpa_aux)
+                return -ENOMEM;
+
+        vdpa_aux->padev = padev;
+        auxiliary_set_drvdata(aux_dev, vdpa_aux);
+
+        /* Register our PDS client with the pds_core */
+        err = padev->ops->register_client(padev);
+        if (err) {
+                dev_err(dev, "%s: Failed to register as client: %pe\n",
+                        __func__, ERR_PTR(err));
+                goto err_free_mem;
+        }
+
+        return 0;
+
+err_free_mem:
+        kfree(vdpa_aux);
+        auxiliary_set_drvdata(aux_dev, NULL);
+
+        return err;
+}
+
+static void pds_vdpa_remove(struct auxiliary_device *aux_dev)
+{
+        struct pds_vdpa_aux *vdpa_aux = auxiliary_get_drvdata(aux_dev);
+        struct device *dev = &aux_dev->dev;
+
+        vdpa_aux->padev->ops->unregister_client(vdpa_aux->padev);
+
+        kfree(vdpa_aux);
+        auxiliary_set_drvdata(aux_dev, NULL);
+
+        dev_info(dev, "Removed\n");
+}
+
+static struct auxiliary_driver pds_vdpa_driver = {
+        .name = PDS_DEV_TYPE_VDPA_STR,
+        .probe = pds_vdpa_probe,
+        .remove = pds_vdpa_remove,
+        .id_table = pds_vdpa_id_table,
+};
+
+static void __exit pds_vdpa_cleanup(void)
+{
+        auxiliary_driver_unregister(&pds_vdpa_driver);
+
+        pds_vdpa_debugfs_destroy();
+}
+module_exit(pds_vdpa_cleanup);
+
+static int __init pds_vdpa_init(void)
+{
+        int err;
+
+        pds_vdpa_debugfs_create();
+
+        err = auxiliary_driver_register(&pds_vdpa_driver);
+        if (err) {
+                pr_err("%s: aux driver register failed: %pe\n",
+                       PDS_VDPA_DRV_NAME, ERR_PTR(err));
+                pds_vdpa_debugfs_destroy();
+        }
+
+        return err;
+}
+module_init(pds_vdpa_init);
+
+MODULE_DESCRIPTION(PDS_VDPA_DRV_DESCRIPTION);
+MODULE_AUTHOR("AMD/Pensando Systems, Inc");
+MODULE_LICENSE("GPL");
diff --git a/drivers/vdpa/pds/aux_drv.h b/drivers/vdpa/pds/aux_drv.h
new file mode 100644
index 000000000000..14e465944dfd
--- /dev/null
+++ b/drivers/vdpa/pds/aux_drv.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Advanced Micro Devices, Inc */
+
+#ifndef _AUX_DRV_H_
+#define _AUX_DRV_H_
+
+#define PDS_VDPA_DRV_DESCRIPTION  "AMD/Pensando vDPA VF Device Driver"
+#define PDS_VDPA_DRV_NAME         "pds_vdpa"
+
+struct pds_vdpa_aux {
+        struct pds_auxiliary_dev *padev;
+
+        struct dentry *dentry;
+};
+#endif /* _AUX_DRV_H_ */
diff --git a/drivers/vdpa/pds/debugfs.c b/drivers/vdpa/pds/debugfs.c
new file mode 100644
index 000000000000..3c163dc7b66f
--- /dev/null
+++ b/drivers/vdpa/pds/debugfs.c
@@ -0,0 +1,25 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Advanced Micro Devices, Inc */
+
+#include
+#include
+
+#include "aux_drv.h"
+#include "debugfs.h"
+
+#ifdef CONFIG_DEBUG_FS
+
+static struct dentry *dbfs_dir;
+
+void pds_vdpa_debugfs_create(void)
+{
+        dbfs_dir = debugfs_create_dir(PDS_VDPA_DRV_NAME, NULL);
+}
+
+void pds_vdpa_debugfs_destroy(void)
+{
+        debugfs_remove_recursive(dbfs_dir);
+        dbfs_dir = NULL;
+}
+
+#endif /* CONFIG_DEBUG_FS */
diff --git a/drivers/vdpa/pds/debugfs.h b/drivers/vdpa/pds/debugfs.h
new file mode 100644
index 000000000000..fff078a869e5
--- /dev/null
+++ b/drivers/vdpa/pds/debugfs.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2023 Advanced Micro Devices, Inc */
+
+#ifndef _PDS_VDPA_DEBUGFS_H_
+#define _PDS_VDPA_DEBUGFS_H_
+
+#include
+
+#ifdef CONFIG_DEBUG_FS
+
+void pds_vdpa_debugfs_create(void);
+void pds_vdpa_debugfs_destroy(void);
+#else
+static inline void pds_vdpa_debugfs_create(void) { }
+static inline void pds_vdpa_debugfs_destroy(void) { }
+#endif
+
+#endif /* _PDS_VDPA_DEBUGFS_H_ */
diff --git a/include/linux/pds/pds_vdpa.h b/include/linux/pds/pds_vdpa.h
new file mode 100644
index 000000000000..b5154e3b298e
--- /dev/null
+++ b/include/linux/pds/pds_vdpa.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Advanced Micro Devices, Inc */
+
+#ifndef _PDS_VDPA_IF_H_
+#define _PDS_VDPA_IF_H_
+
+#include
+
+#define PDS_DEV_TYPE_VDPA_STR   "vDPA"
+#define PDS_VDPA_DEV_NAME       PDS_CORE_DRV_NAME "." PDS_DEV_TYPE_VDPA_STR
+
+#endif /* _PDS_VDPA_IF_H_ */
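The auxiliary bus matching used in this patch keys off the dotted device name built from PDS_CORE_DRV_NAME and PDS_DEV_TYPE_VDPA_STR ("pds_core.vDPA"). A rough standalone model of that string match follows; the _model names are invented for illustration, and the real logic (which also handles an instance-id suffix such as ".0") lives in drivers/base/auxiliary.c:

```c
#include <assert.h>
#include <string.h>

/* Mirror of how the driver builds its auxiliary device name */
#define PDS_CORE_DRV_NAME     "pds_core"
#define PDS_DEV_TYPE_VDPA_STR "vDPA"
#define PDS_VDPA_DEV_NAME     PDS_CORE_DRV_NAME "." PDS_DEV_TYPE_VDPA_STR

/* Simplified stand-in for struct auxiliary_device_id */
struct aux_id_model {
        const char *name;
};

/* NULL-name terminated table, like pds_vdpa_id_table in the patch */
static const struct aux_id_model pds_vdpa_id_table_model[] = {
        { .name = PDS_VDPA_DEV_NAME },
        { .name = NULL },
};

/* Return 1 when some id_table entry matches the device name exactly;
 * a simplified model of the bus's match callback. */
static int aux_match_model(const struct aux_id_model *tbl, const char *dev_name)
{
        for (; tbl->name; tbl++)
                if (strcmp(tbl->name, dev_name) == 0)
                        return 1;
        return 0;
}
```

When pds_core creates its vDPA auxiliary device, the device name it registers is what makes this driver's probe fire; an unrelated name like "pds_core.eth" would not match this table.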
From patchwork Thu Mar 9 01:30:41 2023
X-Patchwork-Submitter: "Nelson, Shannon"
X-Patchwork-Id: 13166705
From: Shannon Nelson
Subject: [PATCH RFC v2 virtio 2/7] pds_vdpa: get vdpa management info
Date: Wed, 8 Mar 2023 17:30:41 -0800
Message-ID: <20230309013046.23523-3-shannon.nelson@amd.com>
In-Reply-To: <20230309013046.23523-1-shannon.nelson@amd.com>
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-State: RFC

Find the vDPA management information from the DSC in order to
advertise it to the vdpa subsystem.

Signed-off-by: Shannon Nelson
---
 drivers/vdpa/pds/Makefile    |   3 +-
 drivers/vdpa/pds/aux_drv.c   |  13 ++++
 drivers/vdpa/pds/aux_drv.h   |   7 +++
 drivers/vdpa/pds/debugfs.c   |   3 +
 drivers/vdpa/pds/vdpa_dev.c  | 113 +++++++++++++++++++++++++++++++++++
 drivers/vdpa/pds/vdpa_dev.h  |  15 +++++
 include/linux/pds/pds_vdpa.h |  92 ++++++++++++++++++++++++++++
 7 files changed, 245 insertions(+), 1 deletion(-)
 create mode 100644 drivers/vdpa/pds/vdpa_dev.c
 create mode 100644 drivers/vdpa/pds/vdpa_dev.h

diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile
index a9cd2f450ae1..13b50394ec64 100644
--- a/drivers/vdpa/pds/Makefile
+++ b/drivers/vdpa/pds/Makefile
@@ -3,6 +3,7 @@
 
 obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o
 
-pds_vdpa-y := aux_drv.o
+pds_vdpa-y := aux_drv.o \
+              vdpa_dev.o
 
 pds_vdpa-$(CONFIG_DEBUG_FS) += debugfs.o
diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c
index b3f36170253c..63e40ae68211 100644
--- a/drivers/vdpa/pds/aux_drv.c
+++ b/drivers/vdpa/pds/aux_drv.c
@@ -2,6 +2,8 @@
 /* Copyright(c) 2023 Advanced Micro Devices, Inc */
 
 #include
+#include
+#include
 
 #include
 #include
@@ -9,6 +11,7 @@
 
 #include "aux_drv.h"
 #include "debugfs.h"
+#include "vdpa_dev.h"
 
 static const struct auxiliary_device_id pds_vdpa_id_table[] = {
         { .name = PDS_VDPA_DEV_NAME, },
@@ -30,6 +33,7 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev,
                 return -ENOMEM;
 
         vdpa_aux->padev = padev;
+        vdpa_aux->vf_id = pci_iov_vf_id(padev->vf->pdev);
         auxiliary_set_drvdata(aux_dev, vdpa_aux);
 
         /* Register our PDS client with the pds_core */
@@ -40,8 +44,15 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev,
                 goto err_free_mem;
         }
 
+        /* Get device ident info and set up the vdpa_mgmt_dev */
+        err = pds_vdpa_get_mgmt_info(vdpa_aux);
+        if (err)
+                goto err_aux_unreg;
+
         return 0;
 
+err_aux_unreg:
+        padev->ops->unregister_client(padev);
 err_free_mem:
         kfree(vdpa_aux);
         auxiliary_set_drvdata(aux_dev, NULL);
@@ -54,6 +65,8 @@ static void pds_vdpa_remove(struct auxiliary_device *aux_dev)
         struct pds_vdpa_aux *vdpa_aux = auxiliary_get_drvdata(aux_dev);
         struct device *dev = &aux_dev->dev;
 
+        pci_free_irq_vectors(vdpa_aux->padev->vf->pdev);
+
         vdpa_aux->padev->ops->unregister_client(vdpa_aux->padev);
 
         kfree(vdpa_aux);
diff --git a/drivers/vdpa/pds/aux_drv.h b/drivers/vdpa/pds/aux_drv.h
index 14e465944dfd..94ba7abcaa43 100644
--- a/drivers/vdpa/pds/aux_drv.h
+++ b/drivers/vdpa/pds/aux_drv.h
@@ -10,6 +10,13 @@
 struct pds_vdpa_aux {
         struct pds_auxiliary_dev *padev;
 
+        struct vdpa_mgmt_dev vdpa_mdev;
+
+        struct pds_vdpa_ident ident;
+
+        int vf_id;
         struct dentry *dentry;
+
+        int nintrs;
 };
 #endif /* _AUX_DRV_H_ */
diff --git a/drivers/vdpa/pds/debugfs.c b/drivers/vdpa/pds/debugfs.c
index 3c163dc7b66f..7b7e90fd6578 100644
--- a/drivers/vdpa/pds/debugfs.c
+++ b/drivers/vdpa/pds/debugfs.c
@@ -1,7 +1,10 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /* Copyright(c) 2023 Advanced Micro Devices, Inc */
 
+#include
+
 #include
+#include
 #include
 
 #include "aux_drv.h"
diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c
new file mode 100644
index 000000000000..bd840688503c
--- /dev/null
+++ b/drivers/vdpa/pds/vdpa_dev.c
@@ -0,0 +1,113 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Advanced Micro Devices, Inc */
+
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+
+#include "vdpa_dev.h"
+#include "aux_drv.h"
+
+static struct virtio_device_id pds_vdpa_id_table[] = {
+        {VIRTIO_ID_NET, VIRTIO_DEV_ANY_ID},
+        {0},
+};
+
+static int pds_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
+                            const struct vdpa_dev_set_config *add_config)
+{
+        return -EOPNOTSUPP;
+}
+
+static void pds_vdpa_dev_del(struct vdpa_mgmt_dev *mdev,
+                             struct vdpa_device *vdpa_dev)
+{
+}
+
+static const struct vdpa_mgmtdev_ops pds_vdpa_mgmt_dev_ops = {
+        .dev_add = pds_vdpa_dev_add,
+        .dev_del = pds_vdpa_dev_del
+};
+
+int pds_vdpa_get_mgmt_info(struct pds_vdpa_aux *vdpa_aux)
+{
+        struct pds_vdpa_ident_cmd ident_cmd = {
+                .opcode = PDS_VDPA_CMD_IDENT,
+                .vf_id = cpu_to_le16(vdpa_aux->vf_id),
+        };
+        struct pds_vdpa_comp ident_comp = {0};
+        struct vdpa_mgmt_dev *mgmt;
+        struct device *pf_dev;
+        struct pci_dev *pdev;
+        dma_addr_t ident_pa;
+        struct device *dev;
+        u16 max_vqs;
+        int err;
+
+        dev = &vdpa_aux->padev->aux_dev.dev;
+        pdev = vdpa_aux->padev->vf->pdev;
+        mgmt = &vdpa_aux->vdpa_mdev;
+
+        /* Get resource info through the PF's adminq.  It is a block of info,
+         * so we need to map some memory for PF to make available to the
+         * firmware for writing the data.
+         */
+        pf_dev = vdpa_aux->padev->pf->dev;
+        ident_pa = dma_map_single(pf_dev, &vdpa_aux->ident,
+                                  sizeof(vdpa_aux->ident), DMA_FROM_DEVICE);
+        if (dma_mapping_error(pf_dev, ident_pa)) {
+                dev_err(dev, "Failed to map ident space\n");
+                return -ENOMEM;
+        }
+
+        ident_cmd.ident_pa = cpu_to_le64(ident_pa);
+        ident_cmd.len = cpu_to_le32(sizeof(vdpa_aux->ident));
+        err = vdpa_aux->padev->ops->adminq_cmd(vdpa_aux->padev,
+                                               (union pds_core_adminq_cmd *)&ident_cmd,
+                                               sizeof(ident_cmd),
+                                               (union pds_core_adminq_comp *)&ident_comp,
+                                               0);
+        dma_unmap_single(pf_dev, ident_pa,
+                         sizeof(vdpa_aux->ident), DMA_FROM_DEVICE);
+        if (err) {
+                dev_err(dev, "Failed to ident hw, status %d: %pe\n",
+                        ident_comp.status, ERR_PTR(err));
+                return err;
+        }
+
+        max_vqs = le16_to_cpu(vdpa_aux->ident.max_vqs);
+        mgmt->max_supported_vqs = min_t(u16, PDS_VDPA_MAX_QUEUES, max_vqs);
+        if (max_vqs > PDS_VDPA_MAX_QUEUES)
+                dev_info(dev, "FYI - Device supports more vqs (%d) than driver (%d)\n",
+                         max_vqs, PDS_VDPA_MAX_QUEUES);
+
+        mgmt->ops = &pds_vdpa_mgmt_dev_ops;
+        mgmt->id_table = pds_vdpa_id_table;
+        mgmt->device = dev;
+        mgmt->supported_features = le64_to_cpu(vdpa_aux->ident.hw_features);
+        mgmt->config_attr_mask = BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR);
+        mgmt->config_attr_mask |= BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP);
+
+        /* Set up interrupts now that we know how many we might want:
+         * each vq gets one, then add another for a control queue if supported
+         */
+        vdpa_aux->nintrs = mgmt->max_supported_vqs;
+        if (mgmt->supported_features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ))
+                vdpa_aux->nintrs++;
+
+        err = pci_alloc_irq_vectors(pdev, vdpa_aux->nintrs, vdpa_aux->nintrs,
+                                    PCI_IRQ_MSIX);
+        if (err < 0) {
+                dev_err(dev, "Couldn't get %d msix vectors: %pe\n",
+                        vdpa_aux->nintrs, ERR_PTR(err));
+                return err;
+        }
+        vdpa_aux->nintrs = err;
+
+        return 0;
+}
diff --git a/drivers/vdpa/pds/vdpa_dev.h b/drivers/vdpa/pds/vdpa_dev.h
new file mode 100644
index 000000000000..97fab833a0aa
--- /dev/null
+++ b/drivers/vdpa/pds/vdpa_dev.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Advanced Micro Devices, Inc */
+
+#ifndef _VDPA_DEV_H_
+#define _VDPA_DEV_H_
+
+#define PDS_VDPA_MAX_QUEUES     65
+
+struct pds_vdpa_device {
+        struct vdpa_device vdpa_dev;
+        struct pds_vdpa_aux *vdpa_aux;
+};
+
+int pds_vdpa_get_mgmt_info(struct pds_vdpa_aux *vdpa_aux);
+#endif /* _VDPA_DEV_H_ */
diff --git a/include/linux/pds/pds_vdpa.h b/include/linux/pds/pds_vdpa.h
index b5154e3b298e..3f7c08551163 100644
--- a/include/linux/pds/pds_vdpa.h
+++ b/include/linux/pds/pds_vdpa.h
@@ -9,4 +9,96 @@
 #define PDS_DEV_TYPE_VDPA_STR   "vDPA"
 #define PDS_VDPA_DEV_NAME       PDS_CORE_DRV_NAME "." PDS_DEV_TYPE_VDPA_STR
 
+/*
+ * enum pds_vdpa_cmd_opcode - vDPA Device commands
+ */
+enum pds_vdpa_cmd_opcode {
+        PDS_VDPA_CMD_INIT               = 48,
+        PDS_VDPA_CMD_IDENT              = 49,
+        PDS_VDPA_CMD_RESET              = 51,
+        PDS_VDPA_CMD_VQ_RESET           = 52,
+        PDS_VDPA_CMD_VQ_INIT            = 53,
+        PDS_VDPA_CMD_STATUS_UPDATE      = 54,
+        PDS_VDPA_CMD_SET_FEATURES       = 55,
+        PDS_VDPA_CMD_SET_ATTR           = 56,
+        PDS_VDPA_CMD_VQ_SET_STATE       = 57,
+        PDS_VDPA_CMD_VQ_GET_STATE       = 58,
+};
+
+/**
+ * struct pds_vdpa_cmd - generic command
+ * @opcode:     Opcode
+ * @vdpa_index: Index for vdpa subdevice
+ * @vf_id:      VF id
+ */
+struct pds_vdpa_cmd {
+        u8     opcode;
+        u8     vdpa_index;
+        __le16 vf_id;
+};
+
+/**
+ * struct pds_vdpa_comp - generic command completion
+ * @status:     Status of the command (enum pds_core_status_code)
+ * @rsvd:       Word boundary padding
+ * @color:      Color bit
+ */
+struct pds_vdpa_comp {
+        u8 status;
+        u8 rsvd[14];
+        u8 color;
+};
+
+/**
+ * struct pds_vdpa_init_cmd - INIT command
+ * @opcode:     Opcode PDS_VDPA_CMD_INIT
+ * @vdpa_index: Index for vdpa subdevice
+ * @vf_id:      VF id
+ * @len:        length of config info DMA space
+ * @config_pa:  address for DMA of virtio config struct
+ */
+struct pds_vdpa_init_cmd {
+        u8     opcode;
+        u8     vdpa_index;
+        __le16 vf_id;
+        __le32 len;
+        __le64 config_pa;
+};
+
+/**
+ * struct pds_vdpa_ident - vDPA identification data
+ * @hw_features:        vDPA features supported by device
+ * @max_vqs:            max queues available (2 queues for a single queuepair)
+ * @max_qlen:           log(2) of maximum number of descriptors
+ * @min_qlen:           log(2) of minimum number of descriptors
+ *
+ * This struct is used in a DMA block that is set up for the PDS_VDPA_CMD_IDENT
+ * transaction.  Set up the DMA block and send the address in the IDENT cmd
+ * data, the DSC will write the ident information, then we can remove the DMA
+ * block after reading the answer.  If the completion status is 0, then there
+ * is valid information, else there was an error and the data should be invalid.
+ */
+struct pds_vdpa_ident {
+        __le64 hw_features;
+        __le16 max_vqs;
+        __le16 max_qlen;
+        __le16 min_qlen;
+};
+
+/**
+ * struct pds_vdpa_ident_cmd - IDENT command
+ * @opcode:     Opcode PDS_VDPA_CMD_IDENT
+ * @rsvd:       Word boundary padding
+ * @vf_id:      VF id
+ * @len:        length of ident info DMA space
+ * @ident_pa:   address for DMA of ident info (struct pds_vdpa_ident)
+ *              only used for this transaction, then forgotten by DSC
+ */
+struct pds_vdpa_ident_cmd {
+        u8     opcode;
+        u8     rsvd;
+        __le16 vf_id;
+        __le32 len;
+        __le64 ident_pa;
+};
 #endif /* _PDS_VDPA_IF_H_ */
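The adminq commands in this patch travel as fixed little-endian structures (__le16/__le32/__le64 fields). As a standalone sketch of the 16-byte wire layout of struct pds_vdpa_ident_cmd, the following packs the same fields in userspace; the put_le* helpers are stand-ins for the kernel's cpu_to_le*() family, and this models only the byte layout, not the adminq transport:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Stand-ins for the kernel's cpu_to_le*() helpers: write the least
 * significant byte first so the result is little-endian regardless of
 * host byte order. */
static void put_le16(uint8_t *p, uint16_t v)
{
        p[0] = (uint8_t)v;
        p[1] = (uint8_t)(v >> 8);
}

static void put_le32(uint8_t *p, uint32_t v)
{
        for (int i = 0; i < 4; i++)
                p[i] = (uint8_t)(v >> (8 * i));
}

static void put_le64(uint8_t *p, uint64_t v)
{
        for (int i = 0; i < 8; i++)
                p[i] = (uint8_t)(v >> (8 * i));
}

#define PDS_VDPA_CMD_IDENT 49

/* Pack the 16-byte IDENT command exactly as struct pds_vdpa_ident_cmd
 * lays it out: u8 opcode; u8 rsvd; __le16 vf_id; __le32 len; __le64 ident_pa; */
static void pack_ident_cmd(uint8_t buf[16], uint16_t vf_id,
                           uint32_t len, uint64_t ident_pa)
{
        memset(buf, 0, 16);
        buf[0] = PDS_VDPA_CMD_IDENT;   /* opcode */
        /* buf[1] stays 0: rsvd padding byte */
        put_le16(buf + 2, vf_id);
        put_le32(buf + 4, len);
        put_le64(buf + 8, ident_pa);
}
```

In the driver, len is sizeof(struct pds_vdpa_ident) (14 bytes of feature/queue info) and ident_pa is the DMA address the firmware writes the answer into.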
b=ZSZeKVKSmHSpSd5WzK8hmDWBFnk2KMzA6VM1iDdvETLTokSdLhqcQYr1TY74ALYDLTTeP4p2DkhmhXoek/8Z+oAiiLQSOuj4Yt7rKfErML+Psca+lPOIzJFsTRd+DRgWGebHF5Fl8Yb2KKyP5Saf9N/ImXAPrsW/WHHNG8hoCczvvYRT+H9I/Rco3NcU1z2cl+242F0XbkOoTqTo3bu1fUlP4WJ7y6G/N04bNdnFC09WKjaET3OMJr5ennd5as9zsqS3WMImvCl6VXz6ZIN0vbmXTh/YrW4oCTleSl4/JEL1DF/V2jPQqe1zqOus3V0n24Y/+Z1DwMAdxPUxVJ5zag== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=QSDtJ+XkQ10yZTMF6PVjT5BXOMPpYCza/UEoYZoV760=; b=oCSH4IzPW4tJEieXW8y4ZBpxz62k3UE/KkW/VPkfZv0KERPqUYy3Mf2hlqNvmVDd0Os44oNsGSe0XSW9Jtr85G64dtMyBG3Kz9V9vxTibgn4IYXqGO9w9vjYFVF3fV+QW0TDXLBvyP8n/ctw4KrdqPcnzvQbmni4UyUvbhK3Dw5MwLbT74HVZlmcIhRextNMro78i1YCoj1jkBODNXXrHDXuEHCGCRzY6U2FIvDlIT/NHxaryMU9TU+1qeif+X04oHZTkwL/Aw1u4zZD68tnQAebvyS2U62INItslPwh31Xzha3UPnh8HqO9wW4tZH3xRxJNENWwDupbekc8h3TC/g== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 165.204.84.17) smtp.rcpttodomain=redhat.com smtp.mailfrom=amd.com; dmarc=pass (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=QSDtJ+XkQ10yZTMF6PVjT5BXOMPpYCza/UEoYZoV760=; b=ZereJTevIkcGure6Fw2Goiz2/JyEsoe/B1Hybb3KpIUOa9qM/TvSq6Bw0PmlZZA/Fw4L3S+j0rO63swlSzP1OYR1U2GWoV7gZK4aZIUsLM+gsDHnA4a15VfVi+fnm77Fx315uIylCe+/UIUlnPjyqJjRhjKoyNt3FPP97bzgu2U= Received: from BN9PR03CA0350.namprd03.prod.outlook.com (2603:10b6:408:f6::25) by DM4PR12MB5375.namprd12.prod.outlook.com (2603:10b6:5:389::8) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6178.19; Thu, 9 Mar 2023 01:31:28 +0000 Received: from 
From: Shannon Nelson
Subject: [PATCH RFC v2 virtio 3/7] pds_vdpa: virtio bar setup for vdpa
Date: Wed, 8 Mar 2023 17:30:42 -0800
Message-ID: <20230309013046.23523-4-shannon.nelson@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230309013046.23523-1-shannon.nelson@amd.com>
References: <20230309013046.23523-1-shannon.nelson@amd.com>
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-State: RFC

The PDS vDPA device has a virtio BAR for describing itself, and the
pds_vdpa driver needs to access it.  Here we copy liberally from the
existing drivers/virtio/virtio_pci_modern_dev.c, as it has what we need,
but modify it so that it can work with our device id and use our own
DMA mask.  We suspect there is room for discussion about making the
existing code a little more flexible, but we thought we'd at least
start the discussion here.

Signed-off-by: Shannon Nelson
---
 drivers/vdpa/pds/Makefile     |   1 +
 drivers/vdpa/pds/aux_drv.c    |  14 ++
 drivers/vdpa/pds/aux_drv.h    |   1 +
 drivers/vdpa/pds/debugfs.c    |   1 +
 drivers/vdpa/pds/vdpa_dev.c   |   1 +
 drivers/vdpa/pds/virtio_pci.c | 281 ++++++++++++++++++++++++++++++++++
 drivers/vdpa/pds/virtio_pci.h |   8 +
 7 files changed, 307 insertions(+)
 create mode 100644 drivers/vdpa/pds/virtio_pci.c
 create mode 100644 drivers/vdpa/pds/virtio_pci.h

diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile
index 13b50394ec64..ca2efa8c6eb5 100644
--- a/drivers/vdpa/pds/Makefile
+++ b/drivers/vdpa/pds/Makefile
@@ -4,6 +4,7 @@ obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o

 pds_vdpa-y := aux_drv.o \
+	      virtio_pci.o \
	      vdpa_dev.o

 pds_vdpa-$(CONFIG_DEBUG_FS) += debugfs.o

diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c
index 63e40ae68211..28158d0d98a5 100644
--- a/drivers/vdpa/pds/aux_drv.c
+++ b/drivers/vdpa/pds/aux_drv.c
@@ -4,6 +4,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -12,6 +13,7 @@
 #include "aux_drv.h"
 #include "debugfs.h"
 #include "vdpa_dev.h"
+#include "virtio_pci.h"

 static const struct auxiliary_device_id pds_vdpa_id_table[] = {
	{ .name = PDS_VDPA_DEV_NAME, },
@@ -49,8 +51,19 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev,
	if (err)
		goto err_aux_unreg;

+	/* Find the virtio configuration */
+
vdpa_aux->vd_mdev.pci_dev = padev->vf->pdev; + err = pds_vdpa_probe_virtio(&vdpa_aux->vd_mdev); + if (err) { + dev_err(dev, "Unable to probe for virtio configuration: %pe\n", + ERR_PTR(err)); + goto err_free_mgmt_info; + } + return 0; +err_free_mgmt_info: + pci_free_irq_vectors(padev->vf->pdev); err_aux_unreg: padev->ops->unregister_client(padev); err_free_mem: @@ -65,6 +78,7 @@ static void pds_vdpa_remove(struct auxiliary_device *aux_dev) struct pds_vdpa_aux *vdpa_aux = auxiliary_get_drvdata(aux_dev); struct device *dev = &aux_dev->dev; + pds_vdpa_remove_virtio(&vdpa_aux->vd_mdev); pci_free_irq_vectors(vdpa_aux->padev->vf->pdev); vdpa_aux->padev->ops->unregister_client(vdpa_aux->padev); diff --git a/drivers/vdpa/pds/aux_drv.h b/drivers/vdpa/pds/aux_drv.h index 94ba7abcaa43..87ac3c01c476 100644 --- a/drivers/vdpa/pds/aux_drv.h +++ b/drivers/vdpa/pds/aux_drv.h @@ -16,6 +16,7 @@ struct pds_vdpa_aux { int vf_id; struct dentry *dentry; + struct virtio_pci_modern_device vd_mdev; int nintrs; }; diff --git a/drivers/vdpa/pds/debugfs.c b/drivers/vdpa/pds/debugfs.c index 7b7e90fd6578..aa5e9677fe74 100644 --- a/drivers/vdpa/pds/debugfs.c +++ b/drivers/vdpa/pds/debugfs.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0-only /* Copyright(c) 2023 Advanced Micro Devices, Inc */ +#include #include #include diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c index bd840688503c..15d623297203 100644 --- a/drivers/vdpa/pds/vdpa_dev.c +++ b/drivers/vdpa/pds/vdpa_dev.c @@ -4,6 +4,7 @@ #include #include #include +#include #include #include diff --git a/drivers/vdpa/pds/virtio_pci.c b/drivers/vdpa/pds/virtio_pci.c new file mode 100644 index 000000000000..cb879619dac3 --- /dev/null +++ b/drivers/vdpa/pds/virtio_pci.c @@ -0,0 +1,281 @@ +// SPDX-License-Identifier: GPL-2.0-or-later + +/* + * adapted from drivers/virtio/virtio_pci_modern_dev.c, v6.0-rc1 + */ + +#include +#include + +#include "virtio_pci.h" + +/* + * pds_vdpa_map_capability - map a part of virtio pci 
capability + * @mdev: the modern virtio-pci device + * @off: offset of the capability + * @minlen: minimal length of the capability + * @align: align requirement + * @start: start from the capability + * @size: map size + * @len: the length that is actually mapped + * @pa: physical address of the capability + * + * Returns the io address of for the part of the capability + */ +static void __iomem * +pds_vdpa_map_capability(struct virtio_pci_modern_device *mdev, int off, + size_t minlen, u32 align, u32 start, u32 size, + size_t *len, resource_size_t *pa) +{ + struct pci_dev *dev = mdev->pci_dev; + u8 bar; + u32 offset, length; + void __iomem *p; + + pci_read_config_byte(dev, off + offsetof(struct virtio_pci_cap, + bar), + &bar); + pci_read_config_dword(dev, off + offsetof(struct virtio_pci_cap, offset), + &offset); + pci_read_config_dword(dev, off + offsetof(struct virtio_pci_cap, length), + &length); + + /* Check if the BAR may have changed since we requested the region. */ + if (bar >= PCI_STD_NUM_BARS || !(mdev->modern_bars & (1 << bar))) { + dev_err(&dev->dev, + "virtio_pci: bar unexpectedly changed to %u\n", bar); + return NULL; + } + + if (length <= start) { + dev_err(&dev->dev, + "virtio_pci: bad capability len %u (>%u expected)\n", + length, start); + return NULL; + } + + if (length - start < minlen) { + dev_err(&dev->dev, + "virtio_pci: bad capability len %u (>=%zu expected)\n", + length, minlen); + return NULL; + } + + length -= start; + + if (start + offset < offset) { + dev_err(&dev->dev, + "virtio_pci: map wrap-around %u+%u\n", + start, offset); + return NULL; + } + + offset += start; + + if (offset & (align - 1)) { + dev_err(&dev->dev, + "virtio_pci: offset %u not aligned to %u\n", + offset, align); + return NULL; + } + + if (length > size) + length = size; + + if (len) + *len = length; + + if (minlen + offset < minlen || + minlen + offset > pci_resource_len(dev, bar)) { + dev_err(&dev->dev, + "virtio_pci: map virtio %zu@%u out of range on bar %i 
length %lu\n", + minlen, offset, + bar, (unsigned long)pci_resource_len(dev, bar)); + return NULL; + } + + p = pci_iomap_range(dev, bar, offset, length); + if (!p) + dev_err(&dev->dev, + "virtio_pci: unable to map virtio %u@%u on bar %i\n", + length, offset, bar); + else if (pa) + *pa = pci_resource_start(dev, bar) + offset; + + return p; +} + +/** + * virtio_pci_find_capability - walk capabilities to find device info. + * @dev: the pci device + * @cfg_type: the VIRTIO_PCI_CAP_* value we seek + * @ioresource_types: IORESOURCE_MEM and/or IORESOURCE_IO. + * @bars: the bitmask of BARs + * + * Returns offset of the capability, or 0. + */ +static inline int virtio_pci_find_capability(struct pci_dev *dev, u8 cfg_type, + u32 ioresource_types, int *bars) +{ + int pos; + + for (pos = pci_find_capability(dev, PCI_CAP_ID_VNDR); + pos > 0; + pos = pci_find_next_capability(dev, pos, PCI_CAP_ID_VNDR)) { + u8 type, bar; + + pci_read_config_byte(dev, pos + offsetof(struct virtio_pci_cap, + cfg_type), + &type); + pci_read_config_byte(dev, pos + offsetof(struct virtio_pci_cap, + bar), + &bar); + + /* Ignore structures with reserved BAR values */ + if (bar >= PCI_STD_NUM_BARS) + continue; + + if (type == cfg_type) { + if (pci_resource_len(dev, bar) && + pci_resource_flags(dev, bar) & ioresource_types) { + *bars |= (1 << bar); + return pos; + } + } + } + return 0; +} + +/* + * pds_vdpa_probe_virtio: probe the modern virtio pci device, note that the + * caller is required to enable PCI device before calling this function. + * @mdev: the modern virtio-pci device + * + * Return 0 on succeed otherwise fail + */ +int pds_vdpa_probe_virtio(struct virtio_pci_modern_device *mdev) +{ + struct pci_dev *pci_dev = mdev->pci_dev; + int err, common, isr, notify, device; + u32 notify_length; + u32 notify_offset; + + /* check for a common config: if not, use legacy mode (bar 0). 
*/ + common = virtio_pci_find_capability(pci_dev, VIRTIO_PCI_CAP_COMMON_CFG, + IORESOURCE_IO | IORESOURCE_MEM, + &mdev->modern_bars); + if (!common) { + dev_info(&pci_dev->dev, + "virtio_pci: missing common config\n"); + return -ENODEV; + } + + /* If common is there, these should be too... */ + isr = virtio_pci_find_capability(pci_dev, VIRTIO_PCI_CAP_ISR_CFG, + IORESOURCE_IO | IORESOURCE_MEM, + &mdev->modern_bars); + notify = virtio_pci_find_capability(pci_dev, VIRTIO_PCI_CAP_NOTIFY_CFG, + IORESOURCE_IO | IORESOURCE_MEM, + &mdev->modern_bars); + if (!isr || !notify) { + dev_err(&pci_dev->dev, + "virtio_pci: missing capabilities %i/%i/%i\n", + common, isr, notify); + return -EINVAL; + } + + /* Device capability is only mandatory for devices that have + * device-specific configuration. + */ + device = virtio_pci_find_capability(pci_dev, VIRTIO_PCI_CAP_DEVICE_CFG, + IORESOURCE_IO | IORESOURCE_MEM, + &mdev->modern_bars); + + err = pci_request_selected_regions(pci_dev, mdev->modern_bars, + "virtio-pci-modern"); + if (err) + return err; + + err = -EINVAL; + mdev->common = pds_vdpa_map_capability(mdev, common, + sizeof(struct virtio_pci_common_cfg), + 4, 0, + sizeof(struct virtio_pci_common_cfg), + NULL, NULL); + if (!mdev->common) + goto err_map_common; + mdev->isr = pds_vdpa_map_capability(mdev, isr, sizeof(u8), 1, + 0, 1, NULL, NULL); + if (!mdev->isr) + goto err_map_isr; + + /* Read notify_off_multiplier from config space. */ + pci_read_config_dword(pci_dev, + notify + offsetof(struct virtio_pci_notify_cap, + notify_off_multiplier), + &mdev->notify_offset_multiplier); + /* Read notify length and offset from config space. */ + pci_read_config_dword(pci_dev, + notify + offsetof(struct virtio_pci_notify_cap, + cap.length), + ¬ify_length); + + pci_read_config_dword(pci_dev, + notify + offsetof(struct virtio_pci_notify_cap, + cap.offset), + ¬ify_offset); + + /* We don't know how many VQs we'll map, ahead of the time. + * If notify length is small, map it all now. 
+ * Otherwise, map each VQ individually later. + */ + if ((u64)notify_length + (notify_offset % PAGE_SIZE) <= PAGE_SIZE) { + mdev->notify_base = pds_vdpa_map_capability(mdev, notify, + 2, 2, + 0, notify_length, + &mdev->notify_len, + &mdev->notify_pa); + if (!mdev->notify_base) + goto err_map_notify; + } else { + mdev->notify_map_cap = notify; + } + + /* Again, we don't know how much we should map, but PAGE_SIZE + * is more than enough for all existing devices. + */ + if (device) { + mdev->device = pds_vdpa_map_capability(mdev, device, 0, 4, + 0, PAGE_SIZE, + &mdev->device_len, + NULL); + if (!mdev->device) + goto err_map_device; + } + + return 0; + +err_map_device: + if (mdev->notify_base) + pci_iounmap(pci_dev, mdev->notify_base); +err_map_notify: + pci_iounmap(pci_dev, mdev->isr); +err_map_isr: + pci_iounmap(pci_dev, mdev->common); +err_map_common: + pci_release_selected_regions(pci_dev, mdev->modern_bars); + return err; +} + +void pds_vdpa_remove_virtio(struct virtio_pci_modern_device *mdev) +{ + struct pci_dev *pci_dev = mdev->pci_dev; + + if (mdev->device) + pci_iounmap(pci_dev, mdev->device); + if (mdev->notify_base) + pci_iounmap(pci_dev, mdev->notify_base); + pci_iounmap(pci_dev, mdev->isr); + pci_iounmap(pci_dev, mdev->common); + pci_release_selected_regions(pci_dev, mdev->modern_bars); +} diff --git a/drivers/vdpa/pds/virtio_pci.h b/drivers/vdpa/pds/virtio_pci.h new file mode 100644 index 000000000000..f017cfa1173c --- /dev/null +++ b/drivers/vdpa/pds/virtio_pci.h @@ -0,0 +1,8 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright(c) 2023 Advanced Micro Devices, Inc */ + +#ifndef _PDS_VIRTIO_PCI_H_ +#define _PDS_VIRTIO_PCI_H_ +int pds_vdpa_probe_virtio(struct virtio_pci_modern_device *mdev); +void pds_vdpa_remove_virtio(struct virtio_pci_modern_device *mdev); +#endif /* _PDS_VIRTIO_PCI_H_ */ From patchwork Thu Mar 9 01:30:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: 
"Nelson, Shannon"
X-Patchwork-Id: 13166707
From: Shannon Nelson
Subject: [PATCH RFC v2 virtio 4/7] pds_vdpa: add vdpa config client commands
Date: Wed, 8 Mar 2023 17:30:43 -0800
Message-ID: <20230309013046.23523-5-shannon.nelson@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230309013046.23523-1-shannon.nelson@amd.com>
References: <20230309013046.23523-1-shannon.nelson@amd.com>
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-State: RFC

These are the adminq commands that will be needed for setting up and
using the vDPA device.
Signed-off-by: Shannon Nelson --- drivers/vdpa/pds/Makefile | 1 + drivers/vdpa/pds/cmds.c | 207 +++++++++++++++++++++++++++++++++++ drivers/vdpa/pds/cmds.h | 16 +++ drivers/vdpa/pds/vdpa_dev.h | 36 +++++- include/linux/pds/pds_vdpa.h | 175 +++++++++++++++++++++++++++++ 5 files changed, 434 insertions(+), 1 deletion(-) create mode 100644 drivers/vdpa/pds/cmds.c create mode 100644 drivers/vdpa/pds/cmds.h diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile index ca2efa8c6eb5..7211eba3d942 100644 --- a/drivers/vdpa/pds/Makefile +++ b/drivers/vdpa/pds/Makefile @@ -4,6 +4,7 @@ obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o pds_vdpa-y := aux_drv.o \ + cmds.o \ virtio_pci.o \ vdpa_dev.o diff --git a/drivers/vdpa/pds/cmds.c b/drivers/vdpa/pds/cmds.c new file mode 100644 index 000000000000..45410739107c --- /dev/null +++ b/drivers/vdpa/pds/cmds.c @@ -0,0 +1,207 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2023 Advanced Micro Devices, Inc */ + +#include +#include + +#include +#include +#include +#include + +#include "vdpa_dev.h" +#include "aux_drv.h" +#include "cmds.h" + +int pds_vdpa_init_hw(struct pds_vdpa_device *pdsv) +{ + struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + struct pds_vdpa_init_cmd init_cmd = { + .opcode = PDS_VDPA_CMD_INIT, + .vdpa_index = pdsv->vdpa_index, + .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id), + .len = cpu_to_le32(sizeof(struct virtio_net_config)), + .config_pa = 0, /* we use the PCI space, not an alternate space */ + }; + struct pds_vdpa_comp init_comp = {0}; + int err; + + /* Initialize the vdpa/virtio device */ + err = padev->ops->adminq_cmd(padev, + (union pds_core_adminq_cmd *)&init_cmd, + sizeof(init_cmd), + (union pds_core_adminq_comp *)&init_comp, + 0); + if (err) + dev_err(dev, "Failed to init hw, status %d: %pe\n", + init_comp.status, ERR_PTR(err)); + + return err; +} + +int pds_vdpa_cmd_reset(struct pds_vdpa_device *pdsv) +{ + struct pds_auxiliary_dev *padev = 
pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + struct pds_vdpa_cmd cmd = { + .opcode = PDS_VDPA_CMD_RESET, + .vdpa_index = pdsv->vdpa_index, + .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id), + }; + struct pds_vdpa_comp comp = {0}; + int err; + + err = padev->ops->adminq_cmd(padev, + (union pds_core_adminq_cmd *)&cmd, + sizeof(cmd), + (union pds_core_adminq_comp *)&comp, + 0); + if (err) + dev_err(dev, "Failed to reset hw, status %d: %pe\n", + comp.status, ERR_PTR(err)); + + return err; +} + +int pds_vdpa_cmd_set_mac(struct pds_vdpa_device *pdsv, u8 *mac) +{ + struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + struct pds_vdpa_setattr_cmd cmd = { + .opcode = PDS_VDPA_CMD_SET_ATTR, + .vdpa_index = pdsv->vdpa_index, + .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id), + .attr = PDS_VDPA_ATTR_MAC, + }; + struct pds_vdpa_comp comp = {0}; + int err; + + ether_addr_copy(cmd.mac, mac); + err = padev->ops->adminq_cmd(padev, + (union pds_core_adminq_cmd *)&cmd, + sizeof(cmd), + (union pds_core_adminq_comp *)&comp, + 0); + if (err) + dev_err(dev, "Failed to set mac address %pM, status %d: %pe\n", + mac, comp.status, ERR_PTR(err)); + + return err; +} + +int pds_vdpa_cmd_set_max_vq_pairs(struct pds_vdpa_device *pdsv, u16 max_vqp) +{ + struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + struct pds_vdpa_setattr_cmd cmd = { + .opcode = PDS_VDPA_CMD_SET_ATTR, + .vdpa_index = pdsv->vdpa_index, + .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id), + .attr = PDS_VDPA_ATTR_MAX_VQ_PAIRS, + .max_vq_pairs = cpu_to_le16(max_vqp), + }; + struct pds_vdpa_comp comp = {0}; + int err; + + err = padev->ops->adminq_cmd(padev, + (union pds_core_adminq_cmd *)&cmd, + sizeof(cmd), + (union pds_core_adminq_comp *)&comp, + 0); + if (err) + dev_err(dev, "Failed to set max vq pairs %u, status %d: %pe\n", + max_vqp, comp.status, ERR_PTR(err)); + + return err; +} + +int pds_vdpa_cmd_init_vq(struct 
pds_vdpa_device *pdsv, u16 qid, + struct pds_vdpa_vq_info *vq_info) +{ + struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + struct pds_vdpa_vq_init_comp comp = {0}; + struct pds_vdpa_vq_init_cmd cmd = { + .opcode = PDS_VDPA_CMD_VQ_INIT, + .vdpa_index = pdsv->vdpa_index, + .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id), + .qid = cpu_to_le16(qid), + .len = cpu_to_le16(ilog2(vq_info->q_len)), + .desc_addr = cpu_to_le64(vq_info->desc_addr), + .avail_addr = cpu_to_le64(vq_info->avail_addr), + .used_addr = cpu_to_le64(vq_info->used_addr), + .intr_index = cpu_to_le16(qid), + }; + int err; + + dev_dbg(dev, "%s: qid %d len %d desc_addr %#llx avail_addr %#llx used_addr %#llx\n", + __func__, qid, ilog2(vq_info->q_len), + vq_info->desc_addr, vq_info->avail_addr, vq_info->used_addr); + + err = padev->ops->adminq_cmd(padev, + (union pds_core_adminq_cmd *)&cmd, + sizeof(cmd), + (union pds_core_adminq_comp *)&comp, + 0); + if (err) { + dev_err(dev, "Failed to init vq %d, status %d: %pe\n", + qid, comp.status, ERR_PTR(err)); + return err; + } + + vq_info->hw_qtype = comp.hw_qtype; + vq_info->hw_qindex = le16_to_cpu(comp.hw_qindex); + + return 0; +} + +int pds_vdpa_cmd_reset_vq(struct pds_vdpa_device *pdsv, u16 qid) +{ + struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + struct pds_vdpa_vq_reset_cmd cmd = { + .opcode = PDS_VDPA_CMD_VQ_RESET, + .vdpa_index = pdsv->vdpa_index, + .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id), + .qid = cpu_to_le16(qid), + }; + struct pds_vdpa_comp comp = {0}; + int err; + + err = padev->ops->adminq_cmd(padev, + (union pds_core_adminq_cmd *)&cmd, + sizeof(cmd), + (union pds_core_adminq_comp *)&comp, + 0); + if (err) + dev_err(dev, "Failed to reset vq %d, status %d: %pe\n", + qid, comp.status, ERR_PTR(err)); + + return err; +} + +int pds_vdpa_cmd_set_features(struct pds_vdpa_device *pdsv, u64 features) +{ + struct pds_auxiliary_dev *padev = 
pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + struct pds_vdpa_set_features_cmd cmd = { + .opcode = PDS_VDPA_CMD_SET_FEATURES, + .vdpa_index = pdsv->vdpa_index, + .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id), + .features = cpu_to_le64(features), + }; + struct pds_vdpa_comp comp = {0}; + int err; + + err = padev->ops->adminq_cmd(padev, + (union pds_core_adminq_cmd *)&cmd, + sizeof(cmd), + (union pds_core_adminq_comp *)&comp, + 0); + if (err) + dev_err(dev, "Failed to set features %#llx, status %d: %pe\n", + features, comp.status, ERR_PTR(err)); + + return err; +} diff --git a/drivers/vdpa/pds/cmds.h b/drivers/vdpa/pds/cmds.h new file mode 100644 index 000000000000..72e19f4efde6 --- /dev/null +++ b/drivers/vdpa/pds/cmds.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright(c) 2023 Advanced Micro Devices, Inc */ + +#ifndef _VDPA_CMDS_H_ +#define _VDPA_CMDS_H_ + +int pds_vdpa_init_hw(struct pds_vdpa_device *pdsv); + +int pds_vdpa_cmd_reset(struct pds_vdpa_device *pdsv); +int pds_vdpa_cmd_set_mac(struct pds_vdpa_device *pdsv, u8 *mac); +int pds_vdpa_cmd_set_max_vq_pairs(struct pds_vdpa_device *pdsv, u16 max_vqp); +int pds_vdpa_cmd_init_vq(struct pds_vdpa_device *pdsv, u16 qid, + struct pds_vdpa_vq_info *vq_info); +int pds_vdpa_cmd_reset_vq(struct pds_vdpa_device *pdsv, u16 qid); +int pds_vdpa_cmd_set_features(struct pds_vdpa_device *pdsv, u64 features); +#endif /* _VDPA_CMDS_H_ */ diff --git a/drivers/vdpa/pds/vdpa_dev.h b/drivers/vdpa/pds/vdpa_dev.h index 97fab833a0aa..33284ebe538c 100644 --- a/drivers/vdpa/pds/vdpa_dev.h +++ b/drivers/vdpa/pds/vdpa_dev.h @@ -4,11 +4,45 @@ #ifndef _VDPA_DEV_H_ #define _VDPA_DEV_H_ -#define PDS_VDPA_MAX_QUEUES 65 +#include +#include + +struct pds_vdpa_vq_info { + bool ready; + u64 desc_addr; + u64 avail_addr; + u64 used_addr; + u32 q_len; + u16 qid; + int irq; + char irq_name[32]; + + void __iomem *notify; + dma_addr_t notify_pa; + + u64 doorbell; + u16 avail_idx; + u16 used_idx; + + u8 
hw_qtype; + u16 hw_qindex; + struct vdpa_callback event_cb; + struct pds_vdpa_device *pdsv; +}; + +#define PDS_VDPA_MAX_QUEUES 65 +#define PDS_VDPA_MAX_QLEN 32768 struct pds_vdpa_device { struct vdpa_device vdpa_dev; struct pds_vdpa_aux *vdpa_aux; + + struct pds_vdpa_vq_info vqs[PDS_VDPA_MAX_QUEUES]; + u64 req_features; /* features requested by vdpa */ + u64 actual_features; /* features negotiated and in use */ + u8 vdpa_index; /* rsvd for future subdevice use */ + u8 num_vqs; /* num vqs in use */ + struct vdpa_callback config_cb; }; int pds_vdpa_get_mgmt_info(struct pds_vdpa_aux *vdpa_aux); diff --git a/include/linux/pds/pds_vdpa.h b/include/linux/pds/pds_vdpa.h index 3f7c08551163..b6a4cb4d3c6b 100644 --- a/include/linux/pds/pds_vdpa.h +++ b/include/linux/pds/pds_vdpa.h @@ -101,4 +101,179 @@ struct pds_vdpa_ident_cmd { __le32 len; __le64 ident_pa; }; + +/** + * struct pds_vdpa_status_cmd - STATUS_UPDATE command + * @opcode: Opcode PDS_VDPA_CMD_STATUS_UPDATE + * @vdpa_index: Index for vdpa subdevice + * @vf_id: VF id + * @status: new status bits + */ +struct pds_vdpa_status_cmd { + u8 opcode; + u8 vdpa_index; + __le16 vf_id; + u8 status; +}; + +/** + * enum pds_vdpa_attr - List of VDPA device attributes + * @PDS_VDPA_ATTR_MAC: MAC address + * @PDS_VDPA_ATTR_MAX_VQ_PAIRS: Max virtqueue pairs + */ +enum pds_vdpa_attr { + PDS_VDPA_ATTR_MAC = 1, + PDS_VDPA_ATTR_MAX_VQ_PAIRS = 2, +}; + +/** + * struct pds_vdpa_setattr_cmd - SET_ATTR command + * @opcode: Opcode PDS_VDPA_CMD_SET_ATTR + * @vdpa_index: Index for vdpa subdevice + * @vf_id: VF id + * @attr: attribute to be changed (enum pds_vdpa_attr) + * @pad: Word boundary padding + * @mac: new mac address to be assigned as vdpa device address + * @max_vq_pairs: new limit of virtqueue pairs + */ +struct pds_vdpa_setattr_cmd { + u8 opcode; + u8 vdpa_index; + __le16 vf_id; + u8 attr; + u8 pad[3]; + union { + u8 mac[6]; + __le16 max_vq_pairs; + } __packed; +}; + +/** + * struct pds_vdpa_vq_init_cmd - queue init command + * 
@opcode: Opcode PDS_VDPA_CMD_VQ_INIT + * @vdpa_index: Index for vdpa subdevice + * @vf_id: VF id + * @qid: Queue id (bit0 clear = rx, bit0 set = tx, qid=N is ctrlq) + * @len: log(2) of max descriptor count + * @desc_addr: DMA address of descriptor area + * @avail_addr: DMA address of available descriptors (aka driver area) + * @used_addr: DMA address of used descriptors (aka device area) + * @intr_index: interrupt index + */ +struct pds_vdpa_vq_init_cmd { + u8 opcode; + u8 vdpa_index; + __le16 vf_id; + __le16 qid; + __le16 len; + __le64 desc_addr; + __le64 avail_addr; + __le64 used_addr; + __le16 intr_index; +}; + +/** + * struct pds_vdpa_vq_init_comp - queue init completion + * @status: Status of the command (enum pds_core_status_code) + * @hw_qtype: HW queue type, used in doorbell selection + * @hw_qindex: HW queue index, used in doorbell selection + * @rsvd: Word boundary padding + * @color: Color bit + */ +struct pds_vdpa_vq_init_comp { + u8 status; + u8 hw_qtype; + __le16 hw_qindex; + u8 rsvd[11]; + u8 color; +}; + +/** + * struct pds_vdpa_vq_reset_cmd - queue reset command + * @opcode: Opcode PDS_VDPA_CMD_VQ_RESET + * @vdpa_index: Index for vdpa subdevice + * @vf_id: VF id + * @qid: Queue id + */ +struct pds_vdpa_vq_reset_cmd { + u8 opcode; + u8 vdpa_index; + __le16 vf_id; + __le16 qid; +}; + +/** + * struct pds_vdpa_set_features_cmd - set hw features + * @opcode: Opcode PDS_VDPA_CMD_SET_FEATURES + * @vdpa_index: Index for vdpa subdevice + * @vf_id: VF id + * @rsvd: Word boundary padding + * @features: Feature bit mask + */ +struct pds_vdpa_set_features_cmd { + u8 opcode; + u8 vdpa_index; + __le16 vf_id; + __le32 rsvd; + __le64 features; +}; + +/** + * struct pds_vdpa_vq_set_state_cmd - set vq state + * @opcode: Opcode PDS_VDPA_CMD_VQ_SET_STATE + * @vdpa_index: Index for vdpa subdevice + * @vf_id: VF id + * @qid: Queue id + * @avail: Device avail index. + * @used: Device used index. 
+ * + * If the virtqueue uses packed descriptor format, then the avail and used + * index must have a wrap count. The bits should be arranged like the upper + * 16 bits in the device available notification data: 15 bit index, 1 bit wrap. + */ +struct pds_vdpa_vq_set_state_cmd { + u8 opcode; + u8 vdpa_index; + __le16 vf_id; + __le16 qid; + __le16 avail; + __le16 used; +}; + +/** + * struct pds_vdpa_vq_get_state_cmd - get vq state + * @opcode: Opcode PDS_VDPA_CMD_VQ_GET_STATE + * @vdpa_index: Index for vdpa subdevice + * @vf_id: VF id + * @qid: Queue id + */ +struct pds_vdpa_vq_get_state_cmd { + u8 opcode; + u8 vdpa_index; + __le16 vf_id; + __le16 qid; +}; + +/** + * struct pds_vdpa_vq_get_state_comp - get vq state completion + * @status: Status of the command (enum pds_core_status_code) + * @rsvd0: Word boundary padding + * @avail: Device avail index. + * @used: Device used index. + * @rsvd: Word boundary padding + * @color: Color bit + * + * If the virtqueue uses packed descriptor format, then the avail and used + * index will have a wrap count. The bits will be arranged like the "next" + * part of device available notification data: 15 bit index, 1 bit wrap. 
+ */ +struct pds_vdpa_vq_get_state_comp { + u8 status; + u8 rsvd0; + __le16 avail; + __le16 used; + u8 rsvd[9]; + u8 color; +}; + #endif /* _PDS_VDPA_IF_H_ */ From patchwork Thu Mar 9 01:30:44 2023 X-Patchwork-Submitter: "Nelson, Shannon" X-Patchwork-Id: 13166710
From: Shannon Nelson Subject: [PATCH RFC v2 virtio 5/7] pds_vdpa: add support for vdpa and vdpamgmt interfaces Date: Wed, 8 Mar 2023 17:30:44 -0800 Message-ID: <20230309013046.23523-6-shannon.nelson@amd.com> In-Reply-To: <20230309013046.23523-1-shannon.nelson@amd.com> References: <20230309013046.23523-1-shannon.nelson@amd.com>
X-Mailing-List: netdev@vger.kernel.org X-Patchwork-State: RFC This is the vDPA device support, where we advertise that we can support the virtio queues and deal with the configuration work through pds_core's adminq. Signed-off-by: Shannon Nelson --- drivers/vdpa/pds/aux_drv.c | 15 + drivers/vdpa/pds/aux_drv.h | 1 + drivers/vdpa/pds/debugfs.c | 172 ++++++++++++ drivers/vdpa/pds/debugfs.h | 8 + drivers/vdpa/pds/vdpa_dev.c | 545 +++++++++++++++++++++++++++++++++++- 5 files changed, 740 insertions(+), 1 deletion(-) diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c index 28158d0d98a5..d706f06f7400 100644 --- a/drivers/vdpa/pds/aux_drv.c +++ b/drivers/vdpa/pds/aux_drv.c @@ -60,8 +60,21 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev, goto err_free_mgmt_info; } + /* Let vdpa know that we can provide devices */ + err = vdpa_mgmtdev_register(&vdpa_aux->vdpa_mdev); + if (err) { + dev_err(dev, "%s: Failed to initialize vdpa_mgmt interface: %pe\n", + __func__, ERR_PTR(err)); + goto err_free_virtio; + } + + pds_vdpa_debugfs_add_pcidev(vdpa_aux); + pds_vdpa_debugfs_add_ident(vdpa_aux); + return 0; +err_free_virtio: + pds_vdpa_remove_virtio(&vdpa_aux->vd_mdev); err_free_mgmt_info: pci_free_irq_vectors(padev->vf->pdev); err_aux_unreg: @@ -78,11 +91,13 @@ static void pds_vdpa_remove(struct auxiliary_device *aux_dev) struct pds_vdpa_aux *vdpa_aux = auxiliary_get_drvdata(aux_dev); struct device *dev = &aux_dev->dev; + vdpa_mgmtdev_unregister(&vdpa_aux->vdpa_mdev); pds_vdpa_remove_virtio(&vdpa_aux->vd_mdev); pci_free_irq_vectors(vdpa_aux->padev->vf->pdev); vdpa_aux->padev->ops->unregister_client(vdpa_aux->padev); + pds_vdpa_debugfs_del_vdpadev(vdpa_aux); kfree(vdpa_aux); auxiliary_set_drvdata(aux_dev, NULL); diff --git a/drivers/vdpa/pds/aux_drv.h b/drivers/vdpa/pds/aux_drv.h index 87ac3c01c476..1ab1ce64da7c 100644 --- a/drivers/vdpa/pds/aux_drv.h +++ 
b/drivers/vdpa/pds/aux_drv.h @@ -11,6 +11,7 @@ struct pds_vdpa_aux { struct pds_auxiliary_dev *padev; struct vdpa_mgmt_dev vdpa_mdev; + struct pds_vdpa_device *pdsv; struct pds_vdpa_ident ident; diff --git a/drivers/vdpa/pds/debugfs.c b/drivers/vdpa/pds/debugfs.c index aa5e9677fe74..b3ee4f42f3b6 100644 --- a/drivers/vdpa/pds/debugfs.c +++ b/drivers/vdpa/pds/debugfs.c @@ -9,6 +9,7 @@ #include #include "aux_drv.h" +#include "vdpa_dev.h" #include "debugfs.h" #ifdef CONFIG_DEBUG_FS @@ -26,4 +27,175 @@ void pds_vdpa_debugfs_destroy(void) dbfs_dir = NULL; } +#define PRINT_SBIT_NAME(__seq, __f, __name) \ + do { \ + if ((__f) & (__name)) \ + seq_printf(__seq, " %s", &#__name[16]); \ + } while (0) + +static void print_status_bits(struct seq_file *seq, u16 status) +{ + seq_puts(seq, "status:"); + PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_ACKNOWLEDGE); + PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_DRIVER); + PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_DRIVER_OK); + PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_FEATURES_OK); + PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_NEEDS_RESET); + PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_FAILED); + seq_puts(seq, "\n"); +} + +#define PRINT_FBIT_NAME(__seq, __f, __name) \ + do { \ + if ((__f) & BIT_ULL(__name)) \ + seq_printf(__seq, " %s", #__name); \ + } while (0) + +static void print_feature_bits(struct seq_file *seq, u64 features) +{ + seq_puts(seq, "features:"); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CSUM); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_GUEST_CSUM); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_MTU); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_MAC); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_GUEST_TSO4); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_GUEST_TSO6); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_GUEST_ECN); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_GUEST_UFO); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_HOST_TSO4); 
+ PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_HOST_TSO6); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_HOST_ECN); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_HOST_UFO); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_MRG_RXBUF); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_STATUS); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CTRL_VQ); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CTRL_RX); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CTRL_VLAN); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CTRL_RX_EXTRA); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_GUEST_ANNOUNCE); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_MQ); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_CTRL_MAC_ADDR); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_HASH_REPORT); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_RSS); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_RSC_EXT); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_STANDBY); + PRINT_FBIT_NAME(seq, features, VIRTIO_NET_F_SPEED_DUPLEX); + PRINT_FBIT_NAME(seq, features, VIRTIO_F_NOTIFY_ON_EMPTY); + PRINT_FBIT_NAME(seq, features, VIRTIO_F_ANY_LAYOUT); + PRINT_FBIT_NAME(seq, features, VIRTIO_F_VERSION_1); + PRINT_FBIT_NAME(seq, features, VIRTIO_F_ACCESS_PLATFORM); + PRINT_FBIT_NAME(seq, features, VIRTIO_F_RING_PACKED); + PRINT_FBIT_NAME(seq, features, VIRTIO_F_ORDER_PLATFORM); + PRINT_FBIT_NAME(seq, features, VIRTIO_F_SR_IOV); + seq_puts(seq, "\n"); +} + +void pds_vdpa_debugfs_add_pcidev(struct pds_vdpa_aux *vdpa_aux) +{ + vdpa_aux->dentry = debugfs_create_dir(pci_name(vdpa_aux->padev->vf->pdev), dbfs_dir); +} + +static int identity_show(struct seq_file *seq, void *v) +{ + struct pds_vdpa_aux *vdpa_aux = seq->private; + struct vdpa_mgmt_dev *mgmt; + + seq_printf(seq, "aux_dev: %s\n", + dev_name(&vdpa_aux->padev->aux_dev.dev)); + + mgmt = &vdpa_aux->vdpa_mdev; + seq_printf(seq, "max_vqs: %d\n", mgmt->max_supported_vqs); + seq_printf(seq, "config_attr_mask: %#llx\n", mgmt->config_attr_mask); + seq_printf(seq, "supported_features: %#llx\n", 
mgmt->supported_features); + print_feature_bits(seq, mgmt->supported_features); + + return 0; +} +DEFINE_SHOW_ATTRIBUTE(identity); + +void pds_vdpa_debugfs_add_ident(struct pds_vdpa_aux *vdpa_aux) +{ + debugfs_create_file("identity", 0400, vdpa_aux->dentry, + vdpa_aux, &identity_fops); +} + +static int config_show(struct seq_file *seq, void *v) +{ + struct pds_vdpa_device *pdsv = seq->private; + struct virtio_net_config vc; + + memcpy_fromio(&vc, pdsv->vdpa_aux->vd_mdev.device, + sizeof(struct virtio_net_config)); + + seq_printf(seq, "mac: %pM\n", vc.mac); + seq_printf(seq, "max_virtqueue_pairs: %d\n", + __virtio16_to_cpu(true, vc.max_virtqueue_pairs)); + seq_printf(seq, "mtu: %d\n", __virtio16_to_cpu(true, vc.mtu)); + seq_printf(seq, "speed: %d\n", le32_to_cpu(vc.speed)); + seq_printf(seq, "duplex: %d\n", vc.duplex); + seq_printf(seq, "rss_max_key_size: %d\n", vc.rss_max_key_size); + seq_printf(seq, "rss_max_indirection_table_length: %d\n", + le16_to_cpu(vc.rss_max_indirection_table_length)); + seq_printf(seq, "supported_hash_types: %#x\n", + le32_to_cpu(vc.supported_hash_types)); + seq_printf(seq, "vn_status: %#x\n", + __virtio16_to_cpu(true, vc.status)); + print_status_bits(seq, __virtio16_to_cpu(true, vc.status)); + + seq_printf(seq, "req_features: %#llx\n", pdsv->req_features); + print_feature_bits(seq, pdsv->req_features); + seq_printf(seq, "actual_features: %#llx\n", pdsv->actual_features); + print_feature_bits(seq, pdsv->actual_features); + seq_printf(seq, "vdpa_index: %d\n", pdsv->vdpa_index); + seq_printf(seq, "num_vqs: %d\n", pdsv->num_vqs); + + return 0; +} +DEFINE_SHOW_ATTRIBUTE(config); + +static int vq_show(struct seq_file *seq, void *v) +{ + struct pds_vdpa_vq_info *vq = seq->private; + + seq_printf(seq, "ready: %d\n", vq->ready); + seq_printf(seq, "desc_addr: %#llx\n", vq->desc_addr); + seq_printf(seq, "avail_addr: %#llx\n", vq->avail_addr); + seq_printf(seq, "used_addr: %#llx\n", vq->used_addr); + seq_printf(seq, "q_len: %d\n", vq->q_len); + 
seq_printf(seq, "qid: %d\n", vq->qid); + + seq_printf(seq, "doorbell: %#llx\n", vq->doorbell); + seq_printf(seq, "avail_idx: %d\n", vq->avail_idx); + seq_printf(seq, "used_idx: %d\n", vq->used_idx); + seq_printf(seq, "irq: %d\n", vq->irq); + seq_printf(seq, "irq-name: %s\n", vq->irq_name); + + seq_printf(seq, "hw_qtype: %d\n", vq->hw_qtype); + seq_printf(seq, "hw_qindex: %d\n", vq->hw_qindex); + + return 0; +} +DEFINE_SHOW_ATTRIBUTE(vq); + +void pds_vdpa_debugfs_add_vdpadev(struct pds_vdpa_aux *vdpa_aux) +{ + int i; + + debugfs_create_file("config", 0400, vdpa_aux->dentry, vdpa_aux->pdsv, &config_fops); + + for (i = 0; i < vdpa_aux->pdsv->num_vqs; i++) { + char name[8]; + + snprintf(name, sizeof(name), "vq%02d", i); + debugfs_create_file(name, 0400, vdpa_aux->dentry, + &vdpa_aux->pdsv->vqs[i], &vq_fops); + } +} + +void pds_vdpa_debugfs_del_vdpadev(struct pds_vdpa_aux *vdpa_aux) +{ + debugfs_remove_recursive(vdpa_aux->dentry); + vdpa_aux->dentry = NULL; +} #endif /* CONFIG_DEBUG_FS */ diff --git a/drivers/vdpa/pds/debugfs.h b/drivers/vdpa/pds/debugfs.h index fff078a869e5..23e8345add0d 100644 --- a/drivers/vdpa/pds/debugfs.h +++ b/drivers/vdpa/pds/debugfs.h @@ -10,9 +10,17 @@ void pds_vdpa_debugfs_create(void); void pds_vdpa_debugfs_destroy(void); +void pds_vdpa_debugfs_add_pcidev(struct pds_vdpa_aux *vdpa_aux); +void pds_vdpa_debugfs_add_ident(struct pds_vdpa_aux *vdpa_aux); +void pds_vdpa_debugfs_add_vdpadev(struct pds_vdpa_aux *vdpa_aux); +void pds_vdpa_debugfs_del_vdpadev(struct pds_vdpa_aux *vdpa_aux); #else static inline void pds_vdpa_debugfs_create(void) { } static inline void pds_vdpa_debugfs_destroy(void) { } +static inline void pds_vdpa_debugfs_add_pcidev(struct pds_vdpa_aux *vdpa_aux) { } +static inline void pds_vdpa_debugfs_add_ident(struct pds_vdpa_aux *vdpa_aux) { } +static inline void pds_vdpa_debugfs_add_vdpadev(struct pds_vdpa_aux *vdpa_aux) { } +static inline void pds_vdpa_debugfs_del_vdpadev(struct pds_vdpa_aux *vdpa_aux) { } #endif #endif /* 
_PDS_VDPA_DEBUGFS_H_ */ diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c index 15d623297203..2e0a5078d379 100644 --- a/drivers/vdpa/pds/vdpa_dev.c +++ b/drivers/vdpa/pds/vdpa_dev.c @@ -5,6 +5,7 @@ #include #include #include +#include #include #include @@ -13,7 +14,426 @@ #include "vdpa_dev.h" #include "aux_drv.h" +#include "cmds.h" +#include "debugfs.h" +static struct pds_vdpa_device *vdpa_to_pdsv(struct vdpa_device *vdpa_dev) +{ + return container_of(vdpa_dev, struct pds_vdpa_device, vdpa_dev); +} + +static int pds_vdpa_set_vq_address(struct vdpa_device *vdpa_dev, u16 qid, + u64 desc_addr, u64 driver_addr, u64 device_addr) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + pdsv->vqs[qid].desc_addr = desc_addr; + pdsv->vqs[qid].avail_addr = driver_addr; + pdsv->vqs[qid].used_addr = device_addr; + + return 0; +} + +static void pds_vdpa_set_vq_num(struct vdpa_device *vdpa_dev, u16 qid, u32 num) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + pdsv->vqs[qid].q_len = num; +} + +static void pds_vdpa_kick_vq(struct vdpa_device *vdpa_dev, u16 qid) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + iowrite16(qid, pdsv->vqs[qid].notify); +} + +static void pds_vdpa_set_vq_cb(struct vdpa_device *vdpa_dev, u16 qid, + struct vdpa_callback *cb) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + pdsv->vqs[qid].event_cb = *cb; +} + +static irqreturn_t pds_vdpa_isr(int irq, void *data) +{ + struct pds_vdpa_vq_info *vq; + + vq = data; + if (vq->event_cb.callback) + vq->event_cb.callback(vq->event_cb.private); + + return IRQ_HANDLED; +} + +static void pds_vdpa_release_irq(struct pds_vdpa_device *pdsv, int qid) +{ + if (pdsv->vqs[qid].irq == VIRTIO_MSI_NO_VECTOR) + return; + + free_irq(pdsv->vqs[qid].irq, &pdsv->vqs[qid]); + pdsv->vqs[qid].irq = VIRTIO_MSI_NO_VECTOR; +} + +static void pds_vdpa_set_vq_ready(struct vdpa_device *vdpa_dev, u16 qid, bool ready) +{ + struct pds_vdpa_device *pdsv = 
vdpa_to_pdsv(vdpa_dev); + struct pci_dev *pdev = pdsv->vdpa_aux->padev->vf->pdev; + struct device *dev = &pdsv->vdpa_dev.dev; + int irq; + int err; + + dev_dbg(dev, "%s: qid %d ready %d => %d\n", + __func__, qid, pdsv->vqs[qid].ready, ready); + if (ready == pdsv->vqs[qid].ready) + return; + + if (ready) { + irq = pci_irq_vector(pdev, qid); + snprintf(pdsv->vqs[qid].irq_name, sizeof(pdsv->vqs[qid].irq_name), + "vdpa-%s-%d", dev_name(dev), qid); + + err = request_irq(irq, pds_vdpa_isr, 0, + pdsv->vqs[qid].irq_name, &pdsv->vqs[qid]); + if (err) { + dev_err(dev, "%s: no irq for qid %d: %pe\n", + __func__, qid, ERR_PTR(err)); + return; + } + pdsv->vqs[qid].irq = irq; + + /* Pass vq setup info to DSC */ + err = pds_vdpa_cmd_init_vq(pdsv, qid, &pdsv->vqs[qid]); + if (err) { + pds_vdpa_release_irq(pdsv, qid); + ready = false; + } + } else { + err = pds_vdpa_cmd_reset_vq(pdsv, qid); + if (err) + dev_err(dev, "%s: reset_vq failed qid %d: %pe\n", + __func__, qid, ERR_PTR(err)); + pds_vdpa_release_irq(pdsv, qid); + } + + pdsv->vqs[qid].ready = ready; +} + +static bool pds_vdpa_get_vq_ready(struct vdpa_device *vdpa_dev, u16 qid) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + return pdsv->vqs[qid].ready; +} + +static int pds_vdpa_set_vq_state(struct vdpa_device *vdpa_dev, u16 qid, + const struct vdpa_vq_state *state) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + struct pds_vdpa_vq_set_state_cmd cmd = { + .opcode = PDS_VDPA_CMD_VQ_SET_STATE, + .vdpa_index = pdsv->vdpa_index, + .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id), + .qid = cpu_to_le16(qid), + }; + struct pds_vdpa_comp comp = {0}; + int err; + + dev_dbg(dev, "%s: qid %d avail %#x\n", + __func__, qid, state->packed.last_avail_idx); + + /* VIRTIO_F_RING_PACKED is a bit number, so test it with BIT_ULL() */ + if (pdsv->actual_features & BIT_ULL(VIRTIO_F_RING_PACKED)) { + cmd.avail = cpu_to_le16(state->packed.last_avail_idx | + (state->packed.last_avail_counter << 15)); + 
cmd.used = cpu_to_le16(state->packed.last_used_idx | + (state->packed.last_used_counter << 15)); + } else { + cmd.avail = cpu_to_le16(state->split.avail_index); + /* state->split does not provide a used_index: + * the vq will be set to "empty" here, and the vq will read + * the current used index the next time the vq is kicked. + */ + cmd.used = cpu_to_le16(state->split.avail_index); + } + + err = padev->ops->adminq_cmd(padev, + (union pds_core_adminq_cmd *)&cmd, + sizeof(cmd), + (union pds_core_adminq_comp *)&comp, + 0); + if (err) + dev_err(dev, "Failed to set vq state qid %u, status %d: %pe\n", + qid, comp.status, ERR_PTR(err)); + + return err; +} + +static int pds_vdpa_get_vq_state(struct vdpa_device *vdpa_dev, u16 qid, + struct vdpa_vq_state *state) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + struct pds_vdpa_vq_get_state_cmd cmd = { + .opcode = PDS_VDPA_CMD_VQ_GET_STATE, + .vdpa_index = pdsv->vdpa_index, + .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id), + .qid = cpu_to_le16(qid), + }; + struct pds_vdpa_vq_get_state_comp comp = {0}; + int err; + + dev_dbg(dev, "%s: qid %d\n", __func__, qid); + + err = padev->ops->adminq_cmd(padev, + (union pds_core_adminq_cmd *)&cmd, + sizeof(cmd), + (union pds_core_adminq_comp *)&comp, + 0); + if (err) { + dev_err(dev, "Failed to get vq state qid %u, status %d: %pe\n", + qid, comp.status, ERR_PTR(err)); + return err; + } + + /* VIRTIO_F_RING_PACKED is a bit number, so test it with BIT_ULL() */ + if (pdsv->actual_features & BIT_ULL(VIRTIO_F_RING_PACKED)) { + state->packed.last_avail_idx = le16_to_cpu(comp.avail) & 0x7fff; + state->packed.last_avail_counter = le16_to_cpu(comp.avail) >> 15; + } else { + state->split.avail_index = le16_to_cpu(comp.avail); + /* state->split does not provide a used_index. 
*/ + } + + return err; +} + +static struct vdpa_notification_area +pds_vdpa_get_vq_notification(struct vdpa_device *vdpa_dev, u16 qid) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + struct virtio_pci_modern_device *vd_mdev; + struct vdpa_notification_area area; + + area.addr = pdsv->vqs[qid].notify_pa; + + vd_mdev = &pdsv->vdpa_aux->vd_mdev; + if (!vd_mdev->notify_offset_multiplier) + area.size = PAGE_SIZE; + else + area.size = vd_mdev->notify_offset_multiplier; + + return area; +} + +static int pds_vdpa_get_vq_irq(struct vdpa_device *vdpa_dev, u16 qid) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + return pdsv->vqs[qid].irq; +} + +static u32 pds_vdpa_get_vq_align(struct vdpa_device *vdpa_dev) +{ + return PAGE_SIZE; +} + +static u32 pds_vdpa_get_vq_group(struct vdpa_device *vdpa_dev, u16 idx) +{ + return 0; +} + +static u64 pds_vdpa_get_device_features(struct vdpa_device *vdpa_dev) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + return le64_to_cpu(pdsv->vdpa_aux->ident.hw_features); +} + +static int pds_vdpa_set_driver_features(struct vdpa_device *vdpa_dev, u64 features) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + struct device *dev = &pdsv->vdpa_dev.dev; + u64 nego_features; + u64 missing; + int err; + + if (!(features & BIT_ULL(VIRTIO_F_ACCESS_PLATFORM)) && features) { + dev_err(dev, "VIRTIO_F_ACCESS_PLATFORM is not negotiated\n"); + return -EOPNOTSUPP; + } + + pdsv->req_features = features; + + /* Check for valid feature bits */ + nego_features = features & le64_to_cpu(pdsv->vdpa_aux->ident.hw_features); + missing = pdsv->req_features & ~nego_features; + if (missing) { + dev_err(dev, "Can't support all requested features in %#llx, missing %#llx features\n", + pdsv->req_features, missing); + return -EOPNOTSUPP; + } + + dev_dbg(dev, "%s: %#llx => %#llx\n", + __func__, pdsv->actual_features, nego_features); + + if (pdsv->actual_features == nego_features) + return 0; + + err = 
pds_vdpa_cmd_set_features(pdsv, nego_features); + if (!err) + pdsv->actual_features = nego_features; + + return err; +} + +static u64 pds_vdpa_get_driver_features(struct vdpa_device *vdpa_dev) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + return pdsv->actual_features; +} + +static void pds_vdpa_set_config_cb(struct vdpa_device *vdpa_dev, + struct vdpa_callback *cb) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + pdsv->config_cb.callback = cb->callback; + pdsv->config_cb.private = cb->private; +} + +static u16 pds_vdpa_get_vq_num_max(struct vdpa_device *vdpa_dev) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + /* qemu has assert() that vq_num_max <= VIRTQUEUE_MAX_SIZE (1024) */ + return min_t(u16, 1024, BIT(le16_to_cpu(pdsv->vdpa_aux->ident.max_qlen))); +} + +static u32 pds_vdpa_get_device_id(struct vdpa_device *vdpa_dev) +{ + return VIRTIO_ID_NET; +} + +static u32 pds_vdpa_get_vendor_id(struct vdpa_device *vdpa_dev) +{ + return PCI_VENDOR_ID_PENSANDO; +} + +static u8 pds_vdpa_get_status(struct vdpa_device *vdpa_dev) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + return vp_modern_get_status(&pdsv->vdpa_aux->vd_mdev); +} + +static void pds_vdpa_set_status(struct vdpa_device *vdpa_dev, u8 status) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + vp_modern_set_status(&pdsv->vdpa_aux->vd_mdev, status); +} + +static int pds_vdpa_reset(struct vdpa_device *vdpa_dev) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + struct device *dev = pdsv->vdpa_aux->padev->vf->dev; + int err = 0; + u8 status; + int i; + + status = pds_vdpa_get_status(vdpa_dev); + + if (status == 0) + return 0; + + if (status & VIRTIO_CONFIG_S_DRIVER_OK) { + /* Reset the vqs */ + for (i = 0; i < pdsv->num_vqs && !err; i++) { + err = pds_vdpa_cmd_reset_vq(pdsv, i); + if (err) + dev_err(dev, "%s: reset_vq failed qid %d: %pe\n", + __func__, i, ERR_PTR(err)); + pds_vdpa_release_irq(pdsv, i); + memset(&pdsv->vqs[i], 0, 
sizeof(pdsv->vqs[0])); + pdsv->vqs[i].ready = false; + } + } + + if (err != -ETIMEDOUT && err != -ENXIO) + pds_vdpa_set_status(vdpa_dev, 0); + + return 0; +} + +static size_t pds_vdpa_get_config_size(struct vdpa_device *vdpa_dev) +{ + return sizeof(struct virtio_net_config); +} + +static void pds_vdpa_get_config(struct vdpa_device *vdpa_dev, + unsigned int offset, + void *buf, unsigned int len) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + void __iomem *device; + + if (offset + len > sizeof(struct virtio_net_config)) { + WARN(true, "%s: bad read, offset %d len %d\n", __func__, offset, len); + return; + } + + device = pdsv->vdpa_aux->vd_mdev.device; + memcpy_fromio(buf, device + offset, len); +} + +static void pds_vdpa_set_config(struct vdpa_device *vdpa_dev, + unsigned int offset, const void *buf, + unsigned int len) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + void __iomem *device; + + if (offset + len > sizeof(struct virtio_net_config)) { + WARN(true, "%s: bad write, offset %d len %d\n", __func__, offset, len); + return; + } + + device = pdsv->vdpa_aux->vd_mdev.device; + memcpy_toio(device + offset, buf, len); +} + +static const struct vdpa_config_ops pds_vdpa_ops = { + .set_vq_address = pds_vdpa_set_vq_address, + .set_vq_num = pds_vdpa_set_vq_num, + .kick_vq = pds_vdpa_kick_vq, + .set_vq_cb = pds_vdpa_set_vq_cb, + .set_vq_ready = pds_vdpa_set_vq_ready, + .get_vq_ready = pds_vdpa_get_vq_ready, + .set_vq_state = pds_vdpa_set_vq_state, + .get_vq_state = pds_vdpa_get_vq_state, + .get_vq_notification = pds_vdpa_get_vq_notification, + .get_vq_irq = pds_vdpa_get_vq_irq, + .get_vq_align = pds_vdpa_get_vq_align, + .get_vq_group = pds_vdpa_get_vq_group, + + .get_device_features = pds_vdpa_get_device_features, + .set_driver_features = pds_vdpa_set_driver_features, + .get_driver_features = pds_vdpa_get_driver_features, + .set_config_cb = pds_vdpa_set_config_cb, + .get_vq_num_max = pds_vdpa_get_vq_num_max, + .get_device_id = 
+		pds_vdpa_get_device_id,
+	.get_vendor_id = pds_vdpa_get_vendor_id,
+	.get_status = pds_vdpa_get_status,
+	.set_status = pds_vdpa_set_status,
+	.reset = pds_vdpa_reset,
+	.get_config_size = pds_vdpa_get_config_size,
+	.get_config = pds_vdpa_get_config,
+	.set_config = pds_vdpa_set_config,
+};
 
 static struct virtio_device_id pds_vdpa_id_table[] = {
 	{VIRTIO_ID_NET, VIRTIO_DEV_ANY_ID},
 	{0},
@@ -22,12 +442,135 @@ static struct virtio_device_id pds_vdpa_id_table[] = {
 static int pds_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
 			    const struct vdpa_dev_set_config *add_config)
 {
-	return -EOPNOTSUPP;
+	struct pds_vdpa_aux *vdpa_aux;
+	struct pds_vdpa_device *pdsv;
+	struct vdpa_mgmt_dev *mgmt;
+	u16 fw_max_vqs, vq_pairs;
+	struct device *dma_dev;
+	struct pci_dev *pdev;
+	struct device *dev;
+	u8 mac[ETH_ALEN];
+	int err;
+	int i;
+
+	vdpa_aux = container_of(mdev, struct pds_vdpa_aux, vdpa_mdev);
+	dev = &vdpa_aux->padev->aux_dev.dev;
+	mgmt = &vdpa_aux->vdpa_mdev;
+
+	if (vdpa_aux->pdsv) {
+		dev_warn(dev, "Multiple vDPA devices on a VF are not supported.\n");
+		return -EOPNOTSUPP;
+	}
+
+	pdsv = vdpa_alloc_device(struct pds_vdpa_device, vdpa_dev,
+				 dev, &pds_vdpa_ops, 1, 1, name, false);
+	if (IS_ERR(pdsv)) {
+		dev_err(dev, "Failed to allocate vDPA structure: %pe\n", pdsv);
+		return PTR_ERR(pdsv);
+	}
+
+	vdpa_aux->pdsv = pdsv;
+	vdpa_aux->padev->priv = pdsv;
+	pdsv->vdpa_aux = vdpa_aux;
+
+	pdev = vdpa_aux->padev->vf->pdev;
+	dma_dev = &pdev->dev;
+	pdsv->vdpa_dev.dma_dev = dma_dev;
+
+	err = pds_vdpa_init_hw(pdsv);
+	if (err) {
+		dev_err(dev, "Failed to init hw: %pe\n", ERR_PTR(err));
+		goto err_unmap;
+	}
+
+	fw_max_vqs = le16_to_cpu(pdsv->vdpa_aux->ident.max_vqs);
+	vq_pairs = fw_max_vqs / 2;
+
+	/* Make sure we have the queues being requested */
+	if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP))
+		vq_pairs = add_config->net.max_vq_pairs;
+
+	pdsv->num_vqs = 2 * vq_pairs;
+	if (mgmt->supported_features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ))
+		pdsv->num_vqs++;
+
+	if (pdsv->num_vqs > fw_max_vqs) {
+		dev_err(dev, "%s: queue count requested %u greater than max %u\n",
+			__func__, pdsv->num_vqs, fw_max_vqs);
+		err = -ENOSPC;
+		goto err_unmap;
+	}
+
+	if (pdsv->num_vqs != fw_max_vqs) {
+		err = pds_vdpa_cmd_set_max_vq_pairs(pdsv, vq_pairs);
+		if (err) {
+			dev_err(dev, "Failed to set max_vq_pairs: %pe\n",
+				ERR_PTR(err));
+			goto err_unmap;
+		}
+	}
+
+	/* Set a mac, either from the user config if provided
+	 * or set a random mac if default is 00:..:00
+	 */
+	if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR)) {
+		ether_addr_copy(mac, add_config->net.mac);
+		pds_vdpa_cmd_set_mac(pdsv, mac);
+	} else {
+		struct virtio_net_config __iomem *vc;
+
+		vc = pdsv->vdpa_aux->vd_mdev.device;
+		memcpy_fromio(mac, vc->mac, sizeof(mac));
+		if (is_zero_ether_addr(mac)) {
+			eth_random_addr(mac);
+			dev_info(dev, "setting random mac %pM\n", mac);
+			pds_vdpa_cmd_set_mac(pdsv, mac);
+		}
+	}
+
+	for (i = 0; i < pdsv->num_vqs; i++) {
+		pdsv->vqs[i].qid = i;
+		pdsv->vqs[i].pdsv = pdsv;
+		pdsv->vqs[i].irq = VIRTIO_MSI_NO_VECTOR;
+		pdsv->vqs[i].notify = vp_modern_map_vq_notify(&pdsv->vdpa_aux->vd_mdev,
+							      i, &pdsv->vqs[i].notify_pa);
+	}
+
+	pdsv->vdpa_dev.mdev = &vdpa_aux->vdpa_mdev;
+
+	/* We use the _vdpa_register_device() call rather than the
+	 * vdpa_register_device() to avoid a deadlock because our
+	 * dev_add() is called with the vdpa_dev_lock already set
+	 * by vdpa_nl_cmd_dev_add_set_doit()
+	 */
+	err = _vdpa_register_device(&pdsv->vdpa_dev, pdsv->num_vqs);
+	if (err) {
+		dev_err(dev, "Failed to register to vDPA bus: %pe\n", ERR_PTR(err));
+		goto err_unmap;
+	}
+
+	pds_vdpa_debugfs_add_vdpadev(vdpa_aux);
+
+	return 0;
+
+err_unmap:
+	put_device(&pdsv->vdpa_dev.dev);
+	vdpa_aux->pdsv = NULL;
+	return err;
 }
 
 static void pds_vdpa_dev_del(struct vdpa_mgmt_dev *mdev,
 			     struct vdpa_device *vdpa_dev)
 {
+	struct pds_vdpa_aux *vdpa_aux;
+
+	vdpa_aux = container_of(mdev, struct pds_vdpa_aux, vdpa_mdev);
+
+	_vdpa_unregister_device(vdpa_dev);
+	pds_vdpa_debugfs_del_vdpadev(vdpa_aux);
+
+	vdpa_aux->pdsv = NULL;
+
+	dev_info(vdpa_aux->padev->vf->dev, "Removed vdpa device\n");
 }
 
 static const struct vdpa_mgmtdev_ops pds_vdpa_mgmt_dev_ops = {

From patchwork Thu Mar 9 01:30:45 2023
X-Patchwork-Submitter: "Nelson, Shannon"
X-Patchwork-Id: 13166711
From: Shannon Nelson
Subject: [PATCH RFC v2 virtio 6/7] pds_vdpa: subscribe to the pds_core events
Date: Wed, 8 Mar 2023 17:30:45 -0800
Message-ID: <20230309013046.23523-7-shannon.nelson@amd.com>
In-Reply-To: <20230309013046.23523-1-shannon.nelson@amd.com>
References: <20230309013046.23523-1-shannon.nelson@amd.com>
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-State: RFC

Register for the pds_core's notification events, primarily to find out
when the FW has been reset so we can pass this on back up the chain.

Signed-off-by: Shannon Nelson
---
 drivers/vdpa/pds/vdpa_dev.c | 68 ++++++++++++++++++++++++++++++++++++-
 drivers/vdpa/pds/vdpa_dev.h |  1 +
 2 files changed, 68 insertions(+), 1 deletion(-)

diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c
index 2e0a5078d379..d99adb4f9fb1 100644
--- a/drivers/vdpa/pds/vdpa_dev.c
+++ b/drivers/vdpa/pds/vdpa_dev.c
@@ -22,6 +22,61 @@ static struct pds_vdpa_device *vdpa_to_pdsv(struct vdpa_device *vdpa_dev)
 	return container_of(vdpa_dev, struct pds_vdpa_device, vdpa_dev);
 }
 
+static int pds_vdpa_notify_handler(struct notifier_block *nb,
+				   unsigned long ecode,
+				   void *data)
+{
+	struct pds_vdpa_device *pdsv = container_of(nb, struct pds_vdpa_device, nb);
+	struct device *dev = pdsv->vdpa_aux->padev->vf->dev;
+
+	dev_dbg(dev, "%s: event code %lu\n", __func__, ecode);
+
+	/* Give the upper layers a hint that something interesting
+	 * may have happened.  It seems that the only thing this
+	 * triggers in the virtio-net drivers above us is a check
+	 * of link status.
+	 *
+	 * We don't set the NEEDS_RESET flag for EVENT_RESET
+	 * because we're likely going through a recovery or
+	 * fw_update and will be back up and running soon.
+	 */
+	if (ecode == PDS_EVENT_RESET || ecode == PDS_EVENT_LINK_CHANGE) {
+		if (pdsv->config_cb.callback)
+			pdsv->config_cb.callback(pdsv->config_cb.private);
+	}
+
+	return 0;
+}
+
+static int pds_vdpa_register_event_handler(struct pds_vdpa_device *pdsv)
+{
+	struct device *dev = pdsv->vdpa_aux->padev->vf->dev;
+	struct notifier_block *nb = &pdsv->nb;
+	int err;
+
+	if (!nb->notifier_call) {
+		nb->notifier_call = pds_vdpa_notify_handler;
+		err = pdsc_register_notify(nb);
+		if (err) {
+			nb->notifier_call = NULL;
+			dev_err(dev, "failed to register pds event handler: %pe\n",
+				ERR_PTR(err));
+			return -EINVAL;
+		}
+		dev_dbg(dev, "pds event handler registered\n");
+	}
+
+	return 0;
+}
+
+static void pds_vdpa_unregister_event_handler(struct pds_vdpa_device *pdsv)
+{
+	if (pdsv->nb.notifier_call) {
+		pdsc_unregister_notify(&pdsv->nb);
+		pdsv->nb.notifier_call = NULL;
+	}
+}
+
 static int pds_vdpa_set_vq_address(struct vdpa_device *vdpa_dev, u16 qid,
 				   u64 desc_addr, u64 driver_addr, u64 device_addr)
 {
@@ -538,6 +593,12 @@ static int pds_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
 	pdsv->vdpa_dev.mdev = &vdpa_aux->vdpa_mdev;
 
+	err = pds_vdpa_register_event_handler(pdsv);
+	if (err) {
+		dev_err(dev, "Failed to register for PDS events: %pe\n", ERR_PTR(err));
+		goto err_unmap;
+	}
+
 	/* We use the _vdpa_register_device() call rather than the
 	 * vdpa_register_device() to avoid a deadlock because our
 	 * dev_add() is called with the vdpa_dev_lock already set
@@ -546,13 +607,15 @@ static int pds_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
 	err = _vdpa_register_device(&pdsv->vdpa_dev, pdsv->num_vqs);
 	if (err) {
 		dev_err(dev, "Failed to register to vDPA bus: %pe\n", ERR_PTR(err));
-		goto err_unmap;
+		goto err_unevent;
 	}
 
 	pds_vdpa_debugfs_add_vdpadev(vdpa_aux);
 
 	return 0;
 
+err_unevent:
+	pds_vdpa_unregister_event_handler(pdsv);
 err_unmap:
 	put_device(&pdsv->vdpa_dev.dev);
 	vdpa_aux->pdsv = NULL;
@@ -562,8 +625,11 @@ static int pds_vdpa_dev_add(struct vdpa_mgmt_dev
*mdev, const char *name,
 static void pds_vdpa_dev_del(struct vdpa_mgmt_dev *mdev,
 			     struct vdpa_device *vdpa_dev)
 {
+	struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev);
 	struct pds_vdpa_aux *vdpa_aux;
 
+	pds_vdpa_unregister_event_handler(pdsv);
+
 	vdpa_aux = container_of(mdev, struct pds_vdpa_aux, vdpa_mdev);
 	_vdpa_unregister_device(vdpa_dev);
 	pds_vdpa_debugfs_del_vdpadev(vdpa_aux);

diff --git a/drivers/vdpa/pds/vdpa_dev.h b/drivers/vdpa/pds/vdpa_dev.h
index 33284ebe538c..4e7a1b04a12a 100644
--- a/drivers/vdpa/pds/vdpa_dev.h
+++ b/drivers/vdpa/pds/vdpa_dev.h
@@ -43,6 +43,7 @@ struct pds_vdpa_device {
 	u8 vdpa_index;			/* rsvd for future subdevice use */
 	u8 num_vqs;			/* num vqs in use */
 	struct vdpa_callback config_cb;
+	struct notifier_block nb;
 };
 
 int pds_vdpa_get_mgmt_info(struct pds_vdpa_aux *vdpa_aux);

From patchwork Thu Mar 9 01:30:46 2023
X-Patchwork-Submitter: "Nelson, Shannon"
X-Patchwork-Id: 13166708
X-Patchwork-Delegate: kuba@kernel.org
From: Shannon Nelson
Subject: [PATCH RFC v2 virtio 7/7] pds_vdpa: pds_vdps.rst and Kconfig
Date: Wed, 8 Mar 2023 17:30:46 -0800
Message-ID: <20230309013046.23523-8-shannon.nelson@amd.com>
In-Reply-To: <20230309013046.23523-1-shannon.nelson@amd.com>
References: <20230309013046.23523-1-shannon.nelson@amd.com>
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC

Add the documentation and Kconfig entry for the pds_vdpa driver.

Signed-off-by: Shannon Nelson
---
 .../ethernet/pensando/pds_vdpa.rst            | 84 +++++++++++++++++++
 MAINTAINERS                                   |  4 +
 drivers/vdpa/Kconfig                          |  8 ++
 3 files changed, 96 insertions(+)
 create mode 100644 Documentation/networking/device_drivers/ethernet/pensando/pds_vdpa.rst

diff --git a/Documentation/networking/device_drivers/ethernet/pensando/pds_vdpa.rst b/Documentation/networking/device_drivers/ethernet/pensando/pds_vdpa.rst
new file mode 100644
index 000000000000..d41f6dd66e3e
--- /dev/null
+++ b/Documentation/networking/device_drivers/ethernet/pensando/pds_vdpa.rst
@@ -0,0 +1,84 @@
+.. SPDX-License-Identifier: GPL-2.0+
+.. note: can be edited and viewed with /usr/bin/formiko-vim
+
+==========================================================
+PCI vDPA driver for the AMD/Pensando(R) DSC adapter family
+==========================================================
+
+AMD/Pensando vDPA VF Device Driver
+Copyright(c) 2023 Advanced Micro Devices, Inc
+
+Overview
+========
+
+The ``pds_vdpa`` driver is an auxiliary bus driver that supplies
+a vDPA device for use by the virtio network stack.  It is used with
+the Pensando Virtual Function devices that offer vDPA and virtio queue
+services.  It depends on the ``pds_core`` driver and hardware for the PF
+and VF PCI handling as well as for device configuration services.
+
+Using the device
+================
+
+The ``pds_vdpa`` device is enabled via multiple configuration steps and
+depends on the ``pds_core`` driver to create and enable SR-IOV Virtual
+Function devices.
+
+Shown below are the steps to bind the driver to a VF and also to the
+associated auxiliary device created by the ``pds_core`` driver.
+
+.. code-block:: bash
+
+  #!/bin/bash
+
+  modprobe pds_core
+  modprobe vdpa
+  modprobe pds_vdpa
+
+  PF_BDF=`grep -H "vDPA.*1" /sys/kernel/debug/pds_core/*/viftypes | head -1 | awk -F / '{print $6}'`
+
+  # Enable vDPA VF auxiliary device(s) in the PF
+  devlink dev param set pci/$PF_BDF name enable_vnet value true cmode runtime
+
+  # Create a VF for vDPA use
+  echo 1 > /sys/bus/pci/drivers/pds_core/$PF_BDF/sriov_numvfs
+
+  # Find the vDPA services/devices available
+  PDS_VDPA_MGMT=`vdpa mgmtdev show | grep vDPA | head -1 | cut -d: -f1`
+
+  # Create a vDPA device for use in virtio network configurations
+  vdpa dev add name vdpa1 mgmtdev $PDS_VDPA_MGMT mac 00:11:22:33:44:55
+
+  # Set up an ethernet interface on the vdpa device
+  modprobe virtio_vdpa
+
+
+Enabling the driver
+===================
+
+The driver is enabled via the standard kernel configuration system,
+using the make command::
+
+  make oldconfig/menuconfig/etc.
+
+The driver is located in the menu structure at:
+
+  -> Device Drivers
+    -> Network device support (NETDEVICES [=y])
+      -> Ethernet driver support
+        -> Pensando devices
+          -> Pensando Ethernet PDS_VDPA Support
+
+Support
+=======
+
+For general Linux networking support, please use the netdev mailing
+list, which is monitored by Pensando personnel::
+
+  netdev@vger.kernel.org
+
+For more specific support needs, please use the Pensando driver support
+email::
+
+  drivers@pensando.io
diff --git a/MAINTAINERS b/MAINTAINERS
index cb21dcd3a02a..da981c5bc830 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -22120,6 +22120,10 @@ SNET DPU VIRTIO DATA PATH ACCELERATOR
 R:	Alvaro Karsz
 F:	drivers/vdpa/solidrun/
 
+PDS DSC VIRTIO DATA PATH ACCELERATOR
+R:	Shannon Nelson
+F:	drivers/vdpa/pds/
+
 VIRTIO BALLOON
 M:	"Michael S. Tsirkin"
 M:	David Hildenbrand
diff --git a/drivers/vdpa/Kconfig b/drivers/vdpa/Kconfig
index cd6ad92f3f05..c910cb119c1b 100644
--- a/drivers/vdpa/Kconfig
+++ b/drivers/vdpa/Kconfig
@@ -116,4 +116,12 @@ config ALIBABA_ENI_VDPA
 	  This driver includes a HW monitor device that reads health values
 	  from the DPU.
 
+config PDS_VDPA
+	tristate "vDPA driver for AMD/Pensando DSC devices"
+	depends on PDS_CORE
+	help
+	  vDPA network driver for AMD/Pensando's PDS Core devices.
+	  With this driver, the VirtIO dataplane can be
+	  offloaded to an AMD/Pensando DSC device.
+
 endif # VDPA